Destruction of Medicine? A Bill to Allow Prescriptions by AI


On September 30, 2021, David Schweikert (R-Ariz.) introduced a bill (HR5457) that, if enacted, would qualify AI as a medical practitioner, eligible to prescribe drugs.

Whoa there. That puts the conversation about the professional sovereignty of physicians in a whole different context — and sadly, that reformist context has been there all along. Let’s first look at the text of the bill. It says the following:

“To amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes.”

The part where it says “for other purposes” is interesting. What if AI determines that something is safe and effective — and it should be mandated? It’s hard to debate with AI. The safe and effective thing could be a therapy, a nutritional system to fight climate change, a mental hygiene regime — anything, really. Drug prescription is just a foot in the door but the sky is the limit. And who do we sue?

And who is David Schweikert? Currently, he is serving his sixth term in the United States Congress. At the moment, he “sits on the Ways and Means Committee, having previously served on the Financial Services Committee.

He also sits on the bicameral Joint Economic Committee, serving as the Senior House Republican Member, Co-Chairs the Valley Fever Task force with House Minority Leader Kevin McCarthy, is the Republican Co-Chair of the Blockchain Caucus, Co-Chair of the Tunisia Caucus, and Co-Chair of the Tele Health Caucus.”

According to his website, he “championed key reforms such as the Secret Science Reform Act, which has passed the House of Representatives.” That particular reform was interesting as it used the “good” language of scientific transparency to limit the powers of the already not very useful EPA.

Interestingly, his website also calls him a “national leader on tribal policy,” working with Arizona’s tribal communities on important priorities. Lots to ponder.

So let’s talk about why this bill is significant — regardless of whether it passes or not. The bill serves an obvious practical purpose and a psychological purpose. If it passes and stays, then we can kiss our medical sovereignty goodbye for some time — since arguing with a robot is much harder than arguing with the most menacing and unmovable Soviet cashier of my childhood (a personal memory; I was terrified of cashiers)!

But whether it passes or not, it’s a sign of where the wind is blowing and an act of widening what’s psychologically acceptable. In other words, an act of eating away at our sanity. And of course, the desire to possibly replace the current medical system with AI and telemedicine predates COVID. Let’s go back in time.

The year was 2019. The name of the committee was National Security Commission on Artificial Intelligence (NSCAI). The committee, chaired by Eric Schmidt of Google, issued a report called, “Chinese Tech Landscape Overview.”

In that report, obtained by EPIC through a FOIA request, the committee talked about the AI race between the U.S. and China, and what kind of “legacy systems” in the U.S. were in the way of winning. (If you want to see their most recent report — which is very long and, unlike the 2019 FOIA-obtained report, was meant for public display and thus was worded far more diplomatically — you can find it here.)

Now, an important note. I personally believe that the competition between different countries is not what drives this trend — and that Eric Schmidt, out of all people alive today, is using “international competition” as an excuse to attempt installing his favorite AI on top of the people (but underneath his friends, of course).

Which is not to say that international competition does not exist — of course it does — but the attempted reform that goes under the names of the Great Reset, 4IR, or Happytalism, is supranational, in my opinion. However, using a bogeyman (be it “China,” “America,” “Russia,” or “COVID”) is a proven strategy.

(Speaking of using the “crisis mode,” here is a very interesting 2015 paper titled, “Rapid Medical Countermeasure Response to Infectious Diseases: Enabling Sustainable Capabilities Through Ongoing Public- and Private-Sector Partnerships: Workshop Summary.” It talks about coronaviruses. This bit below mentions a quote by Peter Daszak talking about the development of pan-influenza and pan-coronavirus vaccines.)

[Image: quote by Peter Daszak]

Going back to the 2019 NSCAI report, the committee, first written about in detail by Whitney Webb on Last American Vagabond, “was created by the 2018 National Defense Authorization Act (NDAA) and its official purpose is ‘to consider the methods and means necessary to advance the development of artificial intelligence (AI), machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.'”

To further quote Whitney, who did an outstanding job two years ago covering this development, the report “says that ‘creation,’ followed by ‘adoption’ and ‘iteration’ are the three phases of the ‘life cycle of new tech’ and asserts that failing to dominate in the ‘adoption’ stage will allow China to ‘leapfrog’ the U.S. and dominate AI for the foreseeable future.”

[Image: the three phases of the life cycle of new tech]

The report mentions that while the United States is leading in the “creation” phase of AI development, China is leading in the “adoption” phase due to specific “structural factors,” framed in the report as very advantageous for winning the AI race.

To be clear, the report does not directly say anything like, “it would be great if we were a little more like China in terms of those structural factors.” It does not say that directly. But it seems to imply it as it describes winning the AI race as absolutely desirable and then lists the American “legacy” systems as obstacles on the way.

[Image: legacy systems]
[Image: “good enough” hinders adoption]

What does the report say about medicine? It seems to frown at the medicine of today and to favor AI and telemedicine. Please see below:

[Image: AI for medical diagnosis]

Conclusion?

My subjective conclusion is that perhaps putting AI in charge of medicine has always been the goal? Perhaps the illogical and abysmal state of “human” medicine in 2021 is not coincidental? 

So far, we’ve seen scientific and medical censorship, unhelpful official protocols, forced closures of hospitals, artificially created staffing shortages — and perhaps at least in part we are seeing it because it helps the advancement of 4IR?

Perhaps?

It looks like the things that we consider generally good for people, such as having in-person access to a caring doctor — or the “regulatory barriers” protecting our privacy — all those natural things are viewed as undesirable by our aspiring masters.

Their proverbial New Normal is not for people. It’s for the people’s “owners.” (By the way, AI is not human or conscious, no matter how hard they spin it. AI is software. Somebody owns the software, including medical software. Somebody pays for it, somebody writes it, somebody patents it, somebody owns it. This whole “AI going conscious” thing is a well-funded con job in my opinion — just like the sale of “immortality” via converting to a data bundle.)

When it comes to medicine, in the New Normal, sovereign physicians (and sovereign patients) are a liability and an inconvenience. The New Normal does not fancy human subjectivity — not philosophically and not economically. The New Normal is about effective asset management, where regular citizens are a class of assets, just like machines or minerals.

Assets are supposed to be useful and are not supposed to think. The entire foundation of the New Normal social system is the denial of free will.

The New Normal does not account for privacy or personal space. It’s a new digital order — with citizens united in homogeneity, under AI. It’s not about balance or sustainable frolicking in the grass while the robot toils — it’s about converting our creative energy into the fuel for the machine. It’s essentially anti-life.

Another tangential but very curious window into how the crazies think is this 2001 report titled, “Future Strategic Issues/Future Warfare [Circa 2025].” It has every dystopian trick in the book, even “co-opted insects.” And I suppose it’s possible to weaponize insects, viruses, or art. It’s all been tried. But no matter what the broken ones think or do, their abuse is temporary.

Dysfunction produces pain. Pain produces questions. Questions produce resistance. Resistance produces change. And then it breaks. This time around, too, life will prevail — and that’s regardless of what the technocrats desire.

A note: Philosophy and emotions are important at the time of the Great Reset because they remind us of who we are and help us to fight for life. The maniacs can think up a world in which we are soulless assets. They can pass bills in which we report to AI. But if we refuse to shut down our hearts, in the end we win.

I’d like to finish this story with an open question from the beautiful film called “A Hidden Life.” In this film, the protagonist asks a priest, “If our leaders, if they are evil, what does one do?” What does one do? I am thinking, maybe we have a duty to our hearts and the generations past. And maybe this time around, we get to live.
