Opinion | AI could improve medical practice, but only if it’s done right



Fred Pelzman of Weill Cornell Internal Medicine Associates, a weekly blogger for MedPage Today, follows what is happening in the world of primary care medicine from the perspective of his own practice.

I don’t know about you, but right now I’m concerned that someone out there is trying to figure out how to use AI to make healthcare workers work harder, not better.

And above all, I’m concerned that that “someone” is not us.

I’m concerned that there are a lot of forces out there, some aligned, some working independently, that are taking a look at AI and saying, “That’s the answer! This is how we’re finally going to fix the healthcare system.” But I’m worried it’s not us.

We’ve seen this happen before, with seemingly well-meaning people who feel they know the best way to do things, telling those of us on the front lines how to practice medicine. Someone tries to tell us what drugs we can prescribe, what treatments and procedures will be covered for our patients, how to write a note in the electronic health record, or what we are obligated to check.

The art of medicine

All those years ago, when we learned how to write a progress note in medical school, we were taught by people who never really worried about creating a billing-compliant document. My greatest mentors have been outstanding doctors, brilliant diagnosticians, and caring, compassionate people; they weren’t interested in committing fraud, they were interested in taking care of patients.

They viewed the process we went through — no, make that the craft: the history, the physical examination, the data collection — as a work of art. They drew on stored knowledge, collective memory, the literature, and so much more, and all of it went into taking care of people. It wasn’t just about making sure I reviewed 10 organ systems, each with a large collection of detailed symptoms that weren’t really relevant to patient care that day.

Of course, a long time ago in medical school I remember being taught what a “complete” review of systems involved, all the stuff you had to go through to make sure you didn’t miss anything. But after a while, much of that falls by the wayside. The more experience a doctor has, the less they rely on these obscure and often irrelevant elements.

The powers that be have insisted that we continue to include them, along with so many other ephemera and curiosities to flesh out our notes and keep someone other than ourselves, our colleagues and our patients healthy and happy. Some have also decided that we should ask for a pain score at every office visit, check for depression and suicide at every office visit, ask about falls at every office visit, and tick a series of boxes on the social determinants of health when no one has ever really demonstrated that doing all of this solves these problems directly.

A solution that leads to problems

Now comes something big, promising, and actually quite terrifying: the looming promise of AI in healthcare. Even as I write this, there are probably meetings, conferences, and think tanks underway, full of people with brilliant ideas about how to use it to do more in healthcare.

But those of us who work in the day-to-day world of patient care know that solutions like this, more often than not, create more problems than they solve. If an AI system just generates a vast differential diagnosis and pushes a huge number of suggestions at us about what to do next, we end up being forced to cover those bases, order those tests, and follow those misguided paths in our patients’ care.

Over the past few months, there have been a number of well-publicized examples of what have been described as “hallucinations”: an AI fabricating data to support something it claims is true. There was the recent lawsuit against an airline in which the plaintiff’s lawyer submitted a brief to the judge full of nonexistent case citations created by an AI chatbot. When the lawyer asked the system to confirm the cases were real, it did so with conviction.

But they weren’t.

See the potential

As I’ve written before, I feel there’s really incredible potential for these types of systems to take on a lot of the rote, labor-intensive, repetitive tasks and make our lives easier, not harder. And perhaps, as our radiologist colleagues are already finding, they can serve as an aid working alongside us, helping us not to miss anything, without overdoing it and creating excessive worry.

I see a future where intelligent systems like these work as our assistants, helping to ensure that tasks get done, that patients are reminded of what they are due for, that appropriate follow-up appointments are made and kept, that the data is collected and collated appropriately, perhaps even with some useful interpretation and suggestions provided for good measure.

But if we let pharmaceutical companies, insurance companies, hospital systems, electronic health record makers, and others build these tools without the direct input and guidance of those who will ultimately use them, we risk plunging once again into an even deeper quagmire of stuff that people think is a really good idea but that ends up not really helping anyone.

So I wish the people working on this, at every tech company out there, would be willing to reach out to primary care physicians, surgeons, radiologists, ophthalmologists, dermatologists, everyone on the healthcare team, and ask us how we think this stuff might help. Let them show us what it can do, then let us suggest ways it could help, and warn them about what it shouldn’t do.

If that happens, we’re more likely to end up with something that saves money, prevents burnout, and saves lives. As we learned in the first Terminator movie (1984) (and every one since), we always let things get out of hand. Skynet, the AI system they thought would fix everything, began to learn rapidly and became self-aware at 2:14 a.m. on August 29, 1997. Who could have predicted that would happen?

It’s amazing to me how long ago that date now seems. And how worrying that we may still not have learned anything.



