Robots Learn to Do Surgery on Their Own by Watching Videos
The artificial intelligence boom is already making its way into medicine through AI-generated visit summaries and AI-assisted analysis of patient conditions. Now, new research shows how training techniques like those behind ChatGPT can be used to teach surgical robots to operate on their own.
Researchers from Johns Hopkins University and Stanford University built a training model using video footage of human-controlled robotic arms performing surgical tasks. By having the robots learn to mimic the actions shown on video, the researchers believe they can do away with the need to program each individual movement a procedure requires. From the Washington Post:
The robots have learned to wield needles, tie knots and suture wounds on their own. Moreover, the trained robots go beyond mere imitation, correcting their own slip-ups without being prompted ― for example, picking up a dropped needle. The scientists have already started the next phase of the work: combining all of the different skills in complete surgeries performed on animal cadavers.
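In machine-learning terms, this is imitation learning. The sketch below is a minimal, hypothetical illustration of its simplest form, behavioral cloning, in which a small policy network is trained to reproduce a demonstrator's arm commands from observations; the network size, feature dimensions, and data here are stand-ins, not the researchers' actual setup.

```python
# A minimal behavioral-cloning sketch (illustrative only, not the study's code).
# A policy network learns to map observations (e.g., encoded video frames)
# to the robot-arm actions a human demonstrator took at the same moment.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps a flattened observation vector to an arm-action vector."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Stand-in demonstration data: observations paired with expert actions.
# In the real work, these would come from video of human-controlled arms.
obs = torch.randn(1024, 64)      # 1024 frames, 64-dim features each (assumed)
actions = torch.randn(1024, 7)   # 7-DoF arm command per frame (assumed)

policy = Policy(obs_dim=64, act_dim=7)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    pred = policy(obs)                # predict actions from observations
    loss = loss_fn(pred, actions)     # penalize deviation from the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The appeal of this approach is exactly what the researchers describe: instead of hand-programming every movement, the system distills movements from demonstrations, and the same recipe scales as more demonstration video is collected.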
To be sure, robots have been used in operating rooms for years now; back in 2018, the “grape surgery” meme highlighted how robotic arms can assist with surgery by providing a higher level of precision. Approximately 876,000 robotic-assisted surgeries were performed in 2020. Robotic instruments can reach places in the body and perform tasks there that a surgeon’s hand never could, and they don’t suffer from hand tremors. Small, precise tools can spare nerves from damage. But these robots are typically guided by a surgeon with a controller, and the surgeon is always in charge.
Skeptics of autonomous robots worry that AI models like the one behind ChatGPT are not truly “intelligent,” but simply mimic what they have seen before, without understanding the underlying concepts they are dealing with. The endless variety of pathologies, across an endless variety of human bodies, poses a challenge: what if an AI model encounters a condition it has never seen before? Something can go wrong in a split second during surgery, and what if the AI has not been trained to respond?
At a minimum, autonomous robots used in surgery would need approval from the Food and Drug Administration. In cases where doctors use AI to summarize patient visits and make recommendations, FDA approval is not required, because the doctor is technically supposed to review and sign off on anything the AI generates. That is worrying, since there is already evidence that AI bots make bad recommendations, or hallucinate and insert information into visit documentation that was never discussed. How often will a tired, overworked doctor rubber-stamp whatever the AI produces without examining it closely?
It sounds reminiscent of recent reports about how the Israeli military has relied on AI to pinpoint attack targets without vetting the information. “The soldiers were not properly trained in the use of the technology and attacked people without corroborating [the AI’s] predictions at all,” a Washington Post story reads. “Sometimes the only verification required was that the target was male.” Things can go wrong when humans become complacent and aren’t sufficiently in the loop.
Health care is another high-stakes field, with stakes certainly higher than in the consumer market. If Gmail summarizes an email incorrectly, it’s not the end of the world. An AI system misdiagnosing a health problem, or making a mistake during surgery, is a much more serious matter. And if that happens, who is responsible? The Washington Post interviewed the director of robotic surgery at the University of Miami, and this is what he had to say:
“The stakes are very high,” he said, “because this is a matter of life and death.” Every patient’s anatomy is different, as is the way a disease behaves in each patient.
“I look at [the images from] CT scans and MRIs and then do the surgery” by controlling robotic arms, Parekh said. “If you want the robot to perform the surgery by itself, it will have to understand all of the imaging, how to read the CT scans and MRIs.” The robots would also have to learn to perform keyhole, or laparoscopic, surgeries that use very small incisions.
The idea that an AI will never fail is hard to take seriously when no technology has ever been perfected. Sure, this autonomous technology is interesting from a research perspective, but the blowback from a botched surgery performed by an autonomous robot would be monumental. Whom do you punish when something goes wrong? Whose medical license gets revoked? Humans aren’t infallible either, but at least patients have the peace of mind of knowing their surgeon has gone through years of training and can be held accountable if something goes wrong. AI models are crude simulacra of humans, sometimes behave in unpredictable ways, and have no moral compass.
If doctors are tired and overworked, one of the reasons the researchers suggest this technology could be valuable, then perhaps the systemic problems causing those shortages should be addressed instead. It is widely reported that the US is facing a severe shortage of doctors, owing in part to the growing inaccessibility of the profession. The country is on track to face a shortage of 10,000 to 20,000 surgeons by 2036, according to the Association of American Medical Colleges.