Doctors and nurses in North Adams, Massachusetts, may use robots to assist in operations. Even though new technology is helpful, it can still contribute to medical errors and complicate malpractice cases.
Innovation in medicine
Medical robots and chatbots help monitor vulnerable elders and assist them with simple tasks, and AI-driven therapy apps are helping some patients with mental illness. AI also powers drug-ordering systems that warn doctors about dangerous combinations. Robots even aid in surgery, making operations safer and more precise. But as AI becomes more embedded in medicine, liability questions grow murkier. If these machines fail, who's at fault?
Robotically assisted surgical devices, or RASDs, let surgeons control small cutting instruments instead of wielding scalpels directly. If a surgeon's hand slips and cuts a tendon, the surgeon is at fault under medical malpractice. But if a device's tendon-avoidance alarm fails and the surgeon doesn't catch the error in time, is the manufacturer at fault, or just the surgeon?
A therapist or counselor may tell a patient to use an AI therapy app to track cravings and feelings. These apps often direct patients to perform certain therapeutic exercises when the counselor isn't available. What if those exercises contradict one another and harm the patient? Is the app's publisher at fault, or just the counselor?
Standards for liability
Medical malpractice law applies to physicians who make mistakes. If a robot makes the mistake, the manufacturer could be liable instead: under product liability law, a strict liability standard governs adverse events caused by a defective product, covering the manufacturer, retailer, or distributor.
Under strict liability, a manufacturer can be held liable even without evidence of negligence. As AI and robots take on larger roles in medicine, courts will need clear ways to determine when manufacturers bear responsibility for their products' failures.