Who Is Responsible When AI Fails?

Gautham Thomas
JUNE 27, 2018

In March, Elaine Herzberg was walking her bike across a street in Tempe, Arizona, unaware of the Uber self-driving car moving toward her at 38 miles per hour. The car, which had a human safety driver behind the wheel but not in control, struck Herzberg, and she later died at a hospital. It was perhaps the first pedestrian fatality involving a self-driving car. Herzberg’s daughter and husband were preparing to file a lawsuit, but 10 days after the crash, Uber settled with the family for an undisclosed sum.

After the accident, Arizona Governor Doug Ducey suspended Uber’s self-driving car testing in the state, according to a letter he sent to the company. At the time, he said, about 600 vehicles with “automated driving systems” from several companies were cruising Arizona roads. The National Transportation Safety Board (NTSB) is still investigating the crash. A preliminary report was expected in May, but the full investigation could take two years to complete, said Christopher O’Neil, a spokesman for the board.

>> LISTEN: Finding Fault When AI Kills

There have been only a handful of fatal accidents involving self-driving cars, so case law guiding liability has not been clearly established.

For healthcare applications of artificial intelligence (AI), not even that nascent body of case law exists. Self-driving systems are already being tested at scale on public roads, but AI systems in healthcare do not yet exercise the same degree of autonomous control over patients’ lives.

In the healthcare sector, what is broadly called AI takes a number of forms, from simple algorithms programmed to create efficiencies to machine-learning systems that analyze images and make treatment recommendations. Some AI applications go beyond diagnostic tools, equipping robots to deliver treatment. As AI systems become more integrated into patient care and make more autonomous decisions, liability questions grow more pressing in an environment with little legal guidance.


Healthcare Diagnostics and Tools Are Not “True AI”

“We’re not operating in the world of true AI yet,” said Tracey Freed, JD, a transactional attorney who teaches at Loyola Law School’s Cybersecurity & Data Privacy Law program. “We haven’t reached artificial general intelligence, where machines are making autonomous decisions on their own.”

To anticipate that development requires “thinking of what that world could look like and what liability looks like,” Freed said. “From a legal perspective, [AI could be considered] ‘agents’ of these companies or hospitals. Having these machines be my agent means I take on the liability for any incident that arises because of the machine. That’s the situation we’re more likely in today.”

>> LISTEN: Is AI Real?

An Israeli technology company, Ibex Medical Analytics, designed its Second Read system to check clinical pathology diagnoses. To train the system, the company fed it thousands of images of prostate core needle biopsies, teaching it to distinguish benign samples from cancerous ones.

Such tools, machine-learning systems trained on massive amounts of imaging or other data to distinguish among possible results, have become increasingly common. The federal National Institute of Standards and Technology has begun pushing for the adoption of standards for measurement and medical imaging. Those benchmarks could ease the design of machine-learning systems and make them more broadly useful.
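To make that general pattern concrete, here is a minimal, hypothetical Python sketch of this kind of supervised training: labeled examples go in, and a classifier that separates two diagnostic categories comes out. It uses randomly generated stand-in data and a simple linear model from scikit-learn; it does not reflect Ibex’s actual pipeline, and every name and number in it is illustrative only.

```python
# Hypothetical sketch of supervised training for a two-class diagnostic tool.
# The "images" here are random stand-in arrays, not real biopsy slides.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for thousands of labeled biopsy images: each "image" is a flattened
# 64x64 grayscale array; labels are 0 = benign, 1 = cancerous.
n_samples, image_pixels = 2000, 64 * 64
images = rng.normal(size=(n_samples, image_pixels))
labels = rng.integers(0, 2, size=n_samples)

# In a real system the signal comes from tissue morphology; here we inject a
# weak synthetic signal so the example has something to learn.
images[labels == 1, :100] += 0.5

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0
)

# Fit a simple linear classifier on the labeled training examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A "second read" style check: score held-out cases, so that cases where the
# model disagrees with the original diagnosis could be flagged for review.
scores = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, scores), 3))
```

In practice, systems like the one described above rely on deep convolutional networks and expert-labeled pathology slides rather than a linear model on random arrays; the sketch shows only the shape of the workflow, with labeled examples in and a trained classifier producing a per-case score out.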

