
AI needs to be controlled. But lazy humans may not be up to the job

Artificial intelligence, for all its benefits, needs human oversight. Government reports and experts all over the world have stressed the importance of keeping a human decision-maker in the loop when using AI.

“Human agency and oversight” is the first key requirement laid out in the EU Commission’s white paper on the regulation of AI, published earlier this month; and establishing oversight of “the whole AI process” is also a recommendation from the UK’s Committee on Standards in Public Life. Only this week, Metropolitan Police chief Cressida Dick reiterated her commitment to having human workers always make the final decisions in policing, rather than letting new technologies overrule officers’ authority.

In the US, the Pentagon released guidelines last year on the ethical use of AI for military purposes. Among its chief recommendations, the document highlights the need for an “appropriate” level of human judgement whenever an autonomous system is deployed.


It’s all well and good to recommend that humans consistently monitor the decisions made by AI systems, especially when those decisions affect high-stakes fields like warfare or policing. But in reality, how good are humans at catching the flaws of those systems?

Not good enough, according to Hannah Fry, associate professor in the mathematics of cities at University College London. Speaking at a conference organised by tech company Fractal in London, Fry explained that having a human overseeing an AI system does not entirely solve the problem, because it does little to overcome innate human flaws. According to Fry, we place excessive trust in AI systems, with consequences that can sometimes be dramatic.

“If there is one thing you can say for sure, it’s that you cannot trust people,” said Fry. “As humans, we are lazy and we take cognitive shortcuts. Misplacing our trust in machines is a mistake that all of us are capable of doing.”

Case in point: a few years ago, three Japanese tourists found themselves driving into the Pacific Ocean off the coast of Australia while trying to reach North Stradbroke Island, because their GPS system had failed to account for the nine miles of water lying between the island and the mainland.

The anecdote might be entertaining, but it turns out that people are a lot more like those Japanese tourists than we’d like to think, said Fry. In this case, the main damage caused by the tourists’ over-reliance on GPS technology was the loss of their rented Hyundai Getz; but our trust in technology can come at a much greater cost, for example when we come to rely on self-driving cars.

In driving, she explained, humans are bad at paying attention, at staying aware of their surroundings and at performing under pressure. And yet, she noted, the idea behind driverless cars is that the human monitor should step in at the last possible moment and operate at peak performance, precisely when it matters most.

Having a human overrule the automated decision-maker in the car? “That’s not something that’s going to happen,” warned Fry.

That is not to say that algorithms should not be deployed at all. Quite the opposite: Fry herself is a self-professed defender of artificial intelligence, and of the huge benefits it could bring to fields such as healthcare. But there is one simple rule that should apply to all AI systems, according to the mathematician: we should only use algorithms as long as we can trust humans to overrule them when necessary.

In a research paper published in 2018, the Pew Research Center surveyed almost 1,000 technology experts to gather their insights on the future of humans in the age of artificial intelligence. One of the main takeaways was a concern similar to Fry’s: that people’s growing dependence on algorithms would eventually erode their ability to think for themselves.

The solution, for Fry, lies in adopting a “human-centric” approach when developing new technologies; in other words, an approach that accounts for human flaws. The mathematician advocated for a “partnership” between humans and machines that could combine the best of both – while also ensuring that there is always space for humans to question the algorithm’s results.

One particular field where such a partnership could have promising results is healthcare. When diagnosing cancer, for instance, doctors need to be both sensitive, so as not to miss any sign of a tumor, and specific, to avoid unnecessarily flagging healthy tissue as suspicious.

While humans are “rubbish” at sensitivity, Fry said that algorithms are “ultra-sensitive”; and on the other hand, she described specificity as our “human superpower”. Combining both sets of skills, she concluded, could have tremendous results for healthcare. 
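For readers unfamiliar with the two terms: sensitivity is the share of genuine tumors a screening process catches, while specificity is the share of healthy cases it correctly leaves alone. The short sketch below, using purely hypothetical counts rather than any figures from Fry’s talk, shows how the two metrics are calculated from screening results.

```python
# Illustrative only: hypothetical screening counts, not data from Fry's talk.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of actual cancers that the screen correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of healthy cases that the screen correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: 100 scans, 10 of which actually contain a tumor.
tp, fn = 9, 1      # the screen catches 9 of the 10 tumors
tn, fp = 72, 18    # but wrongly flags 18 of the 90 healthy scans

print(f"Sensitivity: {sensitivity(tp, fn):.0%}")   # 90% -> few missed tumors
print(f"Specificity: {specificity(tn, fp):.0%}")   # 80% -> many unnecessary call-backs
```

In Fry’s framing, an “ultra-sensitive” algorithm pushes the first number up, while the human reviewer’s judgement keeps the second from collapsing into a flood of false alarms.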

“This is the kind of future I am hoping for,” she said. One where we acknowledge, when deploying new technology, that it’s not only machines that have flaws, but humans too.


Source: Information Technologies - zdnet.com
