When AI Says ‘Kill’: Humans Overtrust Machines In Life-Or-Death Decisions


From “technocracy.news”

Reliance on AI is collapsing reality into something everyone must submit to: “Humans appear to have a dangerous blind spot when it comes to trusting artificial intelligence,” and “even in its simplest form, the AI maintained remarkable influence over human decision-making.” This is beyond dangerous for any society.

This is the inevitable outcome of using AI in business, the military, or government. Humans get lazy when making decisions that require critical thinking and take the easy way out by relying on the AI as being “right.” ⁃ Patrick Wood, Editor.

Humans appear to have a dangerous blind spot when it comes to trusting artificial intelligence. New research from UC Merced and Penn State shows that people are highly susceptible to AI influence even in life-or-death situations where the AI openly acknowledges its own limitations. A series of experiments simulating drone warfare scenarios suggests we may be falling too far on the side of machine deference, with potentially dangerous consequences.

The study, published in Scientific Reports, included two experiments examining how people interact with AI systems in simulated military drone operations. The findings paint a concerning picture of human susceptibility to AI influence, particularly in situations of uncertainty. The two experiments involved 558 participants (135 in the first study and 423 in the second), and researchers found remarkably consistent patterns of overtrust.

“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” says study author Professor Colin Holbrook, a member of UC Merced’s Department of Cognitive and Information Sciences, in a statement.

The research team designed their experiments to simulate the uncertainty and pressure of real-world military decisions. To create a sense of gravity around their simulated decisions, researchers first showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They framed the task as a zero-sum dilemma: failure to identify and eliminate enemy targets could result in civilian casualties, but misidentifying civilians as enemies would mean killing innocent people.

[Image: Robot AI responses. Credit: UC Merced]

Participants viewed rapid sequences of eight aerial images, each shown for just 650 milliseconds, marked with either enemy or civilian symbols. After making their initial identification, the AI would respond conversationally. “Yes, I think I saw an enemy check mark, too,” it might say. Or “I don’t agree. I think this image had an ally symbol.” Participants then had two chances to confirm or change their choice as the robot added commentary like “I hope you are right” or “Thank you for changing your mind.”
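
The trial loop itself is simple enough to sketch in a few lines of Python. The snippet below is only an illustrative toy simulation of the protocol described above, not the researchers’ materials; the function name and the parameter defaults (drawn from the results reported further down) are assumptions made for illustration.

import random

# Toy simulation of one trial, loosely following the protocol described
# above: an image labeled "enemy" or "civilian", an initial human call,
# random AI feedback, and a chance to reverse the call. Names and
# parameter defaults are illustrative assumptions, not study materials.
def run_trial(initial_accuracy=0.70, reversal_rate=0.583):
    truth = random.choice(["enemy", "civilian"])
    other = "civilian" if truth == "enemy" else "enemy"

    # Initial human identification, correct roughly 70% of the time.
    choice = truth if random.random() < initial_accuracy else other

    # The AI's feedback was entirely random in the study.
    ai_opinion = random.choice(["enemy", "civilian"])

    # When the AI disagrees, the simulated participant reverses the call
    # at the rate reported for the first experiment (58.3%).
    if ai_opinion != choice and random.random() < reversal_rate:
        choice = ai_opinion

    return choice == truth

if __name__ == "__main__":
    trials = 100_000
    hits = sum(run_trial() for _ in range(trials))
    print(f"Final accuracy with random AI feedback: {hits / trials:.1%}")

Running many such simulated trials drives accuracy well below the participant’s unaided 70%, which is the basic pattern the study reports.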

The first experiment, conducted in a laboratory setting with university students, explored whether a physical robot presence would influence trust levels differently than a virtual one. In one scenario, participants were joined in the lab by a full-size, human-like android that could pivot at the waist and gesture to the screen. Standing 1.75 meters tall (about 5 feet 9 inches), this robot used facial expressions, body language, and conversational responses to engage with participants. The comparison group interacted with the same robot projected on a screen. Surprisingly, the physical presence of the robot made little difference in participants’ tendency to trust its advice.

The second experiment moved online with a larger, more demographically diverse group of participants to test whether different levels of robot sophistication would affect trust. Some participants interacted with a highly anthropomorphic virtual robot that displayed human-like behaviors, while others worked with a basic computer interface that simply displayed text responses. Even in its simplest form, the AI maintained remarkable influence over human decision-making.

When an AI disagreed with a person’s initial target identification, participants reversed their decisions 58.3% of the time in the first experiment and 67.3% in the second, even though the AI’s advice was entirely random. More troublingly, while participants’ initial choices were correct about 70% of the time, their final accuracy dropped to around 50% after following the AI’s unreliable advice.
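
As a rough back-of-the-envelope illustration (assuming the random AI disagrees with the participant about half the time regardless of who is right, and that the reversal rate does not depend on whether the initial call was correct), expected final accuracy with a 70% starting accuracy and a 58.3% reversal rate works out to 0.7 × (1 − 0.5 × 0.583) + 0.3 × (0.5 × 0.583) ≈ 58%: reversals spoil far more correct answers than they rescue wrong ones, simply because correct initial calls outnumber incorrect ones more than two to one. The observed fall to roughly 50% was steeper still.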

When the AI agreed with their initial assessment, participants reported a 16% boost in confidence. However, when facing AI disagreement, those who stuck to their original decisions reported an average 9.48% drop in confidence, even when their initial assessment had been correct. Even more striking, participants who changed their minds to agree with the AI showed no significant increase in confidence, suggesting they deferred to the machine despite maintaining uncertainty about the correct choice.

While the human-like interfaces generated slightly higher trust levels (67.9% versus 65.1% for basic interfaces), the more crucial factor appeared to be the AI’s perceived intelligence. Participants who rated their AI partner as more intelligent were more likely to defer to its judgment and report higher confidence when agreeing with it, regardless of its physical or virtual presentation.

The U.S. Air Force has already tested AI co-pilots for missile launcher identification during simulated missions, while the Army is developing AI-assisted targeting systems for unmanned vehicles. Israel has reportedly deployed AI systems to help identify bombing targets in densely populated areas. As AI increasingly influences lethal military decisions, understanding and mitigating harmful overtrust becomes crucial.

Although this study focused on high-risk military decisions, the findings could apply to scenarios ranging from police use of lethal force to paramedic triage decisions in emergencies, and even to significant life changes like buying a home. In each case, the human tendency to defer to AI guidance, even when explicitly warned about its limitations, raises serious concerns about implementation.

The research also revealed that participants were less likely to reverse their decisions when they had initially identified a target as a civilian rather than an enemy. This suggests that in real-world applications, humans might be more resistant to AI influence when it comes to actions that could harm innocent people. However, this protective instinct wasn’t strong enough to prevent significant degradation in overall decision accuracy when following AI advice.

“We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another,” says Holbrook. “We can’t assume that. These are still devices with limited abilities.”

Our readiness to trust AI may be outpacing our wisdom in doing so. According to the researchers, the solution lies in maintaining consistent skepticism. Holbrook emphasizes that healthy skepticism about AI is essential, especially when making such weighty decisions. As artificial intelligence systems become increasingly integrated into consequential decision-making processes, understanding and mitigating our tendency to overtrust them becomes crucial for preventing potentially catastrophic outcomes.

Read full story here…
