How do we really judge AI?

by Brenden Burgess


Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now suppose you are applying for a job at a company whose HR department uses an AI system to screen résumés. Would you be comfortable with that?

A new study finds that people are neither entirely enthusiastic nor entirely opposed to AI. Rather than falling into camps of techno-optimists and Luddites, people weigh the practical upshot of using AI, case by case.

“We propose that AI appreciation occurs when AI is perceived as being more capable than humans, and personalization is perceived as being unnecessary, in a given decision context,” says MIT professor Jackson Lu, co-author of a newly published paper detailing the study's results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are met.”

The paper, “AI Aversion or Appreciation? A Capability-Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is an associate professor of work and organization studies at the MIT Sloan School of Management.

A new framework adds insight

People's reactions to AI have long been the subject of extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI over advice from humans.

To reconcile these mixed results, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people's preferences for AI versus humans. The researchers tested whether the data supported their proposed “capability-personalization framework”: the idea that in a given context, both the perceived capability of AI and the perceived need for personalization shape our preference for either AI or humans.

Across the 163 studies, the research team analyzed more than 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants felt comfortable with AI being used in cancer diagnoses. The analysis confirmed that the capability-personalization framework helps account for people's preferences.

“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions matter: individuals evaluate whether AI is more or less capable than people at a given task, and whether the task calls for personalization. People prefer AI only if they think AI is more capable than humans and the task is nonpersonal.”

He adds: “The key insight here is that perceived capability alone does not guarantee AI appreciation. Personalization matters too.”

For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI's capabilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts such as therapy, job interviews, or medical diagnoses, where they believe a human is better able to recognize their unique circumstances.

“People have a fundamental desire to see themselves as unique and distinct from others,” Lu says. “AI is often viewed as impersonal and operating in a rote way. Even if AI is trained on a wealth of data, people feel AI can't grasp their personal situation. They want a human recruiter, a human doctor who can see them as distinct from others.”

Context also matters: from tangibility to unemployment

The study also revealed other factors that influence individuals' preferences for AI. For example, AI appreciation is more pronounced for tangible robots than for intangible algorithms.

The economic context matters as well. In countries with lower unemployment, AI appreciation is more pronounced.

“That makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you are less likely to embrace it.”

Lu continues to examine people's complex and evolving attitudes toward AI. While he does not consider the current meta-analysis the last word on the matter, he hopes the capability-personalization framework offers a valuable lens for understanding how people evaluate AI across different contexts.

“We don't claim that perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people's preferences for AI versus humans across a wide range of studies,” he concludes.

In addition to Lu, the paper's co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.

The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
