Every five years, a team of researchers and industry leaders with expertise in a range of fields tied to artificial intelligence comes together to produce a report on the most significant questions and developments in AI as part of the One Hundred Year Study on Artificial Intelligence. “Gathering Strength, Gathering Storms” — the latest report and second edition of AI100 — aims to capture a growing sense of responsibility about how to proceed with AI.
The report reflects a shift in public conversation from excitement about the technology to concerns about whether it is being, and will be, used equitably and ethically, said Michael Littman, professor of computer science and chair of the study panel.
The main takeaway is the “mixed reception that AI is having in society,” said Steven Sloman, a member of the study panel and professor of cognitive, linguistic and psychological sciences. “On one hand, it’s making our lives easier and doing some work to (improve) judgments and decisions we make, but, on the other hand, it’s creating dangers that we haven’t faced before,” he said.
Shifting perspectives on human vs. machine intelligence
Over the last five years, the view that human intelligence is collective — that individuals are just a part of a greater intellectual machine — has gained prominence, the report found.
“AI started out as a field that was very closely related to cognitive science,” with the “common goal of trying to understand intelligence,” Sloman said. “There was this underlying assumption that humans were the intelligent agent, and so, if we tried to make computers smart, we would necessarily be learning something about people.”
“As the fields have evolved, it's become clear ... there are kinds of intelligence that machines have that humans don't,” Sloman said.
Contributions of individual humans to the collective intelligence will differ from those of machines due to their different strengths, according to the report.
“Machines have bigger memories and faster processing speeds, and in some ways, they have more sophisticated learning algorithms that don’t suffer the intrusions of other human demands, like the need to deal with our emotional reactions,” Sloman said. These forms of intelligence allow machines to be better chess players than any human being, for instance, because chess involves “skills that it turns out humans aren't the best at,” he added.
A mission to study AI over 100-plus years
The mission of the AI100 project is to “raise awareness of what we find to be the most important issues of artificial intelligence across these different stakeholders and audiences,” which include policymakers, the general public and researchers, said Peter Stone, chair of the AI100 Standing Committee, professor of computer science at the University of Texas at Austin and executive director of Sony AI America. After the release of the previous 2016 report, Stone, who chaired the study panel, was invited by the prime minister of Finland, Juha Sipilä, to discuss the report.
AI100 is unique in that it is a longitudinal study. “We’re going to be writing a report every five years for the next 100 years at least, and possibly beyond,” Stone said.
As this is the second report produced by AI100, it sets “the trajectory for the years to come” on how reports will build off of one another, Stone added. In determining the direction of the report, the Standing Committee thought carefully about how they could establish a pattern that could be followed in future reports, he said.
In addition to setting the intellectual direction of the study, Stone and the rest of the Standing Committee are responsible for inviting the chair of the study panel and working with them to staff the remaining panel members. The Standing Committee’s first choice was Littman due to his expertise, “skill and desire to be able to reach out to the public (and) to be able to reconcile different views and perspectives,” according to Stone.
“It was important to us that (the study panel) included a large variety of perspectives and diversity in all the relevant ways,” Stone said.
The first report faced some criticism that it was not sufficiently inclusive of diverse perspectives, Littman added.
According to an article reflecting on the creation of AI100, “the shortened time frame (of the first report) led to the Study Panel being less geographically and field diverse than ideal, a point noted by some report readers.”
For this panel, around half of the members have expertise in social issues, rather than a technical computer science background, Littman said. This representation creates room to think beyond just structuring programs, asking more fundamentally: “What should these programs even be?” he said.
Envisioning the long line of AI100 reports laid out on a shelf someday in the future, Littman aimed to have their “time capsule” on the shelf “accurately capture the snapshot of what is on people's minds right now.”
Tackling controversies over what role AI should play
The study panel spent a “fair amount of time discussing ethical issues,” Sloman said.
Part of the problem with training AI to conform to ethical principles lies in differences of opinion about what is right. “That is a challenge for AI, because it has to make decisions about what kind of information it’s going to produce,” he said, adding that this leaves machines to navigate competing ethical views.
Despite the range of voices represented on the panel and the controversial topics discussed, the panel overall encountered fewer conflicts of opinion than expected, Littman said.
One topic of particular controversy was whether AI has potential as a beneficial tool for the military or should be banned from it, with outspoken advocates on both sides of the question finding common ground on the panel, he said. “The greater integration of AI by militaries around the world appears inevitable in areas such as training, logistics and surveillance,” the report finds, stating that “governments will need to work hard to ensure they have the technical expertise and capacity to effectively implement safety and reliability standards.”
Another complex topic discussed was AI’s capability to replace human caregivers. The report makes the claim that “the use of AI technologies in caregiving should aim to supplement or augment existing caring relationships, not replace them,” adding that aspects central to caregiving like respect and dignity cannot currently be coded into procedural algorithms.
Littman, who was previously involved in a National Science Foundation grant proposal to create robots to support those who would like to live at home independently longer, said his own perspective changed on the issue over the past few years. While he felt the research community at the time was pushing to roll out new AI applications as quickly as possible to improve people’s lives, now “I definitely see the complexity of it much more clearly,” he said.
“It's really hard to put the genie back in the bottle,” Littman said, referring to the lack of foresight on how recommendation algorithms control how we interact with information on our devices. The AI community originally built recommendation systems with largely good intent, but “it morphed into this thing, which is a little bit monstrous,” he said.
“We have to roll (AI) out in a really thoughtful way … that just takes time, that takes attention and it takes expertise from a lot of different kinds of people.”
Looking ahead to 2026
Currently, AI systems are programmed to crunch numbers and mine patterns; unlike humans, the systems are unaware of the larger goal behind an action and the interrelation of people and objects in the world, Littman said.
By the next report in 2026, Littman hopes that the AI panel will be able to report that there has been progress on “making AI more cognizant about the world that it lives in and the impact that it has, and that the field as a whole has done a better job of integrating its contributions into broader society.”