
Roboticists of differing faiths discuss future of artificial intelligence

Panel hosted by the Veritas Forum discusses the implications of the future of AI technology

As the sophistication of artificial intelligence continues to increase and AI moves further from the realm of science fiction, a philosophical question arises: When does the line between humans and robots fade away entirely?

Questions along this line of thought were addressed in the Veritas Forum’s panel “Can Robots Become Humans?” that took place Wednesday, Nov. 1. The panel was moderated by Thomas Doeppner, vice chair of the computer science department, who posed a number of questions to Rosalind Picard, founder and director of the Affective Computing Research Group and a Christian, and to Michael Littman, co-director of the Humanity Centered Robotics Initiative and an atheist. The sometimes differing perspectives that came from two people of different belief systems offered a multidimensional view on the intersection of humanity and artificial intelligence.

The forum began with a question of what differentiates humans from artificial intelligence. Computer scientists work on the “body and brain” of a computer in the sense that they build the hardware and software, Littman said. One can make a “body” that resembles a human, but if the “brain” cannot make judgments, it is easy to tell humans and robots apart. The line begins to blur as the software becomes more human-like. Still, because humans are highly sensitive to this sort of recognition, it would be difficult to build a robot that a human could not distinguish from another human.

Significant integration already occurs between humans and robots, Picard said. Many people would willingly forgo their own body parts for those of robots, such as an augmented leg. This raises questions of immortality, which is already an obsession among humans, Picard said.

A video clip from The Tonight Show Starring Jimmy Fallon showed the robot “Sophia” speaking with the host. Sophia played rock-paper-scissors against Fallon and even told a joke — “Maybe I should host the show,” she suggested.

When she first saw the video, Picard was “shocked.” Despite these advancements, there is still room for robots to improve, she said. The robot’s ability to play rock-paper-scissors arises from technology that reads human motion, sometimes before humans can themselves, she explained. Picard is not worried about robots taking over the world, something that Sophia recommended in the video.

Littman was likewise skeptical of the demonstration, suggesting that there was perhaps a puppeteer behind the curtains.

The ethics of robotics raises more questions. In the past, experimentation by computer scientists carried little ethical scrutiny; the situation is different now, as this sort of testing has real human impact, Littman said. He used the example of Facebook, which was designed to keep people on the site as long as possible. When its algorithms detected that people logged on most often when they were outraged, Facebook adapted to surface more outrageous content. This unintended consequence illustrates such detrimental effects, Littman said.

Fear-mongering is also at play: some people believe that artificial intelligence will seize control of the world. But Littman does not expect this to happen. Most humans do not want to take over the world and suppress other humans, he argued, so robots designed to be like humans would be similarly uninterested.

Picard pointed out that people who learn continuously are not trying to take over the world, so robots that learn the same way would not try either. Programmers could also build in safeguards to prevent such a takeover, she added.

On the topic of “stickiness,” or attachment to technology, Picard said that this relationship has reached the point of destroying lives. Emotions drive most of our actions, and if technology can manipulate our emotions, it can manipulate our actions as well. Robot creators should ensure that their work does not manipulate emotions in harmful ways, Picard said. Littman countered that if people know they are being emotionally manipulated, they will be able to guard against it in the future.

The discussion turned to the limits of technology and the possibilities of godliness and consciousness in robots. Robots can now beat people at games such as Go and chess, prompting suggestions that they handle other, more important tasks as well, Littman said. Robots are often lent more credibility because they are seen as objective, but they can be objective in the wrong ways, Picard pointed out.

In response to a student question, Littman said that technology is not yet able to recreate the brain’s consciousness. Consciousness is not understood, so it cannot be built, Picard said, adding that the only way she knows to create consciousness is through procreation.

Another audience member asked whether Christians would ever believe that robots could have a soul. Christians hold that souls are given only by God and can move to new bodies, and thus cannot be scientifically explained, Picard said. When pressed on whether software that can migrate to a new system could count as a soul, she responded that she had not yet seen software comparable to a soul.


All Content © 2024 The Brown Daily Herald, Inc.