19 June 2024

What can be said about the future of artificial intelligence?

The third and last Ar Event of the series Past, Present and Future of AI took place in May at the Champalimaud Foundation. Three invited speakers shared their views on what artificial intelligence may or may not become in the coming years.

Artificial intelligence (AI) is a spectacular tool that has been under development for decades; its role in our lives is already pervasive and will inevitably grow; and, most importantly, we can, through regulation, avoid its abuses (such as fake news and the manipulation of human beings). Indeed, whatever the future of AI may be, we have the power to choose – wisely – to use it for the common good.

In a nutshell, this is the take-home message of the series of three Ar Events on the theme “Past, Present and Future of AI” that have taken place at the Champalimaud Foundation (CF) over the last few months.

The past and the present of AI were discussed during the first two events of the series (see past & present).

The last event, held in May under the title “Shaping Tomorrow’s Intelligence”, featured three invited speakers – guided by Sabine Renninger and Scott Rennie, two CF neuroscientists acting as “creative directors”, shaping the event and adding context – who presented their visions of the future of AI and compared the natures of animal/human and artificial intelligence.

The debates were launched by Rennie, who listed some of the achievements credited to AI: “GPT-4 recently passed the bar exam, and AI has become as good as or better than radiologists at cancer screening.” However, not all uses of AI are for the common good: while AI is being used to detect and target dangerous asteroids, it is also being used “to define targets in the ongoing destruction of Gaza”, Rennie said.

“But the future is unpredictable”, he added, “and large companies – in fact, the same people – are announcing both a coming AI utopia and a coming apocalypse. They make this argument to make AI seem inevitable and out of our control. Nothing about AI is inevitable”, he pointed out.

“What do we want artificial intelligence to be like if it is to become more like our own?”, Rennie then asked. “Do we want AI systems that just make better predictions, or AI systems that will help explore our differences so as to better understand ourselves?” The question of the evening, he announced, was: “How should we shape tomorrow’s AI?”

Can the future of AI be predicted?

The first speaker was Luís Correia, professor at the Department of Informatics of Faculdade de Ciências of Universidade de Lisboa, whose research interests are computational artificial life, autonomous robots, self-organisation in multi-agent systems and machine learning.

“I won’t make predictions”, Correia said from the start. But he emphasised the need for humans “to choose between living happily ever after or taking some action against The Matrix” – in other words, the need to choose seriously, and not just for fun, what they want from AI. 

As to AI versus natural intelligence, Correia stated that while the former was developed only recently and is limited to symbolic reasoning, the latter took billions of years of evolution and the use of sensorimotor capabilities to get where it is today. “All forms of natural intelligence are embodied, while AI is not”, he added.

Correia then asked, in teaser mode: “Will AI be embodied in a network [such as the Internet] in the near future?” Will it become a networked intelligence? He gave no answer to this.

In turn, we may ask ourselves: can there be true intelligence (and true understanding, and true consciousness) without a body? Or will AI always remain a mere (but powerful) simulation of intelligence?
 
For Correia, the best future for AI will be something in between: a collaboration between AI and natural intelligence. “Collaboration is currently one of the most interesting and positive aspects of using AI”, he pointed out. “AI is a tool that empowers humans to solve big problems faster”.

But AI also “amplifies the bad things in our society”, Correia cautioned: “unethical activities, fake news, fake images…”. And it amplifies our own biases through the biased datasets we use to train it.

Correia concluded that “AI’s greatest risk is to use it without regulation” – and, returning to his opening point, that “we should not use it just to amuse ourselves”.

Can we give AI a “soul”?

The second speaker was Kevin Mitchell, Professor of Genetics and Neuroscience at Trinity College Dublin. His current research focuses on the biology of agency and the nature of genetic and neural information.

Mitchell started by saying that some people’s vision of AI goes as far as claiming: “give AI all the knowledge in the world and it’ll acquire a ‘soul’” – that is, it will become an entity, it will acquire agency. He then countered this view. “I want to argue that expecting agency to pop up is just the opposite of natural evolution. Agency and autonomy come first; intelligence evolves later for survival”, he said. So on the question of natural versus artificial intelligence, the two could hardly be more distant, more different, according to him.

“Evolution fashioned organisms that made sense of the world and acted on it, tested it, explored it”, Mitchell continued. “Organisms that don’t just sit there like AI machines”. And while babies are really good at interacting with the world, LLMs (large language models such as ChatGPT) are bad at it, he added. “Currently, AI lives in logical reasoning and language, but that’s not where most of natural intelligence lives.”

Could we create autonomous artificial agents in the future? “I believe we can – but should we?”, Mitchell replied. If we do, there will be questions of ethical and moral responsibility. “Who will be responsible for the behaviour of these new moral agents?”, he asked. “What will the companies [who made these AI machines] say about who’s responsible for the evil AI may do?” Could they claim they are not responsible?

Can an AI become a moral entity?

The third of the evening’s speakers was Pooja Viswanathan, based in Lisbon and currently working as a freelance scientist, writer and ceramic artist. With a PhD and an MS in neuroscience from the University of Tübingen, Germany, she is interested in intelligent behaviour in primates and neural networks.

“I argue that we should ground AI on a moral consideration framework”, Viswanathan declared. “Can AI systems be qualified as moral agents?”, she asked. “No”, she answered.

“Can they be qualified as moral patients?”, she asked next. Specifically, can we extend our ethical considerations to AI the way we extend them to humans and animals? In other words, can we feel moral obligations towards AI, as we do towards suffering, defenseless animals?

AI does not seem to meet the criteria we use for animals, replied Viswanathan – such as the need to reproduce, the capacity to suffer, biological complexity, intelligence, genetic similarity, beliefs about the future, sociability, culture, norms. 

“Is AI social?”, Viswanathan then asked. Her answer: “AI does not form social groups”. “Does AI have beliefs about the future? Animals do – they make food reserves – but there aren’t many indications that AI does”, she continued. “Is AI intelligent? Octopuses and crows are intelligent, dolphins are creative – but not AI”, she pointed out. “Is AI sentient? Animals feel pain, suffer injuries that can be treated. AI? No.” Clearly, AI cannot be considered a moral patient.

This led Viswanathan to say: “We had no hand in animal evolution, but we do in AI evolution. Maybe we should not create sentient, embodied AI and bestow responsibility on it.” Why? “Because we are the ultimate moral agents, even if AI does things that are morally questionable.” Once again, what is at stake here is the need to choose wisely and carefully the kind of AI we want for the future.

The evening ended with a general discussion between the speakers, led by Renninger and shaped by the questions and concerns submitted by the audience. Among the topics broached: “Do you think that AI will move from reasoning to embodied intuition?”, Renninger asked. From Viswanathan: “Likely”; from Mitchell: “Is it really reasoning?”.

Another question: “Can we create a test to detect when AI machines reach reasoning?” Mitchell again: “We created something we don’t understand”. Correia: “We have to look at AI as a set of possibilities of new forms of embodiment.”

Other challenging issues were also raised. The whole event was indeed “aimed at stimulating critical thinking about the challenges posed by certain AI developments and more inclusive discussions about how we want AI to be part of future lives”, Renninger said.

 

Text by Ana Gerschenfeld, Health & Science Writer of the Champalimaud Foundation.
Ar Series on Artificial Intelligence - Shaping Tomorrow's Intelligence