06 January 2024

Ar Event: Roots of AI - The decades-long, up-and-down story of artificial intelligence

Algorithms that try to simulate natural intelligence have been around for decades. Their development has known ups and downs – and even gone through periods of relative oblivion due to their alleged lack of potential. Today, thanks to neural networks, unprecedented computing power, and the availability of extremely large and diverse data sets, AI machines are making a spectacular comeback and superseding humans in many fields. How did it all start?

“HALLO,” Robin typed on the keyboard.
“HALLO, HOW ARE YOU FEELING?” replied ELIZA, the printed words appearing instantly on the VDU [computer screen]. 
“I’M FEELING TERRIBLE,” Robin replied. 
“WHAT DO YOU MEAN BY THAT PRECISELY?” 
“I’M FEELING DEPRESSED.” 
“WHY DO YOU THINK YOU ARE DEPRESSED?”
“BECAUSE I’M LONELY.” 
“DO YOU LIVE WITH YOUR FAMILY?”
“NO.”
“DO YOU HAVE ANY FRIENDS?” 
“NOT REALLY.” 
“WHY IS THAT?”

In his 1984 book “Small World”, British writer David Lodge imagines a series of conversations between one of his characters, university professor Robin Dempsey, and Eliza, an early natural language processing program – conversation-simulation software created in the mid-1960s by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT).

Eliza’s answers were modelled on the way a psychotherapist interacts with patients, namely by returning questions built from words in the patient’s last remark (“Why do you think you are depressed?”). Weizenbaum himself was surprised by how quickly people who interacted with Eliza attributed human-like feelings to it – which is precisely what happens, more and more, to Dempsey in Lodge’s novel.
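
For the technically curious, a minimal Python sketch of the kind of keyword-and-template matching Eliza relied on might look like this (the patterns and replies below are invented for illustration; Weizenbaum’s original script was far more elaborate):

```python
import re

# Illustrative Eliza-style rules: a regex pattern and a reply template.
# These toy rules are invented, not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"i'?m feeling (.*)", re.I), "WHY DO YOU THINK YOU ARE {0}?"),
    (re.compile(r"because (.*)", re.I),      "IS THAT THE REAL REASON?"),
    (re.compile(r"i'?m (.*)", re.I),         "HOW LONG HAVE YOU BEEN {0}?"),
]
DEFAULT = "PLEASE TELL ME MORE."

def reply(utterance: str) -> str:
    """Return the first matching template, echoing the captured words back."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups()).upper()
    return DEFAULT

print(reply("I'm feeling depressed"))  # WHY DO YOU THINK YOU ARE DEPRESSED?
```

No understanding is involved – the program simply matches surface patterns and fills in templates.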

Eliza was actually one of the first chatbots (as we would say today) to be developed – and “it didn’t understand a word”, as one of the Creative Directors (CDs) of the latest “Ar (Respire Connosco) Event” to date noted before an audience of interested laypeople. The event took place at the Champalimaud Foundation in mid-December 2023. Under the title “Roots of AI”, it was the first of a series of three on artificial intelligence (AI), focused on the history of the field.

Artificial intelligence is not new. The aim to create machines that think like human beings – or at least appear to do so – has been around for decades in the minds of computer scientists. 

But since Eliza, AI algorithms have come a long way towards making this goal a reality. Today, says a statement by the Champalimaud Foundation, which has recently embraced AI research in Oncology, “AI is all around us, accelerating medical drug development, selecting what content we see across social media or outclassing us on the most human of tasks, from driving to medical diagnoses”.

The evening had begun with the event’s CDs – Ana Maia, a PhD student at the Neuropsychiatry Unit of the Champalimaud Foundation, and Prannay Reddy, a PhD student in neuroscience at Champalimaud Research – asking: “What is intelligence?”. This elicited a multifaceted description, partly drawn from answers previously collected from the audience. Things like: “a combination of problem solving and memory”; “being able to connect the dots”; “the ability to adapt”; “the capacity to create”. Natural intelligence is clearly not confined to humans, since it serves “a basic survival strategy” shared by all living species. It is “diverse and context-specific”, they concluded.

Then came the turn of Tiago Marques, co-leader of the Digital Surgery Lab at the Champalimaud Clinical Centre, to explain the evolution of AI in his presentation, entitled “The Rise of the Algorithm”, which was the plat de résistance on the evening’s menu. In other words: How did we get here?

According to the 2018 definition of the European Commission, “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”. Therefore, said Marques, like natural intelligence, “AI is diverse and can exist – and adapt – in many contexts”.

It was John McCarthy, he continued – an American computer and cognitive scientist – who coined the expression “artificial intelligence” in 1956 (almost 70 years ago!). This happened during a gathering of a small group of scientists at Dartmouth College, in New Hampshire, for the Dartmouth Summer Research Project on Artificial Intelligence – a workshop on “thinking machines” and the event that marked the birth of AI as a field of research.

AI is essentially divided into two approaches: traditional or symbolic AI, best known for its “expert systems” – that is, computer programs based on a set of instructions derived from human expert thinking; and machine learning, the algorithms “that learn from data, have self-adaptive capabilities and are not explicitly programmed”, Marques explained. It is in this latter category that the popular artificial neural networks belong.
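
To make the contrast concrete, a toy expert system might look like the sketch below, where the “expertise” is entirely hand-written rules (the fruit features and thresholds are invented for illustration, not taken from the talk):

```python
# Symbolic ("expert system") approach: the knowledge is a set of
# hand-written rules; nothing is learned from data. The thresholds
# below are invented purely for illustration.

def classify_fruit(weight_kg: float, diameter_cm: float) -> str:
    if weight_kg > 3.0 and diameter_cm > 20.0:
        return "watermelon"
    return "melon"

print(classify_fruit(4.0, 25.0))  # watermelon
```

A machine-learning system, by contrast, would discover such thresholds on its own from labelled examples – which is exactly the kind of example Marques turned to next.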

“Neuroscience was invaluable to inspire the development of modern AI”, he stressed. Neurons are the building blocks of the nervous system, and software or hardware artificial neurons, which simulate how biological neurons process information, are the building blocks of learning machines.

How do these artificial neurons learn? Marques gave the example of how such a machine can be trained to distinguish watermelons from melons. Watermelons have several features, he said, such as colour, weight and size. The artificial network, which is organised into an input layer, an output layer and several internal, or hidden, layers, receives these numerical features, or inputs, multiplied by their importance, or weights, and gives a binary response depending on whether a certain threshold has been reached or not: 0 for No and 1 for Yes. This response is akin to the action potentials of real-world neurons, which fire or remain silent depending on certain conditions.
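
In code, a single artificial neuron of this kind reduces to a few lines. The sketch below is illustrative only – the feature values, weights and threshold are made up, not from Marques’s presentation:

```python
# A single artificial neuron: a weighted sum of inputs passed through a
# threshold. Feature values, weights and threshold are invented.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

fruit = [0.9, 4.0]            # hypothetical features: [greenness, weight_kg]
weights = [0.5, 1.0]          # how much each feature matters
print(neuron(fruit, weights, 2.0))  # 1 -> "watermelon", 0 -> "melon"
```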

The outputs are then compared to the expected responses (whether the inputted features were actually those of a watermelon or not), and wrong answers (errors) are “backpropagated” into the hidden layers of the neural network to correct – adapt – its internal parameters (the weights attached to the different features). The machine thus learns and gets better at recognising watermelons, by trial and error, during training.
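
A minimal illustration of this trial-and-error loop is the classic perceptron learning rule for a single neuron – a simpler relative of full backpropagation, which extends the same error-driven idea to the hidden layers of deeper networks. All the training data below are invented:

```python
# Learning from errors with the perceptron rule for a single neuron.
# Invented data: label 1 = watermelon, 0 = melon;
# features = [greenness, weight_kg].
examples = [([0.9, 4.0], 1), ([0.7, 1.5], 0),
            ([0.8, 5.0], 1), ([0.6, 1.0], 0)]

weights = [0.0, 0.0]
bias, rate = 0.0, 0.1

for epoch in range(20):                       # repeated passes over the data
    for features, label in examples:
        s = sum(x * w for x, w in zip(features, weights)) + bias
        prediction = 1 if s >= 0 else 0
        error = label - prediction            # 0 if right, +1 or -1 if wrong
        # Nudge each weight in proportion to its input and to the error.
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error

print(weights, bias)  # the learned importance of each feature
```

Wrong answers leave the weights nudged in the direction that would have made them right; correct answers leave them untouched.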

“But what exactly is the neural network algorithm actually learning?”, Marques asked next. Through the adaptation of the features’ weights, he replied, it is learning representations – in this case, of watermelons versus melons. “It is learning better ways to represent.”

The first artificial neural network that could learn this way was the Perceptron, an electronic retina invented by American psychologist Frank Rosenblatt, of Cornell University, in 1958. But it was very simple and had many limitations – and in the 1960s and the 1970s several big names in the field of AI, among them MIT’s Marvin Minsky, considered that the Perceptron had no real future. Following this, researchers concentrated their efforts on the expert systems of traditional AI, and neural networks endured a “winter” of more than a decade before emerging into the limelight again.

Today, expert systems are still in use in many applications, such as finance and banking. One of the most famous, as Marques recalled, was IBM’s Deep Blue chess-playing program, which, thanks to its dedicated computer’s huge combinatorial power, defeated world champion Garry Kasparov in 1997. But it is now undoubtedly ChatGPT and neural network algorithms in general that have become the face of modern AI.
  
“The debate around neural networks [the connectionist approach] versus symbolic AI [expert systems] went on for several decades”, noted Marques. “In the 1970s, artificial neural networks were not very efficient”. But in the end, “there was a need to go back to neural networks.” 

The introduction of intermediate layers, the successful solving of the problem of backpropagating errors, and the development of so-called convolutional neural networks (a type of artificial neural network able to recognise patterns in images) “enabled artificial neural networks to learn more complex representations and perform more complex tasks”, Marques further explained.
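
The core operation of a convolutional network can also be sketched in a few lines: a small filter (or kernel) slides over an image and records how strongly each patch resembles the pattern the filter encodes. The tiny image and kernel below are invented for illustration:

```python
# Slide a small kernel over an image; the output peaks wherever a patch
# matches the kernel's pattern (here, a vertical left-to-right edge).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # [0, 2, 0] on every row: the edge sits in the middle
```

Stacking many such filters, layer upon layer, is what lets these networks recognise increasingly complex patterns.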

Then, in the late 2000s, the development of GPUs (Graphics Processing Units), much more powerful than classic computer CPUs (Central Processing Units), combined with the availability of huge datasets thanks to the advent of the World Wide Web and of the first social networks, provided the breakthroughs that were needed for artificial neural networks to truly come of age. So much so that recently, noted Marques, “neuroscientists themselves have begun using many of these models to understand how the brain works”, in a kind of role reversal. Neural networks still have many limitations, he concluded realistically. But they have certainly come a long way.

During the third and last part of the evening, CDs Ana Maia and Prannay Reddy asked questions about AI to… an AI nicknamed Lesly. “Are machines smarter than we are?”, they inquired. Sixty-six percent of the audience had answered “no”, but Lesly’s view was slightly different: “It depends on how we define intelligence”, its voice resounded, admitting nonetheless that, as a machine, it had “no ethics and no emotions, no consciousness or experiences”, and that it saw itself as complementing human intelligence through cooperation.

Closing the event, the CDs also evoked a less frequently mentioned, but essential, difference between a neural network and a brain: “it takes an amount of energy equivalent to the daily power consumption of a city to train Lesly, while we run on fruits and nuts”.

The next two Ar events, on the Present and Future of AI, will take place in February and April 2024.


Text by Ana Gerschenfeld, Health & Science Writer of the Champalimaud Foundation.
Photos by Carla Emilie Pereira and Catarina Ramos.