With the current buzz around artificial intelligence (AI), it would be easy to assume it is a recent innovation. In fact, AI has been around in one form or another for more than 70 years. To understand the current generation of AI tools and where they might lead, it is helpful to understand how we got here.

Each generation of AI tools can be seen as an improvement on those that came before, but none of the tools are headed toward consciousness.

The Early Days of AI
The mathematician and computing pioneer Alan Turing published his article "Computing Machinery and Intelligence" in 1950, opening with the sentence: "I propose to consider the question, 'Can machines think?'" He went on to propose the imitation game, now commonly called the Turing test, in which a machine is judged intelligent if it cannot be distinguished from a human in a blind conversation.

Five years later, the phrase "artificial intelligence" was first used in a proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

From those early beginnings, a branch of AI known as expert systems was developed from the 1960s onward. These systems were designed to capture human expertise in specialized domains, using explicit representations of knowledge, making them an example of symbolic AI.

The Rise of Expert Systems
There were many well-publicized early successes, including systems for identifying organic molecules, diagnosing blood infections, and prospecting for minerals. One of the most eye-catching examples was a system called R1 (later renamed XCON) that, in 1982, reportedly saved the Digital Equipment Corporation $25 million per annum by designing efficient configurations of its minicomputer systems.

The key benefit of expert systems was that a subject specialist without any coding expertise could build and maintain the computer's knowledge base. A software component known as the inference engine then applied that knowledge to solve new problems within the subject domain, and it could report which rules had led to each conclusion, providing a form of explanation.
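The inference-engine idea can be sketched in a few lines. The rules below are purely hypothetical illustrations (a real knowledge base held hundreds or thousands of rules written by domain experts); the engine repeatedly applies if-then rules to a set of known facts, and the record of which rules fired doubles as the explanation:

```python
# Toy forward-chaining inference engine. The medical rules are
# hypothetical illustrations, not drawn from any real expert system.
RULES = [
    ({"fever", "rash"}, "possible_measles"),
    ({"possible_measles", "recent_exposure"}, "recommend_lab_test"),
]

def infer(facts, rules):
    """Apply rules until no new conclusions emerge.

    Returns the expanded fact set and a trace of fired rules,
    which serves as a simple explanation of the reasoning.
    """
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(sorted(conditions))} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"fever", "rash", "recent_exposure"}, RULES)
```

Because the rules are plain data, a specialist could add or amend them without touching the engine's code, which is what made the approach attractive.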

These systems were popular in the 1980s, with organizations clamoring to build their own expert systems, and they remain a useful part of AI today.

Enter Machine Learning
The human brain contains around 100 billion nerve cells, or neurons, densely interconnected through their axons and dendrites. While expert systems aimed to capture human knowledge, a separate field known as connectionism emerged that aimed to model the brain itself more literally. In 1943, researchers Warren McCulloch and Walter Pitts produced a mathematical model of the neuron, in which each neuron produces a binary output depending on its inputs.
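In a simplified modern notation (the 1943 paper used unweighted excitatory and inhibitory inputs rather than the weights shown here), such a neuron sums its inputs and fires only if the sum reaches a threshold:

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: output 1 if the weighted input sum
    reaches the threshold, otherwise 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With both weights set to 1 and a threshold of 2, the unit
# computes logical AND over its two binary inputs.
and_out = [neuron([a, b], [1, 1], 2) for a in (0, 1) for b in (0, 1)]
```

Single units like this can realize simple logic gates; the later breakthroughs came from connecting many of them in layers and learning the weights automatically.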

One of the earliest computer implementations of connected neurons was ADALINE, developed by Bernard Widrow and Ted Hoff in 1960. These developments were interesting but of limited practical use until the arrival of a learning algorithm, backpropagation, for a software model called the multi-layered perceptron (MLP) in 1986.

The MLP, an arrangement of typically three or four layers of simple simulated neurons, enabled the first practical tool that could learn from a set of examples (the training data) and then generalize to classify previously unseen input data (the testing data).

The MLP achieved this feat by attaching numerical weightings to the connections between neurons and adjusting them to get the best classification with the training data before being deployed to classify previously unseen examples. It could handle a wide range of practical applications, such as recognizing handwritten characters, provided the data was presented in a format it could process.
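A minimal sketch of that training loop, assuming a two-input network with one hidden layer, sigmoid activations, and gradient-descent weight updates in the spirit of the 1986 backpropagation recipe, learning the XOR function:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """2 inputs -> `hidden` sigmoid units -> 1 sigmoid output."""

    def __init__(self, hidden=4, seed=1):
        rng = random.Random(seed)
        # Each hidden unit: two input weights plus a bias.
        self.w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
        # Output unit: one weight per hidden unit plus a bias (last entry).
        self.w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def forward(self, x):
        self.h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w1]
        net = sum(w * h for w, h in zip(self.w2, self.h)) + self.w2[-1]
        self.y = sigmoid(net)
        return self.y

    def train_step(self, x, target, lr=0.5):
        """One gradient-descent weight update; returns the squared error."""
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)              # output-layer delta
        for j, hj in enumerate(self.h):
            d_hid = d_out * self.w2[j] * hj * (1 - hj)  # hidden-layer delta
            self.w2[j] -= lr * d_out * hj
            self.w1[j][0] -= lr * d_hid * x[0]
            self.w1[j][1] -= lr * d_hid * x[1]
            self.w1[j][2] -= lr * d_hid
        self.w2[-1] -= lr * d_out                       # output bias
        return (y - target) ** 2

xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = TinyMLP()
before = sum(net.train_step(x, t) for x, t in xor_data)
for _ in range(2000):
    for x, t in xor_data:
        net.train_step(x, t)
after = sum((net.forward(x) - t) ** 2 for x, t in xor_data)
```

XOR is the classic test case here because no single-layer network can learn it; the hidden layer is what makes the problem solvable.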

Newer AI Models
Following the success of the MLP, numerous alternative forms of neural networks began to emerge. An important one was the convolutional neural network (CNN) in 1998, which was similar to an MLP but included additional layers of neurons for identifying key features of an image, removing the need for pre-processing.
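The core operation those extra layers perform is convolution: sliding a small filter (kernel) across the image so the same feature detector is applied at every position. A minimal sketch with a hand-set edge-detecting kernel (in a real CNN the kernel values are learned during training, not chosen by hand):

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for c in range(out_w)
        ]
        for r in range(out_h)
    ]

# A tiny image: bright left half, dark right half.
image = [[1, 1, 0, 0]] * 4
# A 1x2 kernel that responds where a bright pixel sits left of a dark one.
edge = convolve2d(image, [[1, -1]])
```

The output map lights up only along the vertical boundary between the two halves, which is exactly the "key feature" role these layers play in a CNN.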

Both the MLP and the CNN are discriminative models, meaning they classify their inputs to produce interpretations, diagnoses, predictions, or recommendations. Meanwhile, other neural network models were developed that are generative, meaning they can create new outputs after being trained on large numbers of prior examples.

Generative neural networks could produce text, images, or music and generate new sequences to assist in scientific discoveries. Two notable generative neural networks are generative-adversarial networks (GANs) and transformer networks. GANs achieve good results by using an "adversarial" component that demands improved quality from the "generative" component.

Transformer networks have come to prominence through models such as GPT-4 (Generative Pre-trained Transformer 4) and the chatbot built on the GPT series, ChatGPT. These large language models (LLMs) have been trained on enormous datasets drawn from the Internet, and their performance is further improved through reinforcement learning from human feedback.

These models are generalists, able to cover almost any topic rather than the specialized narrow domains of their predecessors.

Where is AI Going?
The capabilities of LLMs have led to dire predictions of AI taking over the world. Such scaremongering is unjustified. Although current models are more powerful than their predecessors, the trajectory remains firmly toward greater capacity, reliability, and accuracy, rather than any form of consciousness.

As Professor Michael Wooldridge remarked in his evidence to the UK Parliament's House of Lords in 2017, "the Hollywood dream of conscious machines is not imminent, and indeed I see no path taking us there." Seven years later, his assessment still holds true.

There are many positive and exciting potential applications for AI, but machine learning is not the only tool. Symbolic AI still has a role, as it allows known facts, understanding, and human perspectives to be incorporated.

A driverless car, for example, can be provided with the rules of the road rather than learning them by example. A medical diagnosis system can be checked against medical knowledge to provide verification and explanation of the outputs from a machine learning system.

Societal knowledge can be applied to filter out offensive or biased outputs. The future is bright, and it will involve a range of AI techniques, including some that have been around for many years.

More: https://techxplore.com/news/2024-07-history-ai.html