Artificial General Intelligence (AGI) – a hypothetical machine capable of any intellectual task a human can perform – is considered by many to be a pipe dream. A long-standing feature of science fiction, AGI has acquired a cultural reputation of both reverence and fear, but above all an appreciation for the possibilities it presents. However, despite what the movies might suggest, there is still considerable debate around what constitutes general intelligence in humans, let alone machines.
Before diving into AGI, it’s worth establishing what ‘general intelligence’ has come to mean. The term has evolved over time. When the first electronic computers were created, many leaders in the field took their ability to perform complicated calculations as evidence of a higher intelligence than had previously existed. Next came the ability to best humans at strategy games like chess, and eventually speech and image recognition. It seems likely that this evolution will apply to Artificial General Intelligence too, particularly as the concept becomes increasingly abstract.
However, as it stands, there are some generally accepted factors that determine whether a machine is capable of general intelligence. First, it must be able to learn from a limited amount of data or experience – often referred to as few-shot learning. Second, it must be able to learn, and improve its ability to learn, across a wide variety of contexts, known as meta-learning. This feeds directly into the final factor: causal inference. This is the capability for scenario generation: being able to plan for future events, or non-events, through an understanding of cause and effect.
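To make the first of these factors concrete, here is a minimal sketch of few-shot learning – a nearest-centroid classifier that labels new points from just two examples per class. The function and data are invented for illustration and are not drawn from any particular system:

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Classify query points using only a handful of labelled examples.

    support_x: (n, d) array of the few labelled examples ("shots")
    support_y: (n,) array of class labels
    query_x:   (m, d) array of points to classify
    """
    classes = np.unique(support_y)
    # Build one prototype (mean of the shots) per class
    centroids = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Assign each query point to the class with the nearest prototype
    dists = np.linalg.norm(query_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Two classes, only two examples each
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]])
support_y = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.3], [4.9, 5.2]])
print(nearest_centroid_few_shot(support_x, support_y, queries))  # → [0 1]
```

Real few-shot systems learn the embedding space in which these distances are taken, but the principle – generalising from a handful of examples – is the same.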
Of course, many artificial intelligence systems, whilst not exhibiting general intelligence, are enormously capable for specific uses. These systems are referred to as ‘narrow AI’ and are in some ways more useful than AGI would be, as they are designed to solve very specific problems. The most common example is the recommender systems that tech companies use for their social networks and e-commerce platforms. These decide which piece of content should be shown next – whether that’s an ad, a news story or a video – or which product a user might want to buy, based on their previous spending patterns and those of similar users.
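As a rough illustration of the idea – not any specific company’s system – a minimal user-based collaborative filter might score unseen items by weighting other users’ ratings with cosine similarity:

```python
import numpy as np

def recommend(ratings, user, k=2):
    """Suggest unrated items for `user` via a minimal collaborative filter.

    ratings: (n_users, n_items) matrix; 0 means "not rated"
    """
    norms = np.linalg.norm(ratings, axis=1)
    # Cosine similarity between this user and every other user
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0  # ignore self-similarity
    # Predicted score: similarity-weighted average of others' ratings
    scores = sims @ ratings / (sims.sum() + 1e-9)
    scores[ratings[user] > 0] = -np.inf  # only recommend unseen items
    return np.argsort(scores)[::-1][:k]

# Rows = users, columns = items (e.g. videos or products)
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 0],
    [1, 0, 5, 4],
])
print(recommend(R, user=0, k=1))  # → [2]
```

Production recommenders layer far more signals on top (context, recency, learned embeddings), but this captures the core mechanic of “users like you also liked”.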
While specific, these narrow AI systems can be extremely complex, solving incredibly challenging problems. A good example is Causalens, which has become a pioneer in causal inference, enabling it to model trends in time series – such as indices for the global economy – more accurately and robustly, or to predict how shocks in one sector might affect another. Another interesting use case is natural language understanding. Wluper is a pioneer in this area, able to detect sentiment and intentions, understand deeper meaning in questions and answers, and even produce written content on demand.
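The distinction causal inference draws – between what we merely observe and what would happen under an intervention – can be sketched with a toy structural causal model. The variables and coefficients below are invented purely for illustration and have nothing to do with Causalens’s actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural causal model: oil price -> shipping cost -> retail index
def simulate(n=100_000, do_shipping=None):
    oil = rng.normal(50, 5, n)
    if do_shipping is None:
        shipping = 0.4 * oil + rng.normal(0, 1, n)   # observational world
    else:
        shipping = np.full(n, do_shipping)           # intervention: do(shipping = x)
    retail = 2.0 * shipping + 0.1 * oil + rng.normal(0, 1, n)
    return oil, shipping, retail

# Compare the observed world with one where shipping costs are forced to 30
_, _, retail_obs = simulate()
_, _, retail_do = simulate(do_shipping=30.0)
print(round(retail_obs.mean(), 1), round(retail_do.mean(), 1))
```

Because the model encodes the direction of cause and effect, it can answer “what if shipping costs spiked?” – a question a purely correlational forecaster cannot reliably address.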
Such AI systems are undoubtedly seeking to solve some of the key problems of the future, but of course, we are now facing a major challenge to society in the form of Covid-19. AI holds huge promise as a way to speed up new drug development, although this area is still very much in its infancy.
However, one area where AI is increasingly being used is diagnostics and recommendations for therapeutic interventions. For example, companies such as Pangaea can use AI to identify the best prospective candidates for clinical trials. This process can take 15–18 months, but with AI it can be shortened to just a few weeks, saving crucial time in drug development. And AI can help beyond the medical side: another interesting recent development in relation to Covid-19 is WhatsApp’s launch of a chatbot to answer people’s questions about the virus. This exemplifies AI being used to help combat the spread of fake news and to ensure people can access useful advice at any time of the day or night.
It’s difficult to say what AGI would bring to the table in regard to Covid-19, but any steps must be taken with caution. Indeed, it’s hard to think of a practical use case for an AGI, as opposed to the ‘narrow’ artificial intelligence businesses mentioned previously, which are especially well adapted to their particular use cases. The advantage of narrow AI is that it is often more transparent than other forms – allowing businesses to understand how it works and, if necessary, take steps to correct it if it goes wrong.
When developing advanced AI, or even AGI, there are several things to consider to avoid a hypothetical ‘machines are taking over’ scenario, known as the ‘control problem’. First, there are standard considerations when developing new AI systems that go beyond cost and accuracy. Ensuring that an AI can explain its reasoning goes a long way towards calming fears about biases creeping into its decision-making, and allows humans to correct it if it makes a wrong choice.
This should be followed by ensuring datasets are fair and balanced. At the moment many are not, because they are built from human-generated data, which often carries its own biases into the machine – a terrifying thought when it comes to AGI. Companies are working to address this, such as Synthesized, which creates artificial but ‘realistic’ datasets to train and test new AI models – because these can be designed to be more representative, they are much more likely to be free from bias.
The reality is that we are a long way off, years or even decades, from building an AGI simultaneously capable of meta-learning, few-shot learning and causal inference. There are over 40 companies working towards creating AGI, but it remains to be seen whether any would pass the necessary tests, such as the Turing test or the Employment test, to have created true AGI. However, one thing is clear, and made all the more pertinent by the current Covid-19 crisis: technology, and specifically AI, can do an enormous amount of good in helping humans and societies to thrive and succeed against adversity.