
The state of Artificial General Intelligence

(Disclaimer: A layman’s view with biases. Could be wrong. Gathered information for personal understanding. For tweet format, follow the link here.)

The human mind is a marvellous thing in this universe. It enabled us to build and leverage technology - build skyscrapers, communicate instantly across large distances, fly across oceans and explore space - and yet we don't understand how it works. There have certainly been negative impacts from certain human minds and technologies, but looking at the bigger picture, we realize human minds are creative and we can use that creativity to solve problems. The mind is the only known object in the universe that can use knowledge that is not in its genes. And our minds are so adaptive that we hardly ever stop to wonder how far we have come - progress that would not have been possible without them. What role does Artificial Intelligence play in helping us understand the mind?

There seem to be different viewpoints today on what constitutes Artificial Intelligence and its implications. The initial objective, from the mid 20th century, of building a machine that replicates the human mind - one that can explain, experience emotions, be creative, be conscious - has either been relegated to a subset of AI called Artificial General Intelligence (AGI) or taken a different name such as cognitive science.

Some believe we keep setting ever-higher bars for what counts as AI, while others think that deciding whether we have already created AI - or when we could ever say we have - is a matter of philosophy.

The entire spectrum of Artificial Intelligence systems could be categorized into:

(i) An AI that can perform a particular task, or a set of tasks, better than a human, i.e. Artificial Narrow Intelligence (ANI)

(ii) An AGI that passes the Turing test - a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human

(From here on, 'AI' in this essay denotes ANI.)

From AlphaGo to solving the problem of protein folding, we've already attained the first category, and we continue to make great strides.

Artificial Intelligence already enables us to build applications that transform industries, and it could help us solve some of our most pressing problems. But the question remains: why should we strive to build Artificial General Intelligence?

The goal of AGI is to understand the mind by building a theory that can be tested. Joscha Bach believes it is a very big cultural project for science and enlightenment.

Bach argues that in building an AGI we can understand how the mind works, what constitutes the mind and what makes us intelligent. In this fascinating talk, he explains why minds are not chemical/biological/social/ecological processes but information processors. Computer science happens to be the science of information-processing systems, which is why he believes understanding the mind should stem from computer science, unified into one theory with philosophy and neurophysiology.

Geordie Rose also believes that building an AGI will help us understand the mind, and claims that perhaps the greatest test of calling ourselves an intelligent species is to understand how intelligence and the mind work. He argues, and I agree, that an intelligent machine could free humans from the tasks they don't want to do so that they can focus on the tasks they would like to do - while still being able to do everything they have ever done, with a bunch of intelligent machines running in the background.

Is building an AGI possible?

David Deutsch argues that it must be possible because of the underlying principle - the universality of computation - which states that everything the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

From a biological perspective, he argues that since humans have intelligence and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. For a start, the brain's architecture is encoded in the genome.

Is it possible? It must be. Is it easy? We don’t know.

Building an AGI:

Artificial intelligence has made great strides since the deep learning and neural network revolution, but AI systems still struggle to extrapolate outside their training data and adapt to new situations. David Deutsch, in his 2012 article, argues that AGI has made no progress since the time of Turing and articulates coherently why AGI cannot be programmed using the techniques that suffice for writing any other type of program. His views still seem to hold true.

In fact, Deutsch further argues in his 2019 essay 'Beyond Reward and Punishment', collected in Possible Minds, that AI is the very opposite of AGI, and that giving AIs more and more predetermined functionalities in the hope that these will eventually constitute generality will likely fail. Trying to build an AGI without a theory of the mind (Popperian epistemology) is bound to be futile.

Yes, natural evolution seems to have led to the creation of human-level intelligence - we understand neither how it came about nor how it works - but a similar random emergence of AGI by advancing AI seems highly unlikely. Merely improving the performance of programmable tasks, increasing the complexity of computational systems or additively increasing the number of tasks performed by a machine does not offer a clear path towards constructing an AGI. David Deutsch's reasoning: unless we understand how creativity works - the ability to produce new explanations - we cannot achieve AGI.

There are multiple approaches to building an AGI. Most current approaches in AI are fixated on enabling a machine to learn specific things to achieve a task, then adding more tasks to the machine. However, Joscha Bach believes the key to building an artificial mind lies in unified learning: the human mind doesn't perform specific tasks through isolated functionalities.

Hence, a model to build/understand a mind should integrate the psychological, social and physiological interfaces to the world - language, reasoning, emotion, information processing and memory. This involves building a model of a connected, unified universe onto which the system maps everything that happens in real time, understanding its own role in it.

Another subject in the circle of debate surrounding the feasibility of building an AGI is consciousness - whether it is necessary and, if so, what it constitutes. The definition of the term ranges from qualia - subjective sensations - to self-awareness. Bach believes

the features that will make AI fully general - autonomy, motivation, self-awareness - are traits associated with human consciousness.

Bach has developed a cognitive architecture called MicroPsi. It describes the interaction of emotion, motivation and cognition in situated agents. A key postulate of the architecture is: to make machines conscious is to make them self-referential. He defines consciousness as a control model of perceptual attention, i.e. asking whether a given animal is conscious comes down to asking whether it is aware of its awareness.

Bach argues that our actions (and thoughts) occur unconsciously, and conscious experience comes into play immediately after, to make sense of what we've already done. Consciousness can thus be considered to emerge naturally from a system that makes a model of its own attention. The system is self-referential if it can both (see the toy sketch after this list):

a.) remember that it experienced certain things, and

b.) remember that it was aware of experiencing them.
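To make these two conditions concrete, here is a minimal toy sketch in Python. It is purely illustrative - not MicroPsi's actual implementation - of an agent that keeps a first-order memory of experiences and a second-order model of which experiences it attended to. All names here (SelfReferentialAgent, perceive, the 0.5 salience threshold) are hypothetical.

```python
# A toy illustration (not MicroPsi itself) of Bach's two conditions for
# self-reference: the agent stores first-order records of what it
# experienced, and second-order records of having attended to them.

from dataclasses import dataclass, field


@dataclass
class SelfReferentialAgent:
    experiences: list = field(default_factory=list)    # first-order memory
    attention_log: list = field(default_factory=list)  # model of its own attention

    def perceive(self, stimulus: str, salience: float) -> None:
        """Record the experience itself (condition a)."""
        self.experiences.append(stimulus)
        # Only salient stimuli enter the attention model (condition b):
        # the agent remembers that it was *aware of* the experience.
        if salience > 0.5:
            self.attention_log.append(f"attended to: {stimulus}")

    def was_aware_of(self, stimulus: str) -> bool:
        """Did the agent model its own awareness of this experience?"""
        return f"attended to: {stimulus}" in self.attention_log


agent = SelfReferentialAgent()
agent.perceive("loud noise", salience=0.9)
agent.perceive("background hum", salience=0.1)
print(agent.was_aware_of("loud noise"))      # True: experienced and attended to
print(agent.was_aware_of("background hum"))  # False: experienced, never attended to
```

The point of the sketch is only that the two memories are distinct: a system can have experienced something without ever having modelled its own awareness of it.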

The MicroPsi theory is still far from being a testable theory of the mind, but Bach has certainly conjectured fascinating ideas about consciousness, reality, perception and more. It aims to use information processing as the dominant paradigm for understanding the mind. This podcast with Lex Fridman gives a fascinating overview of Bach's views on artificial consciousness and the nature of reality. (TL;DR: readable summary here.) The only way to falsify these theories would be to build whole architectures that can be tested, not individual modules.

Micropsi Industries:

As the name suggests, the company originated from AGI research, particularly the MicroPsi theory. Since the theory is still under development and commercially viable AGI applications are further away, the company develops generic cognitive machines - bringing AI to robots - to commercialize well-understood AGI results.

Hanson Robotics:

The creator of Sophia, Hanson Robotics is an AI and robotics company dedicated to creating socially intelligent machines that enrich the quality of our lives. Influenced by MicroPsi, the company uses the OpenPsi framework developed by OpenCog to model human motivation and emotion.

Another similar yet different view is the embodied-system approach. This approach does not conjecture a unified theory of the mind but aims to build a human-like system by enabling it to perform tasks the way humans do.

Geordie Rose is another pioneer in the field of AGI, with the objective of looking at general intelligence through a biological lens. He believes intelligent machines need to be built to deal with the peculiarities and weirdnesses of the real physical world. We humans perceive the world through information provided by our senses. Geordie advocates that an intelligent system should have a body: a vehicle through which its actions are taken and through which information about the world comes in via its senses.

Sanctuary AI:

The company is on a mission to build and scale embodied artificial general intelligence (AGI).

Dr Gildert and her synth (Image: Daniel Marquardt and Sanctuary AI 2019) Credits: PCMag

The premise behind Sanctuary AI, led by Geordie Rose, is that the clearest path through this extraordinarily difficult problem is to mimic biological systems and the types of intelligence they need to navigate the world. Not copying the brain, but thinking about which properties of the brain are required for an intelligent agent - for example, to know how to reach out and grasp an object, to walk on uneven terrain, or to reason about the world.

Vicarious:

At first glance at its website, the company seems to be working on extending AI functionalities towards AGI. However, in this fascinating podcast, Scott Phoenix sheds light on his perspective on building an AGI. Vicarious also seems to take an embodiment approach to eventually building an AGI - something embodied inside a robot that is subject to the constraints of the real world, like physics, friction, unreliable sensors and change. To build a human-like brain, Scott believes, the system needs to learn a model of the actual world as well as high-level concepts through reasoning.

The other dominant category revolves around data-intensive pattern-matching approaches: using deep learning methods on, perhaps, all existing human knowledge - usually domain-specific knowledge - to achieve an end objective. E.g. win a game of chess/Go by searching through all possible approaches, find the most efficient ways to consume energy, or scan through all existing conversations, words and sentences to translate/converse without understanding anything.
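As a concrete picture of what "searching through all possible approaches" means, here is a minimal sketch of brute-force game-tree search (plain minimax) in Python. It is illustrative only - real systems like AlphaZero use learned evaluation and guided search rather than exhaustive enumeration - and the Game interface (legal_moves, apply, is_terminal, score) is a hypothetical stand-in, not any real library's API.

```python
# Minimal brute-force game-tree search: score every reachable position by
# enumerating every line of play. No creativity, just computation over the
# game's rules. 'game' is a hypothetical object exposing legal_moves(state),
# apply(state, move), is_terminal(state) and score(state).

def minimax(state, game, maximizing: bool) -> float:
    """Exhaustively score a state by recursing through all lines of play."""
    if game.is_terminal(state):
        return game.score(state)  # e.g. +1 win, 0 draw, -1 loss
    values = [
        minimax(game.apply(state, move), game, not maximizing)
        for move in game.legal_moves(state)
    ]
    return max(values) if maximizing else min(values)


def best_move(state, game):
    """Pick the move with the best exhaustive score; after our move,
    the opponent (the minimizer) replies."""
    return max(
        game.legal_moves(state),
        key=lambda m: minimax(game.apply(state, m), game, maximizing=False),
    )
```

Even this toy version makes the essay's point visible: the program's strength scales with how much of the tree it can afford to enumerate, not with any understanding of the game.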

While these approaches will continue to be enormously powerful and can help us find new solutions while advancing AI, how close are OpenAI's GPT-3 and DeepMind's AlphaZero to AGI? Nowhere close. Will they achieve AGI? Time will tell, but it seems highly unlikely.

These are ANIs with specific objectives defined by humans. While these AI programs can learn, they are in fact just utilizing knowledge from existing data; they are not creating new knowledge. In other words, both knowledge creation and the definition of the objective remain with the programmer/user.

Yes, AlphaZero could come up with new and efficient approaches to winning the game, but it cannot explain why it took an approach. The explanation and understanding still lie with humans. Additionally, the reason a machine could come up with a new move when a human could not can be attributed to the fact that the machine has access to greater computational resources, which enable it to mechanically search through possible approaches without using creativity.

Thought experiment: if we could find a way to integrate a machine's computational resources with a human mind (say, via a future Neuralink), could a human + machine beat the machine?

How scary is the future surrounded by AI and AGI?

Like any technology, AI is another powerful tool: used for the right purposes, it could be exceptionally beneficial; used by bad actors, it could be disastrous. Sometimes the tool, even when used with the right intentions, can cause real damage. The defence against potential evil-doers and unexpected negative side-effects is transparency about advances in research - such as OpenAI and OpenCog, or even DeepMind releasing all its research - enabling us to be better prepared to tackle a problem if it arises.

Will AI replace jobs? Yes, that is bound to happen. As AIs get better and better at specific tasks, it will become harder for corporations not to replace humans with AIs, citing efficiency: cheaper, more productive and more accurate. What does this mean for the workforce? The answer likely won't be binary. While some jobs will be replaced, new jobs will be created that require either creativity or control over these machines.

It could also mean humans are relieved of non-creative tasks that they don't want to do. But this needs to be supplemented, in parallel, with a Universal Basic Income model, a large-scale skill-development programme run by governments/corporations, or both.

What about humans or corporations getting powerful with the advent of advanced AI? In this fascinating podcast with Matt Clifford, Ian Hogarth discusses the advances in AI and argues for the need to have a strong governance model.

But it's an altogether different ball game when we talk about the fear of AGI.

There are camps of all sorts:

I. AGI is imminent, and if it turns out to be evil and takes control, humans will become obsolete

II. AGI is still a few decades away, and humans might work out the philosophical implications before it arrives

III. AGI will be at least as intelligent as us, if not more so. Let's leave the moral compass to the AGI.

There is no broad agreement on which camp is right, though most people do seem to be afraid of the implications should an AGI eventually be created.

The fear largely stems from two angles:

  1. Humans are the most dominant and controlling species; an AGI's presence would mean we would be forced to relinquish control
  2. AGIs will not have the capacity to share the same moral values as humans

Geordie argues that we should discard the idea that one has to have control over an AGI, or that humans have control over other species. Humans are capable of empathizing with fellow humans and other animals. The same should apply to an AGI: if an AGI is anything like a human, with the capacity to create knowledge, we should also expect it to be capable of empathy.

David Deutsch is a proponent of the view that treating AGIs - even though the atoms of their brains would be emulated by metal cogs and levers rather than organic material - any differently from humans would be nothing less than racism. Similarly, trying to align an AGI through rewards and punishments is to limit its ability to be creative and to criticize ideas or values. Critical rationalists do believe that an AGI will be capable of creating moral knowledge along with other forms of knowledge. Creating an AGI should be akin to raising a child, and any entity that can understand - whether human or AGI - is a person deserving of the same rights as humans.

What if AGI turns out to be more powerful with its cognitive computation capacity?

Deutsch also argues that human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.

So, how should we proceed with caution?

A. Perhaps having philosophical discussions around possible scenarios, and coming up with collaborative governance models (that eventually include AGIs) without the intent of controlling them.

B. Inculcating values - or, more specifically, motivation systems - just as we do with children, since humans are predominantly driven by motivation.

But here is an interesting essay, 'On hostility, and a disjunctive case for AGI Doom' by Eli Tyre, that argues against the critical rationalist view.

Here is an interesting conjecture on the neo-Darwinian theory of the mind by Dennis Hackethal.

In any case, we should prepare to live in a world that treats humans and AGIs with the same rights.
