
Aristotle and Descartes on Consciousness and Instincts
Aristotle posited that only humans possess rational souls, while other animals rely solely on instincts necessary for survival. This view was later echoed in the 17th century by René Descartes, who argued that animals exhibit only reflexive behaviors, lacking the rationality and consciousness attributed to humans.
For a long time, it was widely believed that consciousness was unique to humans. Recent scientific advances, however, have led to the recognition that animals, especially those closely related to humans, also possess a form of consciousness. For animals very different from us, such as bats or octopuses, the presence of consciousness remains debatable, as discussed by Thomas Nagel in his 1974 essay "What Is It Like to Be a Bat?"
The Emergence of AGI and the Debate Around Artificial Sentience
When Sam Altman, CEO of OpenAI, expressed his goal of developing Artificial General Intelligence (AGI), many, including myself, initially wondered whether this would involve creating a sentient AI. Upon deeper investigation, however, it became clear that Altman was focused on AGI rather than Artificial Consciousness. Hopefully, the explanation below will clarify this distinction for you as well.
The Role of ChatGPT and Multimodal Models
My fascination with AI began in a Natural Language Processing class, when I first encountered ChatGPT. This large language model (LLM) could answer a wide range of questions, drawing on vast amounts of human knowledge. Initially, I thought such systems could replace human experts. I soon realized, however, that LLMs, while knowledgeable, struggle to grasp nuanced human intentions and emotions. For example, when I tried to explain a project aimed at helping people, I found it hard to convey my exact intentions, since it is difficult to express every emotion and subtlety in words.
The release of GPT-4, a multimodal model capable of processing both images and text, significantly improved user interactions. Despite this advancement, the system still only mimics human conversation, as illustrated by Emily Bender's "Octopus Test." This thought experiment shows that even if an AI learns to mimic human communication patterns, that does not imply genuine understanding or intelligence; it merely reproduces learned patterns without true comprehension.
The Pursuit of AGI: Reinforcement Learning and Beyond
Fig: Reinforcement Learning Model
The concept of AGI is closely linked to the idea of creating machines with creative and adaptable intelligence. Jürgen Schmidhuber, a pioneer in AI, published a formal theory of creativity and intrinsic motivation for reinforcement-learning agents in 2010. In reinforcement learning, an agent interacts with its environment, learns from those interactions, and strives to maximize its cumulative reward. This approach is seen as a path towards AGI, because the system does not merely follow predefined rules but learns and adapts.
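The agent–environment–reward loop described above can be sketched with tabular Q-learning, a textbook reinforcement-learning algorithm. Everything here (the five-state corridor, the reward of 1 at the goal, the hyperparameter values) is an invented toy for illustration, not taken from any of the works cited:

```python
import random

# Toy environment: states 0..4 in a line; reaching state 4 yields reward 1.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Tabular Q-learning: the agent learns Q(s, a) purely from interaction.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Update toward the reward plus the discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

No rule "go right" was ever programmed; the preference emerges entirely from reward feedback, which is the sense in which such a system "learns and adapts" rather than following predefined rules.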
Thomas Pinella eloquently defines intelligence as “the process of observing the world and compressing it into a simpler model capable of accurate predictions.” This description aligns with how scientists aim to understand the world by simplifying complex phenomena into understandable models.
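Pinella's definition can be made concrete with a deliberately small sketch: a hundred noisy observations of a process are "compressed" into just two parameters that still predict unseen inputs. The data-generating process and all numbers below are my own illustration, not from Pinella:

```python
import random

# The "world": noisy observations of an underlying linear process y = 2x + 1.
random.seed(1)
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

# The "simpler model": an ordinary least-squares fit, reducing 100
# observations to two parameters (slope, intercept).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The compressed model still makes accurate predictions on unseen inputs.
prediction = slope * 20.0 + intercept  # close to 41, since the true process gives 2*20 + 1
```

The point is the ratio: two numbers now stand in for a hundred observations, yet they generalize beyond them, which is exactly the compression-plus-prediction picture in Pinella's definition.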
Why Not Artificial Consciousness?
Sam Altman’s focus on AGI rather than Artificial Consciousness is due to the profound complexities involved in understanding consciousness itself. Consciousness involves subjective experiences, or qualia, which are difficult to define and measure scientifically.
David Chalmers, a philosopher known for his work on consciousness, distinguishes between the “easy” and “hard” problems of consciousness:
1. The Easy Problem: This involves explaining the biological processes in the brain that occur when we receive stimuli. For example, after eating a pretzel, sensory information about taste, appearance, and texture is processed by different brain regions, creating a unified experience.
2. The Hard Problem: This concerns how these processes result in subjective experiences. How do physical processes in the brain give rise to the experience of taste, color, or pain? This remains an unresolved mystery, often referred to as the “hard problem” of consciousness.
Several theories have attempted to explain aspects of consciousness, but none have fully succeeded. Two notable theories include:
– Global Workspace Theory (GWT) by Bernard Baars: This theory suggests that the brain functions like a theater, where consciousness is the spotlight, sensory inputs are actors, and memory is the audience. The spotlight focuses on specific inputs, creating a coherent experience. However, this theory does not fully explain how memories and past experiences influence our current conscious experiences.
– Integrated Information Theory (IIT) by Giulio Tononi: This theory proposes that consciousness can be quantified by a value called phi (Φ), representing the amount of integrated information a system generates. While IIT offers a framework for comparing levels of consciousness, critics point out that a high phi value need not guarantee conscious experience: a sufficiently complex digital circuit, for instance, could score a high phi without plausibly being conscious.
Fig: Integrated Information Theory
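To give a flavor of what "integrated information" quantifies, here is a toy calculation in the spirit of IIT. To be clear, this is not Tononi's actual Φ, which involves minimum-information partitions over cause–effect repertoires; it only measures how much predictive information a whole two-node system carries beyond its parts taken separately, for made-up "swap" dynamics:

```python
from math import log2
from itertools import product

# Toy 2-node network: at each step the nodes swap values (A' = B, B' = A),
# with a uniform distribution over the four possible past states.
def transition(state):
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))

def mutual_info(pairs):
    # I(X; Y) for a uniform distribution over the given (x, y) pairs.
    n = len(pairs)
    def H(values):
        counts = {}
        for v in values:
            counts[v] = counts.get(v, 0) + 1
        return -sum(c / n * log2(c / n) for c in counts.values())
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    return H(xs) + H(ys) - H(list(pairs))

# Whole system: the past state fully determines the next state (2 bits).
whole = mutual_info([(s, transition(s)) for s in states])

# Parts in isolation: each node's own past says nothing about its own
# future, because its next value comes entirely from the other node.
part_a = mutual_info([(s[0], transition(s)[0]) for s in states])
part_b = mutual_info([(s[1], transition(s)[1]) for s in states])

phi_toy = whole - (part_a + part_b)  # information the whole has beyond its parts
```

Here the whole predicts its own future perfectly while each part alone predicts nothing, so all of the system's predictive information is "integrated". The criticism mentioned above is that scores of this general kind can also be driven high by circuits nobody considers conscious.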
Given the current state of knowledge, understanding and replicating consciousness in machines is a formidable challenge. This likely explains why Sam Altman and others in the field of AI are focusing on AGI, which aims to replicate human-like intelligence without the need for subjective experiences or consciousness.
References
1. Aristotle’s theory of the soul and Descartes’ view on animal reflexes
2. Sam Altman’s vision for AGI
3. Emily Bender’s “Octopus Test”: Grounding the Vector Space of an Octopus: Word Meaning from Raw Text
4. Schmidhuber, J. (2010). Formal Theory of Creativity, Fun, and Intrinsic Motivation. IEEE Transactions on Autonomous Mental Development, Vol. 2, No. 3
5. Pinella, T. (2016). The Connection Between Intelligence and Consciousness: The Challenges We Face. April 20, 2016.
6. Chalmers, D. J. (1997). Moving Forward on the Problem of Consciousness. Journal of Consciousness Studies.
7. Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
8. Tononi, G. (2008). Consciousness as Integrated Information: A Provisional Manifesto. Biol. Bull. 215: 216–242.
9. Franz, A. (2015). Artificial General Intelligence Through Recursive Data Compression and Grounded Reasoning: A Position Paper.
10. Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, Vol. 83, No. 4, pp. 435–450.