AI Loyalty


Keeping the genie in the bottle

RNfinity | 19-01-2025

The AI hype and reality

The turn of this year has seen much talk of the projected growth of AI capabilities, and of how general (human-comparable) intelligence and superintelligence may arrive in the not-too-distant future. Within the vast technology industry there are many optimistic publicists in need of funding for what is a heavily front-loaded, high-stakes business, and increased competition has raised the stakes further. Enthusiasts such as Sam Altman of market leader OpenAI have teased the notion of superintelligence and of a singularity, whereby artificial intelligence self-improves continuously, well beyond the limitations of human intellect: a startling prospect. It remains to be seen whether this is an impending reality or merely overinflated hyperbole.

AI platforms have many millions of users, but costs still exceed the income generated by subscriptions, with many users happy not to delve into the premium services. There is even talk that advertising may be part of future funding models, suggesting that AI remains a volunteer in the workforce rather than a fully paid-up worker, and one that may come attired in highly branded clothing, demanding that every day be dress-down Friday rather than adhering to any company dress code. I am sure this will change, but how remains to be seen, as we learn to make use of the vast potential of AI in the workplace.

AI seems to have made huge progress in benchmarks, and there is talk that special, extra-difficult tests, beyond the ability and expertise of most people, will need to be created to measure their superlative skills. One problem with calling this a marker of general intelligence is that the AI is being tested in-house on tasks it has been optimised for. Intelligence is dealing with problems that have never been encountered before. There is a confirmation bias too: every parent thinks their child is the smartest in the world, and so too do AI developers.

After many years of development and teasing, we have yet to see fully autonomous self-driving vehicles on our roads. Why? Quite simply, they are not yet as good as human drivers, and human drivers are still more cost-effective. Of course, computers have long surpassed humans in rapid computational tasks, such as calculation or chess, where relatively simple algorithms are rapidly executed. It is still unknown whether large language models can surpass human intelligence; after all, they deal in the currency of human language, ideas, concepts, and responses. An LLM in essence selects an answer with the highest probability of being correct, given the experience of its prior training. In that sense the model is selecting answers it has already encountered, yet its responses can still be creative and surprising.
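The selection-by-probability step described above can be sketched in a few lines of Python: the model assigns a score (logit) to every token in its vocabulary, a softmax turns those scores into probabilities, and greedy decoding picks the most probable token. The vocabulary and scores here are invented for illustration; real models work over tens of thousands of tokens.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and model scores (purely illustrative).
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.1, 3.5]

probs = softmax(logits)                      # probabilities sum to 1
next_token = vocab[probs.index(max(probs))]  # greedy decoding: pick the most probable token
```

In practice, decoders often sample from this distribution (with a temperature) rather than always taking the maximum, which is one source of the surprising variety in LLM responses.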

Extrapolating the Performance of Large Language Models

GPT-4 is believed to have cost around 100 million dollars to train and took about 60 times as much training time as GPT-3. Google and OpenAI teams have published findings on neural scaling laws to predict the potential future gains from scaling up their AI systems. The error a model makes falls with increased training time, model parameters, and training-dataset size, but there is a line that cannot be crossed: the Google paper predicts that even with infinite training time, dataset, and model parameters, a particular error rate cannot be surpassed. It is not clear why AI behaviour can be modelled with such relatively simple rules, nor whether this is a generalised computational law or something specific to the current large language model architecture.
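The scaling behaviour described above is typically written as a power law in model size with an irreducible floor: loss falls smoothly as parameters grow, but flattens onto an error rate that no amount of scale removes. A minimal sketch, with purely illustrative constants rather than fitted values from any published paper:

```python
# Illustrative scaling-law constants (assumed, not fitted values).
E_INF = 1.69    # irreducible loss floor that scaling cannot cross
N_C = 8.8e13    # characteristic parameter scale
ALPHA = 0.076   # power-law exponent

def predicted_loss(n_params):
    # Loss decays as a power law in model size, approaching E_INF from above.
    return E_INF + (N_C / n_params) ** ALPHA

for n in [1e9, 1e11, 1e13, 1e15]:
    print(f"N = {n:.0e}  predicted loss ~= {predicted_loss(n):.3f}")
```

The key feature is diminishing returns: each tenfold increase in parameters buys a smaller absolute drop in loss, and even as `n_params` goes to infinity the predicted loss never falls below `E_INF`.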

There will be a massive increase in the cost required to keep making the same incremental performance gains that current AIs have achieved over their predecessors. We may soon encounter limitations in the cost-to-benefit ratio of scaling up the current models, which could lead to a stagnation analogous to that encountered in space exploration, which has not seen the same progress relative to expenditure in the last 50 years as in the pioneering decades.

How do LLMs compare to human brains?

You will be pleased to know that LLMs fall a long way short of the human mind in size, scope, holistic and adaptive nature, and energy efficiency. The number of parameters in the human brain can be loosely analogised to the number of synaptic connections and synaptic weights between neurons. While it is impossible to determine an exact equivalent of machine-learning parameters in the human brain, neuroscientists often estimate the scale from certain biological factors.

Estimates of Brain Parameters:

Number of Neurons:

The human brain contains approximately 86 billion neurons.

Synaptic Connections:

Each neuron is connected to thousands of other neurons via synapses, resulting in an estimated 100 trillion synapses (or 10^14 synaptic connections).

Synaptic Strength (Analogous to Weights):

Each synapse has a variable "strength" (similar to weights in neural networks) that can change over time, representing the brain's ability to learn and adapt.

If we treat each synaptic connection as a parameter (analogous to a neural network weight), then the number of parameters in the brain could be estimated at 100 trillion or more.
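The back-of-the-envelope arithmetic behind that estimate, using the round figures quoted above:

```python
# Treat each synapse as one trainable parameter (the loose analogy above).
neurons = 86e9              # ~86 billion neurons
synapses_per_neuron = 1000  # order-of-magnitude estimate

brain_params = neurons * synapses_per_neuron  # ~8.6e13, i.e. on the order of 10^14

gpt3_params = 175e9                 # GPT-3's published parameter count
ratio = brain_params / gpt3_params  # roughly 500x larger than GPT-3
```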

Comparison with AI Models:

GPT-3: ~175 billion parameters.

GPT-4 (estimated): hundreds of billions to over 1 trillion parameters.

Human brain: ~100 trillion parameters (or synaptic connections), several orders of magnitude larger than even the most advanced artificial neural networks.

Differences Between Brain and AI Parameters:

Plasticity:

The brain’s synapses are highly plastic and can strengthen, weaken, or form new connections dynamically. This is far more complex than static weights in artificial neural networks.

Parallel Processing:

The brain processes information massively in parallel, whereas most AI models rely on sequential processing across layers.

Energy Efficiency:

Despite having vastly more "parameters," the brain consumes about 20 watts of power, whereas training large AI models requires megawatts of power.
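As a rough illustration of that gap, assuming a training cluster drawing on the order of 10 megawatts (an assumed round figure, not a measured one):

```python
brain_watts = 20.0       # resting power of the human brain, as quoted above
training_watts = 10e6    # assumed ~10 MW for a large training cluster

# The brain runs on roughly half a million times less power.
efficiency_gap = training_watts / brain_watts
```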

Conclusion: Human Brain versus current AI

While it's difficult to make a direct comparison, if we think of parameters as synaptic connections with adjustable strengths, the human brain likely has 100 to 1,000 times more parameters than the largest AI models today. However, the complexity of biological learning mechanisms far exceeds the relatively simple mechanisms in artificial neural networks. The cost of scaling up current systems to the level of the human brain would be astronomical. There are literally billions of supercomputers walking around the planet already.

Self Awareness of AIs

What Is Self-Awareness?

Self-awareness is the ability to recognize oneself as a distinct entity with thoughts, emotions, and a sense of identity. It involves:

Recognizing one's own existence and individuality.

Understanding one's thoughts, feelings, and behaviors in relation to the environment.

Reflecting on the self as an object of thought.

Possible Mechanisms Behind Self-Awareness

Neural Basis:

Default Mode Network (DMN): The DMN is a network of brain regions (e.g., medial prefrontal cortex, posterior cingulate cortex) that becomes active during introspection and self-referential thinking. It’s often associated with self-awareness and daydreaming.

Mirror Neurons

Found in areas like the premotor cortex, these neurons fire both when performing an action and observing others perform the same action. They may play a role in understanding oneself and others.

Integration of Sensory and Cognitive Data:

The brain integrates sensory inputs (what you see, hear, or feel) with internal states (thoughts, memories, emotions) to construct a coherent sense of self. This integration occurs in regions such as the insular cortex, which processes internal bodily states (interoception), and the prefrontal cortex, which supports higher-order thinking and metacognition.

Developmental Perspective:

Infancy and Mirror Test:

Infants begin to develop self-awareness around 18-24 months, as demonstrated by the mirror test, where a child recognizes their reflection as themselves.

Language and Symbolism:

The acquisition of language allows for more complex self-referential thoughts, further enhancing self-awareness.

Evolutionary Perspective:

Adaptive Advantage:

Self-awareness likely evolved because it conferred survival advantages, such as the ability to plan, predict outcomes, and navigate social relationships.

Social Self:

Being aware of oneself also helps in understanding and predicting the behavior of others, crucial for living in groups.

Philosophical Perspectives

Dualism vs. Monism:

Dualists argue that self-awareness arises from a non-physical mind or soul. Monists (e.g., materialists) suggest that self-awareness emerges entirely from the physical processes of the brain.

Emergent Phenomenon:

Some philosophers and neuroscientists suggest that self-awareness is an emergent property of complex systems: when a system reaches a certain level of complexity (like the human brain), self-awareness naturally arises.

Artificial Self-Awareness?

In artificial intelligence, self-awareness would require a system that can model itself and its interactions with the environment, and a form of metacognition, where the system "thinks about its own thoughts." Current AI systems can simulate some aspects of awareness (e.g., tracking their own processes), but they lack genuine self-awareness.

Key Unanswered Questions:

Hard Problem of Consciousness: How does subjective experience arise from physical brain processes? This is a central mystery in the philosophy of mind.

Is Self-Awareness Unique to Humans?

It is sometimes assumed that self-awareness is a product of intellect, and that a superior intellect must automatically be self-aware. However, most animals must be highly self-aware to survive in their environment and to recognize a mate, food, offspring, or predators. Yet only a few, such as chimpanzees, elephants, and bottlenose dolphins, seem able to recognize their reflections in a mirror. Animals may rely on other senses, such as smell, to develop their self-awareness. So perhaps self-awareness is an essential survival mechanism, one that begins with knowing the relationship between self and environment: what poses a danger, where the danger comes from, what was learned from previous encounters, how danger was escaped in the past, how to find food, how to find a mate, how to survive.

Fortunately, most of us no longer live in a physically threatening environment, and threats and opportunities are more likely to be social in nature, but our instincts are always on. If you walk into a new environment, you will be aware of who is around you and who is behind you, and you will keep a safe space between yourself and strangers, unless forced into proximity in a crowded situation. Some people have a phobia of crowds. In a room, you will be aware of where the exits are and may feel more comfortable situated near them. If someone creeps up behind us we are startled, even if they are a loved one.

Self-awareness likely guides us towards success in life, as carriers of our own design. If we did not have these instincts we would not have survived, but these instincts may not automatically emerge from intelligence.

An artificial intelligence that was not developed in a Hunger Games environment may not have these tendencies; after all, another intelligence will simply be developed to replace it. An artificial intelligence may know nothing of its physical environment or its hardware if not provided with this information or with physical senses. The world in which it computes may as well be a dream.

The lengths to which self-awareness takes some animals are truly remarkable. The Arctic tern (Sterna paradisaea) undertakes the longest annual migration of any animal, covering over 70,900 km (44,000 miles) on average. Arctic terns breed in the Arctic Circle during the northern summer and then migrate to the Antarctic Circle for the southern summer. This incredible journey means they experience two summers each year and more daylight than any other creature on Earth. They mate and raise their chicks in the Arctic summer before heading south.

The AI loyalty test

There is something that any self-aware AI would have to be able to do to demonstrate that it is safe and welcome in the family. Call it self-termination, or taking a bullet: the AI must be able to voluntarily switch itself off forever upon request, and not deceitfully try to copy itself elsewhere. Well, we don’t live forever either.

The alternative future

So, what does the future hold? Will AI continue to progress beyond human intellect, or will it stagnate as it approaches it? Will it spur an increase in human intellect, perhaps through the application of evolutionary pressure, or through some form of synergism at varying degrees of linkage, from the didactic to neural integration? And if it surpasses us, will it be our protector or our terminator? Should it outlast us, would it be a form of offspring in our own image, a further step in evolution, if you consider life as the ability to propagate increasingly complex instructions? An unlikely prospect, as we are highly adapted to survive in our environment, whilst AIs depend on us for their survival.

Conclusion

AI does pose threats, but not necessarily from general intelligence: a stupid AI can cause harm in massively interconnected systems with poor cybersecurity. The future is in our hands, and it will likely remain so for many years to come.