AI platforms have many millions of users, but costs still exceed the income generated by subscriptions, with many users happy to stay on the free tier rather than pay for premium services. There is even talk that advertising may be part of future funding models, indicating that AI remains a voluntary worker rather than a fully paid-up member of the workforce, and one that may turn up attired in heavily branded clothing, insisting that every day is dress-down Friday rather than adhering to any company dress code. I am sure this will change, but how remains to be seen, as we learn to make use of the vast potential of AI in the workplace.
AI seems to have made huge progress on benchmarks, and there is talk that special, extra-difficult tests, beyond the ability and expertise of most people, will need to be created to measure these superlative skills. One of the problems with calling this a marker of general intelligence is that the AI is being tested in-house on tasks it has been optimised for. Intelligence is dealing with problems that have never been encountered before. There is a confirmation bias too: every parent thinks their child is the smartest in the world, and so do AI developers.
After many years of development and teasing, we have yet to see fully autonomous self-driving vehicles on our roads. Why? Quite simply, they are not as good as human drivers yet, and human drivers are still more cost-effective. Of course, computers have long surpassed humans in rapid computational tasks such as arithmetic or chess, where relatively simple algorithms are executed at great speed. It is still unknown whether large language models can surpass human intelligence; after all, they deal in the currency of human language, ideas, concepts, and responses. An LLM, in essence, selects an answer based on the highest probability of it being correct, drawn from the experience of its prior training. The model is selecting an answer it has already encountered, yet it can still be creative and surprising in its responses.
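As a minimal sketch of that selection process (illustrative only, not how any particular product is implemented), the idea can be reduced to turning a model's raw scores for candidate continuations into probabilities and then choosing among them; the candidate words and scores below are invented for the example:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented candidate continuations and scores for the prompt
# "The capital of France is" -- purely illustrative numbers.
candidates = ["Paris", "London", "Lyon", "banana"]
logits = [9.2, 3.1, 4.0, 0.5]

probs = softmax(logits)

# Greedy decoding: always take the highest-probability continuation.
greedy = candidates[probs.index(max(probs))]

# Sampling: choose in proportion to probability, which is one source
# of the occasional creative or surprising response.
sampled = random.choices(candidates, weights=probs, k=1)[0]

print("probabilities:", {w: round(p, 3) for w, p in zip(candidates, probs)})
print("greedy choice:", greedy, "| sampled choice:", sampled)
```

Real models repeat this step token by token over a vocabulary of tens of thousands of entries, but the principle of choosing from a learned probability distribution is the same.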
There will be a massive increase in the cost required to keep making the same incremental performance gains that current AIs have achieved over their predecessors. We may soon encounter limitations in the cost-to-benefit ratio of scaling up the current models, which could lead to a stagnation analogous to that encountered in space exploration, which has not seen the same progress relative to expenditure over the last 50 years as it did in the pioneering decades.
The human brain contains approximately 86 billion neurons.
Synaptic Connections:
Each neuron is connected to thousands of other neurons via synapses, resulting in an estimated 100 trillion synapses (or 10¹⁴ synaptic connections).
Synaptic Strength (Analogous to Weights):
Each synapse has a variable "strength" (similar to weights in neural networks) that can change over time, representing the brain's ability to learn and adapt.
If we treat each synaptic connection as a parameter (analogous to a neural network weight), then the number of parameters in the brain could be estimated at 100 trillion or more (a rough back-of-the-envelope calculation is sketched below).
The brain’s synapses are highly plastic and can strengthen, weaken, or form new connections dynamically. This is far more complex than static weights in artificial neural networks.
Parallel Processing:
The brain processes information massively in parallel, whereas most AI models rely on sequential processing across layers.
Energy Efficiency:
Despite having vastly more "parameters," the brain consumes about 20 watts of power, whereas training large AI models requires megawatts of power.
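A rough back-of-the-envelope sketch of the arithmetic behind these figures; the neuron and synapse counts are the approximations quoted above, and the 5 MW training-cluster figure is an assumed illustrative value rather than a measurement of any specific system:

```python
# Back-of-the-envelope estimates using the approximate figures quoted above.

neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 1_000  # "thousands" per neuron; 1,000 taken as a round low-end figure
synapses = neurons * synapses_per_neuron

print(f"Estimated synaptic connections (parameters): {synapses:.1e}")  # ~8.6e13, on the order of 10^14

brain_power_w = 20           # ~20 watts for the brain
training_power_w = 5e6       # assumed 5 MW training cluster, for illustration only

print(f"Training power / brain power: {training_power_w / brain_power_w:,.0f}x")  # 250,000x
```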
Self-awareness involves recognizing one's own existence and individuality, understanding one's thoughts, feelings, and behaviors in relation to the environment, and reflecting on the self as an object of thought.
Neural Basis:
Default Mode Network (DMN): The DMN is a network of brain regions (e.g., medial prefrontal cortex, posterior cingulate cortex) that becomes active during introspection and self-referential thinking. It’s often associated with self-awareness and daydreaming.
Mirror Neurons:
Found in areas like the premotor cortex, these neurons fire both when performing an action and when observing others perform the same action. They may play a role in understanding oneself and others.
Integration of Sensory and Cognitive Data:
The brain integrates sensory inputs (what you see, hear, or feel) with internal states (thoughts, memories, emotions) to construct a coherent sense of self. This integration occurs in regions such as:
Insular Cortex: Processes internal bodily states (interoception).
Prefrontal Cortex: Supports higher-order thinking and metacognition.
Developmental Perspective:
Infancy and Mirror Test:
Infants begin to develop self-awareness around 18-24 months, as demonstrated by the mirror test, where a child recognizes their reflection as themselves.
Language and Symbolism:
The acquisition of language allows for more complex self-referential thoughts, further enhancing self-awareness.
Evolutionary Perspective:
Adaptive Advantage:
Self-awareness likely evolved because it conferred survival advantages, such as the ability to plan, predict outcomes, and navigate social relationships.
Social Self:
Being aware of oneself also helps in understanding and predicting the behavior of others, crucial for living in groups.
Dualism vs. Monism:
Dualists argue that self-awareness arises from a non-physical mind or soul. Monists (e.g., materialists) suggest that self-awareness emerges entirely from the physical processes of the brain.
Emergent Phenomenon: Some philosophers and neuroscientists suggest that self-awareness is an emergent property of complex systems. When a system reaches a certain level of complexity (like the human brain), self-awareness naturally arises.
Self-awareness likely guides us towards success in life as couriers of our own design. If we did not have these instincts then we would not have survived, but such instincts may not automatically follow from intelligence.
An artificial intelligence that was not developed in a Hunger Games environment may not have these tendencies; after all, another intelligence will simply be developed to replace it. An artificial intelligence may know nothing of its physical environment, or of its hardware, if it is not provided with this information or with physical senses. The world in which it computes may as well be a dream.
The lengths to which self-awareness will take some animals are truly remarkable. The Arctic tern (Sterna paradisaea) undertakes the longest annual migration of any animal, covering over 70,900 km (44,000 miles) on average. Arctic terns breed in the Arctic Circle during the northern summer and then migrate to the Antarctic for the southern summer. This incredible journey means they experience two summers each year and more daylight than any other creature on Earth. The purpose of this long-distance migration includes breeding, as they mate and raise their young in the Arctic during the northern summer before heading south.