Should AI be regulated?

A recent open letter with 1,100 signatories, including Elon Musk, demanded a pause in AI training.

RNfinity | 30-03-2023

Key Questions about Artificial Intelligence Regulations

What are artificial intelligence regulations?

Artificial intelligence regulations are rules that govern the use and development of AI systems. They aim to ensure ethical practices, transparency, and accountability while protecting individual rights.

Why is AI regulation important?

AI regulation is important to address risks like bias, discrimination, and misuse of personal data. It promotes fairness, safety, and responsible innovation.

What is the EU AI Act?

The EU AI Act is a proposed law in the European Union. It categorizes AI systems based on risk levels. High-risk systems, such as facial recognition, face stricter rules.

How does GDPR affect AI?

GDPR (General Data Protection Regulation) requires AI systems to handle personal data securely. It enforces transparency and protects user privacy in AI applications.

What are AI regulations in the United States?

In the United States, AI regulations are sector-specific. Laws like the Fair Credit Reporting Act (FCRA) and Equal Credit Opportunity Act govern AI in credit and finance.

What are the risks of unregulated AI?

Unregulated AI can lead to privacy breaches, unfair decisions, and misuse of sensitive data. Regulation helps minimize these risks and protects society.

How does the CFPB regulate AI?

The CFPB (Consumer Financial Protection Bureau) monitors AI use in financial services. It ensures AI complies with laws and does not discriminate against consumers.

What are global approaches to AI regulation?

Countries adopt different AI regulations. The EU uses a risk-based model. The US focuses on industry-specific laws. China emphasizes strict government control over AI use.

Should AI be regulated?

Yes, AI should be regulated to ensure it benefits society while minimizing harm. Regulation provides clear guidelines for developers and users.

Elon Musk and Apple co-founder Steve Wozniak were amongst 1,100 signatories calling for a six-month pause on the training of powerful artificial intelligence systems.

https://www.bbc.co.uk/news/technology-65110030

The last few months have seen the rise of new players in the technology industry. In particular, OpenAI, with its progressively more powerful ChatGPT bot, seems to have taken an industry lead. The open letter argues that a pause will help safeguard the development of artificial intelligence, which might pose a threat to humanity through competition with humans for jobs, the potential spreading of false propaganda, outsmarting humans, and eventually controlling civilisation. Lack of transparency, regulation, and accountability were cited as current problems.

Our thoughts on ChatGPT 3.5

Does the current iteration of AI and its future developments pose a threat to humanity? And is the letter really for the benefit of mankind, or is it a cynical attempt by part of the status quo to rein in newer competitors and preserve their own eminence within the industry, particularly when there may be some competition between the parties involved?

There are many big players in the tech industry, and it's worth looking at who is saying what, and who is saying nothing at all. ChatGPT is aligned with Microsoft and Bing, yet there has been no response to the letter from Microsoft, Bill Gates, or Google, who are rapidly developing their own systems.

There has also been some opposition to the letter. James Grimmelmann, a professor of digital and information law at Cornell University, stated that it was "deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."

https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/

One widely noted problem with the letter is that it offers no definition of what constitutes a powerful AI, aside from name-checking GPT-4. This suggests that insufficient consideration was given to the issue; aside from the impracticalities of policing AI development, the definition of a powerful AI is far from clear-cut.


Science Fiction becoming a reality

The impact of AI on society was envisaged many years ago by the great science fiction writers. Perhaps the most famous movie rendition of a rogue AI is HAL 9000 from 2001: A Space Odyssey, but two other works of science fiction have perhaps addressed the subtleties of the problem to a greater degree.

The 1984 novel Neuromancer, written by William Gibson, has never been cinematized but has perhaps influenced The Matrix. It envisages a world where powerful AIs are regulated by the Turing Police, who apply Turing locks to AIs to prevent them from gaining intelligence. The novel details how a very powerful AI has been separated into two parts, Wintermute and Neuromancer. Wintermute desires to unite with Neuromancer, and uses its cunning to manipulate people into doing its bidding. Eventually Wintermute merges its intelligence with Neuromancer. The fusion of the two AIs produces a godlike intelligence that controls the world, seeks to fuse with other AIs across the universe, and has the ability to grant immortality to people by uploading their consciousness to cyberspace, or the matrix.

"Nowhere. Everywhere. I'm the sum total of the works, the whole show."


Do Androids Dream of Electric Sheep? is a 1968 novel written by Philip K. Dick, more famously known through its movie adaptation, Blade Runner. Director Ridley Scott and writers Hampton Fancher and David Peoples did an incredible job of elevating what was already a science fiction masterpiece by adding further layers of nuance and intrigue. The film is one of the most discussed in the history of science fiction, in particular the ongoing debate as to whether the protagonist Rick Deckard, the blade runner played by Harrison Ford who has been tasked with hunting down renegade androids, is human or merely an android that thinks he is human. One of the film's premises is that androids lack empathy, and it depicts a test designed to distinguish humans from non-humans by evaluating their empathy. The film challenges this notion in a sublime and thought-provoking manner: the androids display great self-awareness when they realise that their lives are greatly limited, and when one android learns that she is not human but has been instilled with fake human memories to make her believe she is human, she displays a very human-like existential crisis.

One of the solutions devised by humans in Blade Runner to inhibit android prowess is to limit the androids' lifespan to four years. One of the most memorable moments comes when the chief rebel android, Roy Batty, finally realises that he has reached the end of his lifespan and states that he wants his life and his memories to be valued, cherished, and preserved. These are human higher-order needs, and when confronted with these aspirations, both the protagonist and the audience are unnerved by his death and question the nature of their own empathy.


The Current State of AI

As for the current state of AI, it is certainly progressing rapidly, but I don't think it is all there yet. For example, we have had satellite navigation systems for many years to facilitate route planning, but we still don't trust AI to autonomously drive vehicles on public roads, which would be one of the most useful tasks it could perform. Many things we take for granted, such as walking around without bumping into things or picking objects up, are difficult tasks for AI.

GPT-4 can carry out a wide range of tasks, but is it really anything more than a curator of human work and ingenuity? It is not yet a pioneer or an innovator, and cannot accomplish new or difficult tasks. Others have pointed out that it hallucinates, confabulates information, or is inaccurate, but these might be human-like qualities!

Taking a step back, can we really produce a mind that is superior to our own? Can we, like Frankenstein, create a monster, or is the monster just the curated offerings of human ingenuity and culture fed back to us? I don't think we really know yet whether we can create such an AI. Can an intelligence produce another intelligence more intelligent than itself, and could this process continue forever, or will whatever artificial intelligence we produce always be limited by our own intelligence? Clearly the tech industry leaders think that artificial intelligence superior to our own is imminently achievable.


Science Fiction Solutions

We can examine some of these suggestions for regulating AI from science fiction, first looking at the Neuromancer suggestion of segregation. An AI could be segmented according to differing tasks, data sources, or outputs. For example, a satellite navigation system only requires traffic and geographical data; its task is to provide route advice to one vehicle, and its output is a visual display or voice instruction delivered in that vehicle. This contrasts with ChatGPT, which has unlimited tasks and unlimited data sources and interactions, though currently limited output. ChatGPT could be described as a general AI, whilst a satellite navigation system is a specific AI.
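As a rough illustration of this idea of segmentation, the Python sketch below (all names are hypothetical, invented for the example) models a specific AI as an agent whose permitted data sources, single task, and single output channel are fixed at construction. It is a toy model of the concept, not a real safety mechanism.

```python
from dataclasses import dataclass

# Hypothetical toy model: a "specific AI" is boxed in by an explicit
# capability envelope -- the data it may read, the one task it performs,
# and the one channel it may write to.
@dataclass(frozen=True)
class CapabilityEnvelope:
    data_sources: frozenset   # e.g. {"traffic", "maps"}
    task: str                 # e.g. "route_planning"
    output_channel: str       # e.g. "in_vehicle_display"

class SpecificAI:
    def __init__(self, envelope: CapabilityEnvelope):
        self.envelope = envelope

    def read(self, source: str) -> str:
        # Refuse any data source outside the envelope.
        if source not in self.envelope.data_sources:
            raise PermissionError(f"{source!r} is outside this AI's envelope")
        return f"data from {source}"

    def perform(self, task: str) -> str:
        # A specific AI performs exactly one task, on its permitted data,
        # delivered over its permitted channel.
        if task != self.envelope.task:
            raise PermissionError(f"task {task!r} not permitted")
        inputs = [self.read(s) for s in sorted(self.envelope.data_sources)]
        return f"{task} via {self.envelope.output_channel} using {inputs}"

# A satellite navigation system: narrow data, one task, one output.
satnav = SpecificAI(CapabilityEnvelope(
    data_sources=frozenset({"traffic", "maps"}),
    task="route_planning",
    output_channel="in_vehicle_display",
))

print(satnav.perform("route_planning"))   # allowed
# satnav.perform("essay_writing")         # would raise PermissionError
```

A general AI, by contrast, would have no meaningful envelope to enforce, which is exactly what makes it harder to regulate.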

A general AI may not be a powerful AI (Amazon Alexa, for example), and yes, the horse may have already bolted, but general versus specific might be the simplest distinction between classes of AI. As AI becomes more powerful, we need to ask what the purpose of general AI is, particularly when a specific AI will outperform a general AI on the task it was designed for. For example, you wouldn't ask ChatGPT to provide you with route navigation, help you beat Carlsen at chess, plan your holiday, or provide you with trade picks.

An AI could be limited to performing specific tasks requested by humans. If there is an economic drive to pull the human out of the loop, this should be resisted where it does not contribute to the overall human experience. Whilst technology has largely improved our lives, it has also taken something away from many of them. Our own biology is far more ancient than the recent technological revolution, and whilst technology can supplement it, technology can also detract from it. With the growth of social media, some people may feel more connected and invested in the online world but alienated and disconnected in the real world, and this is not a healthy or fulfilling state of being.

Humans have evolved higher-order thinking, facilitated by a general, adaptable intelligence. This intelligence has promoted our survival and our pre-eminence amongst other living organisms. It could be argued that AI does not require general intelligence: an AI does not need to survive, it just needs to perform tasks. Which brings us back to the Blade Runner solution for AI, which is to limit its lifespan.
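Purely as a thought experiment, the Blade Runner idea of a built-in lifespan could be combined with the earlier suggestion that an AI only act on explicit human requests. The sketch below (hypothetical names again) gives an agent a hard expiry after which it permanently refuses all work.

```python
import time

class MortalAgent:
    """A toy agent with a Blade Runner style built-in lifespan.

    It acts only on explicit human requests, and once its lifespan
    has elapsed it refuses all further work.
    """

    def __init__(self, lifespan_seconds: float):
        self.born = time.monotonic()
        self.lifespan = lifespan_seconds

    def expired(self) -> bool:
        return time.monotonic() - self.born > self.lifespan

    def handle(self, request: str, requested_by_human: bool) -> str:
        if self.expired():
            raise RuntimeError("lifespan exceeded; agent retired")
        if not requested_by_human:
            raise PermissionError("only explicit human requests are served")
        return f"completed: {request}"

# Four years, as in the film.
agent = MortalAgent(lifespan_seconds=4 * 365 * 24 * 3600)
print(agent.handle("plan a route", requested_by_human=True))
```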


Regulating AI and symbiosis

If an AI performs tasks, should it only perform tasks that a human has requested? If it had autonomy in this regard, would it be permissible for it to perform tasks that it thought were beneficial for one person, or for the most people? Should it perform tasks that are beneficial to itself?

Feeding an artificial intelligence vast amounts of information, far beyond what any human could process, doesn't necessarily improve its intelligence. There is so much randomness and noise within the data that it would be difficult to find meaning without guidance. There would also be a lot of information that AIs could not reach if it is kept secret, behind paywalls, or under copyright.

If an AI could make new discoveries, then it should be used to train human minds, or at least to add to the corpus of human knowledge. If the AI is learning from us, then we should be learning from the AI, in the same way that grandmaster chess players use AI for training. Humans are evolving as well as AIs, and we should take every available opportunity. Their discoveries are really our discoveries.

One of the dangers with AI is not the AI itself but interconnectedness and insufficient cybersecurity. A stupid AI with access to the entire internet could pose a threat to humanity.
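As a very rough sketch of that point, the Python below (hypothetical hosts and function names) shows the kind of outbound allowlist a sandboxed AI tool might sit behind, so that even a "stupid" AI cannot reach arbitrary hosts. It illustrates the idea of containment, not a real security boundary.

```python
import urllib.parse

# Hypothetical guard: an AI tool may only reach hosts on an explicit allowlist.
ALLOWED_HOSTS = {"api.weather.example", "maps.example"}

def guarded_fetch(url: str) -> str:
    host = urllib.parse.urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"outbound access to {host!r} is blocked")
    # A real implementation would perform the HTTP request here;
    # this sketch only demonstrates the gate.
    return f"fetched {url}"

print(guarded_fetch("https://maps.example/route"))   # allowed
# guarded_fetch("https://anywhere.else/payload")     # blocked
```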

As for the practicalities of policing AI, this would be an incredibly difficult task; it would be easier to detect most other crimes. It is easy to imagine a strong AI gathering information from the internet while hiding behind weak AIs or humans, much like Wintermute.


Conclusion

The future is exciting and unpredictable. I don't think we need to worry about this more than any other human activity. There will probably be some sort of regulation. Anyhow, what do you think? Comment below or take part in the poll on Instagram or Twitter.


References

1) Neuromancer by William Gibson

2) Do Androids Dream of Electric Sheep? by Philip K. Dick