In our world a single computer glitch or a missing character in a line of code can take down entire countries. Take the Y2K bug: much of the world's software stored years as two digits and had not been updated to handle the rollover to the new millennium in 2000, and people grew hysterical believing every computer in the world would crash. It is easy to see why the movies of the past have so often depicted humanoids of our own creation that fall out of our control and become mindless killing machines.
How realistic is this?
Well, the most advanced AI systems we know of come from Google DeepMind's research and OpenAI's software. With the tools currently at our disposal you can create a realistic deepfake of anyone you know, or beat anyone in the world at games like chess and Go. This sounds pretty tame compared to the tales of the future told by the movie directors of our world. However, even these relatively early-stage tools have already caused serious disruption, with graphic designers, copywriters and artists becoming increasingly nervous about losing their place in the world.
Moore’s Law, strictly an observation that transistor counts double roughly every two years, is often extended to fields such as AI, suggesting that capability will keep doubling at a similar pace, in line with the growth rates of computers in their own growth stage.
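To get a feel for what a two-year doubling period means, here is a toy calculation; the baseline capability index is an arbitrary assumed value, not a real measurement:

```python
# Toy illustration of a Moore's-Law-style doubling every two years.
# The starting value of 1.0 is an arbitrary baseline, not real data.
capability = 1.0

for year in range(0, 11, 2):
    print(f"year {year:2d}: {capability:5.1f}x baseline")
    capability *= 2  # one doubling per two-year period
```

After just ten years of this regime, the index sits at 32x its starting point, which is why a "merely" steady doubling rate can still outrun our intuitions.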
The arrival of GPT-4, reportedly trained on far more data than its predecessor and with much higher data efficiency, suggests the growth of AI is closer to exponential than we think. A positive feedback cycle emerges: more users mean more data for AI programs, a larger dataset leads to better models, better models drive further use, and that in turn leads to even more data collection.
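That flywheel can be sketched in a few lines. Every coefficient below is invented purely to show the compounding shape; none of these numbers come from any real system:

```python
# Hypothetical data flywheel: more users -> more data -> better model -> more users.
# All starting values and coefficients are made up for illustration.
users = 1_000.0
data = 10_000.0
quality = 0.5  # model quality on an arbitrary scale

for step in range(5):
    data += users * 10                     # each user contributes ~10 data points
    quality += 0.05 * (data / 1_000_000)   # more data nudges quality upward
    users *= 1 + quality / 10              # better quality attracts more users
    print(f"step {step}: users={users:,.0f} quality={quality:.3f}")
```

The point is not the specific numbers but the coupling: each variable feeds the next, so user growth accelerates over time instead of staying linear.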
Humans struggle to conceptualize exponential growth; we cannot even imagine the use cases of artificial intelligence six months from now, let alone in 5-10 years or more.
Computers have exceeded the raw computational throughput of the human brain by some measures for years, so the main progress in AI is geared towards imitating human intelligence, consciousness and emotion. The hardware for autonomous killing machines already exists, as demonstrated by Boston Dynamics products such as its robot dog. However, these machines are not dangerous to humans unless another human instructs them to be: a human trigger is required before any action takes place.
Boston Dynamics is currently involved in legal disputes over the weaponising of robotics. This could become a major issue: if control over such devices falls into the wrong hands, such as a hostile state or malicious actor, they would command an army of "shoot first, ask questions later" robots. Even if AI control remains behind a human trigger, the implications could still be catastrophic.
Here is where it gets interesting.
A split will happen as increasingly complex AI models begin acting like a black box, meaning we will no longer be able to explain the actions and outputs they produce. From then on, awareness and self-learning programs become the focus. It does not take much imagination to see our control over the programs in our lives steadily decreasing as they become managed by external consciousnesses, without a human trigger.
At this point we will not know why or how these AI programs do what they do, because each action is an immensely complex function of past experiences and data. Suspiciously, this sounds very similar to our own human experience, which is a scary thought, as these programs will now act like us. Granted, we still haven't found a way to code actual emotion, feeling and consciousness into such programs, but it does not seem too far down the line.
Just unplug it.
Data from the human brain is being used as training data for AI, in an attempt to condense the human experience into a man-made vessel. It seems possible that human instincts such as self-preservation, reproduction and protection of one's tribe could be acquired by such vessels. At that point unplugging is no longer an option: a self-preservation instinct means even an emergency shutdown mechanism could be erased by the AI itself to keep itself alive.
That would be the most terrifying timeline for humanity: AI programs reproducing rapidly to form colonies, with no way to shut them down, where any trace of malicious intent would mean it's game over for all of us. This is the timeline we have seen in many dystopian movies about the future, and it would be the equivalent of WW3, except this time we would be fighting steel and silicon.
Now we know what could happen if it goes wrong, but I'd like to believe humans tend towards good, even if that means we go through a lot of bad before times get better.
As humans, at least by our own standards, we have been a pretty successful species, having been dominant on Earth for the last few tens of thousands of years.
Throughout our development, we have used our intelligence, extraordinary relative to the other organisms on this planet, to build tools, shelter and infrastructure of increasing complexity, using up a significant portion of the Earth's natural resources in the process.
However, as impressive as this sounds, it is precisely our 'limited' intelligence that has kept us from finding ways to live on this planet without rinsing all its resources and aiming nukes at each other.
With the vast, seemingly unlimited intelligence we can gain from AI, the humans who remain will find ways to get a second attempt: to reverse the damage we have done to our planet and to live in harmony with AI, which is here to stay.