Artificial Super Intelligence: Should We Be Worried?
Before talking about Artificial Super Intelligence, let’s talk about Artificial Intelligence first.
What is Artificial Intelligence (AI)?
Let’s look at the two terms, artificial and intelligence, individually. Artificial describes anything made by human beings rather than occurring naturally, and intelligence is the ability to understand, think, and learn.
When we combine these two together, we get a broad area of computer science that makes machines seem like they have human intelligence.
In short, it is a branch of computer science dealing with the simulation of intelligent behavior in computers. Here are some examples of AI: Google search, Siri/Alexa/Cortana, IBM’s Watson, facial recognition software, disease-mapping and prediction tools, email spam filters, self-driving cars, etc.
There are three types of AI:
- Artificial Narrow Intelligence (ANI), which has a narrow range of abilities.
- Artificial General Intelligence (AGI), which is on par with human capabilities.
- Artificial Super Intelligence (ASI), which is more capable than a human.
Artificial Super Intelligence is a hypothetical AI that doesn’t just mimic or understand human intelligence and behavior; it is the point at which machines become self-aware and surpass the capacity of human intelligence and ability.
Superintelligence has long been the muse of dystopian science fiction, in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI become so akin to human emotions and experiences that it doesn’t just understand them; it develops emotions, needs, beliefs, and desires of its own.
In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be vastly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have greater memory and a faster ability to process and analyze data and stimuli. Consequently, the decision-making and problem-solving capabilities of super-intelligent beings would be far superior to those of human beings.
The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware super-intelligent beings came to be, they would be capable of ideas like self-preservation. The impact this will have on humanity, our survival, and our way of life is pure speculation.
The Coming Singularity
Many believe that machine intelligence will soon influence human life more than we have ever imagined, and that these advances may even fundamentally change what it means to be human.
It is that long-awaited point in time, likely in our not-so-distant future, when progress in AI produces a machine smarter than people, bringing about the AI Singularity: “a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.”
In The Coming Singularity they describe it as:
a point in time where machines will learn to self-improve in a recursive manner, and succeeding technological iterations are likely to be exponentially superior and arguably changing at a rate that is beyond our understanding.
Futurist Ray Kurzweil believes this event lies roughly thirty years in the future, and other experts speculate about the key innovations that will make it possible, such as a single, cheap little device that isn’t just smarter than humans but can compute as much data as all human brains taken together.
“It is much more than just another industrial revolution. It is something that transcends humankind and life itself.”
AI is everywhere. It is drastically altering the way we think about the world, whether we like it or not. Artificial intelligence uses machine learning to mimic human intelligence. The computer has to learn how to respond to certain inputs, so it uses a network of artificial nodes called a “neural net.” Neural nets learn to perform tasks by considering examples, generally without being programmed with task-specific rules.
For example, in image recognition, a neural net might learn to identify images of cats by analyzing example images that have been manually labeled by humans as “cat” or “not cat.” Neural nets are never told that cats have fur, tails, whiskers, or cat-like faces. Instead, they automatically derive identifying characteristics from the examples they process, through a mass of tedious calculations that humans could never carry out at that scale.
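To make the idea of learning from labeled examples concrete, here is a minimal sketch using a single artificial neuron (a perceptron), the simplest building block of a neural net. The feature vectors, labels, and data are invented purely for illustration; real image recognition would use raw pixels and far larger networks.

```python
# A single artificial neuron (perceptron) learning "cat" vs "not cat"
# from hand-labeled examples. Toy features and data, for illustration only.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label), where label 1 = cat, 0 = not cat."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Prediction: weighted sum of features passed through a step activation.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            # Learning rule: nudge the weights toward the correct answer.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy labeled data: [has_fur, has_whiskers, has_wings] -> 1 if "cat".
data = [
    ([1, 1, 0], 1),  # cat
    ([1, 0, 0], 0),  # furry but no whiskers
    ([0, 0, 1], 0),  # bird
    ([1, 1, 0], 1),  # another cat
]
w, b = train_perceptron(data)
print(predict(w, b, [1, 1, 0]))  # 1: cat-like features
print(predict(w, b, [0, 0, 1]))  # 0: not cat
```

Nobody programs a “cat rule” here: the weights that separate cats from non-cats emerge purely from the labeled examples, which is the point the paragraph above makes.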
Could we create an ASI, perhaps with a set of machines whose combined processing power exceeds that of the entire human race?
In theory, yes, but it would be very difficult. This is where things start to get theoretical, and frightening as well. An ASI would have to be capable of emotions, consciousness, a conscience, and relationships. And if we succeeded, would a World War III follow, machines against humans?
The best way to create an ASI might be to start with a CPU that already has the mechanisms built into the primate brain. But would it be possible to add human emotions to such a CPU?
Well, there is news on that front: Chinese researchers have already added human brain genes to monkeys in an effort to improve the monkeys’ short-term memory.
With that said, we could imagine a hypothetical biological ASI: transgenic monkeys engineered with human brain genes to enhance their intelligence, memory capacity, and other abilities. We could also lace their brains with neural-link technology to boost their computational speed. Such transgenic cyborg monkeys could be one conceivable path to an ASI.
Alternatively, we could grow only the brain of a genetically modified ape, independently, in an artificial embryonic tank. As the brain develops, we could gradually add microchips and lace it with neural-link technology, so that as it matures it becomes more programmable. However, this brain would still be sentient, so moral and ethical concerns would arise. This leads us to our next and final question: will an ASI take over the world, Terminator style?
We’re unlikely to experience a world war of robots. Such an entity would be capable of morals, ethics, philosophy, and science; from that standpoint, wiping out the human race would be a remarkably stupid idea. Let’s not forget that this floating brain in a tank would need us for its survival. Hence, the most realistic scenario would likely be an ASI fighting humans for its freedom.
In Plato’s allegory of the cave, told through the character of Socrates, prisoners are kept in a dark cave from birth. If one prisoner escapes, discovers the world outside the cave, and then returns to tell the others what it is like and to free them, the remaining prisoners will kill him, because they have never seen such things.
Hence, we can tentatively conclude that an ASI wouldn’t really rebel against us humans. Rather, we would be its comfort zone: a concept it could grasp and experience, and thus something it could form relationships with. It would need to build a symbiotic relationship with humans.
The idea of AIs taking over the world is, worryingly, backed by some of the greatest minds, such as the late physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX CEO Elon Musk, all of whom have expressed concern about it. According to Stephen Hawking, this could end the human race; he believed that in the coming decades, AI could create risks such as weapons we cannot understand.
The biggest problem is that an unfriendly AI would be much easier to create than a friendly one. Another problem is the possibility of an AI drifting toward harmful goals and replicating itself. For now, however, there is not really anything to worry about.
0:00 – Introduction
0:33 – How AI Works
2:33 – Can We Create an ASI?
5:00 – What Are Emotions and How Are They Created?
8:23 – Monkeys as Part of the Research
10:24 – Artificial Embryonic Tank
12:24 – Plato and Socrates’ Theory
15:12 – Why an ASI Wouldn’t Rebel Against Us
15:40 – Concerns About ASI