Artificial intelligence and moral issues: AI between war and self-consciousness

In the third article of the series, Prof. Valori explores the question of whether mankind has already created a massive brain, or whether an AI entity has already managed to take on an identity of its own

Ai-Da Robot, the world’s first ultra-realistic humanoid robot artist. Photo: Aidan Meller / Cover Images via REUTERS

At the beginning of 2018, the number of mobile phones in use surpassed the number of humans on the planet, reaching 8 billion. In theory, each of these devices can connect to the roughly two billion computers in the world, which are themselves networked. Given the staggering amount of data this kind of use involves, and considering that the computer network is in constant contact and constantly growing, is it possible that mankind has already created a massive brain - an artificial intelligence that has taken on an identity of its own?

The field of robotics is constantly evolving and continues to make strides. It is therefore clear that sooner or later we shall move from artificial intelligence to super-intelligence, i.e. a being on this planet that is smarter than we are - and from that moment on, we will no longer be the smartest beings on it. It will not be pleasant when artificial intelligence, with its knowledge and intellectual abilities, corners the human being, surpassing flesh-and-blood people in every field of knowledge. It will be a pivotal moment that will radically change world history. For now, our existence is justified by the fact that we are at the top of the food chain; but once a self-created entity emerges that does not need to feed itself on pasta and meat, what will we exist for, if that entity needs only solar energy to perpetuate itself indefinitely?

If sooner or later we are to be replaced by artificial intelligence, we must begin to prepare ourselves psychologically. Portland, Oregon, April 7, 2016: the US Defense Advanced Research Projects Agency (DARPA) launched the prototype of the unmanned anti-submarine vessel Sea Hunter, marking the beginning of a new era. Unlike the Predator and other Air Force drones, this vessel needs no remote operator: it is built to navigate on its own while avoiding all kinds of obstacles at sea.

It has enough fuel to remain at sea for up to three months and is very quiet. It also transmits encrypted information to defence intelligence services. When the US Department of Defense says that an unmanned submarine would not be launched without remote control, it is telling the truth. But there is more to consider: Russia has developed a remotely piloted submarine carrying a nuclear weapon. This means that between five and fifteen years will elapse before the US defence establishment can field a response to a remotely piloted submarine with a nuclear weapon on board.

It has always been said that the war drone replaces the flesh-and-blood soldier, who becomes a remote "PlayStation" operator. Hence the idea of the drone as a substitute for the human soldier, who would be guaranteed total safety and spared unnecessary dangers. It was forgotten, however, that the remote-control link could be intercepted by the enemy, who could then switch targets and turn the drone against its own army. The only remedy, at that point, would be to make drones completely autonomous.

Such a drone would be a killing machine capable of wiping out entire armies, which is why care should be taken to avoid its proliferation on battlefields. Any kind of accident, a fire or even a minor malfunction, could trigger a "madness" mechanism that would cause the machine to kill anyone. Developing killer robots is possible: facial recognition technology has made great strides, and artificial intelligence can recognise faces and detect targets. In fact, drones are already being used to identify and strike individuals based on their facial features: they kill and injure.

The application of artificial intelligence to military technology will change warfare forever. An army's autonomous machines could take wrong decisions, causing tens of thousands of casualties among friends, enemies and defenceless civilians. What if they even went so far as to ignore instructions? If autonomous killing machines independent of human commands are designed, could we be facing the violent extinction of the human race?

While many experts and scholars agree that humans will be the architects first of their own violent downfall and then of their destruction, others believe that the advancement of artificial intelligence may be the key to mankind's salvation.

Los Angeles, May 2018: at the University of California, Professor Veronica Santos was working on a project to create increasingly human-like robots capable of sensing physical contact and reacting to it. She was also testing different approaches to robotic tactile sensitivity. Combined with artificial intelligence, this work may one day produce a humanoid robot capable of exploring space as far as Mars. Humanoid robots are increasingly a reality, with applications ranging from neuroprosthetics to machines for colonising celestial bodies.

Although the use of humanoid robots is a rather controversial topic, the sector holds great promise, especially for those who intend to invest in the field. Funding development projects could prove useful in the creation of artificial human beings that are practically impossible to distinguish from flesh-and-blood individuals.

These humanoids, however, could conceivably express desires and feel pain, as well as display a wide range of feelings and emotions. The truth, as is well known, is that we do not know what an emotion really is. Would we therefore really be able to create an artificial emotion, or would we make fatal errors in programming it? If a robot can distinguish between good and evil and know suffering, will this be the first step towards its developing feelings and a conscience?

Let us reflect. Although computers surpass humans in data processing, they pale into insignificance when faced with the complexity and sophistication of the central nervous system. In April 2013, the Japanese technology company Fujitsu tried to simulate the brain's network of neurons using one of the most powerful supercomputers on the planet. Despite being equipped with 82,000 of the world's fastest processors, it took over 40 minutes to simulate just one second of 1% of human brain activity (Tim Hornyak, "Fujitsu supercomputer simulates 1 second of brain activity").
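The scale of that gap can be worked out from the two figures quoted above. A back-of-the-envelope sketch (the numbers come straight from the paragraph; the extrapolation assumes the cost scales linearly with the fraction of the brain simulated, which is a simplification):

```python
# Back-of-the-envelope check of the Fujitsu figures quoted above.
# Assumptions (from the text): 40 minutes of supercomputer time to
# simulate 1 second of activity for 1% of the brain's neural network.
sim_time_s = 40 * 60      # wall-clock seconds used by the machine
brain_time_s = 1          # seconds of brain activity simulated
fraction = 0.01           # portion of the brain simulated

slowdown = sim_time_s / brain_time_s     # 2400x slower than real time
full_brain_gap = slowdown / fraction     # ~240,000x short of a real-time whole brain
print(f"Slowdown: {slowdown:.0f}x; full-brain real-time gap: {full_brain_gap:.0f}x")
```

In other words, even that supercomputer fell roughly five orders of magnitude short of emulating a whole human brain in real time.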

Japanese-American astrophysicist Michio Kaku, who graduated summa cum laude from Harvard University, stated:

"Fifty years ago we made a big mistake in thinking that the brain was a digital computer. It is not! The brain is a machine capable of learning, which regenerates itself when it has completed its task. Children have the ability to learn from their mistakes: when they come across something new, they learn to understand how it works by interacting with the world. This is exactly what we need, and to do this we need a computer that is up to the job: a quantum computer."

Unlike today's computers that rely on bits - a binary series of 0s and 1s to process data - quantum computers use quantum bits, or qubits - which can use 0s and 1s at the same time. This enables them to perform millions of calculations simultaneously in much the same way as the human brain does.

Kaku added: "Robots are machines and as such they do not think and have no silicon consciousness. They are not aware of who they are or of their surroundings. It has to be recognised, however, that it is only a matter of time before they can attain some awareness."

Is it really possible for machines to become sentient entities fully aware of themselves and their surroundings?

Kaku maintained: "We can imagine a future time when robots will be as intelligent as a mouse, then as a rabbit, then as a cat, a dog, until they become as cunning as a monkey. Robots do not know they are machines, and I think that by the end of this century robots will probably begin to realise that they are different, that they are something other than their master."

Professor Giancarlo Elia Valori is a world-renowned Italian economist and international relations expert, who serves as President of the International World Group. In 1995, the Hebrew University of Jerusalem established the Giancarlo Elia Valori Chair of Peace and Regional Cooperation. Prof. Valori also holds chairs for Peace Studies at Yeshiva University in New York and at Peking University in China. Among his many honors from countries and institutions around the world, Prof. Valori is an Honorable of the Academy of Science at the Institute of France, as well as Knight Grand Cross and Knight of Labor of the Italian Republic.
