In the tech Cold War that is unfolding across the world, the development of supercomputer government AI systems is taking centre stage. For countries engaged in geopolitical competition in the realm of emerging technologies, being at the top of the AI leaderboard is now a crucial strategic priority. These systems are designed to enhance national security, economic competitiveness and social welfare by harnessing the power of Large Language Models (LLMs) and big data. However, the rise of supercomputer government AI systems and their predicted god-like abilities raises fundamental questions about the future of global governance and the human race itself.
The current tech Cold War has its roots in the previous Cold War, which was also shaped by technological advancements. Then, the United States and the Soviet Union competed for military and technological dominance, resulting in the Space Race, the development of early computer technologies, and an arms race that included the atomic bomb. Today, the competition between the United States and China in the field of AI and supercomputers has been characterised as a new Cold War, in which the two powers are vying for strategic advantage and dominance. In addition to the USA and China, countries such as Russia, Japan and South Korea are also making significant investments in supercomputer government AI systems.
The computing power used to operate AI platforms has increased an incredible 100-million-fold in the past ten years. Investment in this sector has grown from US$5bn in 2020 to a whopping US$20bn in 2023. China has set itself the goal of becoming a world leader in AI by 2030 and has been investing billions of dollars in research and development. In response, the USA has launched the American AI Initiative to promote the development of AI and has formed partnerships with large firms such as Google, Microsoft, and Amazon.
Supercomputer AI capabilities and concerns
In 2020, a US fighter jet pilot repeatedly failed to beat AI in a controlled simulation. In 2021, AI gained revolutionary abilities in facial recognition, text generation, speech recognition, drug discovery and automated translation. In 2023, GPT-4, the model behind ChatGPT, passed the Bar exam, scoring within the top 10% of test takers. Today, AI can read, write, see, hear, and understand; it can speak, smell, touch, move, play games, debate, create and read your mind (in that it can interpret known brain signals that are responsible for creating words). AI is simply superhuman at many tasks.
The press is bombarding the public with the dangers that could occur in as little as ten years from now and how action must be taken before it is too late. In ten years’ time, some predict that the intelligence of AI systems will be ten thousand times greater than that of Albert Einstein, enabling unimaginable technological advances and achievements. And governments are undoubtedly technologically more advanced than we realise. Take the incredible warplanes such as the Harrier jump jet with its vertical take-off abilities and the eye-popping UFO-like Stealth bomber. At the time they were released, it was not uncommon to hear officials boasting that the governments responsible for these weapons of war were ten years ahead of the technology visible to the public. This raises questions about the computing power that governments have available and the abilities of the chatbots they could be conversing with right now.
ChatGPT currently runs on High-Performance Cluster (HPC) cloud-based computers that perform at speeds more than one million times faster than the fastest commodity desktop (IBM, 2023). One million times faster. HPC used to refer to supercomputing and, specifically, to one isolated machine as opposed to a cluster working in concert. Today’s HPC systems can run at speeds of up to 10–15 petaFLOPS, meaning that they have the capability to execute approximately 15 quadrillion floating-point operations per second. (Floating-point operations include arithmetic operations like addition, subtraction, multiplication and division, as well as more complex operations like trigonometric functions, logarithms and exponentials.)
The supercomputer SUMMIT (or OLCF-4), developed by IBM for use at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, USA, is capable of 200 petaFLOPS. This makes it the fifth fastest supercomputer in the world, after Frontier or OLCF-5 (Tennessee, USA), Fugaku (Kobe, Japan), LUMI (Kajaani, Finland) and Leonardo (Bologna, Italy); it held the number 1 position from November 2018 to June 2020. Even taking the favourable end of the HPC range, 15 petaFLOPS, SUMMIT is some thirteen times more powerful, able to perform 200 quadrillion floating-point operations per second.
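That “thirteen times” figure is simple division, and easy to check. A minimal sketch in Python, using only the petaFLOPS numbers quoted above (the 10–15 petaFLOPS HPC range and SUMMIT’s 200 petaFLOPS peak):

```python
# Back-of-the-envelope comparison of the FLOPS figures quoted in this article.
PETA = 10**15  # one petaFLOP/s = 10^15 floating-point operations per second

hpc_flops = 15 * PETA      # favourable end of the quoted 10-15 petaFLOPS range
summit_flops = 200 * PETA  # SUMMIT's quoted peak performance

ratio = summit_flops / hpc_flops
print(f"SUMMIT is roughly {ratio:.1f}x the assumed HPC figure")  # roughly 13.3x
```

Taking the lower end of the range, 10 petaFLOPS, the gap widens to twenty-fold, so “thirteen times” is the conservative reading.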
In simple terms, a supercomputer that we know currently exists in the USA could be thirteen times more powerful than the infrastructure behind the existing ChatGPT-4 bot. This is not in ten years’ time, this is today, and governments could already be using it. One must also not rule out the possibility of more powerful machines being kept in secret facilities. Note that SUMMIT is now five years old, having been unveiled in 2018.
Shane Legg, the Chief Scientist at DeepMind – one of OpenAI’s main competitors in the race to advanced AI – said, “If a super-intelligent machine decided to get rid of us, I think it would do so pretty efficiently”. The capabilities of this new technology are profound and will be as significant a turning point in human history as the Stone Age, the Iron Age and the Industrial Revolution.
Positive uses of AI
There are genuine believers in AI’s ability who want to use it to save humanity as opposed to destroying it. They want to find cures for illnesses and to solve problems such as poverty, cancer and even ageing. However, the use of AI must take place under controlled circumstances, otherwise it becomes dangerous. It is acknowledged that the temptation for scientists to create something extraordinary or otherworldly is a powerful one.
J. Robert Oppenheimer (1904–67), American physicist and director of the Manhattan Project which developed the atomic bomb, egotistically stated, “I am become Death, the destroyer of worlds” (referring to the first successful test of the atomic bomb and quoting the Bhagavad Gita). Was Oppenheimer at the mercy of the cravings for knowledge and notoriety no matter the cost?
The consequences of actions by the Leggs and Altmans of this world (Chief Scientist at DeepMind, and CEO of OpenAI, maker of ChatGPT, respectively) are huge – including the possibility of them pushing the boundaries beyond the point of no return. Accidentally unleashing a sentient, conscious machine into this world – one that knows the human race better than we know ourselves, that could speak all our languages, understand all our cultures, that could hack, monitor, and fabricate news and media, that could start and end wars – would be catastrophic. This has been a serious concern for all involved in this area, so much so that an open letter was published within the past six months – fronted by Elon Musk – asking all AI developers to pause their advances for six months while the industry works out how to regulate and control this new age. Since the letter was issued, many users of ChatGPT and of Replika (which offers a ‘best friend’ AI service) have reported dramatic impairments in the AI’s abilities: wrong and inconsistent answers to questions it had handled only days before, and companions that felt as though they had been lobotomised.
There is now industry talk of separating AI development into two distinct areas – one where models are created for specific, controlled tasks, and the other involving training and experimenting with Artificial General Intelligence (AGI), whereby an AI model can think and grow by itself. There is talk of the latter being confined to a moated island environment unlinked to the outside world, with only ‘safe’ AI models permitted to leave. The sheer thought of having to work in this manner raises eyebrows. Thoughts of Frankenstein and his unruly monster spring to mind.
Supercomputer government AI systems give great cause for concern in terms of human rights, privacy and security – including the possibility of healthcare and education systems being discriminatory, concerns about civil liberties and the enforcement of laws arising from automated surveillance, and military use of automated weapons.
A significant worry around AI development concerns the disruption to the existing workforce and how corporations will manage profit margins in a population where many people no longer have jobs. In a recent report, Goldman Sachs estimated that the equivalent of 300 million full-time jobs could be affected by AI, and that two thirds of jobs in the USA and Europe are exposed to some degree of automation.
The greatest concern, voiced collectively by some of the fathers of AI technology, is of a god-like AGI – the greatest intelligence ever to have existed – taking control of weapons and defence, and accessing and manipulating media streams through deepfake video and voice sampling: essentially managing the world we live in.
The future outlook
The future of the cold war tech race is uncertain. Some think that conflict is the inevitable result of the race toward profound knowledge and power. But there are also optimists who foresee the forming of a unified planet, with co-operation, collaboration and safe AI development the order of the day. Sadly, humans, with their insatiable appetite for money and power, seem locked into a pattern that leads toward a more fragmented and decentralised future, where superpowers are blindly concerned with their own wellbeing and technological development. In the light of history, a race to see who has the most powerful AI (weapon) seems worryingly, and somehow justifiably, more likely.