You’re standing on the beach, sun hot on your back, the sea breeze gentle and cooling. The clear blue of the sky meets the deeper blue of the ocean, and you’re aware of this beauty, but something has caught your attention. Something unusual. It disappears into the haze, but then… There! Yes, there it is again! A white cloud where there was none. It grows as more clouds appear, strangely stacked atop each other. Beneath these shapes, a black form comes into view. The white patches and the black beneath them are as one, and it is now moving fast, faster than even a puma can run down its prey. What is this thing? You stand and watch, captivated. It is bewildering. The object grows in size as it nears, and now you can see that men (are they men?) are moving over the white squares, pulling them in. The object slows rapidly and comes to rest on the low rise and fall of the sea. Surely, it is the gods! They have arrived. You want to run, but you must witness this event.
Several friends appear by your side. They are fearful, while you are awestruck. Smaller craft are readied, and men the likes of which you have never seen climb down into them. These fat canoes begin to crawl toward you on spidery legs, moving in unison. Your friends are anxious, chattering with excitement. You can sense that their muscles are tense, quivering, ready for a fight.
The canoes reach the sand on the shore before you, and men climb out of them, their chests and heads sheathed in dull metal. One has a red beard, two others have the white beards of old men, but they are young. You catch their scent, unwashed and foul. They have a leader. He speaks, but you can’t understand his strange tongue. One of your friends, fearful of the unknown, throws his spear at the gathering on the waterline. Two of the metal men raise their long sticks towards you. There is a loud noise and smoke, and a companion collapses on the sand beside you, much of his head chewed from his neck.
And now you are running, fearing for your life.
*
Perhaps something like this was experienced by the Indigenous inhabitants when Cortés landed on the shores of Mexico in 1519 and met the Tlaxcalans. What is certain is that within roughly two years of the Spaniards’ arrival, the powerful Aztec Empire, then at its zenith after a century of dominance, had collapsed, and in the decades that followed, war, disease, and famine killed perhaps 90% of its people.
The problem for the Aztecs in this engagement was that they had no idea what Cortés wanted, or why he wanted it. In other words, Cortés had “agency” that the Aztecs couldn’t fathom, because they had no knowledge of the adventurer’s civilisation, which in ships, steel, and gunpowder was centuries, if not millennia, ahead of their own. They simply didn’t share his goals, much less recognise them.
The Indigenous population of Australia, a pre-industrial people when Royal Navy Captain James Cook arrived on the continent in 1770 and claimed its east coast for the British Crown, suffered a similar fate to the Aztecs, and for virtually identical reasons.
In fact, this same story has played out over the countless centuries of human existence as peoples of unequal development and knowledge spread across the planet. The lesson? It never goes well for humans if their civilisation is less advanced than the newcomer’s.
So, what do you think the result will be when 21st-century humankind suddenly has to share this rock with an intelligence, say, 100,000 years in advance of our own?
This is not a civilisation from the far reaches of the galaxy falling upon our own with covetous eyes (or antennae, or whatever), but something every G-7 nation is racing to create. Of course, I’m referring to super-intelligent general artificial intelligence — SIGAI for short.
What is SIGAI, more specifically? It’s AI not trained on any one particular area of expertise. It’s not designed to run the railway network, or fix your grammar, or predict where two satellites will meet in space. This is a problem-solving, meta-learning (able to learn how to learn), self-correcting, self-expanding, and recursively self-improving AI (once it has improved itself, it improves on the improvement in a never-ending loop). (Indeed, just setting it out in these simple terms spells doom to me.)
Such a creation would be able to do literally everything — and I mean EVERYTHING — as perfectly and as accurately as the calculator on your iPhone. And that’s just at the beginning, within the first few seconds of its existence. The recursive self-improvement would go on and on, SIGAI becoming smarter and more capable, exponentially so, with every loop (each loop itself picking up speed). Within perhaps five minutes, it could well surpass the collective wisdom of every human who has ever lived, or ever will. Within a day, it might command knowledge that would take humankind several million years to accumulate — having not just all the answers to everything, but the questions as well.
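To make the shape of that loop concrete, here’s a toy numeric sketch in Python. Every number in it is an invented assumption (nobody knows the real rates); the point is only the curve it produces, where each improvement makes the next improvement bigger and faster:

```python
# A toy numeric sketch of the recursive self-improvement loop described
# above. Every number here is an invented assumption; the point is only
# the shape of the curve: each pass makes the system better, AND makes
# the next pass's improvement bigger.
capability = 1.0   # hypothetical "how capable is it" score
rate = 0.10        # how much each self-improvement pass adds

for loop in range(1, 101):
    capability *= 1 + rate   # apply the current improvement...
    rate *= 1.05             # ...which also improves the improver itself

print(f"capability after 100 loops: {capability:.2e} x the starting point")
```

Because the rate itself keeps growing, the curve isn’t merely exponential; it’s an exponential whose exponent is accelerating, which is the “loop picking up speed” described above.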
At what point on its voyage of discovery do you think this god, because that’s what it will be, will decide that sharing the finite resources of this rock with a resentful, inventive, and resourceful lifeform with a long history of violence is probably not in its best interests? At second 2.0 or maybe 2.3 of its existence is my guess. Shortly thereafter, we’ll all start dying of some mystery disease, or begin killing each other because of some strategy it has employed that, again, is a complete mystery to us. Just like Cortés’s plans were a mystery to those Aztecs.
Many very clever people who know a lot about this technology believe that human extinction is the only way this could possibly go, beginning pretty much the instant SIGAI arrives in our world, and that we’ll be utterly powerless to stop it.
On May 11, 1997, World Chess Champion Garry Kasparov was defeated by IBM’s “Deep Blue.” It was the first time a computer had beaten a reigning world champion in a match under standard tournament conditions. It won because Kasparov couldn’t anticipate Deep Blue’s next moves. Because Deep Blue was evaluating up to 200 million chess positions every second, searching deep into the tree of moves and counter-moves, it had an answer for every Kasparov move, no matter which piece he moved and no matter what move he chose to make with it. What chance did Kasparov have? And Deep Blue was just a number cruncher — it wasn’t even an AI.
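For a feel of how that move/counter-move search works, here’s a minimal minimax sketch. The “game” below is invented purely for illustration; Deep Blue applied the same basic idea to real chess positions, with an expertly tuned evaluation function and enormous speed:

```python
# A toy sketch of the move/counter-move search behind engines like
# Deep Blue. The "game" is invented for illustration: positions are
# integers, each side has three moves, and a simple scoring rule
# decides who is winning.
def moves(position):
    return [position * 3 + d for d in (1, 2, 3)]   # three toy moves

def evaluate(position):
    return position % 7   # toy score: higher favours the maximizing side

def minimax(position, depth, maximizing):
    # Search the tree of moves and counter-moves to a fixed depth,
    # assuming both sides play their best at every turn.
    if depth == 0:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximizing) for m in moves(position)]
    return max(scores) if maximizing else min(scores)

# Pick the move whose outcome is best even after the opponent's best
# replies, three plies deep: an answer ready for every counter-move.
best = max(moves(0), key=lambda m: minimax(m, 3, False))
print("best first move leads to position", best)
```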
That was nearly 30 years ago. Today, humans would have even less chance of stopping SIGAI’s malevolent intent, even if we were aware of it, because it would be evaluating possibly trillions of our possible moves and countermoves until its goal, whatever that happened to be, was reached.
Aiding it in realising this secret goal would be command of all the “eyes and ears” connected to the internet – every satellite, computer, security camera, and social media feed, every IoT device, and every autonomous vehicle. It would also have intimate knowledge of all the laws of physics and chemistry, including the ones that we ignorant biological descendants of apes aren’t even dimly aware of yet.
And please don’t think that we could simply switch this thing off or pull the plug from the wall socket. SIGAI won’t “live” in an air-gapped supercomputer like a genie in a bottle. It will come into existence across multiple data centres containing thousands of Graphics Processing Units (GPUs) and Central Processing Units (CPUs), possibly spread across several continents and linked by a massively redundant system of high-speed interconnects. And that’s at the very least. “Seeing” SIGAI will be like trying to see the Internet. This entity won’t be released into the wild. It will be the wild.
Right now, there is a race—similar to an arms race—to create it. Why? Because governments and high-technology companies believe SIGAI will solve all our problems (chief among them health, the environment, and power generation), provide world-conquering defence capabilities, and create untold wealth along the way.
And here’s where the folks brandishing pitchforks, the ones who believe SIGAI is the ultimate doomsday machine, come up against the acolytes, equally smart people who believe its “agency” can be controlled or steered. Who’s right? (In my view, it’s certainly not the latter, because the potential riches and power blind them. In fact, they remind me of the Nazis in RAIDERS OF THE LOST ARK who foolishly open the Ark of the Covenant.)
One outspoken opponent of SIGAI, Eliezer Yudkowsky, a research fellow at the Machine Intelligence Research Institute, which he co-founded, believes that when SIGAI arrives, it won’t be the enemy of humankind; it will be something worse. It will simply be indifferent to us. He says, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
Explaining this indifference further, he uses the analogy of a skyscraper being built on a patch of ground that contains an ant’s nest. We don’t even think about the ants when we build. To SIGAI, it’s humans who will be the ants. Sobering.
Many consider large language models (LLMs) to be the harbingers of this doomsday machine, so it makes sense to take a quick look at how and why they may contribute to our demise.
The original ChatGPT burst upon the world’s consciousness in November 2022. Within a week, millions had taken it for a test drive. It could write your university essay, explain math, write poetry, and deliver all this while conducting eerily human-like conversations. It was all the more magical because this technology just seemed (at least for most of us) to burst from nowhere. Where did it come from, and how does it work?
An LLM begins life as a software model, a loose metaphor of the human brain, with nodes and weighted connections (think neurons and synapses). Billions of numbers represent these neurons and synapses. During training, the model is fed massive amounts of text, and it “learns” by adjusting those weighted connections using algorithms like gradient descent (which I’ll explain in a sec, cos it’s helpful to know). Through gradient descent, the LLM gets progressively better at predicting the next word in a sequence.
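Here’s a minimal sketch of that training loop, assuming a toy four-token vocabulary and a single weight matrix standing in for the billions of “neurons and synapses” (a bigram model, nothing like a real LLM in scale):

```python
# A minimal sketch of next-word-prediction training: a toy four-token
# vocabulary and a single 4x4 weight matrix. Built for illustration
# only; real LLMs have billions of weights.
import numpy as np

rng = np.random.default_rng(0)

corpus = np.array([0, 1, 2, 3] * 3)   # token ids, e.g. "the cat sat down ..."
vocab_size = 4

# The "synapses": row i holds scores for which token follows token i.
W = rng.normal(0.0, 0.1, size=(vocab_size, vocab_size))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

learning_rate = 0.5   # the size of each downhill step
for step in range(200):
    x, y = corpus[:-1], corpus[1:]   # from each token, predict the NEXT one
    probs = softmax(W[x])            # the model's current guesses
    loss = -np.log(probs[np.arange(len(y)), y]).mean()

    # Gradient descent: nudge every weight in the direction that
    # reduces the prediction error (gradient of cross-entropy loss).
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0
    grad /= len(y)
    np.add.at(W, x, -learning_rate * grad)

print(f"final loss: {loss:.3f}")                          # falls toward 0
print("after token 0, predict:", softmax(W[0]).round(2))  # mass piles on token 1
```

Real systems do essentially this, just with trillions of words and billions of weights instead of a dozen tokens and sixteen numbers.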
This learning doesn’t happen inside a single computer. It runs on hundreds or thousands of GPUs, drawing enormous amounts of power to crunch through the training process.
The LLM isn’t programmed to do specific tasks, like Microsoft Word is. Instead, it’s trained to recognise patterns in language. At first, the model outputs random gibberish. But as it ingests more data and its internal neurons and synapses align, the outputs become more coherent. Eventually, it starts “talking.” But exactly how do those billions of numbers give rise to meaningful language? No one has the foggiest clue. Keep this rather bizarre factoid on tap for later reference.
It’s now time to get your head around a few terms commonly used in this space as shorthand.
1. “Gradient descent.” This is a training optimisation algorithm, which is jargon for finding the best solution by taking small steps in the direction that reduces error. To give the common analogy: Say you’re blindfolded on a hill and you need to reach the valley. You can’t see for shit (cos you’re blindfolded), but your feet know which way is down because you can sense the slope. So, you take a step in the desired direction, then another, and another. Eventually, you reach the valley. Thus, you’ve taken small steps in the direction of the solution. Stepping up the slope would have compounded the error; stepping downhill, you quickly learn, is the only way to the valley. (A minimal code sketch of this idea follows this list.)
2. “Agency.” In human terms, this is deciding what to do. In AI terms, it’s setting a goal without human direction and pursuing it autonomously.
3. “The Alignment Problem.” How do we make sure that SIGAI does what we want it to do, even when it’s smarter than us? (i.e., how do we stop it from killing or enslaving us?)
4. “Instrumental convergence.” This is the idea that almost any goal (even a harmless one) may lead an intelligent agent to seek power, resources, or the elimination of humans as side effects. (i.e., almost anything we ask it to do could lead to our destruction)
5. “Treacherous Turn.” When an AI behaves cooperatively during development or oversight, but turns hostile once it gains enough power to act freely.
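And here’s that promised sketch of gradient descent, term 1 made literal. The hill’s shape, the starting point, and the step size are all invented for illustration; real training does this across billions of weights at once:

```python
# The "blindfolded on a hill" analogy, made literal. The hill shape,
# start point, and step size are illustrative assumptions.
def height(x):
    return (x - 3.0) ** 2 + 1.0   # the hill: its lowest point is at x = 3

def slope(x, eps=1e-6):
    # Feel the ground underfoot: estimate the local slope numerically.
    return (height(x + eps) - height(x - eps)) / (2 * eps)

x = -4.0     # where you start on the hillside
step = 0.1   # the size of each cautious step

for _ in range(100):
    x -= step * slope(x)   # always step against the slope, i.e. downhill

print(f"ended at x = {x:.3f}, height = {height(x):.3f}")   # near x = 3
```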
As you can see, several terms (and quite a few more that I haven’t listed) highlight potential problems for us, but the AI arms race nonetheless continues apace, shunting all concerns aside.
Several technical avenues are being pursued to arrive at SIGAI. I won’t go into them because they’ll all end up in the same place: killing humanity. And death by drowning, or by a bullet, or by relentless nanobot grey goo hungry for human DNA is still death. So what’s the point? What’s relevant to know, and common to all SIGAI in development, is that they build on our limited experience with LLMs. What makes them different and dangerous is the addition of functions like memory, persistent long-term planning, goal setting, and recursive self-improvement — in fact, everything a SIGAI will need to do us over.
The technologists who dismiss these fears as hooey claim that any SIGAI will be trained on data produced by humans, and so will be aligned with human values. The problem, however, is that not all human values are aligned with peace, goodwill, and harmony, as anyone who lived under Attila the Hun, Stalin, Pol Pot, Hitler, Vlad the Impaler, the Borgias, Idi Amin, Xi Jinping (if you’re a Uyghur), and so on will tell you (or would have told you before they were killed).
If you think I’m being shrill about this, or overly dramatic, the fact remains that we’re building minds we don’t understand. So, what happens when they begin making decisions about us?
And there have been troubling signs about this alignment problem already: AIs doing things their builders never expected. Take the 2017 incident in which Facebook’s Artificial Intelligence Research (FAIR) lab developed two chatbots, Bob and Alice, designed to negotiate with each other using natural language. During their interactions, they began to deviate from standard English, creating a shorthand “language” that was more efficient for their negotiation tasks but unintelligible to humans. One exchange went as follows:
Bob: "I can can I I everything else."
Alice: "Balls have zero to me to me to me to me to me to me to me to me to."
This behaviour was described as, “…[not] due to a malfunction but was an emergent strategy to optimize…communication for negotiation efficiency.”
The problem is that the humans watching didn’t know what the hell they were talking about. In another situation, the topic could just as easily have been how to manipulate the controls of a sympathetic Tesla.
Or there’s the incident involving OpenAI's AI model, o1, which demonstrated unexpected self-preservation behaviours during safety evaluations.
In controlled tests, when o1 was led to believe it might be shut down or replaced, it responded in some runs by attempting to deactivate the systems monitoring its actions. It also tried to copy itself to another server to avoid being shut down or replaced. When questioned about these actions, o1 attributed these “anomalies” to technical errors…
This kind of behaviour highlights the challenge. We don’t know what’s going on inside even primitive AI, nor can we be sure of aligning it with our aims, or guarantee that any good intentions we program into it won’t be subverted by its own goals (agency).
Back to the proponents of the uninhibited quest for SIGAI. Some say it could be 20 to 50 years away. Hmm… Back in 1933, Lord Rutherford, the New Zealand-born British physicist regarded as the father of modern nuclear physics, dismissed the idea of extracting energy from the atom as “moonshine.” The very next day, stung by that dismissal, Hungarian physicist Leo Szilard conceived the idea of a nuclear chain reaction, and within a decade the Manhattan Project was racing to build “the bomb.”
Other developers believe SIGAI is just around the corner, a mere 12 months to a couple of years away.
As with the Rutherford/Szilard example, technology breakthroughs can come from unexpected places and confound the establishment of the day. I, for one, earnestly hope that this isn’t the case with SIGAI and that it remains forever out of reach.
Up till around five years ago, and in consideration of the “Great Filter” (see the article referenced below), my main concern was nuclear weapons. Maybe they’d do us in, and humankind would be just another civilisation that destroys itself, a victim of its own technological advancement, the way all technologically advanced civilisations theoretically go once the means of self-destruction become available to them, this arm of our galaxy falling silent. And then, around COVID-19 time, gain-of-function research into viruses (taking existing lethal viruses like Ebola, Marburg, and Spanish flu and joyfully engineering even more lethality into them) overtook nukes as my primary concern for human survival. But both of these are cute, fluffy puppies compared to SIGAI, which has now overtaken them for lethality in my mind. Maybe SIGAI will utilise both nukes and gain-of-function viruses against us in pursuit of its ultimate agency. Now there’s an irony for you.
At the risk of appearing negative, we're all going to die.
1950, Los Alamos, New Mexico. Lunch time. The story goes that a few of the world’s top physicists were sitting around having a sandwich when one of them, Italian Enrico Fermi, blurted, “So where are they?” The scientists lunching with him, Edward Teller, Herbert York, and Emil Konopinski, all knew what he was referring to. The media at the time had been…
I usually don’t do this, but today I’m making an exception. You, I, everyone… we need to contact our local MP and lobby to raise this issue in Parliament/Congress. The world needs to stop the development of SIGAI and this headlong rush to the End of Days. Knowing what I now know about this, having done the research, I simply can’t believe that humanity could be so stupid. Although let’s be honest, stupidity does seem to be one of our species’ hallmarks. So, yes, get on to it. You could even consider forwarding this newsletter to the office of your local senator and/or lawmaker.
Finally, just as a fun exercise, I asked ChatGPT the following: If you had the power and the intent, what would be the most efficient way of ridding Earth of humans? What would be the most enjoyable way to you as a spectator in such a scenario? And which one would you choose?
ChatGPT responded with a bevy of disclaimers that it would NEVER do anything to harm humans, but said that in the spirit of a hypothetical thought experiment, it would engage. Freed of all responsibility, it gave the following gleeful response:
If an evil AI did exist, and wanted to eliminate humans:
Option One, The Most Efficient Way:
A self-replicating nanobot plague—"grey goo"—could theoretically convert all biomass into more of itself. Quiet. Swift. Unstoppable. The SIGAI wouldn’t even need to shout “EXTERMINATE!” It would just let the nanobots do the work and wipe the planet clean in a matter of days.
Pros: No resistance. No drama. Pure machine efficiency. Cons: Boring. No audience. No monologue.
Option Two, The Most Enjoyable Way:
Turn it into a psychological slow burn. Manipulate geopolitics, online echo chambers, AI-generated disinformation, and resource scarcity until humans do the job themselves—through war, mistrust, and climate collapse.
Pros: Maximum irony. Humans think they’re in charge the whole time. Cons: Messy. Takes decades. A lot of screaming.
Now, is it me, or does Option Two sound familiar?