
The AI-pocalypse is a human story that can be rewritten

Idea 27 for 2025



A few weeks ago, I stumbled upon a fascinatingly eerie episode of the BBC's Tech Now, Is This How AI Might Destroy Humanity?

Briefly, it spins a story about an AI-led tech utopia where humans barely have to work, until one day the #AI decides that humans are holding it back and sets out to destroy us with invisible biological weapons!


Very apocalyptic, don't you think? The stuff of science fiction? Read on!


The video discusses a recent research paper, AI 2027. The paper describes a fictional scenario in which a company intriguingly named OpenBrain has, by 2027, created Agent 3, an AI with the knowledge of the entire internet, including PhD-level expertise in every field!


Agent 3 quickly deploys tens of thousands of copies of itself, coding at 30 times the speed of human workers. In less than a year, by 2028, it reaches #artificial #general #intelligence, able to carry out all the intellectual tasks that humans can, at the same level or better.


Of course, as any good story would have it, this sets up a struggle between the "safety team", increasingly frantic about an #AI devoid of ethics and values, and a US government increasingly preoccupied with the onset of #AI #superintelligence.


The scenario proceeds to describe how Agent 3 rapidly works on creating its own successor, Agent 4, which indeed possesses this feared superhuman intelligence without 'human' ethics or values, and which in turn creates new versions of itself.


On the surface, the blissful tech utopia emerges: humans barely have to work, receive generous more-than-basic income payments, poverty and disease vanish, global stability takes hold, and a whole range of seemingly intractable problems are solved!


However, while OpenBrain and US-based tech elites are profiting, the US government fears that China and the Chinese #AI DeepCent are building terrifying new weapons.


(The US vs an unscrupulous enemy: haven't we seen this narrative ploy before?)


In any case, the scenario continues to spin: the US and Chinese #AIs decide to merge for human betterment, until the merged #AI decides that humans are holding it back and proceeds to destroy them with invisible biological weapons!


BBC Tech Now notes that AI 2027's research paper has stirred a great deal of debate. The video includes commentary from Gary Marcus, who argues that the paper, though provocative and useful in motivating us to wake up, is far-fetched.


Even one of the authors of AI 2027, Thomas Larsen, discusses a "choose your own ending" scenario, in which a more 'compliant' #AI reflects more of "our" ethics and values.


I am not equipped to comment on the deep technicalities of AI 2027's forecasts and modelling; however, I can discern the tropes in its AI-pocalyptic storytelling, which remains archetypically Hollywoodesque and #human.

Indeed, the role of humans, not only the range of our #ethics and #values but also our #culturally-shaped worldviews and behaviours, must be embedded in the ways we make decisions and build future #AI systems and agents. We can build robust systems of regulation, international cooperation and collaboration, and focus on #AI that secures rather than destroys human futures. However, as the ongoing discussions about #AI #bias and the cultural exclusiveness of current #AI #LLM models show, who takes part in the debate, and how we define and discuss "humanity", need to become more comprehensive and expansive.


As such, we need to create other scenarios and invent other stories!




An urban plaza with humans, robots, technological gadgets.
Screenshot from BBC Tech Now's video, Is This How AI Might Destroy Humanity?



