AI Takeoff, Global Conflict, and Positioning for the Post-AGI Game Board
A quick note that leads me to maybe hope for aliens
Private Information
States may have private information about their own technological base and about future technological pathways (which inform both their strategic picture and their research strategy). If underlying technologies are changing faster, the amount and value of private information will probably increase.
While not conclusive, an increase in private information seems concerning. It could precipitate war, e.g. from a state that believes it has a technological advantage but cannot deploy it in small-scale ways without giving adversaries an opportunity to learn and respond; or from a party worried that another state is on course to develop an insurmountable lead in military technology research (even if this worry is misplaced).
Everyone at this point has seen Oppenheimer (or, hopefully, read some history) and is familiar with the dynamics at play during the World Wars and the invention of novel machines of mass consequence (or weapons of mass destruction, as we now call them).
Since WW2, there has been an effectively “necessary”1 show of power that set the ground rules for an ongoing prisoner’s dilemma for the rest of humanity, in which we try at all costs to escalate only to a certain level and avoid WW3.
This has been successful perhaps because we think of humanity as a collective unit in this regard (aka we don’t want to end the world), or perhaps because those who set the precedent (the US and Allies) have remained in a position of global power, and those outside it have not had the scaled ability to overthrow that position.
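To make that prisoner’s-dilemma framing concrete, here’s a minimal sketch of the one-shot escalation game. Every payoff number and action label below is an illustrative assumption of mine, not anything rigorous:

```python
# Toy one-shot escalation game, framed as a prisoner's dilemma.
# All payoff numbers are illustrative assumptions.
ACTIONS = ("restrain", "escalate")

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),   # stable peace
    ("restrain", "escalate"): (0, 5),   # the escalator gains an edge
    ("escalate", "restrain"): (5, 0),
    ("escalate", "escalate"): (1, 1),   # mutual escalation, WW3 risk
}

def best_response(opponent_action: str) -> str:
    """Row player's best reply to a fixed opponent action."""
    return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])

for opp in ACTIONS:
    print(f"vs {opp}: best response is {best_response(opp)}")
# Prints "escalate" both times: defection dominates the one-shot game,
# so the restraint we actually observe has to come from repeated play,
# i.e. the shadow of WW3.
```

The point of the toy: in the one-shot game escalation dominates, so the post-WW2 equilibrium of limited escalation only holds because the game is repeated and everyone knows it.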
Thus far, we’ve been mostly lucky that the most powerful nations in the world have also been leaders in AI, because it means a longer delay before a possible AI takeoff leads to early-onset wars. AI matters not just economically but geopolitically, which is perhaps why many have a view on sovereign model development in some form, and why it is game-theoretically dominant to at least have some sort of frontier AI lab funded by the government and/or those with ties to the government.
From this game-theoretic perspective, becoming an AI superpower is perhaps the only way for most countries to leapfrog themselves out of global obscurity.2 (Un)fortunately, talent and CapEx are barriers to this type of unexpected leapfrogging for the time being, pushing frontier model development into the hands of a few countries, with others throwing dollars at large-scale infrastructure investment in hopes of not being left behind, and perhaps a longer-tail hope that this technology diffuses enough to materially increase quality of life.
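To spell out why funding a frontier lab can look “game-theoretically dominant,” here’s a toy expected-value check. The payoffs, labels, and probabilities are made up for illustration:

```python
# Toy decision table for the "fund a frontier lab" claim above.
# Payoffs and probabilities are illustrative assumptions.
payoff = {
    ("fund", "transformative"): 10,      # seat at the post-AGI table
    ("fund", "not_transformative"): -1,  # sunk CapEx
    ("skip", "transformative"): -10,     # locked out, global obscurity
    ("skip", "not_transformative"): 0,
}

def expected_value(choice: str, p: float) -> float:
    """Expected payoff of a choice given P(transformative AI) = p."""
    return (p * payoff[(choice, "transformative")]
            + (1 - p) * payoff[(choice, "not_transformative")])

for p in (0.05, 0.25, 0.75):
    print(f"p={p}: fund={expected_value('fund', p):+.2f}, "
          f"skip={expected_value('skip', p):+.2f}")
# Under these numbers, funding wins whenever p > 1/21 ≈ 0.05.
```

Under these invented numbers, funding wins in expectation whenever the chance of transformative AI exceeds roughly 1 in 21, which is why even a skeptical government might feel the bet is forced.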
That said, if I were an emerging nation trying to leapfrog with less talent, infrastructure, and money than the US and Allies, I would concentrate all research on very orthogonal, non-consensus AGI approaches, or perhaps on very singular, narrow use-cases of AI meant to create tighter and nearer-term instability.3 This isn’t a novel idea, and it’s likely the talent and intel of the US and Allies would prove insurmountable (or it would put a target on your back), but hey, it’s a shot.
If the lead, and thus the power dynamics, for frontier models were to change somehow, the incumbent military superpowers would face pressure to early-strike the rising AI superpowers as they moved up the progress curve (assuming this information was knowable or detectable).
Alas, this is currently not the case. However, it is worth thinking about as it relates to national security4 and to research moving private, as well as a point in the post referenced above: perceived or false takeoffs in progress could lead to war, and a potential cure is disseminating research and progress a bit more openly (at some level of delay) while framing AI as positive-sum for the world.
Ironically, AI itself might offer a solution to the very problem it creates. Large-scale simulations powered by AI agents could provide insight into the complex, multi-faceted consequences of global conflicts. These simulations could model intricate geopolitical dynamics, economic repercussions, and long-term societal impacts far beyond what human strategists can comprehend. By presenting decision-makers with vivid, data-driven scenarios of potential outcomes, AI could cut through the fog of emotion and nationalist fervor that often clouds judgment in times of crisis.
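As a gesture at what such a simulation might look like, here’s a heavily simplified agent-based sketch. The agents, thresholds, and tension dynamics are all invented for illustration; a real system would presumably plug in learned policies and far richer state:

```python
# Heavily simplified sketch of an agent-based crisis simulation.
# Agents, thresholds, and dynamics are invented for illustration.
import random

class State:
    def __init__(self, name: str, aggression: float):
        self.name = name
        self.aggression = aggression  # propensity to escalate, in [0, 1]
        self.tension = 0.0            # perceived threat level

    def act(self) -> str:
        # Escalate once perceived tension plus disposition crosses 0.5.
        return "escalate" if self.tension + self.aggression > 0.5 else "hold"

def simulate(states: list[State], steps: int, seed: int = 0) -> list[float]:
    """Run the loop; return the peak tension observed at each step."""
    random.seed(seed)
    history = []
    for _ in range(steps):
        moves = [s.act() for s in states]
        escalations = sum(m == "escalate" for m in moves)
        for s in states:
            # Every escalation anywhere raises everyone's tension a bit;
            # quiet periods let it decay slightly.
            s.tension = max(0.0, s.tension + 0.2 * escalations - 0.05
                            + random.uniform(-0.02, 0.02))
        history.append(max(s.tension for s in states))
    return history

peaks = simulate([State("A", aggression=0.4), State("B", aggression=0.6)],
                 steps=50)
print(f"peak tension over the run: {max(peaks):.2f}")
```

Even this toy exhibits the dynamic worth modeling: one state’s move raises everyone’s perceived tension, which triggers further escalation, and the spiral is visible to a decision-maker before anyone has to live through it.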
The very technology that might destabilize global power structures, or exacerbate their tensions, could also be the key to preventing catastrophic miscalculations: AI as a tool for maintaining global stability.
Naturally, the last thing I think about here is collectivism in humanity, and how we might shift to viewing ASI as a goal for humanity rather than for an individual nation or collection of nations. This is a good theory, but unfortunately probably not how the world works, unless we were to be invaded by aliens around the same time as, or before, AI takeoff.
One could argue the last time the world had a collectivist moment was early COVID5, and naturally the US cornered the market for vaccines (which, admittedly, it produced and spent the R&D for), then had trouble getting anyone to use the vaccines internationally. Perhaps before that was…the moon landing?
There are a lot of open questions, and future answers we’ll gather over the coming years as AI progresses and as governments begin to position themselves on the game board of a post-AGI world. In the long term we will be forced to understand how this plays out; in the mid term we will begin to see whether we must rely on prisoner’s dilemmas in AI takeoff; and if this all accelerates, in the short term we should…hope for aliens?
1. I put this in quotes because I don’t want to have to deal with some mob on either side trying to cancel me for an opinion on WW2 and the usage of nukes.
2. I get that framing most countries in the world as part of “global obscurity” is maybe overly American of me and not politically correct.
3. Candidly, the most obvious area is in bioterrorism, though this also is perhaps the hardest to do undetected.
4. Shout out Situational Awareness
5. Ironic that this then basically fractured countries and the world.