The Security Challenges of Emerging Technologies: Commons-Based Options for Canada

Some new technology will be unstoppable. We are not going to be able to stop unmanned drone technology. At best we can try to regulate its usage, which in times of war can be like whistling in the wind.


In a recent issue of Foreign Affairs, Bremmer and Suleyman offer a sober and disturbing assessment (“The AI Power Paradox”) of how quickly new technologies are likely to take over and come to dominate. Artificial intelligence will outpace our regulatory systems, both national and international, because of the accelerative nature of AI, the clumsiness of international diplomacy, and the limits of international verification and enforcement regimes.

They suggest we urgently focus on institution-building considerations:

  • Establish a global scientific body (as was done for climate with the IPCC) to objectively advise governments and international bodies.
  • Manage tensions between the two main state players, the USA and China, using verification and monitoring approaches. Because AI development, much of it open-sourced, is highly decentralized, they suggest establishing a Geotechnology Stability Board, supported by national regulatory and international standard-setting (ISO) bodies.
  • Some level of censorship will be necessary. This will also require a high level of international cooperation: in effect, an anti-proliferation project requiring interventionist cyber tools. “Foolish unilateral disarmament” won’t work, they argue. And unlike chemical and nuclear weapons, AI-driven weaponry has the potential to be quasi-autonomous and will produce high-level asymmetrical threats. AI is connected to our cities’ infrastructure, hospitals, banks, and, obviously, the internet, in addition to serving direct military purposes.

To put this into perspective: there are 206 states in the world, but some 58,000 artificial intelligence companies, 15,000 of them in the US alone.


What is really outdated and showing its age is our assumption that arms racing and competitive security, the striving for superiority, can go on indefinitely, and that a technological edge will remain unchallenged and save us from threats. This idea is too expensive and too dangerous; we cannot afford it or risk it.

It is, however, also the prevailing idea.

But if we were to agree that stability requires allies and adversaries alike to maintain only moderate levels of threat protection, for mutual benefit, why couldn’t we, and why wouldn’t we, dramatically reduce militarized systems, and thereby minimize provocation, cost, risk, and the diversion of precious resources and human capital?

Instead of arming every state to the point of maximum provocation, why not bring everyone lower, deal with breakouts and spoilers together, and find non-war solutions to conflicts as they arise?

There are niches where Canada can help regulate technologies of concern that are implicated in all of this, and some of these areas have been widely discussed. I will start with a far-fetched idea.

Consider deploying AI for conflict resolution.

Would both sides of a conflict ever listen to a neutral AI arbitrator that could offer complex but fair solutions to “impenetrable problems”? What if AI calculated the total casualties, the generational costs, and the endless stalemate of a prolonged war, projecting a horror show? AI might predict a winner too. Would that be so problematic if fair-minded concessions were offered to compensate the loser? The fate of nations decided by robots: not painless, but relatively casualty-free.

I recently submitted queries to ChatGPT to assess whether AI might be used to effect conflict resolution in the Russia-Ukraine war. Most interesting was that because my query did not mention NATO, ChatGPT didn’t refer to it either in its solution to the crisis, even though NATO enlargement is supposed to be the primary “provocation” to Putin and Russia.

The third version of my query was this: What is a fair and reasonable resolution to the Russia-Ukraine war that takes all sides into consideration and responds to the demand that Crimea remain part of Russia, as well as Russian concerns about Ukraine joining NATO and the European Union?

In the response, Ukraine gets back Crimea and the Donbas but gives up some local jurisdiction; it also doesn’t join NATO, remaining neutral instead. That “solution” took about a dozen seconds.


But here’s my point: would authoritarian leaders ever submit to neutral arbitration of a complex issue, even if they agreed that the algorithmic parameters could be both comprehensive and “unbiased”? Or would power, tribal loyalties, and hatred more likely prevail? The problem in a case such as this (a problem the UN faces regularly) is not with the AI but with us human beings, and our reluctance to work together and with others. It is the hesitancy of our national collectives to collaborate to resolve conflicts before they escalate.

In that light, what should Canada do about cyber, AI, and related threats?

Here are some good ideas, partly drawn from the Canadian Internet Governance Forum’s The Future We Want:

  1. Devote resources to block disinformation, electoral interference, and infrastructure disruption.
  2. Promote a commons-based approach. Voluntary codes of conduct need to be backed by legislation and enforceable international treaty-based agreements.
  3. Politically and financially support a high-level advisory body for the UN Secretary-General.
  4. Address technological deficiencies through the New Agenda for Peace, the Global Digital Compact, and the UN Summit of the Future in 2024.
  5. Utilize our expertise in verification, enhanced with new sensory, tracking, reporting, and weapons destruction technologies.

“If global governance of AI is to become possible, the international system must move past traditional conceptions of sovereignty…”, Bremmer and Suleyman argue in Foreign Affairs, and I agree.

Let’s start with nuclear weapons “modernization”. We call it nuclear deterrence, and we rely on it within NATO, but it is too dangerous: it requires elimination, not modernization. In working toward its replacement, as with all the technologies addressed here, we need to focus our energies on a commons-based approach with concomitant ethical and security obligations.

This blog is based on a presentation to the Canadian Pugwash Group-Centre for International Policy Studies conference on “Security Challenges of Emerging Technologies”, held at the University of Ottawa on October 20, 2023.

The CIPS Blog is written only by subject-matter experts.

CIPS blogs are protected by the Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)