The pursuit of AI supremacy will impact international relations

The emergence of AI has been altering the balance of power between global actors and their alliances in a number of ways.

As Artificial Intelligence continues to evolve, it is having a profound impact on a range of sectors seemingly unrelated to it, such as international relations. Some countries are pursuing AI more or less within the confines of international law and generally accepted principles of doing business, while others are choosing to do whatever is necessary to achieve AI supremacy, even outside those boundaries. In the process, AI is slowly altering the balance of power between global actors and among alliances in a number of ways.

Just as proficiency in the cyber arena levels the playing field, giving countries such as Iran and North Korea the ability to go head to head with China, Russia, and the US in cyberspace, the pursuit of AI supremacy is giving some smaller, otherwise less competitive nations an edge in international business. It enhances their ability to secure preferential trade and investment arrangements with other countries, raises their global profile, and enables them to progress into previously unimagined areas of international trade, investment, and diplomacy.

How governments deploy AI can have serious consequences for international relations, particularly when a given government has unusual capabilities in the AI arena. Access to and use of autonomous weapons, for example, can potentially change the global balance of power. A region's balance of power may be challenged as some states move to leverage AI technology to reverse historic military disadvantages vis-à-vis their neighbours. Governments that choose to embrace AI responsibly and ensure that humans remain the ultimate arbiters of life-and-death decisions may be admired, but they may also be putting themselves at a strategic disadvantage by doing so. Pursuing AI on the battlefield with ethics in the mix may prove to be a luxury few countries can afford, since not all countries will exercise such restraint.

There is, in addition, great danger that AI-powered military systems and military-led decision-making will eventually undermine existing approaches to conflict containment and de-escalation. The institutions and treaties designed to address 20th-century foreign policy, arms control, and non-proliferation were never intended to apply to a world order incorporating AI. Perhaps the greatest threat from AI weapons will ultimately come from non-state actors. The cost of deploying AI weapons (via drones, for example) is already low enough to fall within the reach of even unsophisticated terrorists, which implies an increasing degree of symmetry between national militaries and non-state actors in some aspects of warfare going forward.

In the future, "data warfare" may include virtual battlefields between forms of AI seeking to disable one another, automatically infecting command and control systems with disinformation and/or malicious code. It may include sophisticated media forgeries developed by AI, designed to induce opposing populations into relying on falsehoods or acting contrary to their interests with maximum efficiency and impact. The taste of this already delivered via fake news and disinformation campaigns in the US and Europe is nothing compared to what such actions may look like in the future: damage to the integrity of democratic discourse and to the reputation of state institutions and their representatives will be easier to inflict and more difficult to repair.

This puts arms control and non-proliferation strategies in a whole new light and implies a need to align foreign policies among allies in order to deter such actions by enemy states and nefarious non-state actors. There is too little understanding inside governments and among diplomats, however, about how these technologies function and what options are available to counter them. Governments have begun planning and investing for the AI future, but none has yet developed and articulated actionable red lines for how AI technologies may be used under the norms of existing international law or in the context of human rights.

The emergence of AI technologies poses serious challenges to the notion of strengthening democratic institutions and protecting social equality, since AI-enhanced surveillance practices can constrain civil rights and liberties, and socio-cultural conflict can be exacerbated through the perpetuation of social bias and discrimination rooted in AI algorithms. It cannot be the task of foreign policy to design and implement checks and balances on the surveillance practices of security agencies, but it is within the domain of international diplomacy to communicate these policies to the rest of the world.

The advent of the Internet era demonstrated the tension that exists between security considerations and the freedoms implied by connectivity (such as freedom of speech, movement, and assembly). AI has already aggravated this tension as enhanced government surveillance and censorship capabilities reach new levels of intrusiveness, especially in the name of national security. The challenge for foreign policy in the AI era will be to promote an enlightened agenda as such surveillance capabilities and hyper-competitiveness continue to rise. Foreign policies that leverage the existing tools of diplomacy while encouraging a responsible, thoughtful, and systemic adaptation of AI will, in the end, not only be more widely accepted and less resisted, but stand a greater chance of achieving national objectives.

Daniel Wagner is CEO of Country Risk Solutions. Keith Furst is Managing Director of Data Derivatives. They are the co-authors of the forthcoming book AI Supremacy, which will be published in September.

Editor's Note: A version of this article first appeared here in The Sunday Guardian on August 11, 2018.