Artificial Intelligence, Drone Swarming, and Escalation Risks in Future Warfare

James Johnson

The rapid proliferation of a new generation of artificial intelligence (AI)-augmented and AI-enabled autonomous weapon systems (AWS), most notably drones used in swarming tactics, could have a significant impact on deterrence, nuclear security, escalation and strategic stability in future warfare. James Johnson argues that emerging iterations of AWS used with AI systems will presage a powerful interplay of increased range, accuracy, mass, coordination, intelligence and speed in a future conflict. In turn, the risk of escalatory use-them-or-lose-them situations between nuclear-armed military powers, and the attendant dangers posed by the use of unreliable, unverified and unsafe AWS, will increase, with potentially catastrophic strategic outcomes.

The proliferation of a broad range of artificial intelligence (AI)-augmented autonomous weapon systems (AWS) could have significant strategic implications for nuclear security and escalation in future warfare. Several observers anticipate that sophisticated AI-augmented AWS will soon be deployed for a range of intelligence, surveillance and reconnaissance (ISR) and strike missions.[1] Experts generally agree that AI machine-learning systems are an essential ingredient in enabling fully autonomous systems.[2] Even if AWS are used only for conventional operations, their proliferation could nonetheless have destabilising implications and increase the risk of inadvertent nuclear escalation.[3]

For example, AI-augmented drone swarms might be used in offensive sorties against the ground-based air defences that nuclear-armed states deploy to protect their strategic assets (for example, launch facilities and their attendant command, control and early-warning systems), exerting pressure on a weaker nuclear-armed state to respond with nuclear weapons in a use-them-or-lose-them situation. Recent advances in AI and autonomy have significantly increased the perceived operational value that great military powers attach to the development of a range of AWS,[4] potentially making the delegation of lethal authority to AWS an increasingly irresistible but destabilising prospect.[5]
That is, in the contest to defend or capture the technological upper hand in cutting-edge warfighting assets, strategic rivals (even traditionally conservative militaries) may eschew the potential risks of deploying unreliable, unverified and unsafe AWS. Today, therefore, the main risks to stability and escalation are the technical limitations of the current iteration of AI machine-learning software: brittleness, lack of explainability, the unpredictability of machine learning, vulnerability to subversion or 'data poisoning', and the fallibility of AI systems to biases.[6] To be sure, immature deployments of these nascent systems in a nuclear context would have severe consequences.[7]

[Image: Autonomously operating unmanned rigid-hull inflatable boats close in on a contact of interest during an Office of Naval Research (ONR)-sponsored demonstration of swarmboat technology, September 2016. Courtesy of US Navy.]

From what is known today about emerging technology,[8] new iterations of AI-augmented advanced conventional capabilities (for example, cyber weapons, precision munitions and hypersonic weapons) will compound the risk of military escalation, especially inadvertent and accidental escalation.[9] The co-mingling and entangling of nuclear and non-nuclear capabilities[10] and the increasing speed of warfare may undermine strategic stability.[11]
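The 'data poisoning' vulnerability noted above (see note 6) can be demonstrated in miniature. The following is a deliberately simplified, hypothetical sketch – the scenario, classifier and numbers are invented for illustration and are not drawn from the article or any fielded system – showing how an adversary who injects mislabelled samples into a training set can shift a simple classifier's decision boundary and degrade its accuracy on clean data:

    # Hypothetical sketch of training-set "poisoning": an adversary injects
    # mislabelled samples, dragging a nearest-centroid classifier's decision
    # boundary and degrading its accuracy on clean test data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two synthetic 2-D feature clusters, e.g. "decoy" (0) vs "target" (1).
    X_train = np.vstack([rng.normal(-1.0, 1.0, (500, 2)),
                         rng.normal(+1.0, 1.0, (500, 2))])
    y_train = np.array([0] * 500 + [1] * 500)

    def fit_centroids(X, y):
        # One mean vector per class; prediction uses the nearest centroid.
        return {c: X[y == c].mean(axis=0) for c in (0, 1)}

    def predict(cents, X):
        d0 = np.linalg.norm(X - cents[0], axis=1)
        d1 = np.linalg.norm(X - cents[1], axis=1)
        return (d1 < d0).astype(int)

    # Poison: 300 points deep inside class-1 territory, mislabelled as class 0.
    X_poison = rng.normal(+4.0, 0.5, (300, 2))
    X_bad = np.vstack([X_train, X_poison])
    y_bad = np.concatenate([y_train, np.zeros(300, dtype=int)])

    # Fresh, clean test set for evaluating both models.
    X_test = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
                        rng.normal(+1.0, 1.0, (200, 2))])
    y_test = np.array([0] * 200 + [1] * 200)

    for name, (X, y) in {"clean": (X_train, y_train),
                         "poisoned": (X_bad, y_bad)}.items():
        acc = (predict(fit_centroids(X, y), X_test) == y_test).mean()
        print(f"{name:8s} training -> test accuracy {acc:.2f}")

Poisoning attacks on operational deep-learning targeting systems are far subtler than this wholesale corruption, which is why note 6 characterises them as potentially undetectable.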
While the potential escalation risks posed by emerging technology have been widely discussed in the academic literature, the potential of military AI to compound these risks and spark inadvertent escalation has thus far been only lightly researched.[12] This article addresses how and why AI-enhanced drone swarming might affect strategic stability between nuclear-armed great powers.

AI Force-Multiplied Drone Swarms

Conceptually speaking, autonomous systems will incorporate AI technologies such as visual perception, speech and facial recognition, and decision-making tools to execute a range of core air interdiction, amphibious ground assault, long-range strike and maritime operations independent of human intervention and supervision. Currently, only a few weapon systems select and engage their targets without human intervention.[13] Loitering attack munitions (LAMs) – also known as 'loitering munitions' or 'suicide drones' – pursue targets (such as enemy radar, ships or tanks) based on pre-programmed targeting criteria, launching an attack when their sensors detect an enemy's air-defence radar.[14] Compared with cruise missiles (designed to fulfil a similar function), LAMs use AI technology to shoot down incoming projectiles faster than a human operator could, and can remain in flight (or loiter) for far longer periods than human-operated munitions.
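The engagement logic attributed to LAMs above – loiter, match sensor returns against pre-programmed criteria, then commit without further human input – can be caricatured as a simple decision rule. The sketch below is purely illustrative; the field names and thresholds are invented and bear no relation to the Harop or any real munition:

    # Minimal, hypothetical caricature of a loitering munition's engage logic:
    # loiter until a sensor return matches pre-programmed criteria, then commit.
    # All names and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class SensorReturn:
        emitter_band_ghz: float   # radar emission frequency seen by the seeker
        signal_strength: float    # 0..1 normalised confidence of the lock
        fuel_remaining: float     # 0..1 fraction of loiter endurance left

    def decide(contact: SensorReturn) -> str:
        """Return 'ATTACK', 'LOITER' or 'RETURN' for one decision cycle."""
        # Pre-programmed criterion: an S-band air-defence radar (2-4 GHz)
        # with a confident signal lock.
        is_target = 2.0 <= contact.emitter_band_ghz <= 4.0
        confident = contact.signal_strength >= 0.8
        if contact.fuel_remaining < 0.1:
            return "RETURN"   # abort before fuel exhaustion
        if is_target and confident:
            return "ATTACK"   # note: no human confirmation in this loop
        return "LOITER"

    print(decide(SensorReturn(3.1, 0.9, 0.5)))   # -> ATTACK
    print(decide(SensorReturn(9.5, 0.9, 0.5)))   # -> LOITER (wrong band)
    print(decide(SensorReturn(3.1, 0.9, 0.05)))  # -> RETURN

The point of the caricature is that everything after launch is mechanical: once the criteria are fixed, there is no step at which a human can reassess context – the property that, as discussed below, complicates attribution and escalation control.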
In contrast to existing human-operated automated systems (for example, manned systems and remote-controlled drones),[15] AWS such as LAMs could complicate states' ability to anticipate and reliably attribute autonomous attacks.[16] A low-cost, lone-wolf UAV would be unlikely, for example, to pose a significant threat to a US F-35 stealth fighter, but hundreds of AI machine-learning autonomous drones in a swarming sortie might evade and overwhelm an adversary's sophisticated defence capabilities, even in heavily defended regions such as China's eastern coastal regions.[17] Moreover, stealth variants of these systems,[18] along with miniaturised electromagnetic jammers and cyber weapons, may be used to interfere with or subvert an adversary's targeting sensors and communications systems, undermining its multi-layered air defences in preparation for drone-swarm and long-range stealth-bomber offensive attacks.[19] In 2011, for example, cockpit systems at Creech US Air Force Base – operating MQ-1 and MQ-9 unmanned drones in the Middle East – were infected with malware, exposing the vulnerability of US systems to cyber attack.[20] This threat might, however, be countered by the integration of future iterations of AI technology into stealth fighters such as the F-35.[21] Manned F-35 fighters developed by the US will soon be able to leverage AI to control small drone swarms in close proximity to the aircraft, performing sensing, reconnaissance and targeting functions, including countermeasures against swarm attacks.[22] In the future, the extended endurance of UAVs and unmanned support platforms could further increase the ability of drone swarms to survive these kinds of countermeasures.
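The 'overwhelm' argument is, at bottom, saturation arithmetic: a defence with a finite number of engagement opportunities fails gracefully against one target and catastrophically against hundreds. A minimal Monte Carlo sketch follows, with all parameters invented for illustration (it ignores jamming, attrition en route and imperfect shot allocation):

    # Back-of-envelope saturation sketch (all numbers invented): a defence with
    # a finite number of engagement opportunities faces a swarm. Each shot
    # independently destroys one drone with probability p_kill.
    import numpy as np

    rng = np.random.default_rng(1)

    def expected_leakers(n_drones, n_shots, p_kill, trials=100_000):
        # Perfect shot allocation assumed: kills follow a binomial draw,
        # capped implicitly by the number of attacking drones.
        kills = rng.binomial(n_shots, p_kill, size=trials)
        return np.maximum(n_drones - kills, 0).mean()

    # A lone drone vs. a layered defence: almost always stopped.
    print(expected_leakers(n_drones=1, n_shots=4, p_kill=0.7))

    # The same defence vs. progressively larger swarms: leakers grow rapidly
    # once the swarm exceeds the defence's engagement capacity.
    for n in (10, 50, 100, 200):
        print(n, "drones ->", round(expected_leakers(n, n_shots=40, p_kill=0.7), 1), "leakers")

Under these toy numbers, a lone drone almost never gets through, while a 100-drone swarm leaks dozens of survivors past the same defence – the asymmetry that makes swarming attractive even against sophisticated, layered defences.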
Taking Humans out of the Loop

As military commanders are concerned with tightly controlling the rungs on the 'escalation ladder',[23] they should, in theory, be against delegating too much decision-making authority to machines – especially where nuclear weapons are involved. Competitive pressures between great military powers, and the fear that others will gain the upper hand in the development and deployment of military AI (and the AWS that AI could empower), might overwhelm these concerns, however.

A caveat is worth highlighting. The hypothetical uses of drone swarming described below do not assume that militaries will necessarily be able to implement these AWS in the near term. Certainly, disagreements exist among AI researchers and analysts about the significant operational challenges states face in deploying AI-enabled AWS – in particular, issues relating to machine-to-machine communications, swarm coordination in complex and contested environments, and battery technology, to name a few.[24]
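To make the swarm-coordination challenge concrete: most swarm concepts descend from decentralised flocking rules of the kind introduced by Craig Reynolds ('boids'), in which each agent steers using only locally sensed neighbours – cohesion, separation and alignment – with no central controller. A toy sketch, with arbitrary parameters and no relation to any military system:

    # Toy decentralised flocking step (Reynolds' "boids" rules), illustrating
    # swarm coordination without a central controller: each drone reacts only
    # to neighbours it can locally sense. Parameters are arbitrary.
    import numpy as np

    def flock_step(pos, vel, r_sense=5.0, r_sep=1.0,
                   w_coh=0.01, w_sep=0.1, w_ali=0.05, dt=1.0):
        """One update of positions/velocities for an (n, 2) swarm."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = (d < r_sense) & (d > 0)       # locally sensed neighbours
            if not nbrs.any():
                continue
            # Cohesion: steer toward the neighbours' centre of mass.
            coh = pos[nbrs].mean(axis=0) - pos[i]
            # Separation: steer away from anyone too close (collision avoidance).
            close = (d < r_sep) & (d > 0)
            sep = (pos[i] - pos[close]).sum(axis=0) if close.any() else 0.0
            # Alignment: match the neighbours' average heading.
            ali = vel[nbrs].mean(axis=0) - vel[i]
            new_vel[i] += w_coh * coh + w_sep * sep + w_ali * ali
        return pos + new_vel * dt, new_vel

    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 10, (30, 2))
    vel = rng.normal(0, 0.1, (30, 2))
    for _ in range(100):
        pos, vel = flock_step(pos, vel)
    print("mean spread about centroid:",
          np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())

Emergent coordination of this kind is what makes swarms cheap to scale, but it also underlines the engineering worries cited above: degrade the local sensing or the machine-to-machine links (for example, by jamming) and the collective behaviour can fail in ways that are hard to predict or test in advance.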
Several prominent researchers have opined that, notwithstanding the remaining technical challenges,[25] as well as unresolved questions of legal and ethical feasibility, it is likely that operational AWS could be seen within a matter of years. The moral and ethical considerations related to the use of autonomous weapons and autonomous targeting are complex and highly contested; humans creating autonomous technology to attack other humans is inherently problematic.[26] According to former US Deputy Secretary of Defense Robert Work, the US 'will not delegate lethal authority to a machine to make a decision' in the use of military force.[27] Work added, however, that such self-restraint could be tested by a strategic competitor (especially China or Russia) 'who is more willing to delegate authority to machines than we are and, as that competition unfolds, we'll have to make decisions on how we can best compete'.[28]

Removing human judgement from the crisis decision-making process, however, and pre-delegating authority to autonomous systems may severely challenge the safety, resilience and credibility of nuclear weapons in future warfare.[29] History is replete with examples of near nuclear misses, demonstrating the importance of human judgement in mitigating the risks of miscalculation and misperception of adversaries' intentions, redlines and willingness to use force during crises.[30]
But despite these precedents, the risks associated with unpredictable AI-augmented autonomous systems operating in dynamic, complex and possibly a priori unknown environments remain underappreciated by global defence communities.[31] Eschewing these risks, China and Russia plan to incorporate AI into UAVs and unmanned underwater vehicles (UUVs) for swarming missions infused with AI machine-learning technology.[32] Chinese strategists have reportedly researched data-link technologies for 'bee-swarm' UAVs – emphasising network architecture, navigation and anti-jamming capabilities – in particular to target US aircraft carriers.[33]

Drone Swarming and New Strategic Challenges

Drones used in swarms are well suited to conducting preemptive attacks and nuclear-ISR missions against an adversary's nuclear and non-nuclear mobile missile launchers and nuclear-powered ballistic missile submarines (SSBNs), and their attendant enabling facilities (for example, C3I and early-warning systems, antennas, sensors and air intakes).[34] Some observers have posited that autonomous systems, such as the US Department of Defense's Sea Hunter, a prototype autonomous surface vehicle, may render the underwater domain transparent, thereby eroding the second-strike deterrence utility of stealth SSBNs. The technical [...]

Notes

1. See Robert J Bunker, Terrorist and Insurgent Unmanned Aerial Vehicles: Use, Potentials, and Military Applications (Carlisle, PA: Strategic Studies Institute and US Army War College Press, 2015); Zachary Kallenborn and Philipp C Bleek, 'Swarming Destruction: Drone Swarms and Chemical, Biological, Radiological, and Nuclear Weapons', Nonproliferation Review (Vol. 25, No. 5–6, 2018), pp. 523–43.
2. See Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Third Edition (Harlow: Pearson Education, 2014), p. 56.
3. This article is adapted from sections of a forthcoming article by the author in Strategic Studies Quarterly entitled 'Artificial Intelligence: A Threat to Strategic Stability'.
4. 'Autonomy' in the context of military applications can be defined as the condition or quality of being self-governing in order to achieve an assigned task, based on a system's own situational awareness (integrated sensing, perceiving and analysing), planning and decision-making. See US Department of Defense, Directive 3000.09, 'Autonomy in Weapon Systems', 21 November 2012, <http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/DODd/300009p.pdf>, accessed 1 December 2019. An autonomous weapon system (or lethal autonomous weapon system, LAWS) is a weapon system that, once activated, can select and engage targets without further intervention by a human operator. A distinction is often made between automatic, automated and autonomous systems, while others use these terms interchangeably; for this article, it is simply necessary to acknowledge that the debate exists.
5. To date, no state has formally declared an intention to build entirely autonomous weapon systems. Currently, only the US, the UK and Israel have used armed drones operationally.
6. In this context, 'brittleness' refers to the inability of AI to contextualise in a fast-moving and complex environment. AI machine-learning systems rely on high-quality datasets to train their algorithms; thus, injecting so-called 'poisoned' data into those training sets could lead these systems to perform in undesired and potentially undetectable ways.
7. Will Knight and Karen Hao, 'Never Mind Killer Robots – Here are Six Real AI Dangers to Watch out for in 2019', MIT Technology Review, 7 January 2019.
8. Military-use AI, and the advanced capabilities it enables, can be conceptualised as a natural manifestation (rather than the cause or origin) of an established trend in emerging technology towards co-mingling and increasing the speed of warfare, which could lead states to adopt destabilising launch postures. See Hans M Kristensen, Matthew McKinzie and Theodore A Postol, 'How US Nuclear Force Modernization is Undermining Strategic Stability: The Burst-Height Compensating Super-Fuze', Bulletin of the Atomic Scientists, 1 March 2017, <https://thebulletin.org/2017/03/how-us-nuclear-force-modernization-is-undermining-strategic-stability-the-burst-height-compensating-super-fuze/>, accessed 5 December 2019.
9. 'Inadvertent escalation' refers to a situation where one state takes an action that it does not believe the other side will (or should) regard as escalatory, but escalation occurs unintentionally nonetheless. See Barry R Posen, Inadvertent Escalation: Conventional War and Nuclear Risks (Ithaca, NY: Cornell University Press, 1991); Forrest E Morgan et al., Dangerous Thresholds: Managing Escalation in the 21st Century (Santa Monica, CA: RAND Corporation, 2008); Lawrence Freedman, The Evolution of Nuclear Strategy, Third Edition (London: Palgrave Macmillan, 2003), Chap. 14.
10. 'Entanglement' in this context refers to dual-use delivery systems that can be armed with nuclear and non-nuclear warheads; the commingling of nuclear and non-nuclear forces and their support structures; and non-nuclear threats to nuclear weapons and their associated command, control, communication and information (C3I) systems.
11. 'Strategic stability' as a concept in political science has been defined in many ways. See, for example, Elbridge Colby and Michael Gerson (eds), Strategic Stability: Contending Interpretations (Carlisle, PA: Army War College, 2013).
12. For notable exceptions, see Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Vol. I, Euro-Atlantic Perspectives (Stockholm: SIPRI Publications, 2019); Edward Geist and Andrew J Lohn, How Might Artificial Intelligence Affect the Risk of Nuclear War? (Santa Monica, CA: RAND Corporation, 2018); Kareem Ayoub and Kenneth Payne, 'Strategy in the Age of Artificial Intelligence', Journal of Strategic Studies (Vol. 39, No. 5–6, 2016), pp. 799–819; Technology for Global Security and Center for Global Security Research, 'AI and the Military: Forever Altering Strategic Stability', Medium, 13 February 2019; Jürgen Altmann and Frank Sauer, 'Autonomous Weapon Systems and Strategic Stability', Survival (Vol. 59, No. 5, 2017), pp. 117–42; James Johnson, 'Artificial Intelligence and Future Warfare: Implications for International Security', Defense & Security Analysis (Vol. 35, No. 2, 2019), pp. 147–69.
13. To date, the only known operational loitering attack munition (LAM) is Israel's Harop (or Harpy II), a fully autonomous anti-radar loitering weapon that can remain in flight for up to six hours and dive-bomb radar signals without human direction, with lethal effect on the battlefield. Also, several states are known to be developing fully autonomous weapons, including China, Germany, India, Israel, South Korea, Russia and the UK.
14. LAMs are hybrid offensive capabilities sitting between guided munitions and unmanned combat aerial systems; the Harop combines a human-in-the-loop mode with a fully autonomous mode. See Tyler Rogoway, 'Meet Israel's "Suicide Squad" of Self-Sacrificing Drones', The Drive, 8 August 2016, <https://www.thedrive.com/the-war-zone/4760/meet-israels-suicide-squad-of-self-sacrificing-drones>, accessed 10 December 2019.
15. For example, Daesh (also known as the Islamic State of Iraq and Syria, ISIS) used remote-controlled aerial drones in its military operations in Iraq and Syria. See Ben Watson, 'The Drones of ISIS', Defense One, 12 January 2017.
16. Rogoway, 'Meet Israel's "Suicide Squad" of Self-Sacrificing Drones'.
17. Paul Scharre, 'Highlighting Artificial Intelligence: An Interview with Paul Scharre', Strategic Studies Quarterly (Vol. 11, No. 4, November 2017), pp. 18–19.
18. China, the US, the UK and France have developed and tested stealth UAV prototypes. See Dan Gettinger, The Drone Databook (New York, NY: Center for the Study of the Drone, Bard College, 2019).
19. The Russian military, for example, has reportedly deployed jammers to disrupt GPS-guided UAVs in combat zones, including Syria and eastern Ukraine. See Madison Creery, 'The Russian Edge in Electronic Warfare', Georgetown Security Studies Review, 26 June 2019, <https://georgetownsecuritystudiesreview.org/2019/06/26/the-russian-edge-in-electronic-warfare/>, accessed 5 December 2019.
20. Noah Shachtman, 'Computer Virus Hits US Drone Fleet', Wired, 7 October 2011.
21. AI-infused algorithms able to integrate sensor information, consolidate targeting, and automate maintenance and navigation are currently being developed and tested in anticipation of the kinds of high-intensity future threat environments posed by drone swarming. See Kris Osborn, 'The F-35 Stealth Fighter: The Safest Fighter Jet Ever Made?', The National Interest, 27 September 2019, <https://nationalinterest.org/blog/buzz/f-35-stealth-fighter-safest-fighter-jet-ever-made-83921>, accessed 6 December 2019.
22. A combination of restrictions contained within the DoD's 'Autonomy in Weapon Systems' guidance, as well as cultural and bureaucratic norms and practices within the US armed services, will likely stymie efforts to incorporate AI-enabled systems. See US Department of Defense, Directive 3000.09, 'Autonomy in Weapon Systems', Incorporating Change 1, 8 May 2017, <https://fas.org/irp/doddir/dod/d3000_09.pdf>, accessed 6 December 2019.
23. 'Escalation ladder' in this context refers to the forty-four 'rungs' on a metaphorical ladder of escalating military conflict. For the seminal text that introduced the concept of an 'escalation ladder' as it applies to the entire range of conflict, from conventional conflict to all-out nuclear warfare, see Herman Kahn, On Escalation: Metaphors and Scenarios (New York: Praeger, 1965).
24. For recent debate surrounding AWS, and the technical limitations faced by engineers in their production, see Kallenborn and Bleek, 'Swarming Destruction', pp. 523–43.
25. While recent breakthroughs in AI have made possible the automation of several tasks previously considered too complex (for example, dependable vehicle control and air-traffic control), there remain technical limits to what computers and robots can achieve autonomously. See, for example, Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Chap. 3.
26. For example, see Paul Scharre, Autonomous Weapons and Operational Risk – Ethical Autonomy Project (Washington, DC: Center for a New American Security, 2016); Rob Sparrow, 'Ethics as a Source of Law: The Martens Clause and Autonomous Weapons', ICRC Blog, 14 November 2017, <https://blogs.icrc.org/law-and-policy/2017/11/14/ethics-source-law-martens-clause-autonomous-weapons/>; Heather Roff, Autonomy, Robotics, and Collective Systems (Geneva: Centre for Security Policy, 2016).
27. Quoted from an interview: Washington Post, 'David Ignatius and Pentagon's Robert Work Talk About New Technologies to Deter War', 30 March 2016, <https://www.washingtonpost.com/blogs/post-live/wp/2016/02/29/securing-tomorrow-with-david-ignatius-whats-at-stake-for-the-world-in-2016-and-beyond/>, accessed 5 December 2019.
28. Washington Post, 'David Ignatius and Pentagon's Robert Work Talk About New Technologies to Deter War'.
29. UAVs used in swarming operations do not necessarily need to be fully autonomous; humans could still decide whether to execute a lethal attack. See Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Chap. 3.
30. Patricia Lewis et al., Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy (London: Chatham House, 2014).
31. Developing AWS that are able to interact and communicate with other agents (especially humans), in either a competitive or a collaborative context, is inherently problematic because human behaviour is often unpredictable. See Andrew Ilachinski, AI, Robots, and Swarms: Issues, Questions, and Recommended Studies (Alexandria, VA: CNA, 2017), p. xv.
32. The Pentagon refers to this unmanned underwater vehicle (UUV) as 'Kanyon'. The nuclear warhead carried by this drone is reportedly capable of destroying ports and cities. See Matthew Griffin, 'Russia Tests its New Autonomous Nuclear Submarine off the US Coast', Fanatical Futurist, 11 December 2016, <http://www.fanaticalfuturist.com/2016/12/pentagon-detects-tests-of-russias-new-nuclear-capable-drone-submarine/>, accessed 7 December 2019.
33. Elsa Kania, Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power (Washington, DC: Center for a New American Security, November 2017), p. 23.
34. In addition to drone swarms, an expanding range of advanced non-nuclear strategic weapons (such as cyber weapons, anti-satellite (ASAT) weapons and hypersonic vehicles) are also well suited to conducting preemptive strikes, primarily in cross-domain operations.