
4 Cyber and Artificial Intelligence – Some Thoughts about their Impact on Strategy-making

in: Maximilian Terhalle, Strategie als Beruf. Überlegungen zu Strategie, Weltordnung und Strategic Studies, pp. 95–114

1st edition 2020, ISBN print: 978-3-8288-4409-4, ISBN online: 978-3-8288-7409-1, https://doi.org/10.5771/9783828874091-95

Tectum, Baden-Baden
In the last five years, war between the major powers has come to be seen as more likely than in the preceding one and a half decades of the 21st century. This has invited a return to the classical question of how technological developments in the realm of security affect the ways in which states devise their strategies (van Creveld 1991; Boot 2006). In particular, the recurring interest in technology-related dynamics pertains to the degree to which the impact of cyber-power and artificial intelligence (AI) narrows, widens, manipulates, obfuscates, or even controls the room for manoeuvre available to top decision-makers when making strategic choices. To be clear, this is not to say that technology is the driving factor determining the processes that devise grand strategies. However, a substantial part of the current debate about the impact of cyber technology and artificial intelligence seems to have begun to yield to an understanding of technological determinism, which may distort the bigger picture within which technology is merely one, if significant, factor. In this vein, Lawrence Freedman concluded his observations on the impact of technology during the bipolar confrontation by firmly stating that “[v]iewing nuclear weapons in isolation, or assuming that they provided a satisfactory vantage point to discuss strategy as a whole, distorted strategic studies” (Freedman/Michaels 2019: 670). Thus, while spearheading the R&D processes that relate to the production of superior security-related technology is without doubt a strategic must for any major power operating in a highly competitive international environment, it is the politics of, in and amongst major states that is crucial for understanding what impact such technology may have and how its presence affects the dynamics of the system. “Once in the world, technology creates pressures of its own, which again impacts the political process, but this is a complex process of feedback between technology and … other forces and human decisions, not one of determinism” (Buzan/Hansen 2009: 54; Hoijtink/Leese 2019).

As a caveat, this analysis and its suggestions about cyber warfare and AI are not predicated on a clear-cut distinction between the realms of war and peace. Rather, it seems more appropriate to look at today’s international affairs through a strategic lens which views relations between non-allied major states as always being shaped by varying degrees of political, economic and military competition.[2] This is not to say that war and conflict are inevitable. However, the pursuit of power among the major states of a given era tends to occur in the context of the security dilemma, which has been described as the insurmountable characteristic of international politics. The latter implies, first, that the true meaning of another state’s intentions can never be securely deciphered, which makes their anticipation as well as the reaction to them extremely challenging; and, second, that international orders tend to reflect the interests and normative ideals of a certain group of states, not of all major states (Terhalle 2018; Booth/Wheeler 2018).

[2] In many ways, this is, of course, an age-old condition underlying strategic affairs, which Clausewitz pointed out almost two hundred years ago, without being the first to do so (1980/1831: 218–9).
Thus, in a contested strategic environment, it is the major powers in their pursuit of power that invariably, though not exclusively, employ all means available to them during peacetime, including cyber warfare and AI. Precisely for this structural reason, some states regularly and deliberately engage in activities below the threshold of open and broad-scale military operations, including limited “active conflict”, in order to propel forward their strategic aims (Carter 2019; Sanger 2018). Needless to say, the continual testing of this threshold today oscillates rather dangerously between the unintended escalation of already existing tensions amongst the great powers and the deliberate attempt at pushing back against the perceived predominance of the United States, or the perceived injustice of the current order for that matter. Miscalculations and, thus, the possible failure of such testing are inherent to the strategies that drive this process in the first place.

Cyber

Cyber-power presupposes cyber-space, defined as the “place and time where information exists and flows” (Lonsdale 2004: 181). Due to the requirements of joint warfare, cyber-space necessarily permeates all three of the conventional domains (land, sea, air) and the field of political strategy-making. It is this space in which information can be exploited, for benign as well as malign purposes. Ultimately, “cyber-power is the process of converting information into strategic effect” (Sheldon 2019: 294). With regard to the impact of cyber-power, four points merit particular attention.

To begin, the coercive capability of cyber-warfare is limited in that it does not facilitate the enemy’s politico-military surrender, only its temporary weakening. Yet precisely because some major powers are keen to challenge the current international order, they have engaged in sub-threshold activities “short of war” in order to advance their revisionist goals.[4] Therefore, it is more accurate to conceive of cyber-warfare as one element, if new and integral, of the broader political and socio-cultural notion of war; its key military features remain conventional and nuclear. In other words, cyber-warfare needs to be seen as inherently complementary to other, well-known realms of both strategy-making and war-fighting. In contrast to the “hyperbole” (Payne 2018: 7) featuring in the literature on the topic and its, at times, exaggerated invocations of cyber-warfare as the ne plus ultra of future war-fighting, military history offers manifold references to a distinctly more nuanced and complementary understanding of the evolving subject matter. For instance, General Mattis offers an analogy in that direction: while the French army possessed the more advanced tanks at the beginning of WWII, it was the Germans who integrated communication devices into their tanks in order to facilitate synchronized tactical advances.[5] Consequently, he suggests that any future thinking about war and technology needs to be directed towards “fusing all the advances together” with existing means, instead of focussing on one technological advance only.

[4] For a contemporary and more empirical account of how this competitive element has played out in practice, see Wright (2017). Also, see the UK Chief of the Defence Staff’s recent remarks in this direction (2019).
[5] Quoted in the Financial Times Magazine (2018: 20). Regarding Mattis’ example, see Stolfi (1970).

Relatedly, cyber-warfare may present a ‘tech trap’, both to strategic planners and to the mindsets of top decision-makers.
In particular, procurement policies that derive from an exclusive cyber focus may exacerbate a given state’s deficiencies in responding to incursions and threats of a non-cyber nature. In fact, surprise attacks at the onset of a war may well continue to be delivered through conventional and nuclear forms of organized military violence. Therefore, if a given state’s future military thinking and planning is narrowly informed by the assumption that the adversary will launch a cyber-attack before conflict erupts, such a belief may well prove to be a trap, as the exclusive focus on cyber-warfare may lead such an approach to underestimate or, even worse, to largely ignore other means of warfare. In other words, such a ‘cyber mindset’ comes at the expense of other views of conflict, precisely because it prevents analysts and strategists from thinking about its conventional and cyber-based versions simultaneously. The ‘black swan’ implicit in such thinking seems all too obvious. It is not difficult to see how such a misconceived, if self-imposed, prioritisation of cyber technology, which exclusively expects cyber-attacks to precede military strikes, is prone to clever adversarial manipulation and exploitation.

Moreover, cyber-warfare raises critical questions about the related dynamics of signalling, and intentions more broadly. In particular, as cyber-warfare provides only limited coercive power, how do great powers judge cyber-attacks against them by other great powers and/or against their allies; and how do offenders plan them in the first place? Regarding NATO-Russia relations, for instance, the question arises whether cyber-attacks by a great power (e.g. Russia) against another great power’s smaller allies (e.g. the Baltic states) are intended as limited acts of punishment, signalling discontent with the overall stance or with actions previously undertaken by the smaller powers’ larger ally (e.g. the US). Or is the act of cyber-warfare launched against a given great power’s smaller allies only intended as a means of intimidation? Or does it, in fact, reflect a great power’s deliberate decision to break out of the sub-threshold level before subsequently launching a larger conventional and, possibly, (tactical) nuclear attack? In any event, while Clausewitzian surprises, frictions and accidents inevitably remain the known unknowns, the room for misperception in all of the above is vast.

Finally, great powers may use their superior cyber-warfare capabilities against smaller powers as a means of coercion and sabotage. While the substantive and long-term impact of the use of such coercive power is, as stated, inherently limited, it may significantly harm existing war-fighting capabilities, postpone the development of emerging technologies and curb (or possibly boost) the targeted country’s willpower. Not least, such efforts may also signal to the attacked power that it remains vulnerable to similar future offences. In many ways, this is what the joint Israeli-American attack on the centrifuges of Iran’s nuclear programme did when the strikes were undertaken in 2008.
In turn, 12 years later, Iran’s determination to attain a modern nuclear arsenal has by no means vanished, thus confirming cyber-warfare’s limited coercive capability. More generally, the widely held fear that even smaller states could more or less easily launch “cyber Pearl Harbors” in the future has been deemed unlikely, as the preparations that preceded the implementation of the Stuxnet worm (Operation “Olympic Games”) were time-consuming and required substantial knowledge and capacities on the part of the attacker.[6] These experiences also demonstrate that, while the ability of major powers to penetrate and/or infiltrate the command-and-control structures of an adversary’s military nuclear facilities through comprehensive efforts of cyber-warfare has become a matter of serious concern, the immense difficulties inherent in attempts at sabotaging nuclear decision-making structures have, so far, limited the level of threat posed by such activities (Freedman/Michaels 2019: 663).[7]

[6] Kello 2013; Lindsay 2013. Similarly, Russia’s cyber shutdown of Estonia’s digital infrastructure in 2007 required a scope of advanced capacities that a smaller state could not have easily and swiftly brought to the fore.
[7] Methods that could be used to distort the decision-making processes for firing off nuclear weapons include “data manipulation, cyber jamming of communication channels, or cyber spoofing” (Unal/Lewis 2018: 4). Significantly, cyber spoofing, which “creates false information that seems to come from a legitimate source and is seen as genuine”, could have devastating consequences (ibid.: 4 fn 4; see also Fitzpatrick 2019: 82–4).

Artificial Intelligence

Artificial Intelligence (AI) is predicated on the ‘deep learning’ of computer systems, both in its narrower version and in its more complex, so-called Artificial General Intelligence (AGI), version.[8] Such AI systems use algorithms to process tremendously vast amounts of data at unprecedented speed, especially when powered by quantum-based semi-conductors. This is why, as Kissinger stressed, “adversaries’ ignorance of AI-developed configurations will become a strategic advantage” (quoted in Economist 2019a: 16).

[8] According to Cukier, the former AIs “do discrete tasks very well, such as self-driving cars, voice recognition technology, and software that can us[e] advanced imaging” and the latter (AGIs) “can think, plan, and respond like a human and also possess ‘superintelligence’” (2019: 194).

Characteristically, such algorithms are capable of learning from little data that provides a substantive educational foundation, with no need for external tutoring, and with an ability to deal with contradictory and unstructured pieces of input. Moreover, they learn through the study of past actions and/or the observation of simultaneous actions undertaken by other actors within their network (among others, Brockman 2019; McAfee/Brynjolfsson 2018).
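The learning dynamic described here can be made concrete with a deliberately minimal sketch. In the toy snippet below, every name and number is invented for the purpose of illustration rather than drawn from this chapter or any actual military system: an estimate of an actor’s behavioural tendencies is updated from a stream of observed past actions, with older observations fading over time.

```python
from collections import defaultdict

class ActionFrequencyLearner:
    """Toy online learner: estimates an actor's action tendencies
    from a stream of observed past actions (illustrative only)."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay               # how quickly older observations fade
        self.weights = defaultdict(float)

    def observe(self, action: str) -> None:
        # Fade all existing evidence, then reinforce the action just seen.
        for a in self.weights:
            self.weights[a] *= self.decay
        self.weights[action] += 1.0

    def estimate(self) -> dict:
        # Normalise the decayed counts into a probability estimate.
        total = sum(self.weights.values())
        return {a: w / total for a, w in self.weights.items()} if total else {}

# Usage: feed in observed moves, read off the current behavioural estimate.
learner = ActionFrequencyLearner()
for move in ["probe", "probe", "withdraw", "probe"]:
    learner.observe(move)
print(learner.estimate())   # roughly {'probe': 0.74, 'withdraw': 0.26}
```

Real systems replace such counting with deep neural networks trained on vastly larger data, but the underlying logic, inferring a model of behaviour from observed actions, is the same.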
As AIs have begun to comprehensively permeate force structures, the inferences from those data form the basis of AIs’ influence on the critical command-and-control structures in strategic affairs.[10] Thereby, their inferences enable AI-supported strategy-making, albeit leaving open the question of the degree to which humans remain in control; or, alternatively, of whether AIs can prudently act in an uncertain environment, the implications of which may, at times, require strategic patience, imagination and adaptivity. Undoubtedly, Buzan’s aforementioned feedback loops between the art of strategy-making and new, and possibly game-changing, technologies are clearly reflected here as well.

[10] At the tactical level, AIs have already begun to affect logistics systems, weapons design as well as intelligence and reconnaissance.

Though by no means fully developed as yet, AIs are about to bring momentous changes to the analysis and practice of strategic matters. Crucially, as speed stands out as one of the fundamental characteristics of AI systems, “the distinction most relevant will be between the best algorithm and the rest” (Payne 2018: 24). Notwithstanding the R&D-based preconditions that need to be in place for states to enter the race for those algorithms, one fundamental change triggered by AIs is that they will likely deprive some core military capabilities of their strategic value. AIs thus have the potential to critically affect, and possibly alter, the current military balance of power (Economist 2020: 58). For instance, autonomous underwater robots with sensors designed to detect small-scale changes in the earth’s magnetic field may reveal the secret positions of nuclear submarines put in place to assure a given state’s second-strike capability (Fitzpatrick 2019: 90). Other significant changes pertain to the risks of using AI-enabled force more frequently, especially when the (Western) states in question are casualty-averse; also, precisely because of the vital advantages provided by speed, AI-enabled strategy-making may favour the offence (whereas nuclear weapons, for instance, have a distinctly defensive character); finally, another aspect highlighting the ambiguous consequences of speed is that AI systems may also propel forward processes of automating escalation (Payne 2018: 25–6). Taken together, these changes betray the complexity of the subject matter, a complexity which is unprecedented and whose implications are still uncomfortably little understood. In fact, one of the most significant dilemmas related to AIs pertains to the processes through which the answers, or inferences, develop from AIs. As Cukier notes, not only are the mathematical foundations “so complex that it is impossible to say how a … machine obtained its result”, but “the most obscure systems also offer the best performance” (2019: 198).

This leads the analysis back to the question of the degree to which human agents will retain control over the developments that evolve from AIs in the strategic realm. Not least, the rationale behind AIs is precisely that they are supposed to support and, thereby, enable better strategy-making by human agents. The agreed starting point for most authors involved in the debate is the insights derived from the psychology of decision-making.
In particular, Daniel Kahneman’s important distinction between System 1-based thinking (referring to high-speed neuronal processing through instincts and heuristic shortcuts, located in the brain stem and the amygdala) and System 2-based thinking (referring to the slower, more analytical approach, located in the prefrontal cortex), in which System 1 more often than not prevails, is seen as the main reference point (2011).[11] Pointing to well-known inherent cognitive biases, such as confirmation bias, group-think and loss aversion, some authors have strongly welcomed the advent of the more rational, probabilistic and quantitative approaches offered by AIs, driven, as they are, by big-data analyses. As Dear put it, for instance, “psychology shows our limitations, while big data provides information to develop more accurate models of human behaviour than has ever been possible” (2019: 25).

[11] See Freedman (2013: chs. 36–38) for the introduction of Kahneman’s thinking to Strategic Studies.

Other authors concur, recognise the fundamental shifts set in train by AI systems and see the undeniable advantages they provide. They remind their audience that one of the greatest advantages AIs may benefit from is that, in contrast to humans, they do not get (over-)tired, aggravated and, thus, emotional (Whetham/Payne 2019). Yet while Payne, for instance, acknowledges that aligning the execution of AI-enabled strategy with human intentions will be increasingly challenging, he argues that “human psychology is not necessarily a weakness” (2018: 29–30). In fact, he suggests that, as rational AI machines will indeed learn about and aim to imitate human behaviour, these machines will also and inevitably imitate, apart from System 2’s rationality, the emotions of System 1, such as fear and prestige. For instance, Payne argues that an AI “that escalates to meet pre-specified criteria of reputation … may not be able to reflect on the consequences of its actions in time to allow a change of course” (2018: 29). As a reminder, during the Cuban and Berlin crises (1962, 1958), to mention only two examples, very strong emotions were clearly on display amongst American and Soviet key stakeholders; yet they were eventually harnessed by prudential self-reflection about the dramatic consequences of escalation (Allison 1999). In this vein, Stuart Russell’s admonition that AIs are prone to facing the ‘King Midas’ problem is indeed of major concern (2019). Similar to the mythical king’s golden touch, which turned everything, including food, water and friends, into gold, AIs may execute the commands of a given algorithm without necessarily understanding that its pervasive application may not lead to the intended consequence.

Payne’s and others’ cautioning against the overly rational promises to be delivered by AIs does not aim to imply, rather unrealistically, that AIs need to, or should be, stripped of their dynamic impact on strategic affairs. Much more importantly, as AIs’ purpose is to enable and improve human strategy-making, the crucial point is whether an AI “should act on what humans wanted at the outset, what they want in the present moment, or what the AI understands to be most closely suited to the future [humans] want” (Payne 2018: 29). In other words, how algorithms develop and alter their perception of a strategic context, and of the environment in which they are tasked to operate, remains as much unknown as troubling.
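Payne’s concern about automated escalation can be caricatured in a few lines. In the sketch below, which is purely illustrative (the ‘reputation’ metric, the threshold and the actions are all hypothetical), the rule fires mechanically once its pre-specified criterion is met; nothing in it can pause to weigh what escalating would set in motion.

```python
def escalation_policy(reputation: float, threshold: float = 0.5) -> str:
    """Caricature of a pre-specified escalation rule (all values hypothetical).

    The rule fires purely on the reputation metric; it contains no step
    at which the consequences of escalating could be reconsidered.
    """
    if reputation < threshold:
        return "escalate"    # no consequence check, no change of course
    return "hold"

# A falling reputation signal triggers escalation mechanically.
for rep in (0.8, 0.6, 0.4):
    print(rep, "->", escalation_policy(rep))
# prints: 0.8 -> hold, 0.6 -> hold, 0.4 -> escalate
```

The point of the caricature is structural: however much sophistication is layered on top, a policy defined exclusively over a pre-specified criterion has no moment at which a change of course can be considered.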
Similarly, the prospect that algorithms will successfully read the inherently adaptive strategy and perceptions of an opponent or adversary is even less certain. There is another key aspect related to the classic difficulty of accurately reading a given counterpart’s intentions. When a given state assumes that it has just broadly deciphered such intentions, this leads to the next step that needs to be taken: the response. In a ‘high-speed’ environment, decision-makers might be tempted to respond with great, AI-enabled confidence, as their machines encourage such a heightened perception of certainty. The AIs might do so even though, in certain circumstances, patience and caution may make for better judgement than the seemingly advantageous, though possibly deceptive, momentum of swift action spurred by quantum-driven computers. In order to address this problem, Russell has suggested that humans need to design AIs so that the latter are “deliberately uncertain about our instructions” and thereby avoid the aforementioned King Midas problem. However, not only does he leave the question of how AIs are to avoid “ap[ing] our overconfidence” largely unresolved, he also overlooks the flipside of his argument, namely that AIs may turn into strategic Hamlets, unable to demonstrate more proactive determination when needed (Whetham/Payne 2019).
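A minimal sketch can illustrate the mechanism behind Russell’s proposal as well as the ‘Hamlet’ objection, with invented goals, probabilities and thresholds throughout: an agent that is certain of its instruction acts on the literal objective (the King Midas failure), while an agent holding probabilities over several readings of the instruction defers to the human when those readings disagree.

```python
def act(candidate_goals: dict, confidence_to_act: float = 0.8) -> str:
    """Toy agent in the spirit of Russell's 'uncertain about our instructions'
    idea; every goal and probability here is invented for illustration.

    candidate_goals maps possible readings of the instruction to the
    agent's probability that this reading is what the human meant.
    """
    best_goal, best_p = max(candidate_goals.items(), key=lambda kv: kv[1])
    if best_p >= confidence_to_act:
        return f"pursue: {best_goal}"
    return "defer to human"    # uncertainty makes the agent check first

# A certain, literal agent acts on the stated goal: King Midas.
print(act({"turn everything to gold": 1.0}))
# An agent uncertain between two readings defers instead of acting.
print(act({"turn everything to gold": 0.5, "make the kingdom wealthy": 0.5}))
```

Set confidence_to_act close to 1.0 and the same agent hardly ever acts at all, which is precisely the strategic-Hamlet flipside Whetham and Payne point to.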
Nevertheless, some very ambitious technophiles (unsurprisingly) proclaim that complex AGIs may evolve as early as 2030 (Financial Times 2020: 1). One way for human intentions and AI inferences to operate in sync, they suggest, is to lead AIs to learn from networks of information which channel input into the machines that is conducive to aligning the two. For instance, Russell proposes “provably beneficial AI”, which is defined by a clearly set purpose (2020: 189). While it remains an open question whether such a reflective, adaptive and imaginative (quintessentially human) AI mindset is indeed achievable (and desirable), four main obstacles appear to make the emergence of such future AGIs a highly intricate process, likely packed with strong setbacks and possible failure.

For one, as strategic environments remain characterised by competition as well as uncertainty about the other side’s true intentions, traditional reflexes will seamlessly continue to instil notions of secrecy and deception, combined with cheating and disinformation, into AGIs on either side of the aforementioned security dilemma.[12] Thus, the unresolvable challenge that distinguishing benign from malign intentions poses to human perception will retain its firm grip on strategic affairs, now possibly extended into AGIs.

[12] The comparison with the lessons learned from the second nuclear age, regarding the nature of deterrence, arms control and safety measures, and the degree to which they may, or rather may not, be useful for the AI-based arms races already underway, is as valuable as it is concerning (Economist 2019a: 16; Fitzpatrick 2019: 89–91).

Second, while the drivers of technological advancement foresee, at some point, developing AGIs comparable to the human mind, it seems questionable whether societal appreciation of the processes propelling such technological progress can be easily accomplished. With the stakes so high, the societal repercussions of, and resistance against, fundamental changes to the role of humans on the planet may be crucial. Nevertheless, as the security dilemma will relentlessly urge states to compete over attaining, and sustaining, the edge over potential adversaries in terms of their military technology, the incentives to produce AGIs that allow states to compete are systemic.

Third, as recent neuroscientific research has (once again) confirmed, science is still very far from understanding the sheer complexity of the human brain. In particular, scientists have demonstrated that the so-called dendrites within System 2’s cortex do not simply forward nerve impulses to the cells of neurons so that the latter can connect incentivising and constraining signals (to feed the synapses) but, rather, that the millions of dendrites in and of themselves actively partake in this activity. What this means is that the vast number of dendrites itself forms a large, if much neglected thus far, part of the brain’s computing power. Thus, it might be altogether near to impossible to imitate the complexity-based cognitive productivity of the brain and implement it in an AGI (Mueller-Jung 2020).

Finally, McAfee and Brynjolfsson’s argument that Moore’s Law, suggesting the patterned doubling of the processing speed that computers possess, would reliably continue to prove correct does not seem to match the evidence of the last ten years or so. Rather, while semi-conductors do indeed still become faster, engineers have conspicuously struggled to sustain a patterned doubling of their speed in recent years (Bernau 2020).
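The arithmetic at stake is simple compounding. A hedged sketch, in which the ‘slower’ growth factor is a made-up stand-in rather than measured data, shows how quickly an idealised Moore’s-Law doubling diverges from a more modest improvement rate.

```python
def relative_speed(years: float, factor_per_period: float, period: float = 2.0) -> float:
    """Speed relative to today after `years`, if performance multiplies
    by `factor_per_period` every `period` years (pure compounding)."""
    return factor_per_period ** (years / period)

for years in (2, 6, 10):
    ideal = relative_speed(years, 2.0)     # Moore's Law: doubling every two years
    slower = relative_speed(years, 1.3)    # hypothetical slower recent trend
    print(f"{years:>2} yrs: Moore {ideal:5.1f}x vs slower trend {slower:4.2f}x")
# After 10 years, doubling yields a 32.0x speed-up; the assumed slower trend, ~3.71x.
```

After a decade, doubling every two years yields a 32-fold speed-up while the assumed slower trend yields less than a 4-fold one, which is why the difference matters for any AGI timeline premised on sustained exponential growth.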
Conclusion

Will new technologies control human thinking and decision-making? Based on what has been established in this chapter, the answer is still no. Yet even if further technological advances do not emerge according to linear patterns (in fact, they never have), it is likely that, as AGIs develop further, the narrowing of the scope for human-based analysis and decisions will continue. If so, how well are analysts and strategists prepared for the task?

The trouble is that analysts and strategists alike conduct their respective businesses on the basis of a shared though hardly useful assumption. In particular, they tend to concur that, as complexity is undeniably the central characteristic of international politics, the appropriate response to such complexity is ever greater specialisation (Epstein 2019). In fact, the scholarly analysis of global affairs, political and otherwise, has heretofore favoured approaches based on hyper-specialisation, and entrenched them in the related research cultures, leading to ever more knowledge about increasingly less.[13] In this vein, it was none other than Max Weber who promoted this idea some 100 years ago in his “Science as a Vocation”. He adamantly warned younger scholars that only if they wilfully put on academic “blinders” and pursued the “most narrow specialisation” would they become accepted into their guild; moreover, such a degree of specialisation, he hastened to add, would “indefinitely shape the future” of the whole of academia (1992: 11–12).[14] Similarly, think-tanks, government apparatuses and policymakers tend to cope with complexity by compartmentalising it into neatly divided, rationalistic parts, albeit without any greater oversight. Such compartmentalisation is meant to provide higher bureaucratic efficiency (Epstein 2019: 286). Moreover, for some leaders, such as Mrs Merkel, this process of compartmentalisation reveals the principal assumption of policy-making, namely that politics is precisely and exclusively about solving problems. Nevertheless, as Henry Kissinger recently reminded his readers, “the traditional thinking [in foreign-policy making] has been that issues could be segmented into the resolution of individual problems – in fact that the solution of problems was the issue” (2019: 131).

[13] Amongst others, Bew (2016: 155); Epstein (2019: 49); Hurrell (2007: 20); Nipperdey (1986: 14).
[14] Author’s translations.

Crucially, however, if left unchanged, such a mindset will lead to a self-imposed and continually increasing subordination of human agency to AI-directed decision-making. In this vein, Russell has recommended a “cultural movement to reshape our ideals and preferences towards autonomy, agency and ability and away from … dependency” (2019: 255–56). As a consequence, an integral part of such a ‘movement’, if not spelt out by Russell, needs to set in motion processes that do exactly that: ‘reshape’ existing research and administrative cultures ‘away from dependency’ caused by the prevailing hyper-specialisation in analytical thinking about, and the bureaucratic compartmentalisation of, international complexity. Such a new culture of strategic thinking and decision-making should take advantage of the human brain’s still unrivalled ability for complex thinking through synthesis, imagination and self-reflection. Its tremendous cognitive advantages need to be systematically employed through strategic concepts and theories and will, thereby, enable strategists and analysts to grasp the complexity of the big picture.[15] “The extraordinary times that we are living through demand nothing less” (Economist 2019b: 24).

[15] See the introduction to this volume. Further ideas can be found in Epstein’s fascinating juxtaposition of the notions of hyper-specialisation and range (2019) as well as in Gaddis’ erudite views on Isaiah Berlin’s classical distinction between fox- and hedgehog-like ways of strategic thinking (2018).


References
Allison, Graham (1999) Essence of Decision: Explaining the Cuban Missile Crisis, 2nd ed., New York: Longman.
Bernau, Patrick (2020) Digitalkonferenz DLD. Das Maerchen vom schnellen Fortschritt, 19 January https://www.faz.net/aktuell/wirtschaft/netzkonferenz-dld/digitalkonferenz-dld-it-welt-und-ihre-technischen-revolutionen-16584880.html.
Bew, John (2016) Realpolitik. Oxford: Oxford University Press.
Boot, Max (2006) War Made New. New York: Gotham Books.
Booth, Ken/Wheeler, Nicholas (2018) Uncertainty. In Paul Williams, ed., Security Studies: An Introduction. New York: Routledge.
Brockman, John (2019) Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin.
Buzan, Barry and Hansen, Lene (2009), The Evolution of International Security Studies. Cambridge: Cambridge University Press.
Carter, Nick (2019) Chief of the Defence Staff’s Annual RUSI Speech, 5 December https://www.gov.uk/government/speeches/chief-of-the-defence-staff-general-sir-nick-carters-annual-rusi-speech.
Clausewitz, Carl von (1980) Vom Kriege, 18th ed. Bonn: Dümmler.
Creveld, Martin van (1991) Technology and War. New York: Free Press.
Cukier, Kenneth (2019) Ready for Robots? How to Think about the Future of AI. Foreign Affairs 98:4, 192-198.
Dear, Keith (2019) Artificial Intelligence and Decision-making. RUSI Journal 164:5, 18-25.
Economist (2020) The Digital Divide: America and China Talk Past Each Other About the Dangers of Artificial Intelligence, 18 January, p. 58.
Economist (2019a) AI and War: As Computers Play a Bigger Role in Warfare, the Dangers to Humans Rise, 7 September, p. 16.
Economist (2019b) The end of history, 20 July, p. 24.
Epstein, David (2019) Range: How Generalists Triumph in a Specialized World. London: Macmillan.
Financial Times (2020) The Shape of Things to Come (Life & Arts Section), 4-5 January, p. 1.
Financial Times Magazine (2018) “The Future of War”, Interview with James Mattis, 17-18 November, p. 20.
Fitzpatrick, Mark (2019) Artificial Intelligence and Nuclear Command and Control. Survival 61:3, 81-92.
Freedman, Lawrence/Michaels, Jeffrey (2019) The Evolution of Nuclear Strategy, 4th ed., New York: Palgrave Macmillan.
Gartzke, Erik (2013) The Myth of Cyberwar: Bringing War in Cyberspace Back Down to Earth. International Security 38:2, 41-73.
Hoijtink, Marijn/Leese, Matthias, eds. (2019) Technology and Agency in International Relations. London: Routledge.
Hurrell, Andrew (2007) On Global Order. Oxford: Oxford University Press.
Kahneman, Daniel (2011) Thinking, Fast and Slow. New York: Penguin Books.
Kello, Lucas (2013) The Meaning of the Cyber Revolution: Perils to Theory and Statecraft. International Security 38:2, 7-40.
Libicki, Martin (2007) Conquest in Cyberspace: National Security and Information Warfare. Cambridge: Cambridge University Press.
Lindsay, Jon (2013) Stuxnet and the Limits of Cyber Warfare. Security Studies 22:3, 365-404.
Lonsdale, David (2004) The Nature of War in the Information Age: A Clausewitzian Future. London: Frank Cass.
McAfee, Andrew/Brynjolfsson, Erik (2018) Machine, Platform, Crowd: Harnessing Our Digital Future. New York: Norton.
Mueller-Jung, Joachim (2020) Unser etwas anderes Gehirn. Der Mensch tickt wirklich anders, 29 January https://www.faz.net/aktuell/wissen/gehirn-des-menschen-imitieren-was-roboterentwickler-wissen-sollten-16592278.html.
Nipperdey, Thomas (1986) Nachdenken ueber die deutsche Geschichte. München: Beck.
Payne, Kenneth (2018) Artificial Intelligence: Revolution in Strategic Affairs? Survival 60:5, 7-32.
Rid, Thomas (2020) Active Measures. London: Profile Books.
Russell, Stuart (2019) Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.
Russell, Stuart (2019) The Purpose Put into the Machine. In Brockman, John, ed., Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin, 20-32.
Sanger, David (2018) The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age. New York: Crown.
Sheldon, John (2019) The Rise of Cyberpower. In John Baylis et al., eds., Strategy in the Contemporary World. Oxford: Oxford University Press.
Stolfi, Robert (1970) Equipment for Victory in France in 1940. History 55 (183).
Terhalle, Maximilian (2018) Strategie und Strategielehre. Zeitschrift für Außen- und Sicherheitspolitik 11:1.
Unal, Beyza/Lewis, Patricia (2018) Cybersecurity of Nuclear Weapons Systems, January https://www.chathamhouse.org/sites/default/files/publications/research/2018-01-11-cybersecurity-nuclear-weapons-unal-lewis-final.pdf.
Whetham, David/Payne, Kenneth (2019) AI: In Defence of Uncertainty, 9 December. https://defenceindepth.co/2019/12/09/ai-in-defence-of-uncertainty/.
Wright, Thomas (2017) All Measures Short of War. The Contest for the 21st Century & the Future of American Power. New Haven: Yale University Press.

Abstract

Thinking about and making strategy serve states’ vital interests. Innately bound up with power, strategy devises a future that reflects those vital interests and musters the willpower to protect them. Unprecedented in Germany, “Strategy as Vocation” introduces Strategic Studies while also offering the country practical strategies.

The book contains articles in German and in English.

Zusammenfassung (Summary)

Strategic thinking and action serve vital interests. They require a view towards power, and towards a future that is to reflect those vital interests accordingly. This holds true at all times, but especially when world orders are in upheaval. Strategie als Beruf is devoted to the central concepts of Strategic Studies, a field neglected in Germany even though Germans helped shape it, and thereby offers strategic thinking and action, for the first time, foundations at the level of current international research. Concrete strategy proposals are an integral part of the book.

Prof. Maximilian Terhalle (@M_Terhalle) teaches Strategic Studies at the University of Winchester, is affiliated with King’s College London and advises the British Ministry of Defence. He previously spent several years researching and teaching at Columbia, Yale, Oxford and Renmin (Beijing) universities.

“Terhalle’s insightful, balanced, and perceptive essays bring the tools of strategic studies to bear on a range of current international issues. Theoretically sophisticated and empirically grounded, the analysis will be of great value to both the scholarly and policy communities.”

Prof. Robert Jervis, Columbia University, New York

“Maximilian Terhalle is among the early champions of a strategic orientation of our thinking on international order and of German foreign policy. His astute book offers a clear analysis of a world that has become unstable, and draws from it concrete conclusions regarding the responsibility of Germany and its partners for Western values and interests.”

Prof. Matthias Herdegen, University of Bonn

“Maximilian Terhalle is a refreshing independent voice on European and German security policy. There is a pressing need for systematic, clear-eyed, and realistic thinking about Germany’s role in a rapidly changing world, and this wide-ranging collection of essays is an important contribution to a much-needed set of debates.”

Prof. Stephen Walt, Harvard University, Kennedy School of Government

“The Germans have, for very understandable historical reasons, long been reluctant to engage in the kind of strategic thinking that comes naturally to the Anglo-Saxon world. Maximilian Terhalle, who is one of the Federal Republic’s most innovative experts in the field, is rightly dissatisfied with this opting out of the real world. His new book is a must-read for anyone who wants to understand modern German strategy, or rather the lack of it, and the need for a National Security Council in the FRG.”

Prof. Brendan Simms, Cambridge University

“Drawing on wide reading and with a nod to Max Weber, this thoughtful collection of essays by Maximilian Terhalle demonstrates the importance of strategic thinking and how it can be applied to the big issues of war and peace in the modern world.”

Prof. Lawrence Freedman, King’s College London

“NATO is not strategically brain-dead. But one of its members may soon be. Whoever comes to lead Germany would do well to study very closely the strategic compass Terhalle has presented. A possible re-election of Trump and the improbable renunciation of power by Putin and Xi call not only for a recognisably European hand in the Chancellery but also for an entirely new, indeed strategic, mindset. Terhalle’s concepts for decision-makers and his concrete ideas for the future of Western security policy offer exactly that.”

Karl-Theodor zu Guttenberg, former Federal Minister, New York/Munich

“Strategic thinking is absent in the land of Carl von Clausewitz in all areas: in politics, in business and in the development of guidelines for how Europe should be shaped in a world in upheaval. Prof. Terhalle’s book sets out foundations and offers suggestions in essential fields of policy. It should be read and used by decision-makers.”

General (ret.) Klaus Naumann, former Chairman of the NATO Military Committee and Generalinspekteur of the Bundeswehr, Munich

“‘Can Germany think strategically?’ Indeed, and more broadly, can the European Union become a strategic actor? These questions lie at the heart of Maximilian Terhalle’s no-holds-barred assessment of Europe’s options as the continent faces mounting challenges from both partners and adversaries East, South and West.”

François Heisbourg, Special Advisor, Fondation pour la Recherche Stratégique, Paris

“Terhalle has produced a rich and wide-ranging series of essays on some of the enduring and more recent dilemmas of international security. These subtle but piercing reflections are in the best tradition of strategic studies, from Clausewitz to Freedman.”

Prof. John Bew, War Studies Department, King’s College London

“A thought-provoking and illuminating series of essays that grapple with some of the toughest and most important questions facing contemporary Germany, Europe, and the United States, written by one of Germany’s most forward-looking strategists.”

Elbridge Colby, Principal, The Marathon Initiative, former US Deputy Assistant Secretary of Defense, Washington D.C.

“Maximilian Terhalle’s new book, Strategie als Beruf, is an important building block in laying the foundations for Strategic Studies, a field neglected in this country. The author brushes vigorously against the grain and questions cherished patterns of thought. One need by no means agree with Terhalle in every respect. But if Germany and Europe really want to learn the ‘language of power’, as demanded by the EU High Representative in early 2020, there will be no way around engaging with his theses.”

Boris Ruge, Berlin

“For too long, Germany’s deafening silence on strategic matters has struck international academic and policy observers alike. This is about to change. Maximilian Terhalle’s realpolitik-based as well as erudite deliberations on the art of strategy, closing with novel practical ideas for Europe’s future strategic security, betray exactly that.”

Prof. Christopher Coker, London School of Economics/LSE IDEAS

“In Strategie als Beruf, Maximilian Terhalle writes with exceptional clarity about questions of security strategy, filling a vacuum in Germany. His findings are uncomfortable for debates dominated by peace research. Anyone who cares about the strategic capability of the country and of Europe should know his ideas.”

Dr. Bastian Giegerich, International Institute for Strategic Studies, London

“For over a decade, Western scholars of strategy have almost exclusively focused on the likelihood of the Thucydides trap emerging between the US and China. Remarkably, while Prof. Terhalle acknowledges their global strategic importance, he spells out what the potential trajectory of their relationship implies for NATO’s European members vis-à-vis Russia. – Realpolitik reigns.”

Prof. Wu Zhengyu, Renmin University, Beijing