Algorithms of war: The military plan for artificial intelligence



At the outbreak of World War I, the French army was mobilized in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolor uniforms topped with feathers—the same get-up as when they swept through Europe a hundred years earlier. The remainder of 1914 would humble tradition-minded militarists. Vast fields were filled with trenches, barbed wire, poison gas and machine gun fire—plunging the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter.

Capitalism excels at revolutionizing war. Only three decades after World War I’s first bayonet charges across no man’s land, the US was able to incinerate entire cities with a single nuclear bomb. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers’ methods of war have been made yet more deadly and “efficient”.

Today imperialist competition is driving a renewed arms race, as rival global powers invent new and technically more complex ways to kill. Increasingly, governments and military authorities are focusing their attention not on new weapons per se, but on computer technologies that can enhance existing military arsenals and capabilities. Above all is the race to master so-called artificial intelligence (AI).

From its earliest days, military priorities have guided AI research. The event widely considered to mark the foundation of AI as a discipline—a workshop at Dartmouth College in 1956—was funded partly by the US military’s Office of Naval Research. The close military ties continued in the following decades. As the historian of computer science Paul Edwards explained in his book The Closed World: “As the project with the least immediate utility and the farthest-reaching ambitions, AI came to rely unusually heavily on ARPA [the Advanced Research Projects Agency, now known as DARPA] funding. As a result, ARPA became the primary patron for the first twenty years of AI research”.

AI is a slippery concept. None of the achievements in the field in the 20th century actually involved independently “intelligent” machines. Scientists simply found ways to computerize decision-making using mathematics. For many decades AI remained either stuck in the realm of theory or limited to the performance of narrow algorithmic tasks. While a machine could beat a chess world champion in 1997, it took until 2015 before a machine could defeat a human champion in the Chinese strategy game Go. AI systems have long struggled to perform more complex, intuitive tasks irreducible to abstract logic.

Nonetheless, the 21st century has seen an explosion of interest and investment in AI. The term remains a catchall for anything that makes impressive use of algorithms. But machine learning has undeniably become more advanced, and the data sets from which such systems learn have grown vaster. Universities around the world are experimenting with “deep learning”, in which layered artificial neural networks, loosely modeled on the structure of the human brain, learn to recognize patterns in vast quantities of data. The field appears to have entered a golden age, which may still be in its infancy.
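To give a sense of what this “learning” actually involves, here is a minimal sketch in Python of a tiny two-layer neural network trained by gradient descent. The task (the XOR function, which no single-layer model can represent), the architecture and the learning rate are all illustrative choices for the example, not a description of any particular research or military system:

```python
# A minimal sketch of "deep" learning: a tiny two-layer neural network
# trained by gradient descent to learn XOR. Illustrative only; real
# systems stack far more layers and train on vastly larger data sets.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: a hidden layer of 4 units, then 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error to adjust every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Deep learning scales this same principle (layers of weights repeatedly nudged to reduce prediction error) up to networks with billions of parameters trained on enormous data sets.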

The AI “boom” is centered, as before, on the military. A recent report from Research and Markets forecast that the global defense industry market for AI and robotics will grow to a value of US$61 billion in 2027, up from US$39 billion in 2018 (a cumulative expenditure of US$487 billion in ten years). This market growth is being driven by massive spending from major nation states, above all the US but including countries like China, Israel, Russia, India, Saudi Arabia, Japan and South Korea.

The big sellers in this market include the usual suspects—defense contractors such as Lockheed Martin, Boeing, Raytheon, Saab and Thales—as well as major technology and computing companies such as Microsoft, Apple, Facebook, Amazon and Alphabet (Google’s parent company). Indeed, the new AI arms race has seen increasingly close links forged between the California tech giants and the Pentagon. Amazon’s new CEO Andy Jassy sits on the National Security Commission on Artificial Intelligence (NSCAI), which is advising on the rollout of AI in the US armed forces. Jassy’s fellow NSCAI commissioners include Microsoft executives, and the former Alphabet CEO Eric Schmidt.

What, then, do militaries plan to do with the AI they’re spending billions on developing? There are three main areas of interest: lethal autonomous weapons (also known as “killer robots”), cyber-attacking software, and surveillance and tracking systems.

The first category is what it sounds like: take existing weapons and make them autonomous. This is a military strategist’s dream—to have machines engage targets with minimal human direction or supervision. The guiding principle is “force multiplication”—that is, helping a smaller number of people achieve a greater level of destruction.

Under the Obama administration, unmanned (albeit remotely controlled) drones became a key tool of US force and intimidation in the Middle East and South Asia. Edward Snowden’s leaks in 2013 revealed how global metadata surveillance allowed drones to geolocate the SIM card of a suspect, then track and strike whoever held the device. “We kill people based on metadata,” admitted US General Michael Hayden. Or as one National Security Agency unit’s motto put it: “we track ‘em, you whack ‘em”.

But even this technology seems crude compared to the newer US drones. Frank Kendall, Secretary of the US Air Force, announced in September 2021 that the Air Force had recently “deployed AI algorithms for the first time to a live operational kill chain”. While the details of the operation remain secret, Kendall has boasted that AI’s provision of “automated target recognition” would “significantly reduce the manpower-intensive tasks of manually identifying targets—shortening the kill chain and accelerating the speed of decision-making”. A complex algorithm, which learns from other drones’ “experiences”, could allow a drone to independently perform every stage of the “kill chain”: identifying a target, tracking it, deciding to kill it, and finally dispatching the force that physically destroys it.

A new age of weaponry was also signaled by Mossad’s use of a semi-autonomous, satellite-operated machine gun to assassinate Iran’s top nuclear physicist, Mohsen Fakhrizadeh, last year. The software used in the assassination was reportedly able to recognize Fakhrizadeh’s face; none of the bullets hit his wife, seated inches away.

The main global militaries are now trying to introduce autonomous elements to almost all equipment—tanks, submarines, fighter jets and more. The new AUKUS pact, for example, involves more than just nuclear submarines. There is also talk of AI, quantum technology, hypersonic missiles, cyber weapons, undersea drones and “smart” naval mines. Marcus Hellyer of the Australian Strategic Policy Institute envisions a setup in which a “mother ship” with a human crew deploys fleets of smaller, autonomous drones to launch attacks on rival vessels.

Recently, one weapon in particular has climbed to the top of every military’s wish list: drone swarms. In this case, a large number of small drones act in unison, replicating the swarming patterns of birds and bees. It is frighteningly easy to program each participating drone to follow simple local rules (stay close to your neighbors, match their headings, avoid collisions), allowing the swarm to act as a swift and unpredictable whole. These swarms, the subject of the 2017 short film Slaughterbots, are rapidly becoming a reality. Aside from the US and Chinese swarm programs, 2021 has seen testing and development in Spain’s RAPAZ program, Britain’s Blue Bear, France’s Icarus Swarms, Russia’s Lightning swarms, and Indian Air Force projects.
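Just how little central coordination this requires can be seen in a minimal sketch of the classic “boids” flocking rules (cohesion, alignment, separation) on which swarm software builds. The Python below is purely illustrative; every parameter is invented for the example and drawn from no actual drone program:

```python
# A minimal sketch of emergent swarm behavior, loosely following
# Reynolds' classic "boids" rules. Each agent reacts only to nearby
# agents, yet coherent group motion emerges with no central controller.
import numpy as np

rng = np.random.default_rng(1)
N = 50
pos = rng.uniform(0, 100, size=(N, 2))  # positions of 50 agents
vel = rng.normal(0, 1, size=(N, 2))     # initial velocities

def step(pos, vel, radius=15.0, min_dist=3.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (d < radius) & (d > 0)
        if neighbors.any():
            # Cohesion: steer toward the local center of mass.
            cohesion = pos[neighbors].mean(axis=0) - pos[i]
            # Alignment: match the neighbors' average heading.
            alignment = vel[neighbors].mean(axis=0) - vel[i]
            # Separation: steer away from any agent that is too close.
            too_close = (d < min_dist) & (d > 0)
            separation = (pos[i] - pos[too_close]).sum(axis=0)
            new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
    # Cap speed so the simulation stays stable.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 5.0,
                       new_vel * 5.0 / np.maximum(speed, 1e-9),
                       new_vel)
    return pos + new_vel * dt, new_vel

for _ in range(500):
    pos, vel = step(pos, vel)
```

Because each drone reacts only to its near neighbors, the swarm has no central controller to destroy—precisely what makes it attractive to military planners.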

The first deployment of a drone swarm in combat was in Libya in March 2020, when Turkish attack drones were able to autonomously hunt and kill rebels in Tripoli. In May this year, Israel used a drone swarm in Gaza to find and attack Hamas fighters.

Aside from its use in lethal autonomous weapons, AI is central to the emergent frontiers of cyber conflict. In 2009, a computer worm called Stuxnet (of joint US-Israeli origin) was able to find its way into software controlling Iran’s uranium enrichment facilities. Hiding its own tracks, the worm searched for and attacked a specific piece of code, causing uranium centrifuges to spin out of control and destroy themselves.

Stuxnet sparked a cyber arms race. As Michael Webb, director of Adelaide University’s Defense Institute, told the Sydney Morning Herald, the world’s militaries are now “fighting for supremacy in cyber”. The first acts of the next major war might involve the paralysis of satellites, radar systems or electricity grids. Nations are placing their cyber defenses in the control of AI systems, which can react and retaliate to cyber threats faster than any human can. As Webb explains, “Many of the cyber attacks that are the hardest to combat use AI already”.

Imperialist military rivalry has always involved dangerous guessing games. The Cuban Missile Crisis is a classic example, where neither side could confidently know the intentions of its rival. The Soviet submarine B-59—too deep underwater to pick up radio broadcasts—almost launched its nuclear torpedo as the captain believed war had broken out. The possibilities of cyber warfare intensify such dangers—as belligerents are forced to guess what an AI-guided virus might target. If a nuclear-armed state feels vulnerable in cyberspace, and fears the targeting of its nuclear weapons systems, it may choose to deploy such weapons sooner rather than later.

The final AI capacity that militaries are interested in is mass surveillance. This is probably the most developed area to date. Already, AI is being used to scan and analyze footage from millions of drones and CCTV cameras around the world, searching for patterns and tracking particular faces at a scale and speed that no human can match. Particularly when combined with autonomous weapons, drone swarms and the like, AI-enhanced surveillance has a worrying potential to assist military attacks, whether against those deemed “enemy combatants” abroad or against targets at home.
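The basic building block of such surveillance (automatically finding faces in a video stream) is now off-the-shelf technology. Below is a minimal sketch using OpenCV’s bundled Haar-cascade detector; “camera.mp4” is a hypothetical input file, and state systems use far more capable deep-learning models at vastly greater scale:

```python
# A minimal sketch of automated face detection over video frames,
# using OpenCV's bundled Haar-cascade detector. Illustrative only.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
video = cv2.VideoCapture("camera.mp4")  # hypothetical input file

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, width, height).
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(f"frame {frame_idx}: face at ({x}, {y}), size {w}x{h}")
    frame_idx += 1

video.release()
```

Linking such detections to particular identities, across millions of cameras at once, is the step that turns a toy script like this into a tracking apparatus.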

Many scientists have warned against the military use of AI, particularly autonomous weapons. Thousands of AI and robotics researchers have signed an open letter, initiated in 2015, demanding a UN ban on killer robots to avert a global AI arms race.

Unfortunately, this arms race is already well underway. The Pentagon is in a self-described “sprint” to catch up with China’s AI capabilities. In July, US Defense Secretary Lloyd Austin announced a new US$1.5 billion investment to accelerate the military adoption of AI over the next five years. A new sense of urgency was added in September, when Nicolas Chaillan, the US Air Force’s first chief software officer, resigned in protest, arguing that the US was losing the AI arms race.

Despite the rapid development of AI technology in the past decade, it’s important to note that it still has major limits. Robots are still far from rivaling the abilities of human brains. They often fail to understand new problems or circumstances foreign to their coding or prior “experience”. Autonomous systems used by militaries today almost always involve humans in decisions to target and kill. There are big question marks about how much trust authorities are prepared to place in autonomous systems.

Nobody knows exactly how AI could shape a future war. But we shouldn’t expect the fully-automated conflict imagined in some science fiction. There is a good reason why China has built the world’s largest standing army, and why fully-staffed US submarines and aircraft carriers surround China’s coast.

Nor does new technology make military hierarchies all-powerful or immune to insubordination. It can introduce new instabilities, as well as avenues for resistance. We got a small glimpse of this in 2018, when thousands of Google workers protested against the company’s participation in the Pentagon’s Project Maven, which used AI to interpret drone footage and assist air strikes. Their rebellion forced Google to drop the project.

AI enthusiasts point to the immense potential of this technology to help create a more productive economy and a healthier society. But the promise of AI will be limited so long as its development occurs within the violent and destructive dynamics of capitalism. Today, a large part of AI research and development is bound up with the projects of competing military regimes. It is used above all for its destructive, not progressive, potential. Its development in the hands of imperialists should be opposed by socialists, just like the nuclear arms race of the 1950s and ’60s.

Source: https://redflag.org.au

 
