War algorithms and international law - Interview with Dustin A. Lewis

Ahead of the Expert Briefing on "war algorithms" and international law that we will be hosting on 1 November, we had the chance to talk to one of the speakers: Dustin A. Lewis, Senior Researcher at the Harvard Law School Program on International Law and Armed Conflict (PILAC).

As we will be discussing during PHAP’s next Expert Briefing on 1 November, Harvard Law School PILAC recently published a report on war algorithms, exploring the legal implications of technical autonomy in armed conflict. Why might this be an important topic today for humanitarian actors?

There are at least two key reasons.

First, humanitarian actors generally operate in relation to armed conflict. And the development and use of more and more forms of technical autonomy are likely to shape—and, in certain respects, transform—how, where, and by whom wars are conducted. Part of the concern, then, is for humanitarians to understand how war algorithms might affect contemporary battlespaces.

Of course, some armed conflicts will not involve those technical advancements. But technologically sophisticated parties—think here not only of states but also, perhaps, of some organized armed groups—are likely to seek a military advantage, or even just logistical efficiencies, through autonomous technologies.

So far, the greatest focus in this context has been on “autonomous weapons.” In light of those weapons’ lethality and destructive capabilities, that focus is understandable. Yet in addition to the conduct of hostilities, there are numerous other functions of warring parties that autonomous technologies could implicate. Consider, for instance, guarding and transporting persons deprived of liberty, or tasks concerning crowd control and public security in occupied territories. Those activities might soon be delegated, at least in part, to algorithmically derived systems.

Second, humanitarian actors themselves might increasingly utilize—for the benefit of affected populations—advancements in artificial intelligence, robotics, and related fields. Telesurgery (surgery performed by a physician distant from the patient, using robotics and visualization systems) is one example. Another example concerns algorithmically based technical systems aimed at allocating food, clean water, and shelter among an affected population. Part of the efficacy of humanitarian response, then, might increasingly turn on developing and using war algorithms.

Building competence in technical architectures might therefore become more important for effective humanitarian action. At the same time, it will be vital to understand how these technologies fit within—or might frustrate—existing legal frameworks pertaining to armed conflict.

In the report, you use the term “war algorithm.” What does this signify and why did you decide to use this term?

The background concern, as highlighted by such scholars as Frank Pasquale, is that power and authority are increasingly expressed algorithmically. In other words, at least for those of us with access to certain advanced technologies, algorithms are playing a more important role in our everyday lives. (Think of self-driving cars, online movie suggestions, automated credit ratings, and machine review of job applications.) War is no exception. Indeed, a number of militaries are actively pursuing a qualitative edge through autonomous technologies. Algorithms are a technical pillar of the bulk of those technologies.

We put forward the concept of “war algorithm” because we believe it is a useful lens through which to think about—and to pursue—accountability concerning technical autonomy in relation to armed conflict. We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. We therefore do not limit our inquiry to weapons.

Our foundational technological concern is broader: the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. (By focusing on constructed systems, we sidestep the question of what does and does not constitute a “robot.”)

The report lays out a number of challenges relating to accountability for technical autonomy in armed conflict. How would you say this report fits into overall efforts to address those challenges – and what impact do you hope it will have on developments in this area?

An array of initiatives in this domain is already underway. These approaches cut across legal, technological, and operational communities. Some of the initiatives are led by states, others by United Nations system actors; some by technology experts, others by advocacy NGOs. Against that backdrop, we hope the report helps provide a useful frame through which to formulate and take concrete steps to address accountability.

One of our assumptions is that focusing on the technical ingredients underlying war algorithms might help identify what can be regulated. That, in turn, might help efforts to define and pursue what should be regulated in this domain. For instance, the war-algorithm framing might further accountability efforts concerning “autonomous weapons.” That is because some of those efforts have been frustrated to date in part by lack of a definitional consensus on what can and should be addressed.

In this context, accountability is not a well-defined term. So it might be useful to say what we mean by it. In short, we are primarily interested in the duty to account for the exercise of power over the design, development, or use—or a combination thereof—of a war algorithm. In other words, we might think of accountability in terms of holding someone or some entity answerable for the design, development, or use of war algorithms.

That power may be exercised by a diverse mixture of actors. Some are obvious—such as states and their armed forces. But myriad other individuals and entities may exercise power over war algorithms, too. Consider the broad classes of “developers” and “operators,” both within and outside of government, of such algorithms and their related systems. Also think of lawyers, industry bodies, humanitarian actors, political authorities, members of organized armed groups—and many, many others.

Through the war-algorithm lens, we attempt to link international law and related accountability architectures to relevant technologies. In doing so, we sketch a three-part accountability approach: state responsibility under international law, individual responsibility for international crimes, and a broader notion of “scrutiny governance.” While not exhaustive, the framework highlights traditional and unconventional accountability avenues. The former include war reparations and war-crimes prosecutions. And the latter include normative design of technical architectures and self-regulation among technologists.

We focus largely on international law. The reason is that international law is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law differs from the numerous domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in other spheres. Moreover, by not limiting our inquiry only to weapon systems, we take an expansive view. We do so in part to show how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.

Join us on 1 November for the Expert Briefing "War algorithms and international law – Accountability for technical autonomy in armed conflict" with Naz K. Modirzadeh and Dustin A. Lewis to learn more.