WhatIF Lab: Narrative Scenarios for Trusted Autonomous Systems

 


Trusted Autonomous Systems (TAS) is a Defence cooperative research centre with a focus on autonomous and robotic technology. In collaboration with the Australian Department of Defence and its stakeholders, TAS has created the Method for Ethical AI in Defence, a framework to reduce the ethical risks incurred when artificial intelligence (AI) is used in military contexts.

As a science fiction author, Joanne Anderton joined the project to create short, targeted narratives accessible to an audience of Defence and Defence industry personnel, through which they could test the framework for potential stress points, risks, and triggers. Although she had no background in Defence and no specialised knowledge of AI, she adapted her skills as a writer to generate the kind of creative futurist scenarios the project required.

The Challenge

There are five key facets to the Method for Ethical AI in Defence: Responsibility (who is responsible for AI?), Governance (how is it controlled?), Trust (how can it be trusted?), Law (how can it be used lawfully?), and Traceability (how are its actions recorded?). Defence wanted narratives that put these facets under pressure across the three domains of air, land, and sea.

These narratives would be set five to ten years in the future, and Joanne was given detailed case study notes for each, covering the technology she could use and the kind of situation in which that technology might be deployed. The case studies were a kind of checklist, a set of boxes that had to be ticked in each scenario. At the same time, she had to make the narratives engaging for both general and Defence-affiliated audiences.

The Solution

Joanne found writing to a checklist in this way deeply strange. To overcome it, she mapped the case studies onto a simple narrative arc: a set-up that established character, technology, and mission; an inciting incident and turn that highlighted the key ethical conundrums; and a resolution that emphasised the human impact. Using this arc she could reimagine each case study as a story, and she generated three scenarios, one for each domain (air, land, and sea). The drafts were then sent to representatives of the different stakeholders for review, after which she met with them online and together they teased each scenario apart. Over multiple meetings and rewrites, they refined the scenarios and the resulting narratives until all participants were happy with the pieces in play.

The Impact

The Method for Ethical AI in Defence is a practical toolkit designed to aid the development and use of artificial intelligence in a Defence context. Given the rapid growth of AI capabilities and the disruption the technology is already creating across society, frameworks like this are becoming increasingly important. The narrative scenarios Joanne created are only one of the tools in the kit, but one that prioritises humans and their interaction with technology rather than defaulting to a focus on the technology itself.

Stories are a powerful way to connect with other humans and to imagine possible futures. Storytelling is something many of us take for granted, but it is a complex skill that takes years to master. By employing techniques traditionally associated with science fiction writing, these narrative scenarios enable the toolkit's users to empathise with the personnel who will, one day, work closely with technology that is only now in its infancy.