Talking about artificial intelligence alone isn’t enough. We need to discuss the systems in which it will be a critical component, Genevieve Bell, Katherine Daniell, and Amy McLennan write.
Defining artificial intelligence (AI) is messy. Ask 10 experts to define AI and you will get 10 different answers. It is easy to think that the most important policy question, then, is a definitional one that tidies up the mess: what is AI?
But AI is not a singular thing. According to Kate Crawford and Meredith Whittaker, AI “refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing”.
That means there are lots of AIs, in lots of different places. And that’s only the beginning. Systems which contain AI are built from natural resources, powered by energy, created and sold on markets, incorporated into our societies in myriad ways, and have a range of effects on people’s lives.
So what if the mess is the important part? What if, instead of focusing on AI as if it were a singular object, we think about the complex systems into which AI is being built? Single definitions don’t help us understand systems, because they erase the multiple perspectives, definitions, values, and relationships that shape a system and how it behaves.
None of this is new. All public policy areas are messy. Health, education, defence, immigration, environment, social policy and others are interconnected and complex. They change over time, they look different depending on who we are and where we are positioned, and decisions in one area affect others.
For some time, we have drawn on systems thinking to help us make sense of this mess. Systems concepts such as feedback loops have become increasingly common in public policy and service development, especially since the 1970s.
Decision-making relating to AI might also benefit from systems approaches. We are not the first to suggest this. In 1946, when the first general-purpose computing machines began operating, a mathematician and philosopher named Norbert Wiener proposed a new area of study – cybernetics – to understand the relationships between biological, technical, and human systems.
Wiener developed the idea of the feedback loop, and he helped shape systems engineering and control theory. In 1950, he considered how cybernetic thinking might be applied to society. He emphasised the role of policy feedback in the system, and the importance of thinking carefully, creatively, and critically about the decisions we make about new technology.
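To make the idea concrete, a feedback loop can be sketched in a few lines of code: a controller measures the state of a system, compares it against a goal, and feeds the difference back as a corrective action. The sketch below is a minimal, hypothetical thermostat-style example for illustration only; it is not drawn from Wiener’s work or from any particular policy system.

```python
# Minimal illustrative sketch of a feedback loop (hypothetical thermostat example).
# The controller repeatedly measures the gap between the goal and the current state,
# and feeds a correction back into the system.

def feedback_loop(target, current, gain=0.5, steps=20):
    """Drive `current` towards `target` by repeatedly correcting the error."""
    for step in range(steps):
        error = target - current      # measure the gap between goal and state
        correction = gain * error     # decide how strongly to respond
        current += correction         # act on the system and observe the new state
        print(f"step {step:2d}: state = {current:.2f}")
    return current

if __name__ == "__main__":
    feedback_loop(target=21.0, current=15.0)  # e.g. heating a room towards 21 degrees
```

In a simple loop like this one, the strength of the response scales with the size of the error; Wiener’s broader point was that the same loop structure appears across biological, technical, and human systems.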
Can cybernetic theory be applied in policy practice?
Thinking cybernetically requires us to understand the dynamic relationships between our ecosystem, the technologies we create, and people. To do this, we need to bring in diverse perspectives from the breadth of the system.
We also need to ask questions to understand the system and our place in it, rather than simply solving the problems we can already see for ourselves. Governing cyber-physical systems for public good requires us to continually ask useful and salient questions, to stay curious, to admit we don’t know everything, and to expect the system to continue to change.
Good practice in every area of public policy relies on all of us – including policymakers, politicians, and citizens – asking good questions.
Others have made a similar point. For example, Bruce Pascoe recently emphasised the importance of asking questions, raising doubt and wrestling with concepts rather than simply trusting government or others.
Of course, asking questions is easy to say and much harder to do. So where might we start? What is a cybernetic question?
As with any complex system, the list of questions we could ask is limitless, and some questions will be more important than others. The 3A Institute at The Australian National University has begun to develop some core questions about AI at scale. Around this set of questions, we are developing a new applied science to scale AI safely into cyber-physical systems.
In what context is the system operating? How might we determine the boundaries of the system?
Will the system – or parts of it – have autonomy, and to what degree will it be able to make decisions about its actions without an operator or other constraints?
Will the system have agency? Will it act or exert power on behalf of others? Will it bear responsibility? Will it be subject to laws, controls, and restrictions?
How will we think about assurance in the system? What are the safety, ethical, risk, security, and policing concerns? How will we reassure people that they are safe?
What indicators will we use? How will we monitor performance? What will success look like, and how do we ensure the metrics align with the future we want to live in?
What interfaces will the system have, and how will we know we are interacting with it? What form will that interaction take and how might it change the system?
Finally, and most importantly, we must consider the system’s purpose. Deciding what we actually want to achieve with a system is at the core of this framework.
The 3A Institute is developing research to test and iterate these core questions. We are also building training programs, including a Master’s program and a range of short courses, to develop a new generation of practitioners who can put cybernetic questions into practice.
Why is all this important? Because the public policy we create when it comes to AI at scale will only be as good as the questions we ask.