17.08.2023

C2 and Trust

Trust has always been a central concept in military command and control: it can be based on a ‘Band of Brothers’ construct or something a bit more complex with allies and partners. Yet this human-to-human rubric does not transfer straightforwardly to trust between humans and machines. There is a need to think more about human-machine trust as more C2 systems are added into military headquarters and their assigned forces. What emerges is a demand not so much for coders (or software-savvy commanders) as for diverse education sets and inquisitive minds, especially if delegated authority and mission-command philosophies are to remain more than pipedreams.

Trust within the C2 paradigm has been a constant for as long as wars between humans have been conducted. Some of this trust relates to the relationship between commanders (the Nelsonian Band of Brothers, likewise with the Mongols, Persians or Carthaginians), but more interesting today is the issue of trust between humans and the systems within C2 headquarters. Human relationships (usually built on trust) and training are vital for success in contemporary warfare: good battlefield C2 complements this by reducing friction and isolation. Bad C2 and poor relationships undermined by a lack of trust add friction, increase isolation and reduce the possibility of success on operations.

Underpinning all this is the concept of TRUST. Still human-to-human, but increasingly about human-machine trust, more so than we have been used to. How much can staff and commanders trust the systems in their HQs? How far should they trust them? What does trust even mean in this context?

There is no agreed single definition of trust that works for the military and national security sectors. Philosophers will have one definition, scientists another. In any case, two common facets of trust transpose well: trust is built on the dual components of competence and motive (or intent). Traditionally, for military personnel this means that trust is founded on a reasonable belief in the competence of colleagues and a common motive or intent. ‘One team, one fight’, as the saying goes. Definitions like this make sense in a military context, even if they are starting to lose resonance across wider society.

Yet we cannot simply transpose this concept onto the instruments of C2 being inserted into military headquarters and forces. Machines, by their very being, do not possess sentient thought and therefore cannot have a motive. In a strictly academic sense, then, trust is a concept that can only exist between humans.

Be that as it may, trust also develops informally through a sense of familiarity and shared experience. It is not simply an emotional bond; it is the normalisation of behaviours. It is in this way that an automated C2 system in a military HQ (such as an automated Common Operating Picture) becomes trusted. Not through motive, intent or competence in a human sense, but through familiarity and normalisation: where a picture provides evidence and has been shown to be correct the majority of the time, it becomes a ‘trusted’ tool. In this, there is a predictability and shared understanding that emerges from human-machine teams.

The extension of this, as AI increasingly arrives in HQs and military forces, will be the implicit trust invested in systems that provide course-of-action recommendations and intelligence assessments. As behaviours and familiarity with these systems become embedded over time, confidence in their ability to recommend the right decisions will grow. They will, doubtless, become as trusted as precision navigation has become at providing the ‘right’ answer. We have, to an extent, already started delegating decisions and interpretation to machines. The question we should be asking is whether we have done this consciously or out of habit.

As navigation systems have evolved, military operators have learned how to interrogate the data effectively: buildings, landmarks or geography being out of place immediately tell a user that something is amiss, and radar or other systems can provide secondary confirmation. Yet this type of interrogation is not immediately transferable to the more sophisticated C2 systems entering military HQs and formations today. Our inability to intelligently interrogate and validate these systems is an aspect of AI in C2 that warrants a good deal more concern than is currently evident.

Data is influencing decision-making in a way that could fundamentally alter what we get from these systems and the very way we conduct C2. Our people need to be trained to understand how to think about these systems; indeed, we are yet to define the model of thinking for the digital age: a renaissance man (or woman) for the contemporary world. The policy agendas have skipped this point of reference, diving immediately into the need to regulate and safeguard AI developments. Without understanding how we are to train people to think about and question these systems, we have skipped a vital step.

Without some of these critical building blocks, we will develop a familiarity – potentially an implicit trust – in AI within C2 systems that is without validity or merit. Until we educate and equip our people with the skills needed, we will continue to be reliant on the goodwill and presumed common cause (motive and intent) of the coders and developers of such systems. In selecting AI C2 systems for military use now, picking companies that match Western military culture has never been more important.

[A complete discussion of Trust in C2 is available in the podcast, Command and Control, Season 1, Episode 5, with guest Christina Balis]
