The Australian Department of Defence has released a new report outlining how to reduce the ethical risk of artificial intelligence projects, noting that cyber mitigation will be key to maintaining the trust and integrity of autonomous systems.
The report was drafted following concerns within Defence that failing to adopt emerging technologies in a timely manner could result in military disadvantage, while adopting them prematurely, without sufficient research and analysis, could result in inadvertent harm.
“Significant work is required to ensure that introducing the technology does not result in adverse outcomes,” Defence said in the report [PDF].
The report is the culmination of a workshop held two years ago that brought together Defence, other Australian government agencies, the Trusted Autonomous Systems Defence Cooperative Research Centre, universities, and defence industry companies to explore how best to develop ethical AI in a defence context.
In the report, participants jointly identified five key considerations: trust, responsibility, governance, law, and traceability. They consider these essential to the development of any ethical AI project.
When explaining these five considerations, workshop participants said all defence AI projects need the ability to defend themselves from cyber attacks, given the global growth of cyber capabilities.
“Systems must be resilient or able to defend themselves from attack, including protecting their communications feeds,” the report said.
“The ability to take control of systems has been demonstrated in commercial vehicles, including ones that still require drivers but have an ‘internet of things’ connection. In a worst-case scenario, systems could be re-tasked to operate on behalf of opposing forces.”
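The report does not prescribe a mechanism for protecting those feeds, but a minimal sketch shows one common approach: authenticating each tasking command with an HMAC so that a message injected onto the channel by an adversary is rejected rather than acted on. The key handling and message format here are assumptions for illustration only.

```python
import hmac
import hashlib

# Hypothetical pre-shared key; in practice this would be provisioned
# into a hardware security module, never stored in source code.
SECRET_KEY = b"replace-with-provisioned-key"

def sign_command(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify origin."""
    tag = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    return command + tag

def verify_command(message: bytes) -> bytes:
    """Reject any tasking message whose tag does not verify.

    A re-tasking attempt injected without the key fails here,
    before it ever reaches the system's control loop.
    """
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("unauthenticated command rejected")
    return command
```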
Workshop participants added that a lack of investment in sovereign AI could undermine Australia's ability to achieve sovereign decision superiority.
As such, the participants recommended providing early AI education to military personnel to improve Defence's ability to act responsibly when working with AI.
“Without early AI education to military personnel, they will likely fail to manage, lead, or interface with AI that they cannot understand and therefore, cannot trust,” the report said. “Proactive ethical and legal frameworks may help to ensure fair accountability for humans within AI systems, ensuring operators or individuals are not disproportionately penalised for system-wide and tiered decision-making.”
The report also endorsed investment in cybersecurity, intelligence, border security and ID management, investigative support, and forensic science, and recommended that AI systems only be deployed after demonstrating effectiveness through experimentation, simulation, or limited live trials.
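To illustrate that staged-evidence idea in code, the following is a hedged sketch of a deployment gate that clears a system only once experimentation, simulation, and a limited live trial have each recorded a passing result. The stage names echo the report, but the threshold and structure are invented.

```python
from dataclasses import dataclass

# Stages mirror the report's experimentation, simulation, and limited
# live trials; the pass threshold is an invented placeholder.
REQUIRED_STAGES = ("experimentation", "simulation", "limited_live_trial")
PASS_THRESHOLD = 0.95  # minimum success rate per stage

@dataclass
class TrialResult:
    stage: str
    success_rate: float

def cleared_for_deployment(results: list[TrialResult]) -> bool:
    """A system is cleared only after every required stage passes."""
    passed = {r.stage for r in results if r.success_rate >= PASS_THRESHOLD}
    return all(stage in passed for stage in REQUIRED_STAGES)
```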
In addition, the report recommended that defence AI projects prioritise integration with existing systems. It cited automotive vehicle automation as an example, with features such as collision notifications and blind-spot monitoring supporting the human driver's cognitive functions.
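The underlying design principle is that automation should inform the human operator rather than act for them. A toy sketch, with an invented sensor format and alert callback, makes the pattern concrete:

```python
def blind_spot_alert(radar_contacts, warn):
    """Warn the driver about vehicles in the left blind spot.

    This sketch only informs; it never steers or brakes, mirroring
    the support-not-replace pattern the report points to. Both the
    (bearing_deg, range_m) contact format and the warn callback are
    invented for illustration.
    """
    for bearing_deg, range_m in radar_contacts:
        if 90 <= bearing_deg <= 170 and range_m < 3.0:
            warn("vehicle in left blind spot")

# Example: blind_spot_alert([(120, 2.1)], print) prints the warning.
```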
The workshop members also created three tools designed to help AI project managers manage ethical risk.
The first two are an ethical AI defence checklist and an ethical AI risk matrix, both of which can be found on the Department of Defence's website.
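The checklist and matrix themselves are documents rather than code, but a generic likelihood-by-severity matrix of the kind the second tool embodies can be sketched as follows. The category labels and ratings here are illustrative, not Defence's.

```python
# Generic 3x3 likelihood-by-severity matrix; the report's actual
# categories and ratings may differ.
LIKELIHOOD = ("unlikely", "possible", "likely")
SEVERITY = ("minor", "moderate", "severe")

RATING = [
    # minor     moderate    severe
    ["low",     "low",      "medium"],   # unlikely
    ["low",     "medium",   "high"],     # possible
    ["medium",  "high",     "high"],     # likely
]

def ethical_risk(likelihood: str, severity: str) -> str:
    """Look up a qualitative rating for one identified ethical risk."""
    return RATING[LIKELIHOOD.index(likelihood)][SEVERITY.index(severity)]

# e.g. ethical_risk("likely", "moderate") -> "high"
```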
The third tool is an ethical risk assessment for AI programs that require a more comprehensive legal and ethical program plan. Called the Legal and Ethical Assurance Program Plan (LEAPP), the assessment requires AI project managers to describe how they will meet the Commonwealth's legal and ethical assurance requirements.
The LEAPP requires AI project managers to produce a document covering legal and ethical planning, progress and risk assessments, and input into Defence's internal planning, including weapons reviews. Once written, the assessment is sent to Defence and industry stakeholders for review and comment before it is considered for Defence contracts.
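The report does not publish a schema for the plan document, but a hedged sketch of a record holding the sections described above might look like this, with every field name assumed rather than taken from Defence.

```python
from dataclasses import dataclass, field

@dataclass
class LEAPPDocument:
    """Sketch of a LEAPP submission; field names are assumptions,
    not a published Defence schema."""
    project_name: str
    legal_ethical_planning: str        # how assurance requirements will be met
    progress_and_risk_assessment: str  # current status and open ethical risks
    internal_planning_inputs: list[str] = field(default_factory=list)  # e.g. weapons reviews
    stakeholder_comments: list[str] = field(default_factory=list)      # Defence/industry review feedback
```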
As the report's findings and tools are only recommendations, it did not specify which defence AI projects fall within the scope of the LEAPP assessment.