London, 05/12/2024

Article 36 AI System Assessment

Use of AI developed for, or aimed solely at, defence or national security purposes during armed conflict is governed by international humanitarian law (IHL) (source: House of Lords), and the legal basis for its use is indeed IHL. Where such AI is put to an additional, secondary use during peacetime (e.g. law enforcement), the assessment will also need to address human rights aspects. In the conflict context, however, the correct assessment is one based on Article 36 of the 1977 Additional Protocol I to the Geneva Conventions of 1949 (source: House of Lords).

Below is an overview of what a comprehensive Article 36 AI assessment should contain in order to ascertain whether the system is capable of being used in a manner that is compliant with IHL (source: House of Lords). It should be noted, however, that given AI’s “black box” character, as well as its autonomous nature (in the case of autonomous systems), it may be difficult to ensure the legality of a particular system at all times and with every use (source: House of Lords). This is why a comprehensive assessment capable of identifying risks at the design stage, and indeed throughout the lifecycle of an AI system, is not only vital to establishing the legality of such a system by the deployer, but also essential to demonstrating compliance and accountability.

So, what should such an assessment include? The main parts that should be covered are a critical analysis of the applicable international humanitarian law principles, and of the technical aspects of the system. As to the former, the assessment should evaluate legality against the principles of humanitarian law, and indeed against the principles of public international law as a whole. Article 36 stipulates that “[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable […].” As to the latter, the assessment should include a detailed description of the features which form part of the capability and which are instrumental in operating it, whether with human assistance or completely autonomously.

Lastly, the largest part of the assessment should relate to risk identification and mitigation. A comprehensive Article 36 assessment should not only lead to effective risk identification but should also provide ample space for detailed mitigation analysis to ensure any identified risks can be addressed. As per the CCW guiding principles (source: United Nations), risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any national security or defence system (source: House of Lords).

Subscribe to our legal newsletter and stay on top of legal developments in AI!