London, 05/12/2024

Last updated, 03/10/2025

Human rights law and AI in National Security Context

It is debatable whether human rights law and AI can be easily linked within the national security context, and specifically, whether the human rights angle should be taken into account during the design, development and deployment phases of AI capability in defence and/or national security. Below is a brief summary of the most recent developments in AI law which confirm that a human rights review may indeed be required, and that there is a clear link between human rights law and AI. That said, the need for any such human rights assessment should be established on a case-by-case basis, taking into account all relevant factors.

Council of Europe Convention on AI

The Council of Europe Framework Convention on AI (the Convention) is the world’s first binding international treaty on AI and human rights. However, despite its importance and wide scope, it contains exemptions for national security and defence, placing technology developed for use in these two areas outside its scope. Even with these exclusions in place, the Convention will still have certain national security implications in all participating countries, and governments around the world should take note of it (source: CETaS).

Specifically, the Convention covers the private sector to some extent: it extends to private entities, and in practice it is the private sector that is currently largely responsible for the development of AI capability, given its expertise and the high costs involved. Furthermore, according to Taylor Woodcock, “the key relevance of the international human rights law rests on the procedural obligations, such as the duty to investigate, that will be triggered as a result of the violation of the international humanitarian law and the international human rights law” (source: Human Rights Here).

Given the above, it is clear that the national security community should take human rights into account when developing AI systems. To this end, it should consider which of the Convention’s provisions it might adopt voluntarily, while tracking the progress of the Convention for what it reveals about global trends in AI regulation.

The AI Act and the dual use AI systems 

The AI Act (the Act) is not a human rights treaty per se and, like the Convention, does not apply to defence and national security matters. However, as identified by the Centre for Emerging Technology and Security (CETaS), where an AI system is dual use, meaning that in addition to national security or defence it may also be deployed for civilian, humanitarian, law enforcement or public security purposes, the Act will still apply (source: CETaS).

The Act mandates a risk-based approach to assessing AI systems: they are divided into four risk categories (unacceptable, high, limited and minimal risk), each with different requirements. These risks should be identified via a suitable impact assessment, and satisfactory mitigation measures for each of them should be identified as part of that assessment.

Assessing AI Systems from the human rights angle

As concluded by CETaS, “[w]hile conducting human rights impact assessments [is not] mandatory in national security contexts, methodologies which have been prepared in association with the Council of Europe [do] provide detail on how to identify human rights risks associated with new AI projects. These methods could be incorporated within national security approaches to AI assurance” (source: CETaS). It should be noted that the Convention itself is not a ‘template’ for an impact assessment; such an assessment should instead be developed and applied by the interested parties based on the principles contained in the Convention.

The Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) provides guidance on how to conduct such an assessment by identifying and assessing eight key dimensions of an AI system.

Another methodology specifically designed for the assessment of AI systems which could potentially be utilised is the ‘Human rights, democracy, and the rule of law assurance framework for AI systems’ (the HUDERAF framework). The HUDERAF framework covers a broad range of areas that should be assessed from the human rights perspective when developing AI systems, from the design stage right through to the deployment, use and technology retirement stages. It must be noted that HUDERAF is not intended to be an official interpretative tool for the Convention, but rather should be seen as an aid to risk assessment which can form part of a larger set of tools (source: Council of Europe Committee on AI).

Lastly, the OECD’s Advancing Accountability in AI paper, which is based on the OECD AI Principles, provides helpful guidance on how to undertake AI system reviews while taking human rights implications into account and assessing them so that any risks can be mitigated. The OECD Guidelines for Multinational Enterprises, which address business conduct and human rights, further support this approach.

Summary

In summary, at least in the context of the AI Act and/or the Convention, the defence and national security sectors will face some human rights impact regardless of the exemptions and carve-outs, and governments should take this into account when developing or procuring AI systems. A risk-based assessment should be undertaken early on, ideally at the design stage. The impact assessment resources mentioned above, whether the HUDERAF framework or the Council of Europe-developed impact assessments, should provide a good base as well as a source of information on how to assess the legality of AI systems in practice from the human rights law perspective. Such assessments should be built into the existing assessments utilised by defence and national security departments, as well as by those procuring AI technology. This is especially important given the rapid development of AI laws and the close links between the two sectors.

Subscribe to our legal newsletter and stay on top of legal developments in AI!