Last updated: 04/10/2025

Artificial Intelligence in conflict and public international law

According to the UN, “the human rights framework provides an essential foundation that can provide guardrails for efforts to exploit the enormous potential of AI, while preventing and mitigating its enormous risks” (source: UN High Commissioner for Human Rights).

Because AI is such a new field of technology, the law may at times struggle to keep pace as the technology develops and improves. It is still not clear whether international human rights law should apply when assessing the legality of AI capabilities intended for use in conflict. Certain countries, such as the United Kingdom, are currently leading the discussions on this subject (source: ICRC Law and Policy Blog).

It could be argued that, because the human rights framework is applicable at all times, it should also apply, at least to some extent, to legality assessments of AI capabilities. This is further supported by the commentary under customary law’s Fundamental Guarantees and Rule 87 relating to the principle of humanity.

However, it should be emphasised that during conflict international humanitarian law (IHL) is the default legal framework and takes precedence over international human rights law. In such situations, the role of the human rights framework is to supplement IHL where gaps exist. In addition, the applicability of human rights protections depends on whether the individual concerned is within the “jurisdiction” of the State involved.

Lastly, there is one major difference between the applicability of human rights law and IHL. Under human rights law, all persons are equally protected against breaches. Under IHL, by contrast, combatants and civilians are not afforded equal protection against attacks. Force can be used against combatants unless they lay down their arms, although such force must not cause superfluous injury (source: ICRC International Humanitarian Law, a Comprehensive Introduction, p. 35, 07/2022).

In terms of human rights instruments that are specifically AI-focused, there have been two recent developments in Europe.

The first international treaty on AI – the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law* – was adopted in 2024. The treaty unfortunately lacks enforcement mechanisms and, moreover, does not apply to defence and national security matters. Secondly, the updated data protection Convention 108+ has recently been supplemented with AI Guidelines. These guidelines are likewise non-binding, being advisory rather than legal instruments.

Finally, there is the EU AI Act (the “AI Act”), which is not yet fully applicable. The AI Act has extraterritorial scope, applying to organisations that place AI systems on the EU market, put them into service in the EU, or whose AI output is used in the EU, regardless of the organisation’s location. This means non-EU companies must comply if their AI targets EU users or produces output intended for use within the EU.

The AI Act makes exceptions for certain uses, such as AI in national security, defence, and some research activities. However, defence/military use does become relevant under the AI Act where a system has a dual military and civilian purpose, as well as at the military/defence procurement stage, including during development by private companies of systems intended for military/defence purposes. Although not a human rights instrument per se, the AI Act provides a mechanism for assessing high-risk AI systems from a human rights law angle. Since AI systems can lead to bias and discrimination, the AI Act aims to safeguard against such outcomes by imposing strict requirements on high-risk AI systems that can affect the fundamental rights of individuals.

Relevant Laws

The current (pre-AI) human rights framework consists of: 

(i) treaties agreed at a regional level, of which the most notable are:

(ii) state laws on human rights;

(iii) global treaties by international bodies:

  • the 9 core treaties principally established by the United Nations, listed here.

The ratification status of a treaty is nevertheless important, as it indicates whether the treaty has been incorporated into local state law; in other words, whether it is binding on the state in question.

* The Framework Convention does not apply to national defence matters nor to research and development activities, except where the testing of AI systems has the potential to interfere with human rights, democracy, or the rule of law.

Last updated: 25/09/2025

Artificial Intelligence in conflict and public international law

What does the UN state about AI in the context of International Humanitarian Law?

“In his 2023 New Agenda for Peace, the Secretary-General reiterated this call, recommending that States conclude, by 2026, a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law (IHL), and to regulate all other types of autonomous weapons systems. He noted that, in the absence of specific multilateral regulations, the design, development and use of these systems raise humanitarian, legal, security and ethical concerns and pose a direct threat to human rights and fundamental freedoms” (source: UN).

International Humanitarian Law, which is only applicable during armed conflict (both international and non-international)*, consists of treaties and customary international law. At the moment, the IHL treaty framework in its current, non-modernised form is used to assess the legality of military AI capabilities. Whether the framework needs to be updated to accommodate such rapid developments in technology is debatable. Some states are of the opinion that the current framework is sufficient to handle novel technology (source: ICRC Law and Policy Blog).

Relevant Treaty Law

 

(ii) The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001, usually referred to as the Convention on Certain Conventional Weapons (“CCW”, “the Convention”), is a key IHL instrument. The CCW deals with legally acceptable uses of weapons rather than with regulation of the technology itself (source: EU Parliament Committee paper).

Relevant Customary Law

Although IHL treaties are the basis for assessing the legality of AI systems, customary law should also be taken into account. Customary law may provide additional assistance where treaties do not address a specific issue in sufficient detail.

Below is a list of the customary IHL rules (out of the 161 rules identified in the ICRC’s 2005 study) that are most relevant to the use of AI (source: ICRC).

Relevant Rules

Distinction

Rules 1–10

Indiscriminate Nature

Rules 11–13

Proportionality

Rule 14

Precautions

Rules 15–24

General Principles on the Use

Rules 70–71

Humane Treatment

Rule 87

* The Amendment to Article 1 of the Convention, regarding the scope of application of the Convention and its Protocols, was adopted in 2001 and entered into force in 2004. By joining the amendment to Article 1, High Contracting Parties ensure that the CCW and its Protocols also apply to situations of non-international armed conflict.

