Artificial intelligence and mental health: what does the law say?


A symposium on artificial intelligence (AI) and mental health was held in Caen on 29 and 30 January 2024. The two days of the conference gave approximately twenty national and international researchers and clinicians an opportunity to discuss the development of these technologies in the field of mental health. It was also an occasion to question, on legal and ethical grounds, the current framework for the use of AI in healthcare.

AI, the tool of the future in mental health

The various talks showed that artificial intelligence can be useful both to support research and to facilitate the care pathway of mental health patients.

For example, scientists from the University of Cergy are using artificial intelligence to study the coordination of people with schizophrenia. The artificial intelligence implemented in their robot allowed them to control experimental parameters and thus investigate the origin of the social deficit associated with this disorder.

Avenues are also being explored for the use of AI in the diagnostic phase. These approaches remain imperfect, but their use is becoming more and more plausible. The most developed and robust systems are currently based on interview transcripts and specialize in detecting a specific disorder such as schizophrenia or depression.
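To give a rough idea of what "based on interview transcripts" can mean, here is a purely illustrative toy in Python. The real systems discussed at the symposium are far more sophisticated; the marker lists and the scoring rule below are invented for the example and have no clinical validity.

```python
# Toy illustration: score a transcript by counting hypothetical linguistic
# markers per disorder. Marker lists are invented for this sketch.
MARKERS = {
    "depression": {"tired", "empty", "hopeless", "worthless"},
    "schizophrenia": {"voices", "watched", "messages", "controlled"},
}

def marker_frequencies(transcript: str) -> dict:
    """Return, per disorder, the share of words matching its marker list."""
    words = transcript.lower().split()
    if not words:
        return {disorder: 0.0 for disorder in MARKERS}
    return {
        disorder: sum(w.strip(".,!?") in markers for w in words) / len(words)
        for disorder, markers in MARKERS.items()
    }

freqs = marker_frequencies("I feel empty and hopeless, always tired.")
print(freqs["depression"] > freqs["schizophrenia"])  # True
```

Real systems replace the keyword counts with learned language models, but the overall shape (transcript in, per-disorder score out) is the same.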

A more concrete application, one that could affect everyday life, is the use of AI to detect the onset of a symptom (anxiety, withdrawal, etc.). Once a symptom is identified, it can be addressed as quickly as possible with, for example, breathing exercises.
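The detect-then-intervene idea can be sketched in a few lines. Everything here is an illustrative assumption (the notion of a normalized "anxiety score", the threshold value, the suggested intervention); it is not a real clinical protocol.

```python
# Hedged sketch: monitor a hypothetical normalized anxiety score (e.g. from a
# wearable), and suggest an intervention whenever it crosses a threshold.
ANXIETY_THRESHOLD = 0.7  # illustrative value, not clinically validated

def check_readings(readings: list[float]) -> list[str]:
    """Return one suggestion for each reading at or above the threshold."""
    suggestions = []
    for score in readings:
        if score >= ANXIETY_THRESHOLD:
            suggestions.append("Start a guided breathing exercise")
    return suggestions

print(check_readings([0.2, 0.5, 0.8]))  # ['Start a guided breathing exercise']
```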

What legal framework?

The increasingly concrete arrival of these technologies in healthcare, and in mental health in particular, raises a question: what is the framework for these uses?

During the symposium in Caen, Christian Byk, France's representative on UNESCO's Intergovernmental Bioethics Committee, recalled that these legislative questions matter in view of prevalence statistics in the European population: in 2021, about 30% of Europe's roughly 500 million inhabitants suffered from mental health problems.

In France, as Aurore Catherine, lecturer in public law at the University of Caen, explains, it is the 2021 bioethics law that currently governs these questions of the use of artificial intelligence in healthcare, and therefore in mental health. This law aims to regulate the use of digital tools in health, in particular by specifying the obligation to inform the patient (or their legal representative) about the use of these tools.

To store sensitive data useful for training, the government, with the support of the Council of State, decided to require servers certified by the National Agency for the Security of Information Systems (Anssi). Solutions such as Microsoft Azure or AWS (data hosting services provided by Microsoft and Amazon) can no longer be used to store health data.

The limits of the bioethics law

Aurore Catherine points out that the bioethics law falls short of its stated goal of protecting patients, especially the most vulnerable ones (minors or adults under guardianship). This applies in particular to Article L4001-3 of the Public Health Code.

The article is worded as follows: "A healthcare professional who decides to use, for an act of prevention, diagnosis or care, a medical device involving algorithmic data processing trained on massive data shall ensure that the person concerned has been informed of this and, where applicable, informed of the resulting interpretation."

Aurore Catherine notes several limitations. First, there is no mention of the patient's consent to the use of their data: the healthcare professional only has a duty to ensure that the patient is informed.

The second problem is the absence of any specification of how the patient should be informed. As the article is worded, this could be done by the practitioner themselves or by any other intermediary.

A third problem: the practitioner must ensure that the patient is aware of the interpretation, but not of how that interpretation was produced. How the data processing works is not necessarily explained to the patient.

Recommendations for changing the law

Aurore Catherine suggests several avenues for amending the bioethics law, starting with clarifying the provisions for the most vulnerable patients (possibly mentioning the involvement of a guardian or legal representative).

According to her, it is also important to make explicit the requirement for the patient's consent to the use of such tools. The importance of consent was also stressed by health professionals during the symposium.

The complexity of the algorithms and of their operation calls for, again according to Aurore Catherine, the creation of support systems. One could imagine an AI-referent role in hospitals: someone who supports both patient and doctor in using these new tools and who can help explain how they work.
