Federal Council of Medicine Resolution Regulates Artificial Intelligence Use in Medicine

The Federal Council of Medicine (CFM) published, on February 27, 2026, Resolution CFM No. 2,454/2026, which regulates the use of Artificial Intelligence (AI) in medicine and establishes obligations for the responsible use of AI models, systems, and applications in the sector.

The Resolution provides that the governance of AI models, systems, and applications in medicine must respect the autonomy of physicians and medical institutions, explicitly framing AI as a support tool for medical practice. When using AI systems, the physician remains ultimately responsible for clinical, diagnostic, therapeutic, and prognostic decisions, and must record the use of AI as decision support in the patient's medical record.

The Resolution prohibits the use of AI for the direct communication of diagnoses, prognoses, or therapeutic decisions without human mediation, reinforcing throughout its text that the use of AI cannot compromise the physician-patient relationship.

Annex II of the Resolution deserves special mention, as it addresses the classification and categorization of risks provided for in articles 12 and 13, stipulating that medical institutions—public or private—that develop or use AI models, systems, and applications must conduct a preliminary assessment to define their risk level, classifying them as low, medium, high, or unacceptable, taking into account factors such as:

(I) Potential impact on fundamental rights and patient health;

(II) Criticality of the usage context;

(III) Degree of model autonomy;

(IV) Purposes;

(V) Level of human intervention in the outcome; and

(VI) Quantity and sensitivity of the data used.

In practice, healthcare institutions (public or private), medical professionals, and all other agents involved in the development, training, validation, and implementation of AI models, systems, and applications will need to observe the duties and obligations set forth in Resolution CFM No. 2,454/2026, in addition to strictly complying with the General Data Protection Law (LGPD) and applicable information security standards.

The Resolution will come into force 180 (one hundred and eighty) days after its publication date.

Thus, companies that develop, contract, or distribute AI solutions in the healthcare sector will need to observe the following:

Implementation of structured AI Governance, with definition of responsibilities, risk classification of the AI solution, and human supervision flows for AI solutions;

Compliance with LGPD, implementing Privacy by Design and Privacy by Default, especially regarding the processing of sensitive health data and the use of data for AI model training;

Adoption of robust information security measures compatible with the risk level of the AI application;

Continuous monitoring of biases and model performance, with records of mitigation measures; and

Contractual review for clear provisions on responsibilities, delimitation of obligations, duty of cooperation in audits, and access to technical information.

In light of these recent regulatory updates, Peck Advogados has a team of specialists ready, with extensive experience in AI Governance and contractual risk management, to support sector institutions in strategic and regulatory alignment.

Prepared by: Bianca Melo da Cruz and Sofia Diniz, lawyers in the Digital Advisory team, and Graziella Rosa, manager of the Digital Advisory team.
