On February 27, 2026, the Federal Council of Medicine (CFM) published Resolution CFM No. 2,454/2026, which regulates the use of Artificial Intelligence (AI) in the healthcare sector and establishes obligations for the responsible use of AI models, systems, and applications in medicine.
The Resolution provides that the governance of AI models, systems, and applications in medicine must respect the autonomy of physicians and medical institutions, and it explicitly frames AI as a support tool for medical practice. When using AI systems, the physician remains ultimately responsible for clinical, diagnostic, therapeutic, and prognostic decisions, and must record in the patient's medical record that AI systems were used to support those decisions.
The Resolution prohibits the use of AI for the direct communication of diagnoses, prognoses, or therapeutic decisions without human mediation, reinforcing throughout its text that the use of AI cannot compromise the physician-patient relationship.
Annex II of the Resolution deserves special mention, as it addresses the risk classification and categorization provided for in Articles 12 and 13. It stipulates that medical institutions, whether public or private, that develop or use AI models, systems, and applications must conduct a preliminary assessment to define their risk level, classifying them as low, medium, high, or unacceptable, taking into account factors such as:
(I) Potential impact on fundamental rights and patient health;
(II) Criticality of the usage context;
(III) Degree of model autonomy;
(IV) Purposes;
(V) Level of human intervention in the outcome; and
(VI) Quantity and sensitivity of the data used.
In practice, healthcare institutions (public or private), medical professionals, and other agents involved in the development, training, validation, and implementation of AI models, systems, and applications will need to observe the duties and obligations set forth in Resolution CFM No. 2,454/2026, in addition to strictly complying with the General Data Protection Law (LGPD) and applicable information security standards.
The Resolution will come into force 180 (one hundred and eighty) days after its publication date.
Thus, companies that develop, contract, or distribute AI solutions in the healthcare sector will need to observe the following:
Implementation of structured AI Governance, with definition of responsibilities, risk classification of the AI solution, and human supervision flows for AI solutions;
Compliance with LGPD, implementing Privacy by Design and Privacy by Default, especially regarding the processing of sensitive health data and the use of data for AI model training;
Adoption of robust information security measures compatible with the risk level of the AI application;
Continuous monitoring of biases and model performance, with records of mitigation measures; and
Contractual review for clear provisions on responsibilities, delimitation of obligations, duty of cooperation in audits, and access to technical information.
In light of these recent regulatory updates, Peck Advogados has a team of specialists ready, with extensive experience in AI Governance and contractual risk management, to support sector institutions in strategic and regulatory alignment.
Prepared by: Bianca Melo da Cruz and Sofia Diniz, lawyers in the Digital Advisory practice, and Graziella Rosa, Manager of the Digital Advisory.