I. What is artificial intelligence?
Artificial intelligence, according to Professor John McCarthy, ‘is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.’1 In other words, AI is a technology that enables machines to examine information and learn from the environment.
From a different perspective, Stuart Russell and Peter Norvig offer four definitions of AI in their textbook Artificial Intelligence: A Modern Approach. The first, known as the ‘Turing test’, consists of asking the computer questions and, through analysis of its written responses, assessing whether the machine is able to think like a human.2 This test is still relevant today: consider receiving a call without knowing whether one is speaking to another person or to a chatbot. This matters because, under rules such as those in the proposed EU AI Act, a person should know in advance when he or she is speaking to a machine in order to make an informed decision about whether to continue or end the interaction.
The second, known as the ‘cognitive modelling’ approach, combines cognitive science with experimental techniques from psychology to test theories of the human mind in relation to AI. For instance, if a computer program’s behaviour matched equivalent human behaviour, this would suggest that the system could act like a human. The third, known as the ‘laws of thought’ approach, has its roots in the field of logic and draws on probability theory to build models of rational thought; on its own, however, it did not produce intelligent behaviour.3
Finally, the last definition, the ‘rational agent’ approach, defines a rational agent as one that acts with the aim of attaining the best possible outcome. This perspective has two benefits over the other approaches: it is more amenable to scientific development (since it does not depend on imitating human behaviour or thought processes), and it is more general than the ‘laws of thought’ approach, as there are more ways to achieve rationality than making correct inferences alone.
Evidently, AI represents a huge advantage for many sectors of society. Yet it also poses a great challenge, both in training professionals to use these new technologies and in dealing with their consequences for society, which involve all legal spheres.4 Moreover, one of humanity’s most complex issues in recent years has been the use and protection of personal data, particularly because people and their daily production of content (both at work and in their private lives) are the motor behind this entire technological revolution.5
II. AI, privacy and data protection
In this context, AI might impact privacy in numerous ways: informational privacy, surveillance privacy and even an individual’s autonomy. Privacy and data protection are not a concern only in relation to AI. People have the fundamental right to control their own data and the decisions taken based on them, as this information tends to be quite significant to conceptions of self, values, goals and preferences. In other words, informational privacy relates to a person’s capacity to control information about him or herself. This control secures the autonomy needed for individual decisions, an inherent trait of identity construction, dignity and freedom.6
According to Mark Coeckelbergh, many ethical issues surrounding AI stem from the fact that machine learning depends entirely on data or, more precisely, on huge datasets. Moreover, algorithms are developed to identify patterns, make decisions through statistical processes, target predictions and find rules that were not directly instructed by the programmer, which creates the need for supervision.7
1. What is personal data?
The definition of personal data may vary according to jurisdiction. In the case of Brazil and the European Union, the concepts are broadly similar, since the Brazilian data protection legislation8 was largely inspired by the EU’s GDPR.9 For instance, both define personal data as any information relating to an identified or identifiable natural person. This means that when different pieces of information are gathered and lead to the identification of an individual, they constitute personal data. The GDPR defines special categories of data10 in a way that is also quite similar to the definition of sensitive data in the Brazilian LGPD,11 which includes racial or ethnic origin; religious belief; membership in a trade union or a religious, philosophical or political organisation; health; sexual life; and genetic or biometric data (when connected to a natural person).
As is widely known, AI depends on a huge variety of datasets, which makes it harder to define when data protection laws apply: AI enlarges the ability to link data or recognise patterns that can render non-personal data identifiable. This may happen in two ways: by expanding the range and reach of data collection, or by offering progressively more advanced computational capabilities to work with the data already collected. The first scenario is exemplified by sensors in cars and cell phones, the second by fingerprint, facial and other biometric recognition technologies.12
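The linkage mechanism described above can be illustrated with a deliberately toy sketch (all names and values are hypothetical): two datasets that are non-identifying on their own can identify a person once joined on shared quasi-identifiers such as postcode and birth date.

```python
# Hypothetical "anonymised" health records: no names, only quasi-identifiers.
health_records = [
    {"postcode": "01310", "birth": "1980-05-02", "diagnosis": "diabetes"},
    {"postcode": "04538", "birth": "1992-11-17", "diagnosis": "asthma"},
]

# Hypothetical public register: names, but no health information.
public_register = [
    {"name": "A. Silva", "postcode": "01310", "birth": "1980-05-02"},
]

def link(records, register):
    """Re-identify records whose quasi-identifiers match a register entry."""
    matches = []
    for r in records:
        for p in register:
            if (r["postcode"], r["birth"]) == (p["postcode"], p["birth"]):
                # The join attaches a name to a sensitive diagnosis.
                matches.append({"name": p["name"], "diagnosis": r["diagnosis"]})
    return matches

print(link(health_records, public_register))
```

Neither dataset alone contains personal data in an obviously identifying form, yet the linked result does, which is precisely why greater computational capability to combine datasets can bring previously "non-personal" data within the scope of data protection law.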
2. Data protection principles and their impact on AI
Data protection principles embody the spirit of data protection legislation, and failure to comply with them can lead to serious financial and reputational consequences for businesses. The GDPR and the LGPD do not address AI directly; however, both pay considerable attention to large-scale automated processing of personal data and to automated decision-making. In other words, when AI requires the use of personal data, it falls within the scope of the GDPR and the LGPD and, consequently, their principles apply. According to the GDPR, there are seven main principles that must be observed when processing personal data:
a) Lawfulness, fairness and transparency
Lawfulness means that a lawful basis is required to process personal data. Fairness is inherently connected both to lawfulness and transparency. To meet the transparency requirement, the data processing must be described in detail to the data subject with clear and accessible information. The lawfulness and fairness principles find correspondence in the non-discrimination principle mentioned in the LGPD, as it states that data processing must not serve discriminatory or illegal purposes. This is particularly relevant when it comes to AI because, according to data protection laws, companies must provide specific information to data subjects relating to the logic behind automated decision-making that has a legal impact on them, which turns out to be a complex obligation to fulfil, as the decisions made by AI algorithms frequently cannot be anticipated.13
Another situation that involves legal requirements for the use of AI systems relates to profiling, which is the use of personal data to assess certain personal attributes concerning a natural person, particularly to examine or predict features relating to that person’s performance at work, economic condition, health, personal preferences, interests, reliability, behaviour or location. The requirements for the use of AI systems in those cases are fairness and transparency (as previously explained) and the right to human intervention, which enables the person to challenge the automated decision. One example that could involve both profiling and automated decision-making is the use of an AI system to filter job applications in order to define which of the applicants are a good fit for the vacancy. Once the system discards a group of candidates not considered a good fit, it is making an automated decision.14
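The CV-filtering scenario above can be sketched in code (a hypothetical toy rule, not a real screening system): the scoring step makes the automated decision, and routing adverse outcomes to a human reviewer illustrates the right to human intervention.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate: str
    accepted: bool
    needs_human_review: bool

def screen(candidate: str, years_experience: int, threshold: int = 3) -> Decision:
    """Toy automated screening rule: accept candidates above a threshold."""
    accepted = years_experience >= threshold
    # Safeguard: adverse automated decisions are flagged for a human
    # reviewer, who may uphold or override the algorithm's outcome.
    return Decision(candidate, accepted, needs_human_review=not accepted)

print(screen("Ana", 5))    # accepted automatically
print(screen("Bruno", 1))  # rejected automatically, so flagged for review
```

The point of the sketch is structural: once the system discards a candidate with no human in the loop, an automated decision has been made, and the law expects a mechanism through which the affected person can challenge it.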
b) Purpose limitation
According to this principle, personal data may only be collected for specified and legitimate purposes and must be processed in ways compatible with those purposes. It is related to the adequacy principle of the LGPD. In relation to AI, these requirements become considerably harder to meet, as algorithms tend to produce unforeseen and unpredictable results from the data they are given. What is more, AI normally demands the collection and analysis of huge amounts of data in order to learn and make intelligent decisions. Thus, if we limit an AI system’s access to a dataset that represents only a small segment of the population, the system will develop a biased and limited viewpoint.15
c) Data minimisation
This principle requires data controllers to limit their data collection to the minimum necessary for the accomplishment of their purposes. The LGPD has the same principle, but it is called the ‘need principle’. In other words, it means that the data controller must only collect data that are relevant and proportional to the achievement of the data processing purposes. Nonetheless, the definition of what is truly ‘necessary’ should be assessed in each concrete case. For instance, in some areas, such as medicine, there is often no acceptable margin for error, which does not necessarily hold true for areas that do not involve people’s health.
This is the reason why AI systems may require a huge dataset to operate properly in the medical sector, especially during the training phase. To illustrate the point, during the training phase, an AI system looking at heart attacks will be supplied with specific medical data on heart conditions as well as broader information from several patients’ medical records and information related to lifestyle. During deployment and use, however, the data of a given patient will be examined within a framework produced by the AI system based on all the data analysed during the training phase. Therefore, in this sort of situation, it is unlikely that AI systems could perform without first being trained on a broader dataset.16
d) Accuracy
To comply with this principle, personal data must be kept accurate and up to date. In Brazil, the equivalent would be the data quality principle. It is particularly important to follow, considering that the mere use of outdated data can generate serious consequences depending on the area in question: these could range from an ineffective targeted advertisement that does not match the data subject’s profile to an incorrect medical diagnosis.17
e) Storage limitation
Personal data may only be kept in an identifiable way for as long as they are needed to complete the data processing, according to the storage limitation principle. In Brazil, the most similar principle would be, once again, the need principle. Moreover, the LGPD states that personal data must be deleted after the conclusion of processing, with some exceptions: (i) when needed for compliance with a legal or regulatory obligation; (ii) if they are going to be used for study purposes by a research entity; (iii) if they are transferred to third parties that also comply with the law; or (iv) when anonymised and for the exclusive use of the controller.
f) Integrity and confidentiality
According to this principle, the data controller must keep the data secure from any threats (either internal or external). In other words, it means that the controller must ensure the collected data is protected from any unauthorised or unlawful processing as well as from any accidental loss, destruction or damage.18 In the LGPD, this principle is known as the security principle. In the context of AI, it is particularly relevant to guarantee security because unauthorised access by third parties that are capable of manipulating the algorithm and impacting its outcomes could have dangerous consequences for the individuals affected by the algorithm’s decisions.19
g) Accountability
The LGPD has this same principle, which essentially means that data controllers must be able to demonstrate the adoption of effective organisational policies and procedures to comply with data protection rules. In relation to AI, those processing personal data must be accountable to both regulators and individuals and need to consider the likelihood and gravity of the consequences that the use of AI systems may have for individuals; harm caused by unexpected results cannot simply be blamed on the AI system after the fact.20 While the GDPR does not specifically require the controller to engage in particular activities to meet this principle’s obligations, the LGPD recommends that controllers and processors adopt policies that ensure compliance with good practices, including a privacy governance programme whose effectiveness can be demonstrated.21
III. EU AI Act
In April 2021, the European Commission published its proposal for an AI Act, which sets out horizontal rules for the development, use and trade of AI-based products and services within the EU. This regulation will affect businesses across several, if not all, sectors of the economy. It applies to (i) providers placing AI systems on the EU market, whether established within the EU or in a third country; (ii) users of AI systems established within the EU; and (iii) providers and users of AI systems established in a third country whose output is used in the EU. The proposal adopts a technology-neutral definition of AI and establishes a product safety framework built around four risk categories, as explained below.
1. Risk-based approach
The EU AI Act proposes a risk-based approach to classify AI systems based on a pyramid of criticality to attribute different requirements, applying harsher rules as the risks increase. In other words, the more serious the risks, the stricter the regulations that would be applicable. These would vary from non-binding soft law impact assessments accompanied by codes of conduct to heavy, externally audited compliance requirements throughout the life cycle of the application.22 The EU AI Act establishes four levels of risk (presented below) and leaves it to businesses to identify which risk group their AI systems fall into.
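The tiering logic can be sketched as a simple lookup. The example use cases below are illustrative only, drawn loosely from the examples the proposal gives for each tier; the Act itself leaves classification to businesses and defines the tiers far more precisely.

```python
# Illustrative (not exhaustive) mapping of use cases to the four risk tiers.
RISK_TIERS = {
    "unacceptable": {"social scoring by public authorities", "subliminal manipulation"},
    "high": {"credit scoring", "cv screening", "border control"},
    "limited": {"chatbot", "emotion recognition"},
    "minimal": {"spam filter"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case in the illustrative list."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    raise ValueError(f"use case not in illustrative list: {use_case}")

print(classify("spam filter"))     # minimal: no new restrictions
print(classify("credit scoring"))  # high: full compliance requirements apply
```

The stricter the tier, the heavier the obligations attached, from none at all for minimal risk up to an outright ban for unacceptable risk.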
a) Unacceptable risk
The highest level of risk is called ‘unacceptable’, and it basically means that AI systems under this category will be banned. The AI Act proposes to ban AI systems that could cause physical or psychological harm to an individual by subliminal techniques or the exploitation of the fragility of vulnerable groups. Similarly, it bans AI systems that serve purposes of social scoring when carried out by public authorities. Finally, it also prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless certain exceptions apply.23
b) High risk
According to the regulation, AI systems identified as ‘high-risk’ include AI technology used in: critical infrastructures which could jeopardise people’s life and health; educational or vocational training that could define people’s access to an educational and professional course; safety components of products, such as robot-assisted surgery; employment, worker management and access to self-employment; essential private and public services, such as credit scoring for loan eligibility; law enforcement that might interfere with citizens’ fundamental rights; migration, asylum and border control management; and the administration of justice and democratic processes.24
These applications are not prohibited but are subject to several new restrictions. The requirements for high-risk AI systems include establishing a risk management and mitigation system; meeting the specified quality criteria for the training and testing data; creating technical documentation before the AI system is placed on the market so authorities may easily assess its compliance; enabling AI systems to automatically record events that must conform to standards to ensure the traceability of results; ensuring transparency in the design and development of the systems to provide clear information to the user; enabling AI to be overseen by humans to minimise risks; and achieving accuracy and cybersecurity.25
c) Limited risk
This category covers AI systems subject to specific transparency obligations, such as biometric categorisation systems, emotion recognition systems and chatbots. In other words, systems that interact with humans tend to present ‘limited risk’, and in the specific case of chatbots, users must be notified when they are interacting with a machine so they can make an informed decision about whether to continue or end the interaction.
d) Minimal risk
The Act does not impose any restrictions on the free use of these AI systems, such as spam filters. Today, most AI systems used in the EU would fall into this category. Nevertheless, the regulation still proposes the creation of codes of conduct to encourage providers of minimal-risk AI systems to voluntarily adopt the requirements demanded of high-risk AI systems.26
Each Member State shall be responsible for the monitoring and enforcement of the AI Act. Nonetheless, it is up to the European Commission to start an investigation and even require a Member State to adopt corrective measures if the Commission has reasons to believe that one of the conformity assessment bodies is in breach of the regulation.
A breach of the regulation could cost up to six percent of annual global turnover or up to 30 million euros (whichever is higher), according to the severity of the infringement. Each Member State will define the rules for how to apply penalties and administrative fines as well as how to ensure proper enforcement.27
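The ‘whichever is higher’ rule for the maximum fine is a simple comparison between a fixed floor and a percentage of turnover:

```python
def max_fine(annual_turnover_eur: float) -> float:
    """Maximum penalty under the proposal: 6% of annual worldwide
    turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * annual_turnover_eur, 30_000_000)

print(max_fine(1_000_000_000))  # large firm: the 6% figure (EUR 60m) governs
print(max_fine(100_000_000))    # smaller firm: the EUR 30m floor governs
```

For any company with annual worldwide turnover above EUR 500 million, the percentage cap exceeds the fixed amount, so the 6% figure governs.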
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a joint opinion on the Artificial Intelligence Act in June 2021 to point out important data protection implications. Despite being broadly supportive of the risk-based approach, the EDPB and EDPS presented some concerns and points for improvement.
They recommended that the legislator include a statement confirming that the GDPR applies to the processing of personal data within the scope of the AI Act; in their view, it was fundamental for this to be explicitly stated in the proposal. Another issue concerned the list of high-risk situations, which they considered too selective about which uses of AI qualify: the proposal did not address situations such as assessments for medical treatment or for health research purposes. Moreover, the EDPB and EDPS pointed out that this list would need constant updating to ensure that its scope remains adequate.28
Similarly, the EDPB and the EDPS argued that all intrusive forms of AI (those that impact human dignity) should be prohibited. For example, the use of AI in law enforcement demands precise and foreseeable rules that must consider the interests of the individuals involved as well as the impacts on the functioning of a democratic society. Another example presented by the EDPB and the EDPS was the use of AI for social scoring. As it might lead to discrimination, they argued that it should be prohibited in all circumstances and for all types of social scoring instead of only when conducted ‘over a certain period of time’ or ‘by public authorities or on their behalf’.29
Undoubtedly, defining the terms of the EU AI Act is a challenge yet to be faced, and these points of the opinion will be discussed in the negotiations that follow. The challenge is to strike the right balance between setting fair global standards for the use of AI systems and not overregulating in a way that could stifle their development.
Moreover, more than a hundred civil society organisations such as European Digital Rights (EDRi), Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM and ANEC have also expressed their criticisms and recommendations in a statement regarding the proposed EU AI Act. In their words, the risk-based approach is ‘dysfunctional’, as it disregards that the level of risk also varies according to the context in which the system is deployed and therefore cannot be completely defined in advance. They argued that it neither provides a scope for updating the lists of unacceptable and limited risks nor makes it possible to expand the current fixed list of high risks.30
One of the main points of their argument was the need to prohibit all AI systems that pose unacceptable risks to fundamental rights, which according to them include: all social scoring systems; remote biometric identification (RBI); all emotion recognition systems; all AI physiognomy; all discriminatory biometric categorisation; all AI systems used by law enforcement and criminal justice authorities for the purpose of predicting crimes; and AI systems used for immigration enforcement purposes in a way that restricts the right to seek asylum and/or prejudices the fairness of migration procedures.31
Another important aspect they pointed out was that the proposed AI Act must impose regulatory obligations not only on providers of high-risk AI systems but also on users, as substantial risks may also arise from how the systems are used. Similarly, they argued for more transparency in the use of high-risk systems, recommending the registration of specific uses of AI systems in a public database that would enable research into where, by whom and for what purposes high-risk systems are being used.32 As proposed, however, the EU database for high-risk AI systems includes only information registered by providers, with nothing concerning their use. This is an issue that must be fixed, as it prevents the public from obtaining exactly that information.33
Moreover, the civil society organisations pointed out that the proposed EU AI Act neither specifies individual rights for those impacted by AI systems nor offers a provision for individual or collective redress. They also highlighted that the proposed AI Act does not include accessibility requirements for AI providers and users, which conflicts with the European Accessibility Act, as it might lead to the development and use of AI with barriers for people with disabilities.34
Another of their concerns related to the environment and the lack of direct requirements for AI systems to be developed in a sustainable and resource-friendly manner.35 Equally important, they stated that it is essential to ensure that the EU AI Act works for everyone; to achieve this, they recommended ensuring data protection and privacy for people with disabilities, equipping enforcement bodies with the necessary resources and ensuring trustworthy AI beyond the EU.36
Nonetheless, the proposed EU AI Act still provides valuable insights into how to improve the Brazilian bill on AI, as will be demonstrated.
IV. Brazilian regulation
On 29 September 2021, the Brazilian Chamber of Deputies passed a bill creating a legal framework for artificial intelligence; the bill still needs Senate approval. It has been highly criticised as superficial and as lacking several important features, such as accountability, penalties, enforcement, codes of conduct and the obligation to promote educational campaigns on the subject. Unlike the proposed European AI Act, the Brazilian bill does not discuss the risks posed by each type of AI system, nor does it indicate how companies should comply with it.37
A committee of 18 scholars and practitioners was later appointed to propose a new text for the regulation of AI in Brazil. The committee first met on 30 March 2022 and was given 120 days to elaborate the new text of the AI regulation. The substitutive text shall cover the topics of Brazilian bills 5.051/2019 (which establishes principles for the use of AI in Brazil), 21/2020 (the AI bill previously mentioned) and 872/2021 (which presents ethical frameworks and guidelines supporting the development and use of AI in Brazil).38
During the first meeting, they approved their work plan. According to Professor Bruno Bioni, the work plan is divided into four main phases/topics. The first is dedicated to listening to society through public audiences. The second will be based on an international seminar which will demonstrate how Brazil could internalise experiences not only from the Global North but also from the Global South, considering regulatory models that are in line with the challenges Brazil faces. The third phase should address how to mitigate as much as possible the risks involved in the use of AI, as well as how to maximise as much as possible the benefits of the economic activities that will be optimised by AI. Finally, the last topic shall focus on an institutional governance arrangement and innovation arrangements.39 Hence, it is expected that this committee will make considerable changes to the current text in order to create a substitute that is clearer, more detailed and more effective.
1. Principles for the responsible use of AI
The Brazilian bill for an AI regulation looks far more like an outline of principles than an actual legal framework for AI, and it is likely to generate much discussion and legal insecurity if approved as is. Brazil has adhered to the Organisation for Economic Co-operation and Development’s (OECD) human-centred AI principles, which provide advice on matters such as transparency and security. The Brazilian bill for AI introduces six main principles for the responsible use of AI, as follows:
a) Purpose (sustainable development and well-being)
According to this principle, AI must be used in the pursuit of results that benefit people and the planet, with the aim of enhancing human capacities, reducing social inequalities and promoting sustainable development.40 Reliable AI might be a helpful tool to promote inclusive growth, sustainable development and well-being goals. In the same way, it may be a helpful resource to deal with technology access disparities. This principle acknowledges that AI systems may also contribute to perpetuating existing biases, whereas they should be used to reduce them.
b) Human-centred values
In accordance with this principle, AI systems must be developed in a way that respects human dignity as well as privacy, personal data protection and labour rights. This is an essential principle to ensure that fundamental rights be respected and that the LGPD be observed (as later indicated in the bill).41 As some uses of AI systems might have consequences for human rights or even infringe them, it is essential to anchor those systems in values, which entails the possibility of human interference and oversight when appropriate. This alignment is important to guarantee that AI systems safeguard and support human rights throughout their process. This in turn should contribute to increasing public confidence in AI systems to safeguard human rights.
c) Non-discrimination
In a similar way, this principle states that AI systems must not be used for discriminatory, illegal or abusive acts. It is also related to the data protection principle of non-discrimination. In the context of AI systems, it is paramount to observe this principle so that the algorithm does not discriminate. Yet this principle is not accompanied by any guidance on the implementation of educational measures to prevent these events from happening.42
Another relevant strategy to reduce discrimination or other unfair outcomes would be to have participatory methods of risk assessment. This would minimise the chances of AI system bias, as the assessment would not only focus on data security and quality but also on the engagement of the groups potentially impacted by AI applications, who can facilitate the identification and elimination of existing bias. As bias can be caused by intentional or unintentional decisions by AI developers, another significant action to reduce it would be to form a multidisciplinary committee including experts from various fields. This would be an important measure to counteract the potentially limited point of view of AI developers or even potential biases associated with these developers’ identity (e.g. gender bias or ideological bias).43
d) Transparency and explainability
According to these principles, users have the right to be informed in a clear and accessible way about the use of AI solutions, and they also have the right to know the identity of the natural person running the AI system as well as the general criteria that guide the functioning of the system. It means that companies must clarify to the user how an AI system produced an outcome and how their data will be used by this device. This principle can be particularly relevant in situations where users are in communication with AI systems, such as chatbots (in this case, the law would be applied in the same way as in the EU AI Act).44
However, one obstacle to meeting this obligation is the ‘black box’ problem, which arises when people cannot understand or explain the complex technology employed. In the context of AI, this represents a major issue, as machine-learning systems employ self-learning algorithms and their developers have little control over the models that result from the training data. If one cannot tell which correlations were made to produce certain results and inferences, how can one explain this AI system and make it transparent? How can one ensure that it is fair and unbiased?45
e) Security
This principle refers to the use of technical and administrative measures compatible with international standards to allow for the AI’s functionality and risk management as well as to ensure the traceability of procedures and decisions made during the system’s life cycle. Yet the bill does not specify which international standards should be followed, and given the diversity of recent legal frameworks for AI, this could have different impacts depending on the legal reference adopted.46
The legislation should specify when data subjects and the supervisory authority must be informed of a security breach. It would also be advisable for security measures to evolve over time to adapt to changing vulnerabilities and new threats.
f) Accountability
The accountability principle includes the obligation of AI agents to demonstrate compliance with AI laws and to adopt effective measures for the proper functioning of AI systems. This is necessary because AI systems do not work without human interference, whether to write the algorithms or to decide on the uses of the systems. However, once again, the bill does not say how this will be enforced, which would mean outlining the applicable penalties for non-compliance and designating a national supervisory authority for AI. Moreover, the bill does not clearly establish users’ and providers’ responsibilities regarding AI systems.47 For instance, it would be advisable to require that detailed records be kept throughout the development of AI systems.
2. Main issues and considerations
Several issues could be pointed out in this bill. As previously stated, it lacks many key aspects of any AI regulation, such as enforcement authority, proper accountability and even the AI’s level of autonomy and supervision. The bill is rather vague and broad. In short, when it comes to AI, Brazil still needs to establish not only hard law elements but also soft law ones (codes of conduct, guides and best practices). It is necessary to create a more active interaction between the public and private spheres to promote a more sustainable AI development.48
In addition, one might ask: why is it so important to develop AI in a country like Brazil, or in any country at all? The answer is simple. AI is a general-purpose technology that affects all other technologies, with a profound impact on wealth production and social relations. In other words, AI has the potential to considerably increase productivity in several economic sectors and to enable the performance of cognitive tasks that were previously possible only for humans.49
On the other hand, an undeniable concern is that of 'machine bias', bearing in mind that data sources and data quality might directly impact the outcome of how AI operates. In this sense, the complexity of an AI strategy extends well beyond technical matters, encompassing ethics, culture, governance, justice, accountability and other topics that demand wide debate by the academic community and society in general. Consequently, the need to regulate AI has become crystal clear.50
Once one concludes that personal data is an essential raw material for the development of machine learning, it becomes easy to understand that when AI systems process personal data through machine learning techniques, data protection laws must be addressed – particularly in cases involving automated decisions, that is, those that exclude any human influence on the outcome. The LGPD does not prohibit data processing based on automated decision-making, but it does ensure the right to review such decisions. The GDPR, however, ensures the individual's right not to be subjected exclusively to an automated decision that produces legal effects or that significantly affects the data subject. In other words, the GDPR imposes stricter limits on automated decision-making.51
One specific issue related to automated decisions is the difficulty of observing data protection principles such as purpose limitation, necessity and transparency. The sheer volume of data processed, and the possibility of new purposes for data processing, represent a major challenge to the protection of privacy and personal data with regard to automated decisions originating from AI.52
In Brazil particularly, there is a conflict between the need for transparency regarding the use and operation of AI systems and the need to respect the confidentiality inherent in trade secrets.53 It is therefore necessary to balance the protection of trade secrets involving AI algorithms against the other fundamental rights at stake, such as the transparency principle, which is vital to data protection. One might question how it would be possible to observe the explainability principle and ensure an effective review of an automated decision without access to the factors that influenced the decision-making process.
In many cases it would even be necessary to examine the source code to gain access to this information. The ideal level of protection to address these concerns could be provided through confidential court proceedings, in which companies would reveal their algorithms and all the necessary information only to the court, without this information becoming known to the public.54 In this way, judges would have all the data needed to reach a fair decision, and trade secrets would no longer constitute an obstacle to transparency.
In short, premature may be a good word to describe Brazil's AI bill. It is about time to open this bill up to public discussion and gather more contributions to help transform it into a truly effective AI regulation. Indeed, it would not even be fair to compare it to the proposed European AI Act, as it is at such a different stage of development. Brazil drew considerable inspiration from the GDPR in drawing up its own data protection legislation; now one can only hope that it follows a similar path with its AI regulation. Even though the EU model is not yet ideal, as demonstrated above, it would already be a step forward for the Brazilian AI bill to regulate AI in more detail.
Information is power, hence the importance of privacy. Carissa Véliz highlights that without our permission, or even our awareness, the internet model developed so far often puts our privacy at risk, and the way tech companies can process users’ personal data gives them too much power and jeopardises users’ freedom. The lack of transparency on personal data use and excessive surveillance are undermining equality among citizens in our digital society.55
A comparative examination of both the EU and Brazilian contexts led to the conclusion that Brazil could benefit from drawing inspiration from the more mature EU AI Act. Yet both still have room for improvement.
In terms of data protection principles, many similarities were identified between the EU and Brazilian legislations. As compliance with those principles is vital to data protection, it is important to understand their correlations with, and impacts on, AI systems specifically.
As demonstrated, the risk-based approach of the proposed EU AI Act acknowledges the risks posed by the use of AI systems and seeks to respond proportionately to each level of risk by defining clear rules. It also specifies penalties and how the regulation will be enforced. However, some points should still be reviewed, as indicated by the EDPB and EDPS.
As discussed above, the Brazilian bill for AI presents many valuable principles (even though some of them still lack detail). It was demonstrated that transparency and explainability are necessary to build public confidence in AI systems, to promote safer practices and to ease wider implementation in society.
In brief, AI may provide several benefits to individuals and society, but it cannot come at the cost of individuals’ fundamental rights. Any proper AI regulation must thoughtfully consider data protection laws and their impacts on individuals and society. It must also consider not only the principles that will guide it, but how it will be enforced. Certainly, it is not an easy job to undertake while also having to foster innovation and AI development, but it is a necessary one.
Doctor of Philosophy (Ph.D.) with Distinction in International Law focused on Comparative Law and Intellectual Property Law from University of São Paulo (USP), Brazil. International Research Guest at the Max Planck Institute for Innovation and Competition, Munich, Germany. Latin America Max Planck Smart IP Board Member. Author of 29 books published about Law and Technology. President of Data Protection Special Commission of OAB-SP.
Certified Legal Design Professional by Legal Creatives. CopyrightX certified by Harvard Law School in partnership with UERJ and ITS. Lawyer specialising in innovation, data protection and intellectual property law.
Source: GRUR International