On February 2, 2025, the first provisions of the EU AI Act scheduled for implementation took effect, including a prohibition on practices deemed unacceptably risky and a broad affirmative obligation to promote AI literacy.
Among the prohibited practices is the use of AI to infer emotions in the workplace or in educational institutions, unless the use is for medical or safety reasons.
In addition to providing for extraterritorial application, the EU AI Act is likely to serve as a model for AI regulation in other jurisdictions.
This article is the first in a series on the EU AI Act as we continue our coverage of artificial intelligence and its impact on U.S. higher education institutions. We will provide frequent updates in this fast-moving area, including coverage of the EU AI Act, the AI regulatory scheme in China, U.S. state and federal legislation, and recent AI-related litigation, as well as our thoughts on how U.S. higher education institutions can navigate complex legal considerations when entering into contracts with vendors.
The European Union’s Regulation on artificial intelligence (EU AI Act), officially published on July 12, 2024, is the world’s first comprehensive regulation of artificial intelligence technology. On February 2, 2025, the first provisions of the Act scheduled for implementation came into force. These provisions include the general provisions in Chapter I of the Act (Articles 1-4), among them a broad affirmative obligation to promote AI literacy, and the prohibition in Chapter II (Article 5) of AI practices deemed to pose unacceptable risk, including the use of AI to infer emotions in educational institutions.
Extraterritorial Application
While the Act’s primary focus is on regulating the use of AI within the EU itself, it will have global influence. For starters, Article 2 of the Act extends its application extraterritorially to both “providers” who place AI tools on the market in the Union and “deployers” of these tools where the output is used in the Union. More broadly, the Act takes a risk-based approach to regulating AI technology: prohibiting AI practices deemed unacceptably risky, subjecting “high-risk AI systems” to extensive regulation, imposing lesser requirements on “general-purpose AI models” (such as generative AI models with broad potential application), and simply mandating transparency for others. This approach will likely provide a model for other jurisdictions looking to regulate AI technology, as well as a standard for organizations developing and deploying AI tools to assess and manage the risks associated with their particular AI activities or uses.
For institutions of higher education, the implementation of this first comprehensive artificial intelligence regulatory framework represents an important opportunity to take stock of the uses of AI already taking place on their campuses and to begin to prepare for a future in which the use of AI will only become more common and the associated regulatory framework will only become more complex.
Prohibited Practices
Prohibition on “Manipulative, Exploitative and Social Control Practices”
Beginning with the prohibitions, the Act bars putting on the market or into service AI systems for what Recital 28 describes as “manipulative, exploitative and social control practices.” This Recital explains that these practices are “particularly harmful and abusive” because they “contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights enshrined in the [Charter of Fundamental Rights of the European Union].” The prohibited practices enumerated in Chapter II (Article 5) include:
placing on the market or using AI systems to deceive or manipulate, particularly when exploiting vulnerabilities related to age, disability, or social or economic condition;
engaging in social scoring or profiling based on assessments of social behavior or personality characteristics that might lead to detrimental consequences such as unfavorable treatment or suspicion of predisposition to criminality;
untargeted scraping of images to build facial recognition databases; and
uses of biometric data for profiling or real-time remote identification unless strictly necessary for a very limited range of public safety objectives.
Use of AI to Infer Emotions
The prohibition most relevant to institutions of higher education is the ban in Article 5(1)(f) on “the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put into place or into the market for medical or safety reasons.” Article 3(39) defines “emotion recognition system” as “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.” Recital 18 clarifies that emotion recognition systems are AI tools that use biometric data to do more than simply track physical states, such as pain or fatigue; rather, they use biometric data to identify or infer emotions.
New Guidelines on the prohibited AI practices, dated February 4, 2025, clarify that the use of AI to infer emotions in educational institutions is prohibited because of the imbalance of power this setting entails; in other settings, the Guidelines state, this use of AI will be regulated as high risk. The Guidelines also provide examples of the use of AI systems within the educational context. Use of an AI system for eye tracking in a testing situation is not prohibited when it aims to determine the focus of a student’s attention (e.g., on the exam or on unauthorized material), but it would be prohibited when used to detect emotional arousal or anxiety. Similarly, use of an emotion recognition AI system solely for learning purposes in a role-play exercise, such as for training actors or student teachers, would not be prohibited, so long as it had no impact on the evaluation of the person being trained or certified. By contrast, use of an AI system to infer the interest or attention of students would be prohibited; in a classroom setting, such a system would be prohibited as applied to both students (as part of the education setting) and instructors (as part of the workplace).
Clarification of Safety and Medical Exceptions
The Guidelines also clarify that the safety and medical exceptions to the prohibition should be interpreted narrowly. The safety exception applies “only in relation to the protection of life and health and not to protect other interests, for example property against theft or fraud.” Similarly, the medical exception applies only in therapeutic contexts, such as medical devices that have passed an EU conformity assessment, and does not extend to detecting generalized depression or monitoring indicators of wellbeing.
Penalties
Ensuring compliance will fall to competent national authorities, which EU Member States must designate by August 2, 2025, with annual reporting on instances of prohibited practices to the European Commission. As for potential penalties, Article 99(3) provides, “Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 35,000,000 or … up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.”
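To make the “whichever is higher” structure of this penalty concrete, the following is a minimal illustrative sketch in Python, using a hypothetical turnover figure that is not drawn from the Act, of how the maximum fine under Article 99(3) would be determined:

# Illustrative sketch only: the maximum administrative fine under Article 99(3)
# for violating the Article 5 prohibitions is the higher of a fixed cap of
# EUR 35,000,000 and 7% of total worldwide annual turnover for the preceding
# financial year. The turnover figure used below is hypothetical.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_article_99_3_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and 7% of worldwide annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical example: EUR 600 million in turnover yields a cap of EUR 42 million,
# because 7% of turnover exceeds the EUR 35 million fixed cap.
print(max_article_99_3_fine(600_000_000))  # 42000000.0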
AI Literacy Requirements: A Compliance Challenge
The Act’s requirement that organizations promote AI literacy, which also took effect on February 2, 2025, is broad in scope and presents a potentially unique compliance challenge. It is broad in that it applies to both providers and deployers of AI systems. Moreover, its application will depend significantly upon the circumstances of each particular use of an AI system. Article 4 states that “providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf,” but it does not specify any criteria for what would count as “sufficient.” Adding to the complexity, Article 4 specifies that in ensuring this sufficient level of AI literacy for staff and others, organizations should “tak[e] into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Compounding the current interpretive challenge is the fact that the Commission has not yet released any guidance regarding the required AI literacy measures. The Act, however, does call for the EU’s AI Office and Member States to produce voluntary codes of conduct, including with respect to AI literacy, for AI systems that are not otherwise regulated by the Act (i.e., systems other than high-risk AI systems). It similarly calls for the EU’s AI Board to assist in promoting AI literacy and public awareness of AI in general.
The primary mode of enforcement provided by the Act will be through national authorities, and it specifies that Member States should develop “rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures.” It is also worth noting that although the Act does not itself provide for a private right of action, compliance with the Act may figure into litigation under the EU’s recently updated Product Liability Directive, which was expanded to cover software, including AI systems.
Looking Ahead: Next Steps for U.S. Institutions of Higher Education
With these first provisions of the EU AI Act taking effect, now is a good time for institutions of higher education to identify where AI tools are being deployed or even developed on their campuses. In particular, it will be important to determine whether any have outputs that are used in the EU and would therefore bring the institution under the Act’s extraterritorial reach. More generally, it is at the very least a good practice to begin to assess whether staff and other stakeholders working with AI tools have the knowledge, skills, and understanding needed to use those tools responsibly.
With more provisions scheduled to take effect over the next two and a half years, here are the dates to keep in mind regarding some additional key provisions:
August 2, 2025: Compliance obligations related to general-purpose AI models take effect; deadline for EU Member States to designate competent authorities and establish rules on penalties and other enforcement measures.
February 2, 2026: Deadline for the Commission to issue guidance related to high-risk AI systems.
August 2, 2026: Compliance obligations regarding most high-risk AI systems become effective.
August 2, 2027: Compliance obligations related to the remaining high-risk AI systems become effective.
Stay tuned to XL INSIGHTS+ for more updates on key elements of the EU AI Act, as well as additional guidance from the EU as it becomes available.