The recent enactment of the EU’s AI Act represents a pivotal change in the AI landscape. Adopted in 2024 and entering into application in phases, the act classifies AI systems by risk level, imposing stricter controls on those that could significantly affect fundamental rights or safety. It bans practices such as mass surveillance and social scoring, requires clear disclosure when users interact with an AI system, and mandates human oversight for critical decisions. Firms that fail to meet these standards face tough penalties, including fines and market restrictions, as the legislation aims to reconcile technological advancement with individual rights and public safety.

Impact of the AI Act on Businesses

Once fully in force, this regulation will have many consequences for businesses. Without aiming to be exhaustive, I have grouped them into four.

  1. Regulation of high-risk applications: The law categorizes AI applications by risk level and regulates most intensively those considered high-risk in sectors such as health, transportation, and education. Companies operating in these areas will have to subject their products to stricter conformity assessments before bringing them to market.
  2. Specific prohibitions: The AI Act bans certain practices it considers unacceptable, such as social scoring systems and toys that use AI to encourage dangerous behavior in minors. Companies will need to review their products and services to ensure they do not fall foul of these prohibitions.
  3. Transparency and labeling: One of the pillars of the AI Act is transparency. Companies will be required to inform users when they are interacting with an AI, which involves changes in how user interfaces and notification systems are designed.
  4. Oversight and governance: The new legislation also requires adequate human oversight of high-risk AI systems, and companies must establish and maintain detailed records of AI activity to facilitate audits and demonstrate regulatory compliance (see the sketch below).
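To make that last point more tangible, here is a minimal sketch of what an append-only audit record for AI-assisted decisions could look like. It is an illustrative assumption, not a format prescribed by the AI Act: every name in it (AIDecisionRecord, log_decision, audit_log.jsonl, the individual fields) is hypothetical, and a real deployment would adapt the fields to its own systems and data-protection constraints.

```python
# Minimal sketch of an audit-log record for AI-assisted decisions.
# All names below are illustrative assumptions, not terminology from the AI Act.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    system_name: str       # which AI system produced the output
    model_version: str     # exact version, so the result can be reproduced later
    input_summary: str     # what the system was asked (avoid storing raw personal data)
    output: str            # what the system recommended or decided
    risk_category: str     # e.g. "high-risk" under the company's own classification
    human_reviewer: str    # person accountable for oversight of this decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: AIDecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append the record as one JSON line, so auditors can replay the history."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")


if __name__ == "__main__":
    log_decision(
        AIDecisionRecord(
            system_name="credit-scoring-assistant",
            model_version="2.3.1",
            input_summary="loan application #1042 (features only, no identity data)",
            output="recommend manual review",
            risk_category="high-risk",
            human_reviewer="j.perez",
        )
    )
```

Even something this simple, appended for every AI-assisted decision, gives auditors a replayable history and feeds naturally into the monitoring and supervision tooling discussed in the next section.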

Preparations for Companies

Once these impacts are understood, it is advisable for your company to adapt progressively to this new regulatory environment. Below I propose some lines of work that, in my view, are essential to begin this adaptation.

  1. AI impact assessment: Before implementing any AI solution, companies should conduct an impact assessment that includes risk analysis and mitigation measures. This is not only a legal requirement for certain types of AI but also good business practice to prevent future problems.
  2. Training and professional development: Training in AI ethics and regulatory compliance will become indispensable. Companies will need to train their development teams and executives to understand and correctly apply the AI Act rules.
  3. Investment in supervision technology: Investing in tools that allow monitoring, auditing, and modifying AI systems will help companies stay compliant and react quickly if problems arise.
  4. Collaboration with regulators and experts: Companies would do well to seek the advice of experts in regulation and AI ethics, and participate in forums and consortia discussing the implementation of the AI Act. This collaboration can facilitate a better understanding of regulatory expectations and best practices in the industry.

Opportunity of a New Regulatory Framework

The AI Act poses a significant challenge, but it also offers companies a chance to demonstrate their commitment to responsible technological development. Organizations that adapt to these regulations proactively will not only avoid penalties but also earn consumer trust and position themselves as leaders in ethical AI adoption. It is therefore crucial for companies to start preparing now, aligning their strategies and operations with the new legal framework to make the transition smoother and keep compliance on track.

You can learn more about the implementation of AI projects by reading my latest article, The Challenge of Establishing an AI Ethics Committee in the Company.

Ricardo Alfaro