Navigating the Details of the EU’s AI Act: An Insightful Discussion with Michael Borrelli, Director & COO at AI & Partners

In this exclusive interview, Michael Borrelli delves into the complexities of the European Union’s Artificial Intelligence (AI) Act, offering a comprehensive analysis of its impact on the global technology landscape. This conversation not only sheds light on the intricacies of the legislation but also offers foresight into how it might influence global AI governance and the balance between technological advancement and ethical considerations.


Q: To begin, could you give an overview of the EU’s AI Act and its main objectives?

The European Union’s Artificial Intelligence (AI) Act, or the EU AI Act, is a comprehensive regulatory framework designed to govern the development, deployment, and use of AI systems in the EU. Its extraterritorial reach means it affects businesses globally, irrespective of their jurisdiction. Its main objectives include fostering innovation and the uptake of AI, ensuring the protection of fundamental rights, and addressing potential risks associated with AI.

Q: What key areas does the act address in AI regulation? 

The EU AI Act focuses on several key areas of AI regulation. It proposes four risk categories for AI systems: ‘Unacceptable Risk,’ ‘High Risk,’ ‘Limited Risk,’ and ‘Minimal Risk.’ These categories determine the level of regulatory scrutiny and the requirements applicable to different types of AI systems.

The act is designed with high-risk AI systems in mind. High-risk AI systems include those that are used in critical infrastructure, biometric identification, education, law enforcement, and more. These systems face stringent requirements, such as conformity assessments, high-quality datasets, transparency, and human oversight.

Finally, the act has provisions for general-purpose AI systems. The EU AI Act introduces certain obligations for providers of general-purpose AI systems to ensure their compliance with ethical and societal considerations. Transparency requirements aim to inform users when they are interacting with AI systems, and certain systems may be subject to specific transparency and disclosure obligations.

Q: Could you talk about these risk categories and how they shape regulatory frameworks? 

The Commission introduces a risk-based approach encompassing four levels to govern AI systems. Minimal risk includes all AI systems that can be developed and used within the existing legal framework without additional obligations. The majority of AI systems currently in use within the EU fall into this classification.

Providers of such systems may choose to adhere to the requirements for trustworthy AI and to voluntary codes of conduct. High risk covers a limited number of AI systems with the potential to adversely impact people’s safety or fundamental rights. The act includes an annex listing high-risk AI systems, which may be periodically reviewed to align with evolving AI use cases.

This category also encompasses safety components of products covered by sector-specific Union legislation, which remain high-risk when subject to third-party conformity assessment under that legislation. Unacceptable risk covers a highly restricted set of particularly harmful AI uses that violate EU values by contravening fundamental rights. Specific transparency risk covers AI systems where there is a clear risk of manipulation, such as chatbots.

Users should be informed when they are interacting with a machine. Together, these categories ensure that a risk-based approach upholds trustworthy AI within the EU.
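To make the tiering concrete, the mapping from tier to regulatory treatment can be sketched as a simple lookup. The tier names follow the act, but the obligation summaries below are illustrative shorthand, not the statutory wording.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the EU AI Act (shorthand labels)."""
    UNACCEPTABLE = "unacceptable"                    # prohibited uses contravening fundamental rights
    HIGH = "high"                                    # listed systems subject to strict obligations
    SPECIFIC_TRANSPARENCY = "specific_transparency"  # e.g. chatbots: users must be told
    MINIMAL = "minimal"                              # no additional obligations


# Illustrative, non-exhaustive summaries of what each tier entails.
TIER_TREATMENT = {
    RiskTier.UNACCEPTABLE: "Use is banned outright within the EU.",
    RiskTier.HIGH: "Conformity assessment, data governance, documentation, human oversight.",
    RiskTier.SPECIFIC_TRANSPARENCY: "Disclose to users that they are interacting with an AI system.",
    RiskTier.MINIMAL: "No extra obligations; providers may adopt voluntary codes of conduct.",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {TIER_TREATMENT[tier]}")
```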

Q: What are the provisions for general-purpose AI systems, especially in the context of ethics and societal impact? 

Providers of general-purpose AI (GPAI) models are obliged to disclose specific information to downstream system providers, fostering transparency and enhancing the understanding of these models, and to implement policies that uphold copyright law when training their models. Certain general-purpose AI models carry potential systemic risks due to their significant capabilities or widespread use.

Providers of GPAI identified with systemic risks are required to assess and mitigate certain risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity, and provide information on the energy consumption of their models.

Q: How does the act fit with the EU’s existing digital legislation, like the GDPR or the Cyber Resilience Act? What new requirements does the AI Act bring for businesses implementing AI? 

The EU’s AI Act complements existing digital legislation, aligning with GDPR and the Cyber Resilience Act. It introduces specific obligations for high-risk AI systems, emphasizing transparency, accountability, and human oversight.

Businesses implementing AI must adhere to stringent rules, ensuring responsible development, deployment, and monitoring to protect individuals and societal values. This includes, but is not limited to, implementing a risk management system, human oversight, data governance and management, and technical documentation.

Q: How significant are the penalties for non-compliance? 

The act includes significant penalties, with fines of up to EUR 35 million or 7% of a company’s global annual turnover, whichever is higher. This underscores the importance of adherence to the regulations and is likely to shape business operations by incentivizing companies to comply with the requirements.
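As rough arithmetic on that cap, here is a minimal sketch; it assumes the top fine tier with its “whichever is higher” rule, whereas the act in fact defines several lower tiers for other infringement types.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the top fine tier: EUR 35 million or 7% of worldwide
    annual turnover, whichever is higher (illustrative, top tier only)."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)


# A company with EUR 2 billion turnover faces a cap of EUR 140 million,
# while one with EUR 100 million turnover is capped by the EUR 35 million floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
print(max_fine_eur(100_000_000))    # 35000000.0
```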

Q: Can you explain the role of AI regulatory sandboxes and how accessible they will be to smaller enterprises and startups? 

The EU AI Act introduces AI regulatory sandboxes to allow for controlled testing of AI systems in a real-world environment. They are intended to be accessible to smaller enterprises and startups, promoting inclusivity in the development and testing of AI technologies.

Q: The pace of AI developments usually outpaces regulation. How does the act plan to adapt to emerging technologies and unexpected developments?

The legislation sets result-oriented requirements for high-risk AI systems but leaves the concrete technical solutions and operations to industry-driven standards. In this way, the rules can remain flexible across different use cases and future-proof against new technological solutions. In addition, the EU AI Act can be amended through delegated and implementing acts.

This provision applies, for example, to updating the FLOP threshold used to classify GPAI models as carrying systemic risk (via a delegated act), and to establishing regulatory sandboxes and the elements of a real-world testing plan (via implementing acts).
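For a sense of how the compute threshold works as a classification rule, here is a minimal sketch. The 10^25 FLOP figure is the presumption threshold in the act’s text for GPAI models with systemic risk, but the function and its interface are purely illustrative, and the Commission can adjust the threshold by delegated act.

```python
# Presumption threshold for GPAI models with systemic risk: cumulative training
# compute above 10**25 floating-point operations (adjustable by delegated act).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if a GPAI model's cumulative training compute exceeds the
    presumption threshold (illustrative check only; designation can also rest
    on other criteria)."""
    return training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Example: a model trained with ~5e25 FLOPs would be presumed to carry systemic risk.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e24))  # False
```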

Q: There’s a concern that the AI Act might restrict innovation or cause market division. What’s your perspective on this? 

While these are valid concerns, the act’s approach has been to balance innovation with the need for responsible AI development. For example, Article 55 of the act’s text from June 2023 contains specific provisions to help SMEs, startups, and users. The focus on risk-based regulation aims to address potential harms while allowing room for growth and advancement in the AI field. This allows for sustainable innovation without destabilizing market forces.

Q: Lastly, the AI Act is set to standardize AI regulation within the EU. How will it impact the global AI technology market in terms of cross-border cooperation?

The AI Act is likely to influence global AI standards and practices, especially as it standardizes AI regulation within the EU. It may set a benchmark for other regions to follow, and businesses operating globally will need to consider these regulations in their AI development and deployment strategies. Cross-border cooperation on AI governance may also be shaped by the EU’s regulatory approach, much as the GDPR influenced global data protection legislation.