
The EU AI Act is a significant regulatory framework aimed at harmonising the development, deployment, and use of AI within the EU. This comprehensive regulation, which entered into force on 1 August 2024, seeks to ensure safety, protect fundamental rights, and promote innovation while preventing market fragmentation. CMS' latest publication delves into the effects of the Act.
The AI Act covers a broad range of AI applications across various sectors, including healthcare, finance, insurance, transportation, and education.
It applies to providers and deployers of AI systems within the EU, as well as those outside the EU whose AI systems impact the EU market. Exceptions include AI systems used for military, defence, or national security purposes, and those developed solely for scientific research.
“AI system” is defined as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The European Commission has published guidelines on the definition of an AI system, explaining the practical application of this legal concept as set out in the AI Act.
CMS' publication explores key elements and impacts of the Act, including:
- AI literacy
- Risk-based approach
- Prohibited AI practices
- High-risk AI systems
- General purpose AI models
- Penalties under the Act
Moreover, CMS highlights key governance, compliance, and regulatory aspects:
The Act mandates transparency to ensure public trust and prevent misuse of AI technologies. Providers and deployers must inform individuals about their interaction with AI systems, maintain detailed documentation, and adhere to logging and record-keeping practices.
High-risk AI systems are subject to stricter transparency requirements, including the marking of synthetic content to prevent misinformation and to ensure transparency around the use of AI and AI-driven decision-making.
The AI Act promotes ethical AI development through regulatory sandboxes, providing a controlled environment for testing AI technologies. These sandboxes support cooperation among stakeholders, remove barriers for SMEs, and accelerate market access.
Furthermore, the Act encourages the development of codes of conduct and guidelines to facilitate compliance. These may cover voluntary application of requirements, ethical guidelines, environmental sustainability, AI literacy, and inclusive design.
Effective AI governance involves setting up AI use policies, AI literacy programmes, centralised risk assessment frameworks, governance committees, and operational controls.
Together, these measures help ensure ethical AI use, compliance with regulations, and continuous improvement in AI risk management.
Learn More:
For a detailed analysis and more information, refer to the full article here.
About CMS:
CMS is a forward-thinking global organisation of independent law firms, boasting over 80 offices in more than 40 countries and a network of 5,800 lawyers. Combining deep local market knowledge with a broad global perspective, CMS offers dynamic and agile legal solutions to help clients navigate the future with confidence. The firm's focus on social impact, diversity, and sustainability underscores its significance in conversations on governance and future-facing leadership. NEDA are proud to be working in partnership with CMS.