Pioneering Artificial Intelligence: An Expansive Framework for Europe's Technological Renaissance
- Antonis Hadjicostas
- May 12, 2024
Updated: Mar 7

Introduction
In line with its digital strategy, the European Union (EU) is embarking on a journey to regulate Artificial Intelligence (AI) for the betterment of society. Acknowledging AI's transformative potential across sectors such as transport, finance, health and energy, the EU aims to strike a balance between fostering innovation and safeguarding societal well-being.
The European Parliament's top priority is to ensure that AI systems deployed in the EU adhere to stringent safety, transparency, and environmental standards. Human oversight is deemed essential to prevent detrimental outcomes, and the Parliament seeks a technology-neutral, uniform definition of AI to guide future regulations.
In April 2021, the European Commission proposed the EU's first regulatory framework for AI, setting the stage for a comprehensive approach to AI governance.
On March 13, 2024, the European Parliament adopted (at first reading) the Artificial Intelligence Act (AI Act). The AI Act is widely considered the world's first comprehensive horizontal legal framework for AI and is expected to provide EU-wide rules on data quality, transparency, human oversight and accountability.
Extraterritorial Scope of Artificial Intelligence (AI) Act
The AI Act represents a significant step in EU regulation, extending its reach beyond EU borders to ensure the effective governance of artificial intelligence (AI) within the Union.
Unlike typical EU regulations, which primarily affect entities within the Union, the AI Act applies to all providers and users of AI systems, irrespective of their location, as long as their outputs are utilized within the EU.
Its extraterritorial scope underscores the EU's commitment to upholding its policies, objectives, and internal market integrity in the realm of AI. To enforce compliance from non-EU entities, the AI Act mandates that third-country providers of AI systems appoint an authorized representative within the Union, allowing European authorities to exercise supervisory powers over such entities.
How the AI Act may affect the financial sector
The AI Act, though broad-reaching, currently offers limited focus on AI tools within the financial sector. Explicit references within the AI Act relate primarily to credit scoring models and risk assessment tools in insurance.
AI systems used for credit evaluation or risk assessment in insurance, critical for individuals' financial access and well-being, are likely to be classified as high-risk due to potential life-altering consequences if improperly designed. However, the European Parliament suggests exempting AI systems detecting fraud in financial services from high-risk classification.
In order to avoid redundancy with existing financial regulations, the AI Act directs financial institutions to comply with certain requirements by adhering to financial regulation standards. As the list of high-risk AI systems evolves, institutions should monitor developments closely. With the rise of general-purpose AI in finance, institutions must navigate regulatory landscapes like the Digital Operational Resilience Act (DORA), considering interactions with the AI Act's obligations.
Supervisory authorities will integrate AI Act compliance checks into existing financial oversight practices, with the European Central Bank overseeing risk management for credit institutions. Additionally, the AI Act mandates the establishment of the European Artificial Intelligence Office, tasked with harmonizing AI Act implementation and advocating for the AI ecosystem's interests.
Risk-Based Approach & Bans
The EU's AI regulatory framework classifies AI systems by the risk they pose to users. Systems posing unacceptable risks, such as cognitive-behavioural manipulation or real-time remote biometric identification in public spaces (subject to narrow exceptions), are banned outright. High-risk AI systems, which could compromise safety or fundamental rights, undergo thorough assessment and ongoing oversight. Applications not explicitly banned or listed as high-risk are largely left unregulated.
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace, social scoring and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
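The tiered logic described above can be sketched in code. This is a purely illustrative model: the `classify()` helper and the example use cases are hypothetical, not part of any official tooling; the only rule taken from the Act's scheme is that applications not explicitly banned or listed as high-risk default to the largely unregulated tier.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to assessment and ongoing oversight"
    MINIMAL = "largely unregulated"

# Illustrative mapping of use cases mentioned in the text to tiers.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraping": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "insurance risk assessment": RiskTier.HIGH,
}

def classify(use_case: str) -> RiskTier:
    # Anything not explicitly banned or listed as high-risk
    # falls into the largely unregulated tier.
    return EXAMPLES.get(use_case, RiskTier.MINIMAL)
```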

Transparency and Accountability Measures
Transparency lies at the core of Europe's AI regulation, ensuring users are aware of AI-generated content's nature and origin. Generative AI models, like ChatGPT, must meet transparency requirements and comply with copyright laws. High-impact AI models, such as GPT-4, undergo comprehensive evaluations, with incidents reported to the European Commission to ensure accountability and mitigate systemic risks.
Supporting Innovation and SMEs
Europe's regulatory framework aims to foster innovation, particularly among startups and small to medium-sized enterprises (SMEs). National authorities are tasked with providing conducive testing environments, enabling these entities to develop and train AI models effectively before market release.

Timeline for compliance
The AI Act will be implemented in phases, in particular:
- 6 months for Member States to phase out prohibited systems;
- 12 months for general-purpose AI governance obligations to become applicable;
- 24/36 months for all remaining rules of the AI Act to become applicable, including the obligations for high-risk systems defined in the corresponding Annexes of the AI Act.
Administrative fines
The new AI Act introduces significant fines for breaches of its requirements: penalties for the most serious infringements (prohibited AI practices) may reach up to EUR 35 million or 7% of a company's total worldwide annual turnover for the preceding financial year, whichever is higher, with lower caps applying to other categories of non-compliance.
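The "whichever is higher" rule is simple arithmetic and can be sketched as follows. The fixed cap and turnover percentage are passed in as parameters, since the actual amounts differ by category of infringement; the figures in the example call are hypothetical placeholders, not values taken from the Act.

```python
def fine_cap_eur(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_fraction: float) -> float:
    """Maximum administrative fine: the higher of a fixed amount
    or a fraction of total worldwide annual turnover."""
    return max(fixed_cap_eur,
               worldwide_annual_turnover_eur * turnover_fraction)

# Hypothetical example: a company with EUR 2bn turnover, a EUR 10m
# fixed cap and a 4% turnover cap (placeholder figures).
print(fine_cap_eur(2_000_000_000, 10_000_000, 0.04))  # → 80000000.0
```

For large firms the turnover-based cap typically dominates, which is why the rule is framed as "whichever is higher".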
Conclusion
The EU's AI regulatory framework aims for a balanced approach that fosters innovation while safeguarding societal values and fundamental rights.
By prioritising safety, transparency, and accountability, European legislators aim to cultivate an AI ecosystem that promotes responsible development and utilization of AI technologies. As regulations take effect and evolve over time, European legislators are poised to lead the global conversation on ethical AI governance, paving the way for a digitally progressive and socially responsible future.