EU AI Act Implementation Progress

As the world’s first comprehensive statute on artificial intelligence takes effect, tech companies are at a crossroads: adapt to Europe’s regulatory gold standard or withdraw from the European market. Far from a mere cost center, the EU AI Act’s phased rollout offers innovators an unprecedented playbook to differentiate on trust, governance, and ethical excellence, turning regulatory demands into a springboard for international expansion.

The Act uses a risk-based framework to categorize AI applications into tiers: unacceptable risk (banned outright), high risk (strict controls), limited risk (transparency requirements), and minimal risk (free from new rules). The goal is to ensure “trustworthy AI”, promoting innovation while safeguarding health, safety, and fundamental rights.

The law has global reach: any company (even a non-EU one, e.g. a Ukrainian or US firm) must comply if its AI system is used in the EU. Penalties for violations are steep: up to €35 million or 7% of worldwide annual turnover for serious breaches such as using prohibited AI.

This article unpacks the Act’s implementation timetable, risk‑based requirements, and strategic imperatives for turning compliance into competitive advantage.

Current Stage of Implementation

The AI Act is being implemented in phases:

2 February 2025 marks the start of the initial obligations, comprising two measures: a prohibition on certain AI practices deemed unacceptable and a requirement for organizations to promote AI literacy. From this date, any use of AI on the banned list is illegal in the EU, and companies are required to educate staff about responsible AI use. AI used solely for military and defence purposes falls outside the Act’s scope.

May 2025 sees the development of a Draft AI Code of Conduct, which provides guidance for AI developers, particularly those working on general-purpose AI models.

The next big date is August 2025 for general-purpose AI models, followed by August 2026 for most high-risk systems. Companies should map these dates to their internal roadmaps, ensuring that by each milestone they have completed the necessary compliance steps. In 2027 the EU AI Act will be fully operational, and all AI systems must be compliant.

This phased timeline shows that as of mid-2025 we are in the early implementation phase. The ban on certain AI uses is already in force, and companies should already be mapping out their AI systems and risks. The big compliance burdens (such as the high-risk system rules) loom on the 2026 horizon. For businesses, the current stage is a critical window to prepare: to audit AI tools, train teams, and follow the development of the guidelines and codes that will fill in the details of the new law.

Prohibited AI Practices (Unacceptable Risk)

The EU AI Act outright bans a set of AI activities deemed “unacceptable risk” because they violate core values like human dignity, privacy, or safety. Companies must ensure they are not involved in any of the following practices in the EU:

Social scoring and broad surveillance. AI used by governments to evaluate or score individuals’ social behaviour or characteristics in ways that lead to unfair or detrimental treatment is forbidden. Also banned is indiscriminate surveillance, such as the mass scraping of facial images from the internet or CCTV footage to create facial recognition databases, which is seen as a violation of privacy and civil liberties.

Subliminal manipulation or exploitation. The AI Act prohibits AI systems that manipulate people through subliminal techniques or exploit the vulnerabilities of specific groups (such as children, the elderly, or persons with disabilities) in a manner that could cause physical or psychological harm. For example, an AI that covertly influences someone’s behaviour in a harmful way would fall under this ban.

Emotion recognition in sensitive contexts. AI systems that attempt to infer human emotions in the workplace or in educational institutions are banned. One cited example is emotion recognition used by human resources departments or in schools; the Act bars such practices due to their intrusive and potentially discriminatory nature.

Biometric categorization by sensitive traits. The AI Act outlaws AI that uses biometric data (such as face or voice recognition) to classify people by sensitive attributes. For example, an AI system analysing facial images to guess someone’s race, political opinions, or sexual orientation is prohibited. Such biometric inferences are considered too invasive and prone to abuse or error, risking unjust discrimination.

Predictive policing and risk profiling. AI tools that profile individuals to predict future criminal behaviour are banned. Specifically, AI systems that forecast the probability of someone committing an offence based solely on profiling or personality traits cannot be used. This reflects ethical concerns about profiling and self-fulfilling biases.

Real-time remote biometric identification in public. The AI Act forbids law enforcement from using AI for real-time remote biometric identification of people in publicly accessible spaces, such as live facial recognition on street cameras. There are tightly scoped exceptions: for instance, the police might use it to locate a missing child or prevent an imminent terrorist threat, but only with judicial authorization and under strict conditions. The default stance, however, is that live facial ID scanning in public is off-limits due to its profound privacy implications.

In short, businesses should know that if an AI application could fundamentally threaten people’s rights or safety, it likely belongs in the prohibited category. The Act’s hard line on unacceptable uses underscores its overarching principle: certain AI risks are simply not worth taking in a democratic society.

Code of Conduct for General-Purpose AI Models

One of the most dynamic areas of implementation is the development of a Code of Practice for General-Purpose AI (GPAI) models. General-purpose AI refers to foundation models that are not designed for a single specific task but can be adapted for many purposes, like GPT, image generators, or other multi-purpose models. These models present unique regulatory challenges, since they can be integrated into countless downstream applications.

The EU AI Act imposes tailored obligations on GPAI providers, such as requirements to disclose general information about training data, ensure robust risk mitigation, and prevent misuse. To flesh out these broad duties, the Act calls for voluntary codes of conduct to guide GPAI developers in practice. The idea is that if the AI industry co-develops best practices now, it can quickly adopt them and even enjoy a kind of “safe harbour” until the full law applies. For example, more than 14 Ukrainian companies have signed the Code of Conduct even though no Ukrainian legislation requires them to.

By following this Code of Conduct, GPAI developers can demonstrate a commitment to responsible AI ahead of the Act’s formal compliance deadline. The European Commission has signalled that companies adhering to the code will be viewed favourably by regulators, and it might serve as an interim compliance path. If the voluntary approach fails (i.e., if the code is deemed insufficient or companies ignore it), the Commission can issue binding rules for GPAI via an implementing act by late 2025.

Currently, many major AI providers are expected to adhere voluntarily to the Code of Practice. In 2023, several U.S. AI firms committed to voluntary AI safety protocols initiated by the White House, a practice also observed in other countries. Unlike those purely voluntary commitments, however, the EU has binding regulation ready to step in if the voluntary measures prove inadequate. This is a significant distinction: where other jurisdictions lean mainly on industry self-regulation, the EU backs self-regulation with top-down rules for fast-evolving AI technologies.

HUDERIA Methodology for AI Impact Assessment

While the EU AI Act is a binding law, complementary soft-law tools are also emerging to help organizations manage AI risks. One notable example is HUDERIA, which stands for Human Rights, Democracy, and the Rule of Law Impact Assessment for AI systems. The Council of Europe (CoE), a body separate from the EU and known for its human rights work, introduced the HUDERIA Methodology on 2 December 2024. This methodology provides a structured approach for companies and public institutions to evaluate the societal and rights-related impacts of their AI systems.

HUDERIA guides an organization to identify the possible risks an AI system could pose to human rights or democratic values; assess the severity and likelihood of those risks; and plan measures to prevent or mitigate harm. It encourages looking at an AI system’s entire lifecycle, from initial design decisions and data collection to deployment and human oversight, recognizing that both technical factors and human context determine AI’s impact.
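
To make this concrete, the sketch below shows one way a team might record such an assessment as a simple risk register. It is a minimal illustration only: the field names and the 1-to-5 scoring scale are our own assumptions, not part of the official HUDERIA methodology.

# Illustrative sketch only: a minimal risk-register entry for a HUDERIA-style
# impact assessment. Field names and the 1-5 scales are assumptions, not the
# official methodology.
from dataclasses import dataclass, field

@dataclass
class RightsRisk:
    description: str              # what could go wrong
    affected_right: str           # e.g. "non-discrimination", "privacy"
    severity: int                 # 1 (minor) to 5 (severe)
    likelihood: int               # 1 (rare) to 5 (almost certain)
    mitigations: list = field(default_factory=list)

    def priority(self) -> int:
        # simple severity x likelihood score used to rank remediation work
        return self.severity * self.likelihood

risk = RightsRisk(
    description="CV-screening model may penalise career gaps linked to parental leave",
    affected_right="non-discrimination",
    severity=4,
    likelihood=3,
    mitigations=["re-balance training data", "human review of all rejections"],
)
print(risk.priority())  # 12 -> near the top of the mitigation plan

Ranking entries this way helps a team decide which mitigations to plan first, in line with the severity-and-likelihood assessment HUDERIA calls for.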

The HUDERIA Methodology is a voluntary guideline, not a law. It is not meant to interpret the EU AI Act or the forthcoming CoE AI Convention, but to complement them. Companies that adopt HUDERIA can bolster their internal AI governance and demonstrate due diligence in considering ethical impacts. For example, a firm could integrate HUDERIA’s questions into its product development cycle so that whenever a new AI feature is planned, the team evaluates potential human rights implications (privacy, non-discrimination, freedom of expression, etc.) early on. This can reveal issues that a legal compliance checklist might miss. We participated in the HUDERIA methodology pilot project organized by the Council of Europe and the Alan Turing Institute and have experience using this methodology with clients, especially during workshops organized by the Ministry of Digital Transformation of Ukraine. It is a valuable tool for AI companies to develop responsible AI and prepare for the impact assessment requirements under the AI Act.

The Council of Europe’s AI committee will also provide practical tools like templates, questionnaires, and examples to help implement the methodology in various organizations. This will make it easier, especially for smaller companies or public agencies, to run a thorough AI impact assessment without starting from scratch.

For Ukrainian companies (and any others) aiming to enter EU markets, using frameworks like HUDERIA can be an excellent preparatory step. It not only prepares you for the ethical expectations underlying laws like the EU AI Act but also signals to partners and regulators that you are acting responsibly. As AI regulations tighten, those who have internalized methodologies like HUDERIA will be in a better position to adapt and thrive in compliance-centric environments.

Implications for Companies Launching in the EU

For tech companies and start-ups aiming to launch products in the EU, the AI Act is a game-changer that cannot be ignored: it applies to any provider or user of AI that affects people in the EU, regardless of where the company is based.

So, a Ukrainian AI software firm offering services to EU clients must comply with the Act, just like an EU-based company. Not doing so could result in market access being blocked or fines if an AI product is found non-compliant by EU authorities.

The staggered timeline gives some lead time, and companies can use it to get ahead of compliance. For example, if you develop an AI tool for healthcare, HR, or finance (likely high-risk categories), start building the required features now: add transparency notices, keep audit trails of how your AI makes decisions, implement bias checks on training data, and so on.
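
As a rough illustration of the audit-trail idea, the sketch below logs each automated decision to an append-only file so it can be reviewed later. The record schema, field names, and file name are our own assumptions, not requirements spelled out in the Act.

# Illustrative sketch only: logging an audit trail for automated decisions so
# they can be reviewed later. Schema and file name are assumptions, not AI Act text.
import json, datetime, uuid

AUDIT_LOG = "ai_decision_audit.jsonl"

def log_decision(model_version: str, inputs: dict, output, explanation: str) -> str:
    """Append one decision record to an append-only JSON-lines audit log."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # or a reference/hash if inputs contain personal data
        "output": output,
        "explanation": explanation,  # human-readable reason shown to the affected person
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["decision_id"]

decision_id = log_decision(
    model_version="credit-scorer-1.4.2",
    inputs={"applicant_id": "A-1027", "income_band": "B"},
    output={"approved": False, "score": 0.41},
    explanation="Score below threshold; main factor: debt-to-income ratio",
)

Keeping records of this kind also makes it easier to answer questions from regulators or from people affected by an automated decision.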

The Ukrainian government is proactively helping its businesses prepare. The Ministry of Digital Transformation of Ukraine has released a White Paper on AI Regulation, explicitly aimed at aligning Ukrainian rules with the EU AI Act over the coming years. The White Paper takes a “bottom-up” approach, encouraging companies to adopt ethical AI practices before any Ukrainian laws are imposed. The Ministry has introduced various tools to assist with this, including a Regulatory Sandbox, HUDERIA methodology workshops, a Code of Conduct, and guidelines to educate companies on upcoming requirements. This creates a culture of compliance that will pay off when dealing with EU partners or undergoing due diligence checks.

Preparing now for the EU’s 2026 high-risk rules is a way to gain a competitive advantage. First, assess your risk level and check your AI use cases against the prohibited practices list to ensure compliance. Then identify which AI Act requirements apply to your project. In addition, conduct an AI impact assessment for deeper insights.

A practical tip is to design your system with configurable modules. For example, if you have an emotion recognition feature that’s legal in some markets but not in the EU, make it easy to disable or modify that feature for EU deployments. Localization for legal compliance will become as important as localization for language.
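
To illustrate, here is a minimal sketch of such region-aware configuration, assuming a hypothetical feature-flag layer; the region codes and feature names are illustrative only, not part of any standard.

# Illustrative sketch only: gating a feature per deployment region so that
# functionality prohibited in the EU can be switched off without a separate codebase.
FEATURE_FLAGS = {
    "emotion_recognition": {"US": True, "UA": True, "EU": False},        # banned in EU workplaces/schools
    "chatbot_disclosure_banner": {"US": False, "UA": True, "EU": True},  # transparency notice
}

def feature_enabled(feature: str, region: str) -> bool:
    """Default to the most restrictive setting when a feature or region is not listed."""
    return FEATURE_FLAGS.get(feature, {}).get(region, False)

for region in ("US", "EU"):
    if feature_enabled("emotion_recognition", region):
        print(f"{region}: emotion recognition module loaded")
    else:
        print(f"{region}: emotion recognition module disabled")

Defaulting to the most restrictive setting when a region is not listed keeps an unlisted deployment on the safe side.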

Compliance shouldn’t just be seen as a burden; it can open doors. Many EU clients and investors will prefer or even mandate that AI solutions meet the standards set out in the AI Act. Moreover, industries like banking or healthcare in the EU will gravitate towards vendors who understand regulatory requirements. Showing that you can navigate EU regulations builds trust.

  • Peter Bilyk

    Head of Technology and Investments AI at Juscutum 

    He has 10 years of legal experience, 5 of which are in the field of Legal Tech.

    Supports projects on investment attraction, crypto-consulting, and international structuring of IT companies.

Develops the firm’s Legal Design, GDPR, and electronic document flow practices.

Projects with his participation have repeatedly been recognized by Financial Times researchers. He is Ukraine’s ambassador to the European Legal Tech Association (ELTA).


Juscutum

ADDRESS:

35 Olesya Honchara Street,

Kyiv, 01034, Ukraine

Tel: +380 50 490 0297

E-mail: partner@juscutum.com

Web-site: www.juscutum.com

Juscutum was founded in 2008 to provide legal support to innovative Ukrainian and international companies. While a classic law firm offering a full range of legal services, Juscutum also has unique industry specializations such as IT, AI, media, and business security.

Also, since 2014, Juscutum has been providing legal services to various blockchain projects. Juscutum is a team of over 60 dedicated experts, including lawyers, financial analysts, accountants, and auditors.

Juscutum accompanies clients in all aspects of the complex legal world: regulatory aspects, conflict management and dispute resolution, mergers and acquisitions, corporate law, taxes and audit, competition issues, finance, business defense lawyer services, and more. As a result of cooperation, the firm’s clients often become long-term partners, as the firm’s team perceives their business as its own.