The EU's AI Act is the world's first comprehensive law regulating artificial intelligence (AI). Its aim is to ensure that AI is developed, deployed and used in a manner that is safe, ethical and transparent. As AI continues to play a bigger role in our lives, the Act offers broad protection for businesses and individuals alike. Over the next 24-36 months, different parts of the AI Act will come into effect. Here, we summarise what this might mean for your startup.
Any startup that uses AI, across a wide range of applications, will need to comply with the AI Act. This includes users of third-party AI tools and systems as well as those who develop and deploy their own AI models. The Act applies across all sectors.
The AI Act takes a tiered approach to categorising AI. It's important to recognise these tiers and know which category of AI your business may be using.
AI in the 'unacceptable risk' tier poses significant harm to users, and its use is banned outright as of 2nd February 2025. Examples include social scoring systems, emotion recognition tools and real-time biometric identification.
High-risk AI is permitted but must comply with seven key requirements in the areas of data governance, transparency and human oversight. This category of AI is typically used in specific sectors such as healthcare, insurance and finance; examples include profiling tools, credit-scoring systems and hiring programs using AI-based decision making. Compliance for all new AI entering the market within this category begins on 2nd August 2026. AI already on the market on this date will also need to comply if it undergoes significant updates or changes.
General purpose AI (GPAI) is one of the categories most applicable to startups and includes AI tools that might be used day-to-day, such as chatbots, recommendation systems and automated content generators. GPAI providers (startups that develop AI models themselves or have a third party develop them) must align with obligations relating to classification, procedure and reporting by 2nd August 2025.
Providers of GPAI models placed on the market prior to 2nd August 2025 will have until 2nd August 2027 to comply, giving businesses appropriate time to prepare.
The Act further requires that end-users be made aware when they are interacting with an AI system or AI-generated content, e.g. a chatbot on a website. This transparency obligation comes into effect on 2nd August 2026, as the AI Act becomes generally applicable.
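To make this concrete, here is a minimal, hypothetical sketch in Python of how a startup's chatbot backend might surface such a disclosure. The function name, field names and wording are our own assumptions; the Act does not prescribe any particular implementation, only that users are made aware they are dealing with AI.

```python
# Hypothetical sketch only: names and wording are illustrative assumptions,
# not requirements taken from the Act itself.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def build_chat_response(model_reply: str) -> dict:
    """Wrap a model reply with an explicit AI disclosure for the end user."""
    return {
        "disclosure": AI_DISCLOSURE,  # shown prominently in the chat UI
        "reply": model_reply,
        "ai_generated": True,         # lets the front end label the message as AI content
    }

print(build_chat_response("We're open 9am to 5pm, Monday to Friday."))
```

However your product surfaces it, the key point is that the disclosure reaches the end user before or as they interact with the AI, rather than being buried in terms and conditions.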
Minimal-risk AI is also quite common in the startup space and covers AI that doesn't fall into any of the previous categories, i.e. systems classified as minimal to no risk. Examples include spam filters and inventory management systems.
As with GDPR, fines under the AI Act are intended to be proportionate, taking a business's size and annual turnover into account.
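As a rough illustration of that proportionality, the sketch below applies the fine bands set out in Article 99 of the Act for prohibited-practice violations: up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher, with the lower of the two applying to SMEs and startups. The function name is hypothetical and this is not legal advice.

```python
# Illustrative sketch, not legal advice. Figures are the Article 99 bands
# for prohibited-practice violations: EUR 35m or 7% of worldwide annual
# turnover, whichever is higher; for SMEs and startups, whichever is lower.

def max_fine_prohibited_practice(annual_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap = 35_000_000                     # EUR 35 million
    turnover_cap = 0.07 * annual_turnover_eur  # 7% of worldwide annual turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A startup with EUR 2m turnover faces a cap of EUR 140,000, not EUR 35m:
print(max_fine_prohibited_practice(2_000_000, is_sme=True))  # 140000.0
```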
Startups that are currently using AI in any form need to be aware of the ramifications over the next 24 months. Below are some steps that startups should take in preparing for the AI Act.
Navigating the ever-evolving landscape of artificial intelligence can be daunting for startups. As the EU's AI Act begins rolling out, we break down some of the key phases to take note of and help you stay ahead in the AI-driven world.
1st August 2024: The AI Act officially came into force, marking the beginning of its implementation phase. Over the following 24 months, various related legislation, guidelines and standards will come into effect across EU member states.
2nd February 2025: AI systems categorised as 'unacceptable risk' were prohibited on this date. This includes AI with significant human impact, such as social scoring techniques and real-time biometric identification.
2nd August 2025: Chapter 5 of the AI Act comes into force, outlining the obligations surrounding the most common type of AI: general purpose AI (GPAI). All new AI on the market falling into this category will need to align with obligations relating to classification, procedure and reporting. AI within this grouping that has been on the market prior to this date will have until 2nd August 2027 to comply.
2nd August 2026: The AI Act will be generally applicable from this date. High-risk AI systems placed on the market after this date will need to comply with specific obligations relating to data governance, transparency and human oversight. Models placed on the market before this date will only need to comply if they undergo significant changes or updates. Transparency obligations for end-users interacting with AI or AI-generated content also come into effect.