What the AI Act Means for Startups

The AI Act is the world's first comprehensive legal framework regulating artificial intelligence (AI). Its aim is to ensure that AI is developed, deployed and used in a manner that is safe, ethical and transparent. As AI plays an ever-bigger role in our lives, the Act offers protection for businesses and individuals alike. Different parts of the AI Act will come into effect over the next 24-36 months. Here, we summarise what this might mean for your startup.

AI Act Coverage

Any startup that uses AI, across a wide range of applications, will be required to comply with the AI Act. This includes users of third-party AI tools and systems as well as those who develop and deploy their own AI models. The Act applies across all sectors.

Risk Categories

The AI Act takes a tiered approach to categorising AI. It's important to recognise these categories and know which ones apply to the AI your business uses.

Unacceptable Risk:

This type of AI poses significant harm to people, and its use has been banned outright since 2nd February 2025. Examples include social scoring systems, emotion recognition tools in workplaces and education, and real-time remote biometric identification in public spaces.

High Risk:

This type of AI is permitted but must comply with seven key requirements covering areas such as risk management, data governance, transparency, and human oversight. High-risk AI is typically used in specific sectors such as healthcare, insurance, and finance; examples include profiling tools, credit-scoring systems and hiring programs that use AI-based decision making. Compliance for all new AI in this category entering the market begins on 2nd August 2026. AI already on the market on that date will need to comply only if significant updates or changes are made to the model.

General Purpose AI (GPAI):

This is one of the most relevant categories of AI for startups and includes AI tools that might be used day-to-day, such as chatbots, recommendation systems and automated content generators. GPAI providers (startups that develop AI models themselves or have a third party develop them on their behalf) must align with the following by 2nd August 2025:

  • Prepare technical documentation, including details of model training, testing and evaluation procedures, and make a summary of the training content publicly available
  • Put in place a copyright policy, including consideration of the copyright status of training data
  • Have information and documentation available to supply to anyone who will integrate the provider's model into their own system
  • Have processes in place to report serious security issues to the EU AI Office and the local regulator

Providers of GPAI models that were placed on the market prior to 2nd August 2025 will have until 2nd August 2027 to comply, giving businesses appropriate time to prepare.

The Act further specifies that end users must be made aware when they are interacting with an AI system or AI-generated content, e.g. a chatbot on a website. This transparency obligation comes into effect on 2nd August 2026, when the AI Act becomes generally applicable.

Minimal or No Risk AI:

This type of AI is also quite common in the startup space and covers AI that doesn't fall into any of the previous categories, i.e. systems classified as minimal to no risk. Examples include spam filters and inventory management systems.

Sanctions and Non-Compliance

Penalties for non-compliance can be substantial: up to €35 million or 7% of global annual turnover for using prohibited AI, with lower tiers for other breaches. As with GDPR, fines are intended to be proportionate and to take account of a business's size and annual turnover.

What's Next for Startups?

Startups that are currently using AI in any form need to be aware of the ramifications over the next 24 months. Below are some steps startups should take in preparing for the AI Act.

  1. Audit Current AI Usage - Does your business currently use AI models in any manner? Consider the range of tools, programs and software currently being used in a business capacity, and include generative AI in this audit.
  2. Classify Current AI Models - Determine the risk category of each AI model in use and learn the requirements set out in the AI Act for that category.
  3. Audit Future AI Usage - While AI may not currently be deployed, is there potential for models to be adopted in the near future? Are your competitors deploying models or tools in their offerings? Do your customers expect more in terms of AI and automation? Consider all business functions here, e.g. the use of ChatGPT for social media captions.
  4. Prepare and Get Ready - Ensure your AI usage is in line with all AI Act requirements ahead of the enforcement dates. Educate yourself and your employees on the AI Act and your business's obligations; AI literacy requirements also form part of the Act.
  5. Consider Compliance Costs - These might include additional staff, increased workloads and the need for external expertise, e.g. do you have the knowledge and capabilities in-house to produce the required technical documentation?

The AI Act: Key Dates for Your Diary

Navigating the ever-evolving landscape of artificial intelligence can be daunting for startups. As the EU's AI Act begins rolling out, we break down the key phases to take note of, helping you stay ahead in an AI-driven world.

1st August 2024: The AI Act officially came into force, marking the beginning of its implementation phase. Over the following 24 months, various related legislation, guidelines and standards will come into effect across EU member states.

2nd February 2025: AI systems categorised as 'unacceptable risk' were prohibited on this date. This includes AI with significant human impact, such as social scoring techniques and live facial recognition. AI literacy obligations for providers and deployers also took effect on this date.

2nd August 2025: Chapter 5 of the AI Act comes into force, outlining the obligations surrounding the most common type of AI: general purpose AI (GPAI). All new AI on the market falling into this category will need to align with obligations relating to classification, documentation and reporting. AI within this grouping that has been on the market prior to this date will have until 2nd August 2027 to comply.

2nd August 2026: The AI Act will be generally applicable from this date. High-risk AI systems placed on the market after this date will need to comply with specific obligations relating to data governance, transparency, and human oversight. Models placed on the market prior to this will need to comply only if significant changes or updates are made to them. Transparency obligations towards end users interacting with AI or AI-generated content also come into effect.