16 May 24

AI is increasingly prevalent in the sports industry with uses as diverse as medical research, injury prevention and treatment, scouting, and fan engagement. This note summarises the diverging EU and UK approaches to AI regulation and some steps you can take now.

The EU AI Act

The European Parliament adopted the Artificial Intelligence Act (the Act) in March 2024. Its first provisions will take effect from October 2024, with the remainder becoming enforceable from May 2026.

The Act takes a risk-based approach: certain uses of AI are prohibited outright, and the strictest regulatory requirements are reserved for ‘high-risk’ AI systems (such as AI used as a safety component of a product or in a medical device) and the providers and deployers of such systems.

AI systems deemed to pose ‘limited’ risk (likely to include chatbots and other content-generation systems) are subject to lighter-touch, transparency-based regulation. AI systems posing low risk are largely unregulated.

The Act has extra-territorial effect, so organisations based outside the EU may also be caught and face potentially severe penalties for non-compliance (up to EUR 35 million or 7% of global annual turnover, whichever is higher).

How to prepare

  1. Identify your AI systems. The Act covers AI systems that infer from their inputs how to generate outputs, as well as general-purpose AI models. Examples include systems trained on large amounts of data to create content, which could capture tools that generate training plans or video content.
  2. Are you caught territorially? Overseas organisations will be subject to the Act to the extent that: (i) they supply AI systems in the EU; and/or (ii) the output of their AI systems is used in the EU.
  3. Clarify your role. Different compliance obligations apply depending on whether your organisation is a provider of the AI system or performs some other role.
  4. Classify your AI systems within the Act’s risk framework – i.e. prohibited, high, limited or low risk. Many systems used in a sporting context are likely to be limited or low risk, meaning less extensive compliance obligations will apply.
  5. Consider contractual updates. Evaluate if your templates and existing contracts need to be amended now to reflect the Act.
  6. Implement effective policies and processes. Specific requirements will depend on the AI systems you use and the role you perform, but a strong governance framework will support compliance, including in overlapping areas such as cyber security and data protection.

UK – A different approach

The UK has taken a different approach to the regulation of AI technology. Rather than introducing new AI-specific legislation, the UK government has asked existing regulators to produce sector-specific guidance. This guidance will cover the principles of (i) safety, security and robustness, (ii) transparency and explainability, (iii) fairness, (iv) accountability and governance, and (v) contestability and redress, but will focus on the issues most relevant to each sector. The aim of this approach is to regulate the use of AI, rather than specific technologies.

There have also been independent efforts to establish a central AI Authority to oversee the regulatory framework, with a third reading of a private members’ bill scheduled for May 2024.

Roisín Cregan

Senior Associate