Ofgem is proposing the creation of regulatory guidelines focused on risk-based AI usage in the energy sector.
In a call for input issued on 5 April 2024, Ofgem set out its initial thinking on how the current regulatory framework applies to existing AI usage in the energy sector.
Ofgem indicates that existing regulations are sufficient to cover AI utilisation, and it is seeking feedback from energy sector organisations, AI developers, tech firms, and those involved in creating AI policy on how to ensure the safe and responsible implementation of AI in the energy sector while encouraging innovation.
What are the risks of AI in the energy sector?
The use of AI is already prevalent in the energy sector, with technologies creating efficiencies and improving operations, but its use still presents unique regulatory risks and challenges.
These risks include supply chain liability, AI decision transparency, data protection, cybersecurity, governance, and accountability. Even monitoring AI-related breaches of energy regulations could prove highly complex, as current compliance-detection methods may struggle to identify when and how a breach has occurred.
What are the proposals for AI regulation?
Ofgem’s proposals for AI regulation in the energy sector are rooted in five principles outlined in the Department for Science, Innovation, and Technology’s AI white paper from March 2023. These principles include:
- Ensuring safety, security, and reliability;
- Promoting transparency and explainability;
- Upholding fairness;
- Implementing accountability and governance measures; and
- Facilitating contestability and avenues for recourse.
Ofgem’s proposals have the objective of adopting an outcome-focused regulatory approach to AI in the energy sector, with an emphasis on proportionality and the application of key risk management principles – rather than strict rules governing AI use.
The call for input introduces a risk framework aimed at helping participants in the energy sector understand Ofgem’s perspective on AI risk. The framework is intended to guide “duty holders,” which includes energy licensees and companies considering AI adoption, towards implementing appropriate measures to prevent or mitigate AI failures and minimise any adverse risk presented by the use of new AI technologies in the energy markets.
Despite Ofgem asserting that current regulations are sufficient to deal with the potential risks posed by AI, there appears to be a substantial need for further clarity in navigating the legal and regulatory complexities, particularly concerning issues such as AI collusion, liability within the AI supply chain, and sustainability.
The call for input also addresses several wider implications for the energy sector. For instance, the utilisation of AI entails extensive data collection and processing, prompting concerns regarding data privacy and security, human rights, equality, safeguarding critical systems against cyber threats, health and safety, confidentiality, and the impact on workforces. This includes potential biases or errors in data, as well as mistakes due to inadequate training, data, or coding errors.
What is clear from the call for input is Ofgem’s emphasis on collaboration: it cites its work with other regulators and key industry entities, such as the Office of AI (DSIT), CMA, ICO, EHRC, National Cyber Security Centre, and the AI Safety Institute, indicating a thoughtful and coordinated approach.
Ofgem’s call for input on AI in the energy sector can be found and responded to here: Use of AI within the energy sector call for input | Ofgem