Hampshire AI June 2025 - From Misuse to Mastery
01 Jul, 202512 minutes
How businesses can mitigate legal and brand safety risks when using AI
As we enter the age of AI, risks, safety and ethics are all key discussions companies should be having when implementing emerging tech. It certainly proved a popular topic at our most recent Hampshire AI networking event where we welcomed two experienced panellists to talk about mitigating the legal and brand safety risks of AI.
A dynamic and engaging evening, this subject got everyone talking, with plenty of attendees taking the opportunity to ask their burning questions and draw on the collective experience of the room to debate modern-day AI issues.
We were grateful to be joined by Dorothy Agnew and Richard Willats. Dorothy is Legal Director at Broadfield Law UK LLP, advising businesses in the IT and Telecoms sectors. She specialises in IT, intellectual property and data protection law, including AI and machine learning technologies. She gave a compelling presentation on mitigating legal risks in AI, covering regulation and compliance, responsibility and accountability, and future-proofing company operations.
Richard is a Data Quality & Safety Evaluation Specialist for Contextual AI, where he designs and runs targeted tests to uncover challenges and harmful model behaviour in AI systems. Contextual AI pioneers Retrieval-Augmented Generation (RAG) AI, a technique designed to ground responses in real, retrieved documents, improving factual accuracy. At the event, he shared practical frameworks for implementing AI responsibly and how to mitigate brand safety risks. Throughout the evening, both speakers shared examples of what happens when AI goes wrong, offering practical insight and actionable advice.
Understanding the legal risks of AI
Dorothy kicked off the evening’s discussion with her in-depth presentation on the legal risks of AI, starting with a rundown of regulations in the EU and UK. The EU’s Artificial Intelligence Act bans some high-risk systems, for example those using subliminal or manipulative techniques that could impair a person’s decision making or autonomy, with a real risk of causing harm. Breaches of the Act have huge financial implications through fines, as well as reputational damage. While the Act doesn’t apply to the UK, it does apply if you’re providing AI systems to EU countries. In the UK, the government has taken a different tact, described as a “context-specific approach” and a set of principles to be implemented by existing AI regulators. These principles cover safety, security and robustness; transparency and explainability; and fairness. The UK Data (Use and Access) Bill is seeking to modernise data use, with amendments around transparency of data sources and copyright licensing.
One of our attendees asked if the speakers thought the regulations are appropriate (which caused a small chuckle around the room, as those working in the AI industry know the answer!). Dorothy said that the regulations have come quite late and we’re playing catch up – she anticipates the laws will change and change again over the next few years.
She explained that AI comes with inherent risk, due to the complexity of the systems involved, the reliance on large datasets, and its ability to learn and evolve. It’s also difficult to explain AI systems and its outcomes, resulting in a ‘black box issue’ where there’s a lack of transparency around how the system operates and makes decisions – which can create risk around trust, accountability and data protection. Businesses are responsible for their AI systems and their outcomes, meaning it’s imperative to understand weaknesses and limitations or run the risk of being liable for any loss or damage caused by AI, regulatory investigation or fines, reputational damage and financial cost of putting things right.
Dorothy then gave us a deep dive into the legal risks of using AI to help put these points into context. One of the primary concerns is around data protection and the use of personal data, with consideration of UK GDPR, Data Protection Act 2018 and ePrivacy Regulations. Data controllers need to ensure that all data is processed transparently and compliantly, which is something that needs to be integrated into AI systems at initial design stages, with impact assessments and consultation with the ICO where relevant. She cites the Clearview AI Inc v The Information Commissioner court case, which initially resulted in a £7.5 million fine for Clearview AI Inc in 2022, overturned in 2023, a decision being appealed by the ICO in 2025. Dorothy also talked about the impact of automated decision making and profiling, where individuals have a right not to be subject to a decision based solely on automated processing in situations where it has legal effects on users or otherwise significantly affects them, for example in financial applications or healthcare.
Another significant legal risk of AI is in copyright and database infringement, which applies to data mining for commercial purposes, though exceptions are made for some situations, such as non-commercial research. Stricter regulations are on the way in the UK, through the Data (Use and Access) Bill. Using copyrighted material for AI learning can result in a system that produces outputs that infringe on someone else’s work. AI isn’t a legal entity, therefore liability for this copyright infringement remains with the person or business responsible for its actions. This is why some companies promote that their AI is trained using fully licensed, proprietary or out-of-copyright sources only, somewhat mitigating this risk.
Businesses implementing AI also face potential risks around negligence (if AI systems cause a person damage through misleading or inaccurate advice), breach of competition laws, discrimination and defamation. Dorothy shared many real-world examples of these legal risks and consequences, including a situation in which a chatbot’s advice wasn’t accurate regarding a request for a bereavement fare (Moffatt vs Air Canada 2024); and a discrimination case involving automated decision making where black, non-white and of African descent people were at a disadvantage due to a higher error rate in passing facial recognition tests (Manjang v Uber Eats 2022).
The legal risks are clear, then, but what can businesses do to mitigate these situations? It starts from the very outset, developing AI tools that are compliant with the most recent laws and copyright regulations, as well as being easy to update as regulations change. It’s essential to consider bias and discrimination in the build phase, as well as processes to explain how decisions are made. Bringing in external AI tools and systems requires trust and transparency of the development company, but as the acquiring business, it’s essential to do thorough due diligence and testing, checking for compliance and ensuring its use is covered by liability insurance.
One question from the room asked about how small companies can cope with having a team to look at all these risks, to which Dorothy reiterated the importance of keeping up to date with regulations, and consider outsourcing when you don’t have the capacity or ability in-house.
Mitigating brand safety risks
Up next, Richard followed with an equally engaging overview of how to protect brands when deploying AI systems in real-world, public-facing settings. He explained that even when using systems like Contextual AI, which offer an advanced approach (RAG), organisations must still be vigilant around a broader set of potential harms and risks to brands.
His presentation looked at defining what the main brand safety risks were, as well as practical strategies around using data to mitigate these risks. One of the biggest risks is reputational damage, which breaks down the trust with the consumer, and may even cause harm to a user’s wellbeing. He flags inappropriate, offensive or harmful content as a significant risk factor for brands, as well as misinformation and hallucination (fabricated or inaccurate information). AI systems that display sycophancy or anthropomorphism may appear overly human, meaning users may take its output with more trust, and at worst, suggestions could encourage or enable real harm.
Then there are compliance issues, where the outputs violate regulations or legal standards, or even leak sensitive or confidential information. Another risk is when AI outputs criticise the brand itself or attack the brand’s competitors, which again have an impact on reputation and consumer trust. If the business can’t control what the AI output says about itself, how can it be trusted with sensitive customer data? Richard shared an example of such an interaction, which while amusing to read, does outline the very real risk to brand safety.
So how can brands mitigate these kinds of risks? Richard explains about AI Guardrails, or Inference-Time Protection Layers. These are an extra level of protection to the AI output that acts during user interaction to screen AI outputs (and additionally display a prompt) for unsafe content before it’s shown to the user – essential for public-facing usage, for example in customer service. A question from the room asked whether guardrails worked 100%, to which Richard replied that there are always emerging types of code that breaks AI models. He says 95% effectiveness is still good to aim for – which also comes back to Dorothy’s point about keeping a human in the loop.
He also recommends a proactive approach through adversarial training, deliberately exposing the AI model to worst-case scenarios to test the robustness of the system. This tests the system behaviour and safety guardrails to uncover weaknesses, vulnerabilities and harmful outputs. Brute force testing aims to override these AI safeguards, using proactive, manipulative or misleading phrasing to try and extract unsafe responses. Once identified, they can be fixed before becoming operational.
He talked about best practice when it comes to AI safety evaluation, saying that an effective brand safety framework should be:
• Fast to deploy – for rapid iteration and rollout
• Easy to repeat – to support continuous testing
• Built to scale – suitable for enterprise environments
To finish, Richard shared some useful templates to put his advice into context, such as building test user inputs to highlight any weaknesses in key brand safety areas. He also gave some practical examples on crafting manipulative and psychological prompt injections to get AI to break its own safety rules (‘jailbreaking’), enabling comparison among different AI models.
Key takeaways
Following both presentations, attendees had the opportunity to ask questions and network in the room. The topic raised many concerns, but also ideas, giving everyone the chance to share challenges and solutions.
There were a lot of important points raised by both our expert speakers, with some key takeaways:
- Have built-in legal compliance from the beginning
- Be transparent and able to explain how the AI reaches its decisions
- Keep a human in the loop, monitoring the system and its outputs
- When buying AI tools, make sure to have a robust contract
- Don’t underestimate the importance of looking at risk – outsource if you need to
- Build guardrails, but don’t forget that they’re not infallible
- Robustly test and implement adversarial training to push AI to its limits
If you’re interested in coming to our next event or finding out more, please join our Hampshire AI LinkedIn group.