Hampshire AI January 2026 - Advancing AI Mobility
29 Jan 2026
Hampshire AI – Advancing AI Mobility: One Year of Conversation, Trust and Autonomy
Hampshire AI events have a habit of creating conversations that continue long after the last slide has been shown. January’s session felt particularly special. Not only did it bring together students, engineers, policymakers and business leaders to explore autonomous technology, it also marked one year since Hampshire AI began.
With almost 100 people filling the room, the anniversary event was both a celebration and a moment of reflection. The focus was AI-driven mobility, but not in the abstract. Instead, the evening explored how autonomous systems already operate in the physical world, how they are governed, and how trust is built when software begins to make decisions that affect people’s safety.
Two speakers led the discussion. Edward Anastassacos, from SkyBound, opened the evening by exploring how autonomous intelligence is already being deployed through drones and aerial systems. Dirk Gorissen, from Wayve, followed with a deep dive into end-to-end AI for self-driving vehicles. What followed was not a one-way presentation, but a genuine exchange, shaped just as much by audience participation as by the talks themselves.
From Raw Footage to Meaningful Insight
Edward began by grounding the conversation in a challenge that many organisations recognise immediately. Vast amounts of visual data are being captured every day, yet turning that data into something useful, searchable and trustworthy remains difficult.
His talk focused on how autonomous drone systems can transform raw video and metadata into structured intelligence. Rather than relying on a single all-purpose model, Edward explained that real-world systems benefit from separation and specialisation. As he put it, “the key is a modular, layered architecture”, where some perception and real-time processing is developed in-house, while other components draw on open-source or pre-trained foundation models.
Under the hood, this architecture allows perception, tracking and semantic reasoning models to be trained, validated and updated independently, without destabilising the wider system.
This approach allows systems to scale, adapt and evolve without becoming brittle. Detection, tracking, localisation and semantic analysis can each improve independently, while still contributing to a coherent understanding of events over time.
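As a rough illustration of that layered separation (the stage names and outputs below are invented for this sketch, not SkyBound’s actual components), each layer can implement a common interface, so any one stage can be retrained or swapped without touching the others:

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Frame:
    """One video frame plus the metadata the pipeline accumulates."""
    timestamp: float
    detections: list = field(default_factory=list)
    tracks: list = field(default_factory=list)
    events: list = field(default_factory=list)


class Stage(Protocol):
    def process(self, frame: Frame) -> Frame: ...


class Detector:
    """Perception layer: finds objects in the raw frame."""
    def process(self, frame: Frame) -> Frame:
        frame.detections = [{"label": "vehicle", "box": (10, 20, 50, 60)}]
        return frame


class Tracker:
    """Tracking layer: links detections into tracks over time."""
    def process(self, frame: Frame) -> Frame:
        frame.tracks = [{"id": 1, "label": d["label"]} for d in frame.detections]
        return frame


class SemanticAnalyser:
    """Reasoning layer: turns tracks into searchable events."""
    def process(self, frame: Frame) -> Frame:
        frame.events = [f"track {t['id']}: {t['label']} observed" for t in frame.tracks]
        return frame


class Pipeline:
    """Stages are independent, so each can be validated and updated
    on its own without destabilising the wider system."""
    def __init__(self, stages: list[Stage]):
        self.stages = stages

    def run(self, frame: Frame) -> Frame:
        for stage in self.stages:
            frame = stage.process(frame)
        return frame


pipeline = Pipeline([Detector(), Tracker(), SemanticAnalyser()])
result = pipeline.run(Frame(timestamp=0.0))
```

Because each stage only depends on the shared `Frame` contract, upgrading the detector (say, to a new foundation model) leaves the tracker and analyser untouched.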

Edward shared examples of the kinds of questions clients now expect AI systems to answer. These were not technical queries but operational ones: identifying a trespasser in a restricted railway zone weeks after the event, tracking the movement of a specific vehicle across multiple locations, or analysing traffic flow across several roads over an extended period.
What makes this possible is a layered intelligence pipeline. Video and sensor data feed into machine learning models that reason about behaviour, movement and context. Edward described these as agentic classifiers, systems that move beyond static labels to understand how entities behave and interact.
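A toy example makes the distinction concrete (the labels, radius threshold and logic here are invented for illustration): a static classifier labels each frame, whereas a behavioural classifier reasons over an entity’s whole trajectory.

```python
import math


def classify_behaviour(track, loiter_radius=5.0):
    """Classify a trajectory rather than a single frame.

    track: list of (x, y) positions sampled over time.
    Returns "loitering" if the entity stays within `loiter_radius`
    of its average position, otherwise "transiting".
    """
    cx = sum(x for x, _ in track) / len(track)
    cy = sum(y for _, y in track) / len(track)
    max_deviation = max(math.hypot(x - cx, y - cy) for x, y in track)
    return "loitering" if max_deviation <= loiter_radius else "transiting"


# A target circling one spot vs one crossing the scene
circling = classify_behaviour([(0, 0), (1, 1), (0, 2), (-1, 1)])   # "loitering"
crossing = classify_behaviour([(0, 0), (20, 0), (40, 0), (60, 0)])  # "transiting"
```

A frame-by-frame model would see only “person” in both cases; reasoning over the track is what lets the system answer questions like “who lingered in the restricted zone?”.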
This led naturally into a discussion about scale. Operating one drone per operator limits impact. Operating many autonomous drones under a single control framework changes what is possible. Edward explained how shared situational awareness and coordinated autonomy allow systems to act collectively, particularly in use cases such as search and rescue or large-scale infrastructure monitoring.
Simulation also plays a crucial role. Before deployment, models are trained and tested in synthetic environments designed to surface edge cases safely. This reduces risk while expanding coverage, especially for rare or dangerous scenarios that would be difficult to capture in the real world.
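The value of simulation is easiest to see in a sketch (the scenario generator and the deliberately imperfect stand-in model below are invented for illustration): synthetic scenarios let you hunt for rare, dangerous failures without ever putting them on a real road.

```python
import random


def synthetic_scenarios(n, seed=0):
    """Generate toy edge cases: a pedestrian appearing at a random
    distance, sometimes dangerously close."""
    rng = random.Random(seed)
    return [{"pedestrian_distance": rng.uniform(0.5, 50.0)} for _ in range(n)]


def model_brakes(scenario):
    """Stand-in for the model under test, with a deliberate flaw:
    it only brakes inside 3 m, not the 5 m the spec requires."""
    return scenario["pedestrian_distance"] < 3.0


# Surface the dangerous cases the model misses, safely, in simulation
failures = [s for s in synthetic_scenarios(1000)
            if s["pedestrian_distance"] < 5.0 and not model_brakes(s)]
```

Running thousands of cheap synthetic scenarios exposes exactly the 3–5 m gap; collecting the same evidence from real footage would be slow and risky.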
From Skies to Streets
Dirk’s session shifted the focus from aerial systems to roads, where autonomy is more visible, more regulated, and more closely tied to public trust. He described how autonomous driving has evolved from heavily rule-based systems towards end-to-end learning approaches, where a single model learns directly from large volumes of real-world driving data rather than relying on hand-engineered rules.
In these systems, perception, prediction and planning are no longer treated as separate stages stitched together through rigid interfaces. Instead, raw sensor inputs such as camera images, radar returns and vehicle state are mapped directly to driving actions. This reduces architectural complexity, but places greater emphasis on data quality, validation and safety boundaries.
Rather than encoding behaviour explicitly, the model learns how to drive by observing diverse environments and scenarios. This allows vehicles to generalise across cities, vehicles and conditions, including environments they have never encountered before. Dirk stressed that this ability to generalise is essential if autonomous systems are to move beyond tightly constrained test settings and operate safely in the real world.

Dirk also touched on how safety is addressed in learning-based systems. Rather than relying solely on predefined rules, systems continuously monitor sensor health, operational limits and system confidence. If conditions move outside safe boundaries, behaviour is constrained or control is handed back. This layered approach allows learning-based autonomy to operate within clearly defined safety envelopes.
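One way to picture such a safety envelope (the thresholds and states below are invented for illustration, not Wayve’s actual limits) is a supervisor that sits alongside the learned policy and decides how much freedom it currently has:

```python
def safety_supervisor(sensor_health, confidence, speed,
                      min_health=0.8, min_confidence=0.6, max_speed=30.0):
    """Toy safety envelope: the learned policy acts freely only while
    the system is inside its envelope; otherwise behaviour is
    constrained or control is handed back."""
    if sensor_health < min_health:
        return "hand_back_control"      # critical degradation: stop deferring to the model
    if confidence < min_confidence or speed > max_speed:
        return "constrain_behaviour"    # stay in charge, but restrict manoeuvres
    return "nominal"


state = safety_supervisor(sensor_health=0.95, confidence=0.9, speed=15.0)
```

The supervisor itself is simple and auditable, which is the point: the hard-to-verify learned component operates only inside boundaries that a simple component enforces.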
When an attendee raised concerns about accuracy and model reliability for safety-critical applications like driving, Dirk did not offer simple assurances. Instead, he reframed the issue by explaining that “there is no such thing as a perfect system”. What matters, he said, is how quality is defined, and that this is “as much a philosophical and ethical question as it is a technical one”. The real goal is robust design, strong oversight and continuous improvement.
This led into a broader discussion about responsibility and regulation. Dirk explained that his organisation works closely with regulators at both UK and international levels, including involvement in United Nations policy discussions and legislative drafting. He noted that many existing frameworks assume deterministic, rule-based systems, which do not map neatly onto learning-based models. Regulation, he argued, must evolve alongside the technology, particularly in how assurance and validation are approached.
Questions around robustness followed, particularly around sensor failure. Dirk was clear that “autonomous systems should never fail catastrophically”. Sensor health is continuously monitored, and when critical components degrade, the system adapts its behaviour. Certain manoeuvres may be restricted or disabled, and fallback strategies ensure safe operation. Redundancy and self-monitoring are core design principles.
Dirk concluded by touching on vehicle-to-vehicle communication. While it may support high-level coordination and longer-term optimisation, he cautioned that safety-critical decisions must always be made locally. Vehicles must be able to operate safely and independently, regardless of connectivity or infrastructure availability.

Privacy, Governance and Public Trust
With a room full of engaged attendees, questions around privacy and consent quickly emerged. One audience member raised concerns about capturing identifiable data through drones, including faces, pedestrians and workers, and asked how this aligns with GDPR and the risk of future re-identification.
Edward emphasised that GDPR compliance is central to how these systems are designed, particularly in the UK and Europe. Governance and data minimisation, he explained, must be treated as first-order design concerns rather than afterthoughts.
Dirk reinforced this perspective from the automotive side, stating clearly that “we strongly support GDPR. Techniques like automatic blurring and strict data retention policies are essential”. He also noted that data ownership often rests with clients, and that systems must be designed to minimise personal data use wherever possible.
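A minimal sketch of what a retention policy looks like as a first-order design concern (the 30-day window and record fields are invented for illustration, not a quoted policy from either speaker):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # illustrative window, not an actual policy figure


def enforce_retention(records, now=None):
    """Toy retention sweep: drop stored clips older than the policy
    window, so personal data is kept only as long as it is needed."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["captured_at"] <= RETENTION]


now = datetime(2026, 1, 29, tzinfo=timezone.utc)
records = [
    {"id": "clip-a", "captured_at": now - timedelta(days=5)},
    {"id": "clip-b", "captured_at": now - timedelta(days=90)},
]
kept = enforce_retention(records, now=now)
```

Running a sweep like this on a schedule makes “strict data retention” an enforced property of the system rather than a promise in a policy document.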
Public trust and industry adoption were recurring themes. Dirk acknowledged that “public trust is fragile”, and that a single high-profile incident can damage confidence across the wider industry. For this reason, safety cannot be treated as a purely technical concern. Functional safety, operational safety and clear communication with the public all play a role in building trust.
Scaling Intelligence Through Coordination
Another audience question explored how autonomous systems coordinate and share information. Edward described how moving from one drone per pilot to many drones per operator dramatically increases efficiency. In these systems, autonomous agents share environmental state, mission progress and situational awareness.

This combination of peer-to-peer coordination and central orchestration allows systems to adapt dynamically while remaining observable and controllable by humans. The emphasis, Edward stressed, is not on removing people from the loop, but on enabling them to operate at a higher level.
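A toy version of that orchestration layer (the class names and data shapes are invented for illustration) shows the core idea: each agent keeps a local view, and a central layer merges those views into shared situational awareness that a human operator can inspect.

```python
class Drone:
    """One autonomous agent with its own local view of the world."""
    def __init__(self, name):
        self.name = name
        self.seen = set()

    def observe(self, target):
        self.seen.add(target)


class Orchestrator:
    """Central layer: merges every agent's local view into one shared
    picture, keeping the fleet observable by a human operator."""
    def __init__(self, drones):
        self.drones = drones

    def shared_picture(self):
        picture = set()
        for drone in self.drones:
            picture |= drone.seen
        return picture


alpha, bravo = Drone("alpha"), Drone("bravo")
alpha.observe("vehicle-1")
bravo.observe("vehicle-2")
picture = Orchestrator([alpha, bravo]).shared_picture()
```

Each drone only ever saw one target, yet the merged picture contains both, which is what lets a single operator supervise many agents at once.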
Independence, Coordination and Latency
The final audience discussion explored vehicle-to-vehicle communication and whether latency makes it impractical for split-second safety decisions. Dirk agreed that while such communication may support high-level coordination, it is not suitable for real-time safety-critical actions.
This contrasted with Edward’s drone use cases, where coordination already delivers tangible benefits without introducing unacceptable risk. Together, the talks highlighted that autonomy is not a single pattern, but a spectrum shaped by context, consequence and responsibility.
One Year In, and Looking Ahead
What made this Hampshire AI event stand out was not just the technical depth, but the quality of the conversation. With nearly 100 people in the room, the one-year anniversary felt like a milestone not only in numbers, but in maturity.
Practitioners shared real-world constraints. Speakers responded with honesty rather than certainty. From drones interpreting the world from above to vehicles navigating busy streets, the message was consistent. Autonomous systems are already here. Their success depends on trust, transparency and ongoing dialogue.
As Hampshire AI enters its second year, this event served as a reminder of why the community exists in the first place. Not to provide easy answers, but to ask better questions together.

If you’re interested in coming to our 2026 events or finding out more, please join our Hampshire AI LinkedIn group, and we hope to see you soon.