
Hampshire AI September 2025 - When Creativity Meets Code
06 Oct, 2025 · 12 minutes
When creativity meets code – working in collaboration with AI
Our Hampshire AI networking group is going from strength to strength, and our September meet-up was the most thought-provoking yet. With engaging talks from the speakers and insightful questions from the audience, we spent the evening debating how we work ethically and effectively alongside both human creativity and machine intelligence.
We were joined by Professor Thomas Irvine, Deputy Director of the Web Science Institute and Lecturer at the University of Southampton. The Web Science Institute studies the science of the sociotechnical – how technology interacts with society – by bringing together STEM, the social sciences, arts and humanities to leverage online technologies and AI in tackling global challenges. The Institute also coordinates the AI@Southampton initiative, which unites AI work from across the University’s departments. One of its early projects is dubbed an ‘AI Driving Licence’ for all University of Southampton graduates, covering context, background and theory, critical competence and creative innovation.
We were also joined by Vinh-Dieu Lam, a Software Engineer with over 20 years’ experience creating interactive CGI/VFX digital worlds, currently working at Odyssey, an AI lab pioneering interactive video models. Vinh-Dieu has a wealth of experience building immersive, interactive and engaging worlds for entertainment. He previously worked at Ubisoft Shanghai as an AI Engineer, moved to Creative Assembly as a Gameplay Programmer and then to Wayve as a Tech Lead, before landing in his current role at Odyssey.
Their presentations explored the evolving skillsets needed to thrive in the modern world, highlighting how we can harness AI responsibly: not to replace creativity, but to enhance and amplify it.
AI-generated music
Alongside his work with the Web Science Institute, Professor Thomas Irvine is Head of Music at the University of Southampton. The use of AI in music generation has been the source of contentious debate in the media of late, so his talk on ‘Adventures in Responsible Music AI’ was very topical.
He asked the audience to listen to two musical extracts, the result of asking an AI music generator to create some ‘country music songs about machine learning’. On first listen, they were impressive, drawing on what the machine knows about popular country music and what listeners engage with most.
But, as Thomas pointed out, what now? With the capability to generate passable musical riffs at the touch of a button, this could be bad news for musicians, as businesses shortcut the cost and effort of working with a human creative. And because AI pulls from existing source material, there are concerns around the violation of intellectual property rights, not to mention the environmental cost of the massive energy use behind models capable of generating complex music. We also risk the loss of true creativity: AI-generated music could proliferate, leaving little room for original sound.
It certainly threw up a lot of talking points among our audience. One attendee asked about AI in creative fields, like music, and whether the output was intelligent or just statistical, to which Thomas replied that it is not inherently creative. “These systems mirror human traditions; they don’t invent them. Take jazz, for example. A jazz musician spends hours practising scales, patterns and phrasing. In performance, they guess the next note, and sometimes that guess surprises even them. That spark, the right thing at the right time, is creativity. AI can’t do that. It generates patterns based on probabilities.” He went on to explain that AI uses what it knows to assemble a pattern, but it won’t invent something unexpected. “The ‘hallucinations’ AI produces are gaps in its knowledge, not flashes of inspiration. True improvisation still belongs to humans.”
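To make the “patterns based on probabilities” point concrete, here is a minimal toy sketch (our illustration, not the system Thomas demonstrated): a tiny Markov chain over note names that always samples the next note from transition probabilities learned from existing material, so it can only ever recombine what it has already heard.

```python
import random

# Toy transition probabilities "learned" from existing tunes (made-up numbers).
# Each note maps to how likely each other note is to follow it.
transitions = {
    "C": {"E": 0.5, "G": 0.3, "A": 0.2},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.5, "A": 0.3, "E": 0.2},
    "A": {"G": 0.7, "C": 0.3},
}

def generate_melody(start: str, length: int) -> list[str]:
    """Sample a melody note by note from the learned probabilities."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions[melody[-1]]
        next_note = random.choices(list(options), weights=list(options.values()))[0]
        melody.append(next_note)
    return melody

print(generate_melody("C", 8))  # e.g. ['C', 'E', 'G', 'C', 'A', 'G', 'C', 'E']
```

However plausible the result sounds, every note is just a likely continuation of the previous one; nothing in the loop can deliberately break the pattern the way an improvising musician can.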
Another question concerned copyright and how AI fits into that debate. “The copyright system we have wasn’t built for AI,” said Thomas. “Current debates often focus on protecting established stars like Paul McCartney or Elton John, but they’re not the ones at risk. The real concern is for the ‘next Paul McCartney’, the unknown artist trying to break through today. Copyright has always favoured platforms and a small group of successful artists, often at the expense of everyday musicians. Streaming services like Spotify grew out of these systems, and many musicians barely earn a living. If we simply extend existing copyright rules to AI, we risk reinforcing inequalities rather than addressing them. The real challenge is making sure new talent still has space to thrive in an industry increasingly shaped by automation.”
Thomas challenged the audience to ask questions and take responsibility for how Gen AI models are used. Gen AI offers huge opportunities for the creative and cultural industries, but how can we collaborate with it responsibly? Referring to the Edinburgh Declaration on Responsibility for Responsible AI, he suggested we all need to be asking who accepts responsibility for Gen AI music, how we identify and attend to those who are vulnerable, and how we think about the pace of innovation.
AI as a creative partner
Our second speaker of the night, Vinh-Dieu Lam, focused on ‘Generating Fun and Art: AI as a creative partner’. He opened his talk by asking the audience to think about how much of what we already see is real versus computer generated, highlighting examples from popular films such as Toy Story and Avatar to TV shows like Doctor Who. He also talked about the work of director Ridley Scott on his film Napoleon, which aimed to use ‘invisible’ CGI and VFX to be as authentic as possible. In an interview for the film’s release, Scott said: “It’s all real. When you are using CGI and AI, the audience can tell it is fake. All of this is real shooting.” The film used 3D scanning and 3D assets to subtly enhance scenes shot on set. These examples showcase digital technology and human creativity working together to produce high-quality, engaging results.
Vinh-Dieu moved on to discuss generative AI and the creation of entirely new content. He explained that generative AI works beyond 2D and 3D, operating across millions of dimensions at once: “It’s like reading many maps of the same place and trying to work out the best route between two points on all of them at once”. He then worked through a showcase of Gen AI capabilities: from a simple photo edit that retains the authenticity of the original image by intelligently filling in the gaps left behind, to bringing in entirely new objects and elements, manipulating facial expressions and changing the environment. Similarly, he showcased Gen AI video generation, creating videos from still images and incorporating trajectory control. The results are fun, designed to entertain, with the audience fully aware of the role of AI in their creation.
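As a rough illustration of the “filling in the gaps” style of edit described above, the sketch below uses an off-the-shelf diffusion inpainting pipeline from Hugging Face’s diffusers library. The model ID, prompt and file names are our assumptions for the example; the talk did not specify which tools were used.

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image
import torch

# Load a publicly available inpainting model (assumed choice for this sketch).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Original photo plus a mask: white pixels mark the region to regenerate.
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# The model fills the masked gap so it blends with the untouched pixels,
# which is what keeps the edit feeling authentic to the original image.
result = pipe(
    prompt="empty park bench, same lighting as the surrounding scene",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("edited.png")
```

Changing only the prompt moves the same pipeline from removing an object to introducing an entirely new one, which mirrors the progression Vinh-Dieu showed.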
Vinh-Dieu also highlighted Runway’s Act-Two generative video tool as an example of using AI as a collaborative partner in creation. It’s a tool that “allows you to animate characters using driving performance videos. By providing a driving performance of someone acting out a scene and a character reference (image or video), Act-Two transfers the movement to your character, bringing them to life with realistic motion, speech, and expression.” This level of human interaction gives a sense of authenticity to the generated content.
Odyssey is focused on the future of generative AI and the way we engage with it. It has launched a unique project, Odyssey World, which offers real-time AI video generation you can interact with and explore. “We call this interactive video – video you can both watch and interact with, imagined entirely by AI in real-time. It's something that looks like video you watch every day, but which you can interact and engage with in compelling ways.”
The future, then, is using AI as a creative partner rather than just a tool: blending real and AI elements while retaining the authenticity that human viewers crave. The future of entertainment, according to Vinh-Dieu and Odyssey, lies in AI-enabled, interactive, immersive experiences.
Audience participation
The two speakers provided different, but equally compelling and complementary, points. On one hand, we need to be careful with Gen AI-created content, being aware of governance and responsibility, and protecting human creativity. On the other, if we use Gen AI in the right way, we can build powerful collaborative creative works that challenge and engage audiences.
It raises many questions for debate, and the speakers opened the floor to attendees to ask away. One person asked how you can control the output of AI-generated content, compared with the fine-grained control of coding directly. “That’s exactly the challenge,” explained Vinh-Dieu. “With code, you can build complex systems quite directly. But with AI, especially in video or games, control requires far more data and much clearer structures. You need to show the system examples of what control looks like, and in some cases, you might need a second AI to manage the first. We’re not there yet. At the cutting edge today, we’re still trying to help AI learn how the physical world works – for example, whether an object falling onto a trampoline bounces as expected. Once systems learn those basics, we can start layering controls. There’s also exciting research in physics simulations that might help us understand control better.”
Another attendee was curious about how the quality of AI results is evaluated, whether through technical benchmarks or human evaluation: “Both,” answered Vinh-Dieu. “For training, you need clear measures of performance. Sometimes that’s simple, like checking whether a video frame at five seconds still looks like the original, or whether a model told to ‘move forward’ actually produces forward motion. Other times, it involves a secondary model that checks if an output matches the instruction, for example, whether an image really contains a cat. But human evaluation remains essential, especially in areas like music.”
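Vinh-Dieu’s example of a secondary model checking whether an output matches the instruction (for instance, whether an image really contains a cat) can be sketched with an off-the-shelf image-text model. The snippet below is purely illustrative and assumes CLIP via Hugging Face’s transformers; it is not a description of the tooling his team actually uses.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A general-purpose image-text model acts as the automated "checker".
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated.png")  # hypothetical output from a generative model
labels = ["a photo of a cat", "a photo with no cat in it"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
cat_score, no_cat_score = probs.tolist()

# A clear win for the "cat" description counts as matching the instruction;
# anything ambiguous would be passed on for human evaluation.
print(f"contains a cat: {cat_score:.2f}, no cat: {no_cat_score:.2f}")
```

Automated checks like this scale to millions of outputs, while humans still judge the qualities, such as musicality, that no classifier can score.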
Of course, a key concern is governance, and who is responsible for monitoring AI-generated content. “Right now, it’s mostly self-regulation,” said Vinh-Dieu. “Companies like Google have experimented with hidden watermarks in generated images. OpenAI is working on similar systems. These watermarks matter, especially with lawsuits over training data and provenance. If models train on AI-generated content without safeguards, quality will quickly degrade; it becomes a feedback loop. So, companies have strong incentives to track which outputs are synthetic and which are real.”
Wrapping up
There’s so much still to discuss, but the evening concluded with a reminder that AI mirrors human culture but does not originate it. These key questions around control, evaluation, governance, creativity and copyright are all interconnected. As AI grows and evolves – which it will – it’s on everyone involved to consider the legal frameworks and regulation needed to ensure fairness and opportunity for future creators, while not limiting innovation and collaboration.