
Making 'UnderWorld' - Notes from the Underside

LATEST NEWS

I recently shared a short film I’d made using AI tools: 'Underworld', a 90-second tale about a mouse beneath the stage of the Cavern Club, Liverpool, in 1962, as The Beatles played their first televised gig. It has been, to my surprise and delight, very warmly received, so thank you to everyone who watched, commented or passed it on. I wanted to share a little about what I learned in the making of it. Not just about the tools, but about storytelling, time, tone and consistency, and how AI might fit (or not fit) into all of that.

1. The importance of the idea

 

There are now many ways to generate images. But what still seems to be needed is a cohesive idea that holds it all together. To make my learning process more enjoyable (because I’d been warned it was likely to be frustrating), I wanted to see if I could tell a quirky story - something that wasn’t too ambitious but felt internally consistent, set within the confines of a single location.

The idea of the mouse came to me after attending a conference on AI last year, when I noticed a tiny creature running in and out of a gap in the curtain underneath the speakers’ podium. That particular mouse felt symbolic somehow - a symbol of humanity’s vulnerability in the face of the overwhelming potential of this revolutionary tech.

As far as The Beatles connection was concerned, anyone who knows me is only too eye-rollingly aware of how endlessly inspiring I find them - their music, their story and their phenomenon have kept me fascinated for much of my life. Putting the mouse and The Beatles together was easy as soon as I thought about them in the damp basement club, called the Cavern, where they rose to fame - it seemed like a very plausible location for a mouse to live. I had the makings of a concept with a satisfying internal logic. I also thought it would be really interesting to try to recreate a specific place at a specific time.

2. Building worlds requires constraints

 

I had to lock the camera POV early. The mouse’s perspective kept the world small, which helped me manage continuity and complexity. AI seems to love to drift — across lighting, texture, architecture — so keeping the world really tight made it easier to control. Ironically, the limitations became the style.

 

 

3. A variety of tools and techniques were needed

 

I used a combination of AI platforms for image generation, upscaling, motion and grading. Nothing did the whole job. Every shot was tweaked, stitched or masked in Resolve. AI helped me get 80% of the way to a moment, but it still needed a lot of input to make it work - which is reassuring, I suppose.

My working method was to fully brief ChatGPT 4 on the whole idea: the location and scenario (inputting photographic references of the specific occasion on which The Beatles were filmed there) and the period and look (again inputting specific references, e.g. images of Vox amps, period mic stands, Cavern Club details from the era, etc.).

 

*Side note - I got fellow filmmaker Simon Weitzman, who, as luck would have it, had a meeting at the current Cavern, to take some ultra-low-angle stills of the stage. It turned out, though, that they weren’t that useful - too many non-period details that wouldn’t work.

Final thought

 

I’m going to continue to explore how these tools can support human-centred storytelling - and where their limits lie (maybe with more mouse adventures). If you’re working in branded content or music storytelling, or perhaps need a reconstruction sequence for a documentary, and want to talk about how this might apply to your project, let's chat.

 

Here’s the link to Underworld:

 

https://lnkd.in/ebbZb9KK

 

 

Thanks for watching!

I then built a rough storyboard using the best ChatGPT images and refined them until I thought they were good enough to use as initial frames for each shot. The animations themselves were generated in Runway Gen 4. Many takes were needed for each shot - sometimes as many as 20.

 

It was a good job I’d gone for the ‘Unlimited’ package - the number of credits bundled with the next tier down wouldn't have been nearly enough. I also discovered that copious patience was going to be key. The 2,500(ish)-credit allocation that comes with Unlimited gets burnt through very quickly, and then you’re on the slow boat: generations go into a queue, two at a time, and it typically takes 10 to 15 minutes before the generating even starts.

 

I had to keep reminding myself, though, that getting a real mouse to perform on camera would have been pretty tricky too, so it was worth persisting with wrangling the AI version.

 

 

4. The cute factor

 

I think what made people respond so positively wasn’t the fact that it was AI-generated. It was the twitch of a whisker-covered nose, the ruffled fur and the determined focus of a creature in a world too big for it. After a bit of refining of his look, the mouse turned out to be pretty cute. I had made one rather fundamental error, though... my critter was about 50% too big (according to my mouse-expert wife). I’d finished my first rough cut when she pointed that out, and changing it would have meant starting again - I couldn't face that prospect. It's alright, though: he’s a partially giant breed found only in Liverpool! 🫤

 
