
Neural Viz: Filmmaker Josh Crafts AI’s First Cinematic Universe with Midjourney, ElevenLabs and Runway

10/28/2025

Josh Kerrigan shapes stubborn neural creatures from his Los Angeles desk, coaxing Tiggy into life until a glitch suddenly appears…


Neural Viz has become the most ambitious cinematic universe to come out of the age of AI. The person behind it is Josh Wallace Kerrigan.

Kerrigan’s work often begins with a small, stubborn problem. In one scene he wanted Tiggy, a squat brown alien, to tilt a little and look toward the camera. Tiggy sat in the passenger seat of a police cruiser, but the creature refused to cooperate. At first the gaze shifted a hair. Then Tiggy faced the wrong side of the lens. At one point the creature’s skin went blotchy, like overripe fruit.

This was not happening on a soundstage or a backlot. Kerrigan sat at his Los Angeles computer, sipping iced coffee, and ran FLUX Kontext to generate and rework images until one finally matched what he had in mind. He’d created the very first image of Tiggy with Midjourney (prompt: “fat blob alien with a tiny mouth and tiny lips”), used ElevenLabs to shape the character’s voice (Kerrigan layered his own speech with a synthetic timbre and pushed the pitch up), and leaned on Runway to turn an exact shot description into moving images (“close up on the little alien as they ride in the passenger seat, shallow depth of field”).

The filmmaker kept chasing usable frames as the software made odd choices. Tiggy would look unnaturally muscular in one render; in another, the skin on his back would turn oddly dry. When Kerrigan asked for “frog-like skin” on the back of Tiggy’s head, the generator pasted a frog’s face instead. The system also balked at showing Tiggy without clothing; Tiggy is unclothed in the fiction, but a prompt for a “short shirtless alien” returned an error. “I said the word shirtless,” he guessed, figuring the term had tripped a content filter.

Talk about AI often falls into extremes: doom or hype. Watching Kerrigan work felt neither apocalyptic nor glib. It often resembled puppy training: gentle persistence, repeated corrections, occasional chaos. The tools misread directions, made bizarre substitutions, or wandered off on tangents. Still, with patient iteration he assembled eight minutes of tightly scripted original television.

Those eight minutes were the newest episode in a sci-fi universe Kerrigan runs under the Neural Viz name. The venture began in 2024 with a mockumentary web series called Unanswered Oddities, a talking-head program set in a future where Earth is populated by creatures called glurons. The characters perform Ancient Aliens–style speculation about the vanished humans, mispronouncing “human” as “hooman.” Each episode riffs on a misread corner of human culture—America, exercise, the NFL—and at first the series felt like a clever, self-contained gag.

The world expanded fast. Neural Viz began issuing different programs from the same gluron network, Monovision: a documentary-style cop show, a UFC-like fighting-bug series. Podcasts and vox-pop street interviews followed. Subplots grew between videos: romances and cults surfaced, and grainy archival footage hinted at how the humans disappeared. Kerrigan built not only characters and a language but a broader mythology, all created with generative tools.

The channel first caught on with Reddit and AI enthusiasts on Twitter, then reached wider audiences: individual clips amassed hundreds of thousands of views on YouTube and millions on TikTok and Instagram. More than popularity, Neural Viz marks a milestone in computer-assisted filmmaking: it shows that AI-generated video can be deliberate and thoughtful rather than merely bizarre or lazy. The usual mental images of AI video—hippos on diving boards, babies flying planes, politicians up to ridiculous antics—are typically slapdash. That trove of low-effort clips has hurt public perceptions of the format and fueled worries that bots will hollow out film work and further numb viewers.

Kerrigan’s channel models a different approach. He writes full scripts in a traditional manner even as he uses prompts to fill nearly every production role, and he performs the parts himself, using AI as a disguise. Once he frames his shots, he employs Runway’s facial motion-capture tools to animate Tiggy, delivering the lines in front of his camera the way Andy Serkis used his body to create Gollum—only Kerrigan never leaves his swivel chair.

There’s a parallel to how Trey Parker and Matt Stone remade cartoons by embracing cheap production methods; Kerrigan has taken a technology many dismiss and pushed it toward intentional storytelling. Some have started calling him the first AI auteur.

He kept his identity mostly hidden at first. Now he is public.

Josh Wallace Kerrigan is the youngest of three brothers and grew up outside Wichita Falls, Texas, watching movies like Tremors and Jurassic Park. At nine or ten he and a friend used the video camera mounted on his desktop to make a short about a killer baseball player—tagline: “Three strikes, you’re out.” He studied film at Minnesota State University Moorhead and moved to Los Angeles after graduating in 2012.

Kerrigan then followed a familiar early-career path for comedic filmmakers in the 2010s. He held a series of day jobs: barista at “a Starbucks inside a Target,” assistant to a director who cowrote Neighbors, and producer of behind-the-scenes and promotional features for films such as Mufasa: The Lion King and the John Cena–Awkwafina comedy Jackpot!. He formed a sketch group called Hush Money and for a year produced weekly videos that ran on Funny or Die; the troupe’s genre satires included a Saw parody that director James Wan praised. In 2021 Kerrigan directed a low-budget horror feature and sold a TV pilot to Disney.

He accumulated many on-set skills—cinematography, lighting, sound—yet struggled to build a lasting foothold. The pandemic and the streaming market contraction altered the traditional writer’s route to steady work. Writers’ rooms shrank, strikes halted production, and the contracts that followed reflected a smaller pool of revenue and anxiety about AI’s encroachment.

Kerrigan began experimenting in 2023 with Blender and Unreal Engine. He wanted to make animation, with recurring characters and environments he could revisit. Then he discovered generative AI tools such as Midjourney and Hedra, which sped up the tedious parts of 3D modeling.

Where many newcomers fling the wildest prompts at a generator—dragons in space, crying kittens—Kerrigan took a study-first approach. He examined the software’s weaknesses and built around them. He noticed AI does best on talking heads but struggles with action sequences, so he chose a documentary style. To avoid the uncanny valley of simulated humans he selected bulbous alien figures. To hide rendering flaws he favored the grainy look of late-20th-century TV. The result was Unanswered Oddities, with its deadpan nod to Unsolved Mysteries.

Early episodes look rough, but they established a skewed comedic tone and set up the Monoverse’s main beats: the Monolith, an autocratic godlike structure; a Resistance movement bent on toppling it; and Tiggy Skibbles, a fast-talking conspiracy theorist who claims “hoomans” never existed—then goes missing.

For Kerrigan, the arrival of generative tools felt empowering. “The first time you start to see those weird creatures talking and whatnot, it is pretty mind-blowing,” he says. He compared it to being the quiet person at a party watching others dance and thinking, They don’t know.

Redditors and other creators applauded Neural Viz’s decision to lean into the quirks and glitches of AI. Some fans guessed at the channel’s authorship; “I thought he was Mike Judge hiding under a pseudonym,” says Zack London, who publishes AI videos as Gossip Goblin and has over a million followers on Instagram.

Buoyed by that reaction, Kerrigan made more episodes without a master plan. “There was no plan,” he says, which helped him work anonymously. He tried multiple formats, applying his taste for genre satire and an urge to remain creatively engaged. The Cop Files arrived as an X-Files–meets-Cops spin-off in which a detective probes Tiggy’s disappearance; Human Hunters came next, a spoof of ghost-investigation shows.

The series evolved as new software emerged. Kerrigan tried many of the latest apps to broaden his options and attract tech-interested viewers. At first he would record snippets of dialogue into a microphone and let the AI roughly sync mouth flaps and basic facial moves. That granted some control over performance but not a lot. Runway’s October 2024 release of Act-One changed that: Kerrigan could perform lines in front of his webcam and have the software map his facial and vocal delivery onto a character model. That gave him far greater control and made the work feel more personally authored. The trade-off is that the characters began to look more like him; Kerrigan says he’d like to bring in other actors, but playing every part solo is more efficient.

New tools opened doors for storytelling choices. When Google’s Veo 2 video generator arrived, Kerrigan used it for a flashback depicting the Monolith wiping out humanity—his first cinematic narrative sequence. The Cop Files shifted from monologues aimed at the camera to scenes where characters interacted, moved through space, and went on missions.

Software limitations even shaped the fiction. An episode in April features Tiggy with unusually smooth skin because Sora, the video generation tool Kerrigan was using, struggled at the time to keep the character model consistent. Kerrigan had Tiggy explain that he was “metamorphosizing” after losing access to “morph inhibitors.” The gag dovetailed with a show theory that glurons are mutated descendants of humans. Since then “morph inhibitors” has become an inside joke across episodes.

Glitches often yielded gifts. Reester Pruckett, a knife-obsessed gluron rancher patterned after Southern characters Kerrigan knew growing up, developed a habit of beginning lines with an unbearably long vowel—“Iiiiiiiiiiiiiiiiiiiiiii came out here to practice my switchblade.” That tic came from a software glitch and was so amusing Kerrigan kept it as Pruckett’s signature cadence.

By late 2024, studio executives began direct-messaging Kerrigan. He spoke with “almost all of the major studios,” as well as producers and creators who explored possible collaborations. Fans suggested Adult Swim as a fit. When Kerrigan met with people tied to Adult Swim, one of them observed that creators now have more power. “That sentiment has come up multiple times in meetings with other various studios,” Kerrigan said.

The conversations turned into two job offers. One position would have placed him in-house at a studio to work on AI projects. Kerrigan declined in order to develop his own TV pilot with an independent producer. He also planned to debut a non-AI short he codirected at SXSW in spring 2025. With the pilot deal and growing revenue from Neural Viz clips on YouTube and TikTok, Kerrigan had enough income to quit his day job in January, the first time he’d been without one since moving to Los Angeles.

In June I attended the AI Film Festival in New York City, held at Lincoln Center’s Alice Tully Hall and organized by Runway. Hundreds filled the auditorium to see the 10 shorts billed as the best AI films of 2025, culled from some 6,000 entries.

I left feeling deflated. Many films were technically slick but emotionally thin. The festival program included a Q&A with the musician Flying Lotus and a partly AI-made music video for J Balvin. Most pieces read like lab projects meant to demonstrate tools rather than convey human stakes. One standout, a clever and unsettling film essay called “Total Pixel Space,” won the top prize.

That split reflects a puzzle in AI film: tools grow faster than creators learn to use them for expressive, memorable stories. A handful of makers are doing interesting work. Zack London, aka Gossip Goblin, constructs ominous impressionistic shorts about machine-dominated futures. Aze Alter creates eerie pieces that skirt horror. The comedy duo TalkBoys Studio, friends of Kerrigan, release animated shorts with talking animals and dinosaurs.

By contrast, social feeds have been flooded with easy, viral prompt-and-play clips. When Google launched Veo 3 in May, letting users generate multimodal video from a simple prompt, social platforms filled with inexplicable trends—one was Bigfoot vlogging to a camera. An influencer even set up a system that generated a new Bigfoot clip every hour and pushed it to TikTok. OpenAI’s late-September release of Sora 2, which lets users scan their faces and insert themselves into videos, pushed the tide of low-effort content further.

Part of why Neural Viz stands out is how traditionally Kerrigan approaches craft. He always begins with writing—slug lines, action, dialogue, camera blocking—then storyboards each shot. For every storyboard panel he creates a still using an image generator such as Flux, Runway, or even ChatGPT. He keeps lighting consistent and maintains sight lines in dialogue. He makes background elements readable; left to themselves, AI tools tend to blur props and set details. For handheld camera motion he films his monitor with an iPhone and maps that natural movement onto AI footage, a practical trick that blends physical cinematography with virtual images. “Everything I do within these tools is a skill set that’s been built up over a decade plus,” he says. “I do not believe there’s a lot of people that could do this specific thing.”

One afternoon over Zoom I watched Kerrigan work on a difficult scene: after a hostage incident and rescue, Tiggy meets the leader of the Resistance, and a planned twist must land with precise timing and subtle movement. Every beat posed technical challenges. Kerrigan fiddled with head proportions, tried to line up a gun’s aim, and choreographed how the Resistance leader would remove a hood without it looking fake.

Partway through, a Runway spokesperson emailed about an upcoming release of Act-Two, a new motion-capture tool. Kerrigan decided to pause production and wait for the software roll-out so he could test the improved capture capabilities.

Later that day we walked through the Academy Museum of Motion Pictures, ten minutes from his apartment, and looked at exhibition pieces that trace film technology: zoetropes, Cinerama cameras, animatronic creatures. After a morning spent with digital glurons, the museum’s physical artifacts—Bong Joon Ho’s storyboards, monster maquettes, VFX models from The Avengers—felt antiquated in a curious way.

We stopped in front of a hand-tinted film of a dancer waving colorful robes. Kerrigan pointed out that the impulse behind painting celluloid felt like experimentation rather than a plan for posterity. “They’re not thinking, like, This is gonna be in a museum one day,” he said.

Kerrigan avoids grand claims about the fate of filmmaking. He doesn’t frame his work as part of a movement and calls AI a tool in a storyteller’s kit. Along with his AI projects he’s finishing a conventional horror feature based on a short he codirected, which won an audience prize at SXSW. “I’m here to tell stories, and these tools are a part of the workflow,” he says. “They’re not the end-all-be-all, nor do I think they will be anytime soon.”

Still, the industry is preparing for a major shift. Studios are folding AI into production pipelines; James Cameron joined the board of an AI company, and Darren Aronofsky launched an AI-focused studio in partnership with Google’s DeepMind. Writers’ and actors’ unions pushed for protections in recent contract talks.

Kerrigan has fielded online criticism for using AI and recognizes the technology could upend established labor models. His larger gripe is about ownership: studios still hold most of the narrative power. AI allows him to make and own work independently. “There is a version of these tools that allows people to become more independent of the system, and I think that’s probably a good thing,” he said. He also worries about burnout. Producing a near-studio-quality piece every few weeks is thrilling but exhausting; the expectation to keep that pace is intense.

Ian McLees and Dan Bonventre of TalkBoys Studio recall mixed reception to their AI shorts. “Our friends who are sitcom writers, feature writers, were like, ‘This isn’t worth your time, this is gonna kill jobs,’” McLees says. “We’re like, the jobs are already gone, the studios killed it.” He placed the shift alongside past upheavals—like the move from hand-drawn to 3D animation—and argued for being part of the change. “We wanted to be at the table and not on the menu,” he says.

Zack London says some illustrators respond with visceral hatred. “Bro, you draw, like, furry fan art,” he says to critics. “You don’t have to freak out at the first new thing that challenges whatever you thought was creative.”

So far the people most at risk appear to be specialists who excel at single technical crafts. The likely winners are idea people—writers, directors, storytellers—with a knack for these systems. Those who can imagine concepts and also operate the software will have enormous leverage.

Some newer tools give creators more control rather than taking choices away. Kerrigan picked up Act-Two once it launched, and it captured his facial performances with finer nuance than Act-One. In one close-up Tiggy’s lip trembles during an emotional moment, a subtlety that sells the scene.

A recurring mystery of the Monoverse is how humans went extinct. One character insists people were taken by escalators—sucked into moving steps one by one, “taken out by their own dumb invention.” In a scene a news reporter stands beside an escalator talking about the threat. As Kerrigan set the shot he could have left the escalator area empty; instead he placed a simple stairway and a figure climbing it, a quiet counterpoint to the rumor.

Kerrigan’s channel still hinges on experimentation and constraint. Generative systems misbehave, and those errors feed creativity. He treats each tool like a collaborator with quirks. Over time, that collaboration has produced a serialized fiction that plays like classic television remixed through the idiosyncrasies of modern software: a handcrafted world built on emerging automation, a creator learning how to get what he wants from machines that only sometimes do what he asks.
