
Beyond Traditional Video: How AI is Reshaping Media from Production to Advertising

Aisha Yusaf, CEO/Product Chief, Purple Paths

AI video generation has dominated headlines this past year, promising faster and more cost-effective content creation. However, the potential of AI in video extends far beyond generation alone. Across business processes and workflows, AI can enhance speed and efficiency while also improving accuracy, offering superior personalisation, and uncovering valuable data-driven insights. With the advent of Large Language Models (LLMs), we find ourselves on the cusp of a transformative era for video.


This article will explore our journey through AI-related video work at Riveroconsult partner Purple Paths and highlight areas where LLMs can unlock exciting possibilities. Beyond video generation, we see ample opportunities in metadata management, advertising, scriptwriting, and enhanced personalisation. We'll showcase some of the projects we've worked on that illustrate the evolution of AI in video.


Purple Paths' Pioneering AI Video Projects

The application of AI to video predates the era of LLMs. At Purple Paths, we've been actively engaged with this technology for several years. Our experience includes two significant projects that exemplify the power of AI in video applications:


  1. BBC Snippets: This innovative project harnessed machine learning for video content management, effectively making archive video content searchable. Purple Paths founder Ezo played a key role in its development while collaborating closely with the BBC team.


  2. Pika: A children's camera game led by our team member Aisha. This project used computer vision on mobile devices to create an engaging interactive experience. Pika not only demonstrated the potential of AI in educational entertainment but also pushed the boundaries of what was possible with mobile computer vision back in 2018.



BBC Snippets: Unlocking the Value of a Vast Media Archive

BBC Snippets revolutionised content discovery within the BBC's vast media archives, significantly improving workflow efficiency for production and news teams. This innovative project demonstrated how to transform an extensive media library into a powerful, searchable asset. BBC Snippets had two primary applications: enabling production teams to swiftly compile taster tapes and allowing news teams to locate relevant, high-quality video from the archive for broadcast.


The system allowed production teams to search programme content using precise keywords, with results filterable by genre and date. It featured 'clickable transcripts' and visual navigation tools for an improved user experience. By leveraging the BBC Redux video archive, Snippets combined subtitle data with the BBC's database and employed computer vision on extracted video frames. The project also developed a transcoding pipeline to deliver compressed, high-quality snippets for taster tapes and broadcast segments.


Let's delve into the key challenges faced during the BBC Snippets project and examine how they were overcome:


Efficient Processing and Smart Content Recognition

The project efficiently processed millions of hours of video content using a robust FFmpeg pipeline, splitting streams into audio, subtitles, video, and key frames. Advanced computer vision technology automatically extracted important images, enabling visual content searches. This streamlined the process of finding specific material without watching hours of video.
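As a rough illustration of what such a splitting step might look like, here is a minimal sketch that builds FFmpeg commands to separate a recording into audio, subtitles, video, and scene-change key frames. The exact codecs, stream mappings, and the scene-change threshold are assumptions for illustration, not the BBC's actual configuration:

```python
def split_streams(src: str, out_prefix: str) -> list[str]:
    """Build FFmpeg commands that split a broadcast recording into the
    components a Snippets-style pipeline would index separately.
    All flags here are illustrative, not a production configuration."""
    jobs = {
        f"{out_prefix}.audio.wav": ["-vn", "-acodec", "pcm_s16le"],  # audio only
        f"{out_prefix}.sub.srt":   ["-map", "0:s:0"],                # first subtitle stream
        f"{out_prefix}.video.mp4": ["-an", "-c:v", "copy"],          # video without audio
    }
    commands = []
    for out_path, args in jobs.items():
        commands.append(" ".join(["ffmpeg", "-y", "-i", src, *args, out_path]))
    # Key frames: one JPEG per detected scene change (0.4 threshold is a guess)
    commands.append(
        f"ffmpeg -y -i {src} -vf select='gt(scene,0.4)' -vsync vfr {out_prefix}_%04d.jpg"
    )
    return commands
```

In a real pipeline each command string would be executed (e.g. via `subprocess.run`) and fanned out across workers to get through millions of hours of footage.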


Enhanced Searchability and Content Tagging

An advanced indexing system compiled key frames, timestamps, and subtitles, enabling rapid location of specific programme segments. The project's "smart tagging" feature automatically generated transcripts, segmented videos, and identified actors, significantly enhancing the archive's usability and value.
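The core of such an index can be sketched as an inverted map from subtitle words to timestamps, which is what lets a keyword search jump straight to the matching moment. This toy version (field names and tokenisation are simplified assumptions) ignores stemming, ranking, and the genre/date filters the real system offered:

```python
from collections import defaultdict

def build_index(subtitles):
    """subtitles: list of (start_seconds, text) pairs.
    Returns a word -> [timestamps] inverted index."""
    index = defaultdict(list)
    for start, text in subtitles:
        for word in text.lower().split():
            index[word.strip(".,!?")].append(start)
    return index

def search(index, query):
    """Return every timestamp where any query word was spoken, earliest first."""
    hits = []
    for word in query.lower().split():
        hits.extend(index.get(word, []))
    return sorted(set(hits))
```

A production system would layer metadata filters and relevance ranking on top, but the jump-to-timestamp behaviour is the same idea.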


Versatile Formatting and Intuitive Navigation

The system quickly converted video snippets into compact, high-quality formats for previews or broadcast-ready versions for news segments. Despite the complex processes behind the scenes, the project prioritised user-friendliness. A standout feature was the "clickable transcript," enabling users to jump to specific moments in a video by clicking on transcript words.


Together, these technical solutions turned the BBC's vast media archive into a searchable, broadcast-ready resource, saving production and news teams hours of manual review.


Pika: A Magical Camera Game for Children

Imagine a world where children can go on exciting colour hunts with their mobile phones, feeding colours to a friendly digital creature. This is the essence of Pika, a delightful camera game we developed at Purple Paths. The idea for Pika came from our founder Aisha's fond memories of playful cameras from her childhood - think Polaroids and those fun disposable cameras. We wanted to bring that same sense of joy and discovery to today's tech-savvy kids.


Our journey began with a simple question: How can we make photography fun for children? We explored various ideas, from special camera cases to attachable gadgets. But it was the concept of camera games that really caught the imagination of children and parents in our research groups. This led us to create Pika, a game where children 'feed' colours to a cute digital pet - much like the popular Tamagotchi toys. The game encourages kids to explore their surroundings, searching for specific colours to help their digital friend grow and thrive.


Here’s how we tackled key challenges for Pika:


Colour Recognition

We needed to make sure the game could recognise colours accurately, even in dim lighting. Off-the-shelf computer vision algorithms in 2018 struggled with this, especially with different shades of the same colour. Our team got creative and built our own compact neural network to recognise shades of the same colour while being small enough to run on a mobile.
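To see why this is hard, consider the kind of fixed-threshold colour naming an off-the-shelf approach might use. The sketch below (thresholds are my own rough guesses, not Pika's model) works for bright, saturated colours but has no way to adapt to dim lighting, which is exactly the gap a learned model can close:

```python
import colorsys

def naive_colour_name(r, g, b):
    """Threshold-based colour naming from RGB values in 0-255.
    Illustrative of the brittle baseline, not Pika's neural network."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.15:
        return "black"                    # too dark to call a hue
    if s < 0.15:
        return "white" if v > 0.8 else "grey"
    hue = h * 360
    if hue < 20 or hue >= 330: return "red"
    if hue < 50:               return "orange"
    if hue < 70:               return "yellow"
    if hue < 160:              return "green"
    if hue < 260:              return "blue"
    return "purple"
```

A compact neural network replaces these hand-tuned cut-offs with boundaries learned from real photos taken in varied lighting, which is why it copes far better with shades and shadows.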


Speedy Response Times

To keep children engaged, Pika needed to respond instantly to the colours they found. We built a custom video pipeline and ran our computer vision algorithm directly on the phone for fast image analysis - 30 times every second!
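Running at 30 frames per second leaves a budget of roughly 33 milliseconds per frame for capture, analysis, and screen update. Here is a toy loop showing that budgeting idea (the function names and the drop-frame policy are illustrative assumptions, not Pika's actual pipeline):

```python
import time

FPS = 30
FRAME_BUDGET = 1.0 / FPS  # ~33 ms per frame for capture + inference + UI

def run_loop(analyse_frame, get_frame, frames=90):
    """Toy real-time loop: if analysis overruns its budget the frame counts
    as dropped, rather than letting latency pile up behind it."""
    dropped = 0
    for _ in range(frames):
        start = time.monotonic()
        analyse_frame(get_frame())
        elapsed = time.monotonic() - start
        if elapsed > FRAME_BUDGET:
            dropped += 1                       # on-device: skip the next capture
        else:
            time.sleep(FRAME_BUDGET - elapsed)  # wait out the remaining budget
    return dropped
```

The practical consequence is that every part of the on-device model has to fit inside that 33 ms window, which is why the network had to be so compact.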


Child-Friendly Design

Perhaps the most crucial aspect was making sure Pika was easy for young children to use, even if they couldn't read yet. That's why we designed Pika as a big, friendly eye that children could interact with intuitively. No reading required - just point, snap, and watch Pika react!


The result? A magical camera game that not only entertains but also encourages children to see the world around them in new, colourful ways.



Enhancing BBC Snippets and Pika with LLMs

Both BBC Snippets and Pika were innovative projects developed before the advent of Large Language Models (LLMs). However, with the emergence of LLMs, these projects could now be significantly enhanced in their capabilities and user experience. Here's how LLMs could be integrated into these projects to take them to the next level:


BBC Snippets with LLMs

LLMs could transform the BBC Snippets project by improving content understanding and search. By generating detailed descriptions and summaries of video content, an LLM would enable users to search for clips based on complex queries or abstract concepts.


For example, a user could look for "inspiring speeches about climate change," and the LLM would return more relevant results by understanding context and sentiment. Additionally, LLMs could automate the generation of tags, categories, and metadata for each video snippet, enhancing the organisation and discoverability of the archive.
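One way this tagging step might be wired up is sketched below. The prompt wording, the JSON schema, and the `call_llm` client are all hypothetical placeholders (swap in whichever provider's SDK you use); the point is that the model's free-text understanding gets validated into structured metadata before it enters the archive index:

```python
import json

TAGGING_PROMPT = """You are indexing a broadcast archive.
Given this transcript excerpt, return JSON with keys
"summary", "topics", and "sentiment".

Transcript:
{transcript}"""

def tag_snippet(transcript: str, call_llm) -> dict:
    """call_llm: any text-in/text-out LLM client (hypothetical here).
    Returns structured metadata suitable for the search index."""
    raw = call_llm(TAGGING_PROMPT.format(transcript=transcript))
    meta = json.loads(raw)
    # Minimal validation before the metadata enters the archive
    assert {"summary", "topics", "sentiment"} <= meta.keys()
    return meta
```

Queries like "inspiring speeches about climate change" would then match on the generated topics and sentiment rather than on literal subtitle words alone.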


This would save production teams valuable time and uncover content that may have been overlooked with traditional tagging methods.


Pika with LLMs

In the Pika children's camera game, LLMs could create a more interactive educational experience.


An LLM could generate dynamic, age-appropriate narratives and challenges based on the colours and objects captured by the child’s camera.


For instance, if a child photographs a red apple, the LLM could craft a short story about its journey from tree to table or pose questions about other red items in nature. This not only makes the game more engaging but also enhances its educational value by teaching children about colours and objects in a fun way.
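A small sketch of how the game might build such a prompt from what the camera saw. Everything here is illustrative (a real system would also pass safety and content guidelines to the model), but it shows how age can steer the narrative style:

```python
def story_prompt(colour: str, obj: str, age: int) -> str:
    """Build an age-aware prompt for a hypothetical narrator model."""
    style = (
        "very short sentences and simple words"
        if age < 6
        else "a playful, curious tone"
    )
    return (
        f"Write a three-sentence story for a {age}-year-old, using {style}, "
        f"about a {colour} {obj} and how it got its colour. "
        f"End by asking the child to find another {colour} thing nearby."
    )
```

The closing question turns each story back into a new colour hunt, keeping the game loop going.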


Furthermore, LLMs could personalise the gaming experience by adapting challenges based on the child's age and interests, making Pika more captivating and effective as an educational tool.


What’s Next for Video and AI

While our past projects leveraged machine learning and computer vision, recent advancements in LLMs and AI agents have opened up even more exciting possibilities. At Purple Paths, we're particularly enthusiastic about the potential for personalisation, content creation, improved ad placement, contextual advertising, and automation of workflows. These areas promise to deliver better experiences for audiences in ways that were simply not possible before.


Some ideas we've been thinking about in our team include:


Dynamic Script Enhancement: Imagine TV shows that stay relevant by automatically incorporating topical content. An AI agent could analyse current events and seamlessly integrate them into scripts, keeping episodic content fresh and timely, even for shows produced well in advance.


Personalised Advertising Content: Using LLMs to create tailored advertising experiences. By analysing viewer preferences, viewing history, and demographic data, AI could generate unique ad variations for each viewer, maximising engagement and conversion rates.


Intelligent Ad Insertion: Analysing video content in real-time to identify ideal moments for mid-roll ads. This technology could consider factors such as narrative tension, scene changes, and emotional impact to determine the most effective ad placement, ensuring minimal disruption to the viewing experience.


Context-Aware Brand Positioning: Understanding the context and sentiment of video content, then suggesting brand placements or overlay ads that align seamlessly with the scene. This could create more natural, less intrusive advertising experiences.
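To make the ad-insertion idea above a little more concrete, here is a toy scorer that ranks candidate scene boundaries by the kinds of signals a video-understanding model might emit. The field names, weights, and scoring rules are entirely illustrative, not a product specification:

```python
def best_ad_breaks(scenes, max_breaks=2):
    """scenes: list of dicts of per-scene signals, e.g.
    {"t": 312.0, "cut": True, "tension": 0.2, "dialogue": False}.
    Returns the timestamps of the least disruptive break points."""
    def score(s):
        pts = 0.0
        if s["cut"]:
            pts += 1.0               # natural boundary between scenes
        pts += 1.0 - s["tension"]    # avoid interrupting tense moments
        if not s["dialogue"]:
            pts += 0.5               # never cut mid-line
        return pts
    ranked = sorted(scenes, key=score, reverse=True)
    return sorted(s["t"] for s in ranked[:max_breaks])
```

A real system would learn these weights from audience-retention data rather than hand-tuning them, but the shape of the decision is the same.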


These projects represent just a fraction of what's possible at the intersection of AI and video. The future of video is not just about what we watch, but how we interact with and shape the content itself.


☎️ Get in touch

We at Riveroconsult and Purple Paths are always excited to chat about new projects and offer our expertise to make them a reality. If you're a media, advertising or technology company looking to explore how AI can transform your project, book a free consultation with us.
