In the latest episode of Broadcast Unscripted, the podcast from Radical Moves, we tackled the hot topic of AI for media and entertainment. Interestingly, while AI is often discussed in terms of the potential efficiencies it can deliver, the conversation quickly turned to some of the more negative aspects the industry will need to address. It turns out that AI is far too big a topic to fit into one podcast, so we may well have to follow up with some more bite-size pieces around this theme.
In the meantime, here are some of the major points raised by the guests.
1. AI will make media and entertainment more efficient
OK, so this is likely to be fairly obvious and something that has been cited a lot. The biggest attraction of using AI is the efficiency it allows, enabling broadcasters and media companies to focus on creating and distributing great content. What is interesting is that these efficiencies can happen right through the media lifecycle, which will have a massive impact on the entire workflow. For Jane Sung, COO, Cinedeck, there are a number of efficiencies at the point of ingest. “Primarily, we are seeing people wanting to do AI transcriptions, then AI captioning. That’s historically been the biggest time gobbler for workflows and localisation. So just the ability to be able to automate and harness all of that AI power for those workflows has been a real win for our customers.”
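To make that transcription-and-captioning step a little more concrete, here is a minimal sketch that uses the open-source openai-whisper package to turn an audio file into a simple SRT caption file. The model size and file names are illustrative assumptions, not a description of Cinedeck’s actual pipeline.

```python
# Minimal sketch: speech-to-text plus SRT captions with openai-whisper.
# The "base" model and the file names are illustrative assumptions.
import whisper

def transcribe_to_srt(audio_path: str, srt_path: str) -> None:
    model = whisper.load_model("base")       # small, general-purpose model
    result = model.transcribe(audio_path)    # returns full text plus timed segments

    def fmt(t: float) -> str:
        # Format seconds as an SRT timestamp, e.g. 00:01:23,450
        h, rem = divmod(t, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02}:{int(m):02}:{int(s):02},{int((s % 1) * 1000):03}"

    with open(srt_path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n\n")

transcribe_to_srt("interview.wav", "interview.srt")
```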
Accedo’s Joe Foster was an early adopter of AI when he started his company Easel TV (recently acquired by Accedo). “We implemented some AI very early on for exactly the same reasons, effectively to shorten the processing time and complexity for our clients to build their own service. AI made a significant difference to the time they can spend producing a service like that by asking it to get or suggest metadata, tags and categories. We also used AI for imaging where we could take one image and create something like 15 or 16 different images within that at different crops.”
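Joe’s “one image, many crops” point translates into a fairly simple operation. The sketch below uses Pillow to cut centre crops at several common aspect ratios from a single piece of key art; the ratios and the centre-crop strategy are illustrative assumptions, since in practice an AI model would choose the crop window around the subject.

```python
# Rough sketch: derive several differently-cropped images from one source file.
# The aspect ratios and centre-crop strategy are illustrative assumptions.
from PIL import Image

RATIOS = {"hero_16x9": 16 / 9, "poster_2x3": 2 / 3, "square_1x1": 1.0, "banner_4x1": 4.0}

def centre_crops(src: str) -> None:
    img = Image.open(src)
    w, h = img.size
    for name, ratio in RATIOS.items():
        # Fit the largest box with the target ratio inside the original frame.
        if w / h > ratio:
            crop_w, crop_h = int(h * ratio), h
        else:
            crop_w, crop_h = w, int(w / ratio)
        left, top = (w - crop_w) // 2, (h - crop_h) // 2
        img.crop((left, top, left + crop_w, top + crop_h)).save(f"{name}.jpg")

centre_crops("key_art.jpg")
```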
Some of the feedback Jane is hearing is that customers don’t want to wait until the end of the day for the dailies workflow: “We’re likely going to be implementing more and more on-set or in-cloud captioning so that they can review right away.”
Joe goes on to highlight just how much AI technology has advanced in recent years. “The actual reality was, it wasn’t that easy to do when we started 10 years ago. But recently you get to the point where you can implement those things and they make a difference.”
2. AI can improve experience for viewers
Ultimately, all media companies want to improve the experience for viewers, and AI can help do that, whether by ensuring they can more easily get the right content at the right time or by improving efficiencies to deliver more.
Tom Dunning, Founder and CEO, Ad Signal, comments: “AI can be used not just to tag it, which would include the appropriateness of content for different audiences, but also to start dynamically building that timeline or the sequence for that clip.” He goes on to highlight improvements: “One of the real advances in AI has been natural language search, which is ‘I want to see actor A driving a car, eating an apple’. How can we turn that into real searchable, discoverable content and understand the relationships between things?”
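To illustrate the natural-language search idea, here is a small sketch that embeds clip descriptions and a free-text query with the sentence-transformers library and ranks clips by cosine similarity. The clip descriptions and model choice are made-up assumptions for illustration, not a description of Ad Signal’s actual technology.

```python
# Toy sketch: natural-language search over clip metadata via text embeddings.
# Clip descriptions and model choice are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

clips = {
    "clip_001": "Actor driving a convertible along a coastal road while eating an apple",
    "clip_002": "Two presenters in a studio discussing election results",
    "clip_003": "Drone footage of a football stadium at night",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
clip_ids = list(clips)
clip_vecs = model.encode(list(clips.values()), convert_to_tensor=True)

query = "actor driving a car, eating an apple"
query_vec = model.encode(query, convert_to_tensor=True)

# Rank clips by cosine similarity between the query and each description.
scores = util.cos_sim(query_vec, clip_vecs)[0]
best = max(range(len(clip_ids)), key=lambda i: float(scores[i]))
print(clip_ids[best], float(scores[best]))
```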
According to James Hynard, Sales Manager, castLabs, AI has a particularly interesting use case for production: “You will be able to make movies cheaper and better, potentially replacing some quite bad CGI in some films and actually improving this to make it more realistic.”
Meanwhile, a good experience means fewer errors, and this is another area where AI can help. Jane highlights: “AI will be good at detecting operational anomalies so that you can make sure you don’t have as much downtime.”
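As a rough illustration of what anomaly detection on operational telemetry can look like, the sketch below trains scikit-learn’s IsolationForest on made-up playout metrics and flags an outlying sample. The features, values and threshold are purely illustrative assumptions.

```python
# Toy sketch: flag operational anomalies with an IsolationForest.
# The metrics (dropped-frame rate, audio level in dBFS, buffer occupancy) are
# made-up illustrative values, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.1, -20.0, 0.8], scale=[0.05, 1.0, 0.05], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A sample with a spike in dropped frames and a collapsed buffer.
suspect = np.array([[2.5, -20.0, 0.1]])
print(model.predict(suspect))   # -1 flags an anomaly, 1 means normal
```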
3. AI may have a negative impact on sustainability initiatives
Along with AI, sustainability is another massive topic in the industry right now, with many companies waking up to their responsibility to reduce their carbon footprint and prioritise sustainable initiatives, both for their own processes and their entire workflow. While, as Tom puts it, “the world is going to use AI to try to solve some of these sustainability issues,” it will also be used “to tag cat videos.” And more than that, Tom warns about the threat to sustainability: “I genuinely believe that AI is the biggest threat to our sustainability targets in the world. Right now, data centres are around 3.7% of the world’s carbon. For reference, aviation is 2.1%. And by 2040, that’s going to hit 14%. AI is a massive driving force behind that. It’s ridiculously power and cooling hungry, and it needs a lot of special resources to produce the GPUs that everything runs on. If you look today at just video processing and hosting and streaming, that accounts for 1.84% of the world’s carbon, which is just insane.”
In fact, a recent report by the IEA projects that by 2026 the AI industry alone will consume at least ten times the energy it demanded just three years earlier. However, Tom doesn’t expect any business to give up the advantages of AI, so instead we need to ensure its usage is sustainable. This is just one of the reasons behind Tom co-founding the Digital Sustainability Alliance.
4. We need to protect against deep fakes and address legal rights
As Joe pointed out, AI is great at recommending content, but as he states: “where it gets questionable is when you start looking at people who are getting third party content, because now it’s about what’s the legal rights to use what you’re being suggested.”
This is an area that James also raised as a concern: “When it comes to content creation, this is where we start to get into a very sort of grey zone. And frankly, there’s no regulation around this and I don’t expect any regulation to come anytime soon.” He goes on to discuss the dangers of deep fakes: “How do we know as individuals what is real and what’s been faked, because the technology is getting better and better? And we need tools to be able to say did Keir Starmer or Rishi Sunak actually say this or has someone used the technology to make them say something that could be detrimental when it comes to polling day?”
He goes on to warn about the upcoming US presidential election: “There is fear there will be our first major deep fake events. They’re using AI to spread disinformation.”
Both Ad Signal and castLabs have technology to detect deep fakes, with Ad Signal using fingerprinting and castLabs using watermarking. As James notes: “I think there’s definitely space for fingerprinting and watermarking, and also signed metadata to actually coexist in this space. It’s not about finding one tool for the solution, it’s actually about multiple tools.”
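For a sense of how fingerprint-style matching works in general, here is a simplified sketch that compares perceptual hashes of two frames using the imagehash package; a small Hamming distance suggests the same underlying picture even after re-encoding. This is a generic technique with assumed file names and threshold, not a description of Ad Signal’s fingerprinting or castLabs’ watermarking.

```python
# Simplified sketch of fingerprint-style matching with perceptual hashes.
# File names and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

reference = imagehash.phash(Image.open("original_frame.png"))
candidate = imagehash.phash(Image.open("suspect_frame.png"))

# Hamming distance between the two hashes: small values suggest the same
# underlying picture even after re-encoding or light edits.
distance = reference - candidate
print("likely match" if distance <= 8 else "different content", distance)
```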
Should we welcome or fear AI for M&E?
The answer is quite simply both. There are a lot of positive advances that will be possible thanks to AI. According to Joe: “I think the general principle here is that there’s so many areas that you can use AI positively to speed things up.” He goes on to mention: “I think that will have a huge impact on the speed at which we can innovate and the speed at which we can add features.”
Tom even believes it could perhaps make “it easier for people to enter the industry. Or if you are a smaller player, allow more independent film studios to produce content.”
However, AI is only as good as the input it is given. As Jane warns: “It doesn’t matter how much content you have, without metadata, without really good suggested metadata even, it’s kind of meaningless.”
But perhaps the biggest concern is the ramifications of deep fakes and pirated content. Fortunately, there is technology out there to help, and it is getting better all the time. While regulation is much needed, it is unlikely to arrive anytime soon, which means the industry itself needs to take steps to reduce the impact.