Last year, we journeyed beyond the algorithm with our newest partner, Google DeepMind, conducting in-depth interviews with more than 30 AI specialists. Here's what we learnt!


In the past year, we have conducted in-depth interviews with more than 30 AI specialists – from researchers to engineers to video game testers – as part of a new series of branded documentaries for Google DeepMind, a leading AI research institute.

The films explore the future of AI: what does it look like, what might it mean for us, and what sort of developments can we expect in the next few years?

The answers to those questions have been fascinating. One thing that really stood out was how often the experts reached for metaphor to explain their work:

“AI is like the electric guitar.” “AI is like a library.” “AI is like a synthesiser.”

AI can be difficult to speak about. There are a few reasons for this.

The first is a language barrier and a whole alphabet of acronyms. Once you get to grips with some of that technical language, there's then a PhD's worth of underlying technical principles with which to grapple.

Beyond that, the scope of AI is gigantic. It touches (or has the potential to touch) pretty much every area of life. The scale can feel overwhelming.

And finally, when you get down to its essence, you're talking about subjects that are very abstract: intelligence, consciousness, logic. These are things that as humans we often struggle to fully understand.

But let’s take a step back and look at what the metaphors used by those AI researchers have in common. The guitar, the library, the synthesiser: they are all tools.

At Gramafilm we use lots of AI tools – and you probably do too.

It’s hard to imagine organising a shoot without using Google Maps. Or researching a subject without using Google search. Our edit suites are increasingly full of AI technology, from scene detection to auto-transcription. A lot of our work in social strategy involves exploiting algorithms. If you opened Instagram today, you had a close encounter with AI.

But a new category of AI tool has become available to producers over the last year: ‘generative AI’. Generative AI refers to a type of artificial intelligence that can create new content.

This could be image generation, as seen in standalone services like DALL·E, Midjourney and Stable Diffusion, and in popular editing platforms including Photoshop. With these tools we can extend and create artwork, even photorealistic imagery, from nothing more than a text prompt or a reference image.

It could be text generation through a large language model, as in ChatGPT or Gemini. Using these tools we can parse text, write prompts, and even generate code for programming.

But for all their applications, a tool is a tool. It has no intrinsic merit until you put it to use.

It's what we do with our tools that counts.

To find out how to make it count, you need to use it. We’ve been experimenting with video generation, whether through fully generative tools like Stable Diffusion or through technologies that draw on other areas of AI, such as NeRF (Neural Radiance Field) captures.

Throughout time the tools we use have changed the work we do and the way we have organised ourselves to do it. AI is certainly going to have huge consequences for countries, companies and of course the creative industry and freelancers. That can feel overwhelming, until we think about the tools and how we can use them in exciting ways.


Developing the Ultimate AI Collaborator - Drew’s Story


Building the Tools to Make AI Breakthroughs Possible - Anna’s Story

At Gramafilm we like to say that we “tell the human stories behind tech”. We have another unofficial way that we like to describe ourselves, which you’ll see on our t-shirts and tote bags: “Made with love and computers since 2008”.

Those mottos are not mutually exclusive.

We have always told human stories using the latest technology. Over the last 16 years we have transitioned from recording on video tape to solid-state media. We have worked with 3D and 360° video. We have moved from high definition to 2K to 4K to HDR. We have always seen new technological developments as opportunities to create and deliver great work for our clients.

We believe that AI is an exciting new toolset that we can use to push the boundaries of creativity.

Let's explore that metaphor from the Google DeepMind research scientist:

“AI is like the electric guitar.”

The electric guitar was originally designed as an amplified version of the acoustic guitar. As its popularity grew, aficionados of acoustic and classical guitar expressed concern. In 1968 the classical guitarist Andrés Segovia said: “Electric guitars are an abomination, whoever heard of an electric violin? An electric cello? Or for that matter an electric singer?”

In that same year, 1968, Jimi Hendrix released his album Electric Ladyland, The Beatles released The White Album, and the Velvet Underground released White Light/White Heat. All these artists took the electric guitar, and rather than producing inauthentic replications of acoustic guitar, they created new, exciting sounds that were unique to this new tool.

So when we hear that “AI is like the electric guitar”, we at Gramafilm are excited to think of the creative potential that it offers, and the new exciting things that we can do with it.

- Article by George Kenwright, Producer.

Video Production for AI Company

Google DeepMind is a leading artificial intelligence research lab that has been making groundbreaking advances in the field since 2010. Although people may be aware of its power, the full scope of work behind AI remains a mystery.

To dispel stereotypes and showcase the many applications and outcomes of AI, we explored the people behind Google DeepMind and their diverse areas of work.

Check it out.