Demystifying AI, offered by the ScreenSkills Film Skills Fund with contributions from UK film productions, is a webinar for anyone interested in learning about how AI might be used within filmmaking.
This session features UK filmmaker Simon Ball, who shares his experience of using AI in filmmaking and how it can be applied practically to the filmmaking process. The webinar dives into a brief history of AI image generation, its potential to contribute to filmmaking, and how AI can work hand-in-hand with creative thinking.
"After making the same film over and over again for the best part of 5 years, I discovered the wacky world of AI image generation. Not content with just making functional imagery, I devised a novel method of utilising AI in the filmmaking process, bringing me great success on the festival circuit and greatly boosting my career trajectory. We’ll dive into a very brief history of AI image generation, how AI can be applied practically within the filmmaking process and how creative thinking can ensure that even the greediest of tech bro can’t steal an artform." - Simon Ball
Image © Simon Ball features the filmmaker (Simon) juxtaposed with his AI-generated portrait, illustrating his journey from traditional to future tech.
Watch the video or read the full transcript below.
Emma Turner
Good afternoon, everyone. My name is Emma Turner and I'm Head of Film, Animation and Future Skills here at ScreenSkills. For those that don't know, maybe joining us for the first time, we are the industry skills body and it's our job to make sure that the UK screen workforce is the best it can be now and in the future.
And we're really, really delighted on such a sunny afternoon that you're joining us for one of our Demystifying webinar series, which we use to raise awareness of different areas of the screen industry. In this one, we're looking at how Simon Ball, who I'm going to introduce you to very soon and who works as a short filmmaker, is using AI to enhance his creative processes.
But first of all, I'm just going to run you through some housekeeping things before we go any further. You'll notice we've turned your mics and cameras off. This is because we are recording this session to go up on our platform, probably in about a month's time. We'll do it faster if we can. And it just helps us keep the audio clean.
If you want to use the caption option, look down, wiggle the mouse around and you'll see you have an option to toggle on show captions. Please go ahead and activate that if that's what you want.
You'll have seen a disclaimer from us while you were in the waiting room, and we'll also put it in the chat in a few minutes' time. Everything that you hear today is the opinion of the person speaking. It's advice you can listen to or not, but it is just advice from that particular person's point of view.
It's a private event and we're incredibly grateful to Simon for sharing his thoughts and his opinions with us. So please be really respectful. This is his work, his creativity. And one of my team is going to pop what we call the pledge into the Q&A function – which we'll use for the Q&A, I promise – which is us asking you to be respectful of each other and of the speaker that we've got here today.
Talking of the Q&A – most of the time we're going to be listening to amazing Simon, but I will make sure there's some time towards the end of the session, maybe the last 15 minutes, where we can go through the questions that you have put into the Q&A function. I'll ask them on your behalf. So please ask away. And there is no such thing as a dumb question, especially in AI. Everybody's learning. So please go ahead. If we don't get enough time to answer everybody's question, Simon's very kindly said that he will prepare an FAQ sheet, which we will put up on our platform alongside the recording, so hopefully everybody will get their questions answered.
And so without further ado, and somebody is going to be very clever and minimize me and maximize Simon, I'm going to hand over to Simon Ball, who's going to take us on an amazing romp through what he does. Thank you.
Simon Ball
Cool. Thank you very much, Emma. And thank you for having me. And thank you for… Yeah, organizing this. I'm going to load up some slides. And hit play. Cool. So yeah, my name is Simon Ball. I'm a filmmaker.
These are all my things. Who am I? I am a filmmaker. I've been working at various levels of film for, like, 12 years now, which is kind of… long, but it's true. I started as an extra, then became a PA, then became an editor, then set up my own firm and then got into AI. I was making loads and loads of videos and getting into a bit of a funk, and then, while I was working on one project, one thing kind of led to another. I got interested in AI when I saw that… worked out a method of being able to apply this to making, like, fictional short films, which is what I always wanted to do and was having a bit of trouble doing, trying to make something, I guess, original or interesting. And now I'm kind of going around film festivals and seeing how the whole world of film works at all levels, which is really, really exciting.
I also organize a film festival in Clapham, which is going… well and is very busy. And we did a screening of like Gen AI films in 2023, which I think is one of the first, which is cool. And then we did a… bit with the Alan Turing Institute last year where we did another slate of AI films and had like talks that were going on at the same time which was cool. I also like long walks on the beach. So that's me.
How did I find AI? Well, the picture there is kind of a clue. I was browsing a web forum and I followed a thread. I was kind of bored, and then saw something that said, look, this picture has been made with AI. And I was like, oh, okay, that's quite interesting. I found myself in a Google Colab and it was this strange interface where I had no idea what was going on, and everything was in Spanish, which was an interesting challenge. I had to translate everything and then work out how all of these parameters worked, but it ended up being cool. You would get all of these strange, computer-generated images that didn't really look like anything that existed at the time, and they had this weird, dreamy kind of quality to them because they were so… weird, which I'll talk about a bit later as well. But then, as I'm sure most people are aware, some apps started to go, like, viral, at least on the YouTube channels that I was watching. So, Stable Diffusion, Midjourney, that type of thing, where you would type in a prompt and it would generate a picture for you. At the time, this seemed very, very novel and very, very interesting.
I then started to religiously watch YouTube, because every single day there would be a new guy that would turn up and go, oh my gosh, look at this amazing new app that's going to change everything. And it kind of changed something, but, like… every day there would be a new video and you'd be like, wow, this is kind of intense. But all of these YouTube videos were very, very helpful. There are a few different channels that were really helpful to me. Olivio Sarikas is the one that comes to mind, where they would basically just go step-by-step through how they installed some of these things. And this was quite important to me, that I was able to install an instance of Stable Diffusion on my own computer and have that locally on my machine, so that I didn't have to subscribe to a website or something like this. It's all freely available on the internet and people can just download it and mess about and do whatever they want.
And then YouTube is a great way of seeing how to actually do it, because it is quite complicated – you have to use the command prompt and things like this. But at the same time, once you've done it once, you get used to it. Then trial and error. Loads and loads of trial and error, seeing what works, what doesn't. It's a brand new interface, so it's quite interesting to see what does work and what doesn't, but that can only really be done on your own time, with your own process, to see what you like as the output. You get a lot of computer crashes, but that's par for the course; it's kind of all right. It's a developing field and hopefully one day there'll be some sort of AI tech support, I don't know, but for now you just have to Google loads of internet forums and see what other people have done, what works and what doesn't.
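For readers who want to try the same thing in code rather than through a web UI, here is a minimal sketch using Hugging Face's diffusers library. This is our illustration, not Simon's actual setup – he describes installing a locally run web UI – and the model name and prompt are just examples:

```python
# A minimal sketch of running Stable Diffusion locally, assuming the
# Hugging Face "diffusers" library and a CUDA-capable GPU are available.
# Simon used a locally installed web UI; this is the same idea in code.
import torch
from diffusers import StableDiffusionPipeline

# Download an openly released checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One prompt in, one image out – then it's trial and error from here.
image = pipe("a dreamy painting of a forest at dawn").images[0]
image.save("test_render.png")
```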
So, from messing about with these apps, I generated some sort of processes for how I should be looking at these applications. First of all, there is a massive limitation to what the AI can actually achieve. I think this is pretty good to acknowledge, in that you get lots of marketing hype where somebody comes along and says, oh my goodness gracious me, this AI is amazing, it can make this great image or whatever. And it's like, well, yes, but it also makes a lot of rubbish as well. So we have to admit that there's a limit to what it can do, and then work around these limitations.
Seeing some of these initial outputs – I mean, maybe this is a bit outdated now – there was a lot of photorealism and requests to try and make something super realistic. It just seemed kind of… why would you need to do that? You could just go and film something. We have cameras for that, so just trying to replace something didn't seem that useful to me. But making a creative image – that was very, very interesting to me. So I concluded that AI is good for enabling different production ideas rather than replacing anything, because, at least with the current set of tools, if you tried to replace production you'd run into so many problems, like character consistency, or just trying to get control of the image flow so that you can make an interesting edit. Anybody that says otherwise is probably trying to sell a subscription to an application.
However, it is good to see AI as a selection of new applications, and as in any kind of market economy, you see which ones get picked up, which ones have an actual use case, and you learn them and then try to apply them to your own projects. If they unlock a new way of doing something, great. If it's a time waster, then you can just move on to the next application. I would say that it is good to invest a bit of time, if you are interested in this topic, to look into free or open source software, because it's so much trial and error to get a result that you're happy with.
Although a monthly subscription fee to a website seems kind of cheap in the first instance if you end up kind of overusing your credits and needing to do more and more like rendering and things like this to see what works, the cost can kind of add up and up and up. So if you're an editor or something and you've got a graphics card in your computer, then you can probably run one of these and like launch it and just mess about and see what works and what doesn't.
However, generating things – images, text, video – via a text prompt can actually be quite a laborious process, because you're typing in exactly what should be in the image. And I'm going to elaborate on this a little, but it can create a very, very annoying cycle where you spend ages trying to get the AI to generate precisely what you're asking for, but the AI can't really do it to the level that you might be looking for. So you end up with this cycle of frustration. But I want to show you a little of how I applied this to the short films that I've been making over the last couple of years, and hopefully this will be useful for people to see how potential AI use could enhance a production rather than get in the way or be annoying on some level.
So what I was using was Stable Diffusion, which I could install on my own computer, which was great. And once you get through all of these phases, you end up with a user interface that looks like this. It just runs in your web browser and it's quite simple. You can see at the top left where it's got the text saying 'green sapling' – that's your positive prompt, what you want to see. Underneath that, you'll have a negative prompt, what you don't want to see. So, for example, if your data set has been trained on lots of things that have watermarks, then you might want to put 'watermark' in the negative prompt to make sure that a watermark doesn't appear.
You can see that the width and the height of the image are at 512 pixels by 512. That's because if it goes any higher, your computer will probably crash. You can get newer models or newer versions that are able to do larger renders, but this may also take up a lot of your computer's capabilities, and there are various ways to upscale your image using AI, which is another useful thing. The CFG scale is kind of like how much creative license you want to give the AI, which is something that you just have to experiment with. And then the seed, which is an important number: every image that gets generated in one of these things is assigned a random seed number, which means that if you input exactly the same parameters in a month's time or something like that, you will generate exactly the same image, which is quite interesting.
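Those same controls map directly onto code. A hedged sketch, again assuming the diffusers library, showing where the positive prompt, negative prompt, image size, CFG scale and seed Simon describes would go (all values illustrative):

```python
# Mapping the web UI's controls onto diffusers pipeline arguments.
# "pipe" is the StableDiffusionPipeline from the previous sketch.
import torch

# A fixed seed: the same seed plus the same settings reproduce the
# identical image, even months later.
generator = torch.Generator("cuda").manual_seed(1234)

image = pipe(
    prompt="green sapling",             # positive prompt: what you want
    negative_prompt="watermark, text",  # what you don't want to see
    width=512,                          # larger sizes need more VRAM
    height=512,
    guidance_scale=7.5,                 # the "CFG scale": prompt adherence
    generator=generator,
).images[0]
image.save("sapling_seed1234.png")
```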
It's good at generating loads and loads and loads of images, but as a filmmaker that's only useful on some level, because you need something that works as a sequence of images to play as video. So what I concluded was – if you see at the top, there's an image-to-image button – it's much better to experiment with image-to-image, because then I can have the consistency of real-life footage and then process that using the AI to create an interesting new image.
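Image-to-image has one extra control worth knowing about. The sketch below – same assumptions as above (diffusers; file name and prompt are hypothetical) – starts from a real frame rather than pure noise, with a strength setting that decides how far the AI may drift from the source:

```python
# Image-to-image: condition the generation on a real video frame.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# One frame exported from the edit (hypothetical path).
frame = Image.open("frame_0001.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="impressionist oil painting, loose brush strokes",
    image=frame,
    strength=0.5,  # 0.0 = return the frame untouched, 1.0 = ignore it
).images[0]
out.save("frame_0001_stylised.png")
```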
The first experiments that I was doing were a bit funny, in the way that I was… I'd just had a baby and was desperate to make a project or something like this. So I managed to get my parents-in-law to act in this silly bit that we filmed very, very quickly, where it was kind of doing the Two Ronnies – my mother-in-law is one Ronnie, and that bit is me, that's going to come in later – but we just filmed a bit. And as you can see, it transformed the image using the image-to-image thing to give it this weird sheen or whatever, based on different art styles that I was interested in at the time. The film evolved into taking on the persona of a different artist through each scene that we were filming and then transforming the bit. So there's me, just sat in front of a camera, transformed into this, like, geezer – but then in the sequence every single frame would be kind of different.
I had to acknowledge that if the AI was giving me an inconsistent image, or an inconsistent flow of images, I would have to make that inconsistency part of the aesthetics. That was the thing that makes it a lot easier. If you're not going to get a smooth 25-frames-per-second video, but you're still interested in the stuff this thing can do, then you have to make compromises with regards to things like frame rate, or image optical flow, or things like this. But the film did quite well. We ended up going to Cinequest in Silicon Valley and won a prize, which was great, because I just made the film with Yeva. Yeva's my wife, and at that time she was pregnant again, so we were kind of busy with, like, screaming babies and things like this, but you know. You do what you do. And it was good because we were just a two-person team and we managed to beat loads of much larger studios, including Lucasfilm, in the competition. It made me feel a bit like one of the Skywalkers because, yeah, Lucasfilm was the empire in this thing and we were kind of going there. But you can see, just transforming the image: the bottom right there is a bit more Bernard Buffet, and up the top was a bit more impressionistic – making the most of AI hallucinations, which I thought was the most interesting thing about the whole thing.
The film did very well, so we decided to make another one. And the thing I took most from that production was that I should be able to transform backgrounds and things like this with it. We can see here we have an action where Sam, the actor, is walking up the escalator at Westminster Underground Station; we could then break the footage down into frames, and I get him to walk up the escalator – but this time it's a Piranesi-inspired imaginary prison that he's got to walk through. This was a long process, so it's not like I was able to just press render and get the thing that I was looking for. There was a lot of rotoscoping and things like this, processing various different layers of action and then remaking the image, so you still need to make a creative thing. And of course, the image is still not a stable thing, but in the characters' drama at that time, they were in a kind of dissonant world. So I was able to use the AI to enhance the image somewhat, to make something that was mirroring the world of the characters, which I thought was an interesting way of going about things.
You can see here that my studio was transformed into this kind of painterly aesthetic where it's just like… I don't know how to describe it. I quite liked it. 'Hallucinatory' is the best way to put it: it was all kind of flickery, but some aspects were more solidified than others. So you're just looking at a very, very strange image. Like here, the actors were just on Clapham Common and we got them to do the action, and then there's all of the bit in the background there – trees and things like this. But because the AI was being told that it should make an image with an imaginary prison inspired by Piranesi, the trees would be transformed into something that gives the impression of an imaginary prison by Piranesi, which was an interesting way of looking at how to construct a drama.
But yeah, I thought I would show this one breakdown of how I worked with my production designer, because I thought this was pretty interesting. Now, Lily is a great production designer. She was in the team that did the art department on Poor Things, and she was very, very busy and I didn't have any time to do much, but we still got a lot out of the time. In this instance, this is real footage of – well, there's a foot, so there's the joke, but… this is the footage of the actor walking to the carpet. And we wanted to transform the image per the aesthetics, but we had the request from Lily that we should make the floor look a bit like it's cracking or cratered or something like that. So then we did an initial transformation so that you would get this kind of image, and we get the nice texture on the floor. But we've also lost the carpet and a bit of the background there, so that's not quite optimal. However, using basic animation techniques, we can put the carpet back in with less intense processing, put the backdrop back in, and then, once we're all done, make the shoes look a bit better, and then the character walks in absolutely fine.
It was just quite an interesting way of working with the production designer in the post-production process. She also did stuff like changing the fabrics of the studio, and then we would remake the image using the AI, and you get all of these interesting potentials.
That was a while ago now, and things have progressed a little since. This was something that I made about a month ago, whilst my daughter was doing ballet at Stay and Play and I had a laptop and a bit of time. I needed to make a promotional video for my film festival, because I was going to start distributing films. And then I… didn't have any money to do that, because film festivals are not a very profitable enterprise, apparently, if you do it for a few years – but I had an idea of what I wanted to do. I'd got a channel on in-flight entertainment, so I needed to make an airplane-based video. I had an art style that I liked, and then I was going to use an AI video generator to make the action. I then used an audio generator to create the voices, a music generator to create the music, and then the sound effects were procedurally generated as well. And it kind of looks like this.
Promotional video
It's just AI this, AI that. You know what I mean? Every day I wake up and there's a new app, a new model and a new thing I've got to learn. What am I supposed to do? Become a computer? What if I don't want to be a computer?
I'm not…
What we need to do is talk for this whole flight, just you and me. That'll show ‘em. Wait, don't you want to talk about the future?
Simon Ball
So that's a very, very quick thing that could be done in a couple of hours. The good thing about that was I was able to use subjective prompting in order to generate what I was looking for, because when I was using these generators – which I'll introduce a bit later – if you ask for a very, very specific thing, you don't really get good results. However, in that bit, there were two characters, but the second character was a complete fabrication of the AI. I gave it the initial image and I said, hey, can you make a video out of this? And then it was like, okay. And it rendered a video where the camera pans and there's this random guy talking, and I was kind of like, okay, I didn't ask for this. I literally asked for nothing. The prompt was completely blank. But this was actually exactly what I was looking for, because it allowed me to make my little promo video and get it out very, very quickly. And then it made my life a lot, lot easier.
And so I could make an interesting promo video that allows me to help the filmmakers that I'm distributing, and make it in a kind of funny style that utilizes basic storytelling devices to make a thing. And I think it turned out all right. But the idea is essentially that subjectivity was the thing that allowed the AI to give me a good image, which I think is an interesting thing to think about if you end up constructing your own prompts. Instead of being very finicky and asking for something very, very specific, why not allow the AI to give you something back – surprise, I guess, is the best way to describe it, I don't know. This is also how I had the most fun, because then you would get unexpected results that actually unlocked a lot for the film.
So in the short film that we were doing, for example, having subjective prompting when processing an image allowed for… there's one scene in the film where one of the characters is typing on a typewriter next to a curtain. And in the pattern of the curtain, sometimes we'd pick up, like, a face or something like this. Now, I had not asked for any kind of face to be picked up, but the AI would generate something that would have a face, or the impression of a face, that would mirror the emotion of the scene that was going on at the time – which made me feel quite spooked, because I was maybe spending too long on the computer, but it was really good actually and added a lot to the film. So I think this is quite a useful principle: if the tool is operating like this, it should be seen more as a collaborative partner rather than some sort of digital slave or something like that, because it just is much more fun that way and you get better results.
So we'll go through AI in general, to help people that maybe don't know what's going on with AI. You have categories of different applications, like LLMs, the large language models. That's basically, like, ChatGPT, Claude – and there's a lot of hype around AI agents now, which is pulling from the same kind of place – where you type something and it types something back, and it's hopefully useful to whatever you're trying to deal with. Image generators: you have things like Midjourney, Stable Diffusion, Leonardo. They all have their own aesthetics now, and they're all working very, very hard to make a new image generator that makes even better images or something like this. A lot of them are closed source applications, but something like Stable Diffusion is open source. For LLMs, you can also install various instances locally on your own computer, and there are lots of people training open source models that you can just download and use, so that if you don't like giving your information to the OpenAI Corporation, then you don't have to do that. For video generators, there's things like Minimax, which is what I used to make that promo video. There's Kling, and so many others that you can end up spending a lot of time and a lot of money waiting for a thing to render and the result is no good – but we keep persevering. Some of them are good, some of them are bad.
Audio generators as well. You have something like Suno, which is good for music, and then ElevenLabs, which is good for sound effects and voice generation, I think. There are probably better ones for, kind of, directing narrative voices or things like this, but you can end up with a very robotic voiceover if you don't mess with the parameters quite a lot. But I think that's part of the aesthetic of using AI: you're going to get something robotic back on some level.
What I find interesting about all of this is that – you know, cool effects; I've got to have a cool effect if I'm making slides – there's a new thing happening every single day. It's overwhelming. You open up your social media feed or whatever, and it's like, wow, look at this cool thing I made with this new app, and things like this. And it's like, oh my days, it's happening all of the time. It can feel like a bit much to try and keep a handle on all of this development. And ultimately, it can get in the way of actually making your own ideas if you're trying to keep up with things all of the time. What I want to talk about is the idea that once you've consumed this information and you get an idea about how you can use all of these tools to make your project, then turn off the phone, because it's no good anymore and it's just going to get in the way of you actually making an actual output.
It's actually quite similar to when Bitcoin became very, very famous. There are quite a lot of parallels, which is quite funny to note. So if ChatGPT is Bitcoin: you can see how there was a big hype over ChatGPT, and it's like, wow, this is cool, this is AI, it's going to change everything. And then there's maybe a plateau, and now we're in a kind of market where there's lots of different competitor AIs, each making their own different thing. Bitcoin was going to be the new money, but then we're still using the old money, you know? I don't know how long these things will get used, or if they will become staples of the creative production process, but, you know, technology seems to be developing all of the time and there's not much we can do to stop it. So we just observe and see what we can actually use in a practical way.
There are so many different applications. You just need to acknowledge that there are so many different things, and ultimately creative people will decide if an application is good or not, because a creative person will make the thing on the platform and then turn it into something useful… or not. I would say that the creatives are more useful than the tech bros in this, because the tech bro making an application tends to be using an engineer's mindset. So then you get all of this obsession with trying to make a photorealistic output or something like that. Maybe that is useful, but it can also be quite boring. You know, it's just not interesting – we have a whole history of video production and image capture and things like this. If an application is just trying to do exactly the same thing, then how is this actually fundamentally interesting to how we develop and produce ideas? All of these applications need creative people to use them; otherwise, there's no market for them. They literally are marketing to creative people, saying please, please use this application. If you don't use it, then we don't really have anything – which is interesting, considering how sensitive the topic can be.
But the thing that is useful to acknowledge is that development will always happen, and there will always be a better app, a new model, a new thing. There's always a new iPhone now – a new phone that's got a… you can't really tell the difference that much between them. Some of them are good, some of them are bad, and then we keep waking up the same day anyway. However, it is kind of difficult to tell if something has been made with generative AI or not. Sometimes you can tell, but this is all dependent on how people view the material that they've been given. If the material has been given to them on a social media page where their behavior is to just scroll, scroll, scroll, then does it matter if it is AI generated, because it's so throwaway? I don't know. However, if you were to view that same content in the cinema, where you have a massive screen, you probably would be able to see all of the strangeness of the image and the flaws that are there. And because you've gone to the cinema and paid money for the ticket, it's a very, very different viewing experience than scrolling on your telephone.
So then how does this apply to filmmaking? Well, there's a lot of irony to it. I imagine that boring, labor-intensive jobs will become obsolete – but then new boring, labor-intensive jobs will pop up instead, because if you are managing a process and you've got a new fancy app that's going to make a solution to it, then usually there are going to be errors in the thing, and there are certainly loads of errors in AI. You talk about hallucinations: if your ChatGPT thing is making up information, or providing you false information or harmful information, then this is not very good, is it? And so you need somebody to be able to oversee it and make sure that it's not going to cause something dramatically bad. Within film, I could say RIP rotoscoping, which would be great, because rotoscoping is a very, very labor-intensive and boring job if you've done it. But, you know, rotoscoping will return – the parameters of how you are doing the rotoscoping will probably evolve, and that will be thanks to AI. Thank you, AI. For example, when I was doing my film, I was able to do rotoscoping using some sort of AI-enhanced roto brush or something like that, and AI-enhanced, like… removal… like character detection or something like this. But then this was plugging into another boring, labor-intensive process, because I was doing everything frame by frame – it was kind of the same thing. Apples and oranges, it's all good.
In film, all of the major studios are basically working so hard to try and make the most innovative AI adoption possible, so that they can make the coolest stuff. I mean, this is not a new thing. The tenter would need a technician to oversee it, to make sure the thing is working. If you use Zoom in your professional life, you probably have a Zoom person that is good at Zoom and can make sure the Zoom works and doesn't crash. People get good at an app, and then they get booked to make sure that the app works in a professional setting and doesn't crash. Yeah, none of the apps are perfect. You get funny results, or they don't do exactly what you want. So you have to think creatively about how to actually get the result that is going to unlock whatever it is that you're working on.
And this is just the natural form of life: if the market goes in such a direction where all of the studios are like, 'we've got to do AI', then a creative probably needs to learn a new software package as quickly as possible in order to meet the needs of the commissioners, whatever it is that they're looking for. But then, I don't think this is a particularly new phenomenon, because studios have been trying to innovate with computer generated imagery for years and years and years.
Oh, wait, but… one important thing there – a lot of things can be rendered with natural language. So that's quite a useful thing to just say. You can prompt something using natural language, and the natural language will render something better than if you were using some sort of convoluted prompt structure. If you are prompting images, usually the first five words that you use in your prompt will be the thing that the AI clings to the most. It will give you a stronger rendition of that than of whatever the hundredth word is, if you've used, like, 100 words. But using natural language to do this is quite cool, because it just means that you don't have to get overwhelmed with a strange application or coding language or something like that. Some examples of AI that I've seen: I had a client that wanted to change one word in a voiceover script, which was really, really annoying because they also wanted the videos, like, the next day. So I was able to clone the voiceover artist's voice and then get an AI to read the line, and I could then put that in, fix the video and deliver on time.
Premiere, which is what I do editing on, has a new generative extend thing, which is great. You just drag out a clip and it'll work, and hopefully give you an extra couple of seconds. That looks plausible, I guess. If you're an editor, I'm sure you've had many, many times where you've needed to add an extra second to a clip or something, so you maybe reverse the end of the clip or things like that. There are lots of different things that you can do, but now there's a new feature, which is cool. There's also one – I think it's called Flawless or something like that – that says it can remap the mouth of an actor so that, if you wanted to, you could redub them in a different language. Great, you can do that. I think there are pros and cons to that: it's great if you want to distribute your film in loads of different languages, but also, I mean, you don't need to do that either. There are also films doing stuff where you could film something with a standby and then slap on somebody else's face – which is great if you wanted to book a big actor but can't afford them, but maybe also has a lot of negatives in terms of performance or something like that. There are a lot of pros and cons to these things. It's just how you use them.
Getting towards the end now, but – this isn't new. People have been using technology to fake images forever. If you shoot a model and then in your film make that into the idea that this is a large thing… this has been going on forever. This is just a new thing that exists that people can take advantage of, and then use to hopefully make more interesting films. A lot of these applications are making very interesting things – or some interesting, some not – but if they all get used to make a very, very over-processed thing, then you end up with a KitKat, which is fine and tastes good, but can make you ill if you have too much of it. But it is distributed all over the world. There's an analogy here: this is an analogy for a hyper-processed studio film. It gets distributed all over the place. So if all of these studios are using all of these applications to make a super polished, AI-enhanced image or whatever, it's not necessarily a good thing, because ultimately it is the consumer at the end of the day that will decide if there is a market for an AI-generated film. I mean, people like new things, so I'm sure there will be one. You know, people still buy an M&M, and that's mass produced by all sorts of funky machines. But then somebody will also go to their artisan chocolatier, wherever they are, and buy posh chocolate, because people like choice.
I'm kind of optimistic that this won't destroy the industry or anything like that. Say there is some app where a person can type in the film that they want to watch, and the app will generate it in the style that they want, and they get a 90-minute film. Great, I guess. Will the novelty of that last for the rest of their life? I mean, people like the pomp and the circumstance of the cinema – the red carpets, the actors, the whole aesthetics of being at the movies. You have a lot of doomsters that will try and say this is going to be the end, woe is me, type thing, but… you know, it'll be all right.
Also important to note is that LinkedIn is not real life. I can get into some sort of very skittish brain mode where you turn on your LinkedIn and you get a hundred different AI videos, and it's like, ah, it's all happening, and then this guy's like, look what I was able to do with AI, AI is happening. And it's like, right. You know, it's probably good for that person, but it's just a marketing gimmick at the end of the day. And probably some of these video apps book these people to make the videos and publicize it or whatever. With a bit of detachment, it's good to see if there are new things happening. There's one app at the minute called Higgsfield, which is now doing, like, 3D camera motion, which is interesting – but then you get 100 million videos of people going, I've made this cool video using Higgsfield, and it's the same thing over and over and over again. Just because you can render something and press a button and get an output – that actually means that literally every single person using the application can render exactly the same effect and the same thing. So the value of the output is very, very low.
So everything is about creativity – seeing this as an interesting suite of tools and thinking, okay, how can this actually impact my film in a positive way, so that I can use it to get my idea made? Because I guess that's what I'm interested in: seeing how a film can get made, because it's very difficult to make a film, and if AI can help get it made to a good quality then, great. And everything is ultimately dependent on an idea. If you have a good idea, and you have a story that works within the confines of how you are delivering it, then somebody will like that. A good story – people like that. And it's just always going to be there.
And then this is a thing that I've been reflecting on after being at an event recently where there were loads of commissioners saying they wanted a pitch to have a 'comp' so they could place the idea. However, all of these people liked the more random thing that came along, the one they couldn't compare anything to. So it was this very, very ironic kind of thing. The point being: a cool new idea, enabled by AI, is the thing that I think has real value – not the AI in and of itself. The AI is just something that exists that you can use or not.
So, to wrap up, to make sure that there's enough time for questions. You know, man has been battling with technology for a long, long, long time. You have the satanic mills in William Blake's Jerusalem – you know, the industrial revolution making all the farmers go off. You've heard this analogy a hundred times: when photography came out, the oil painters were so angry, because it was like, oh no, they're just going to take photos, but… people go and develop new things, and then people get interested in them, and they exist, and if something has value and people enjoy it and look at it and like it, then that will last. This isn't a new thing, really. I would say the only things that really matter are the quality of your ideas, the quality of how you are able to execute, and then thinking about the audience: how are you going to communicate your cool idea to a wide audience and help them comprehend and understand whatever it is that you're trying to say? If you're thinking about these things, then your film is going to be fine.
Yeah, if you're doing a load of research into AI and things like that, then it's a really good idea to stop looking at the computer once you have your idea, and then just focus on execution. Because otherwise you'll see that the next day there is a new app, and then you'll be like, oh, how do I implement this app into my thing or something like that? And it's just confusing. And ultimately, because these things are happening so quickly – taking a thing down the whole production pipeline to get an actual film made takes a long, long, long time. Like, I'm shopping around this short film at the minute, and the applications and packages that I've been using are, like, a year and a half old or something like that. It can take that long to get something into a state where you can actually distribute it.
So as long as you have an idea and you can go for it, then… being able to acknowledge that development is just happening all of the time is a good way to detach from, I guess, techno-paranoia or something like this, where you've got the fear of missing out, and it's like, oh my, how am I going to put this new app into my thing or whatever? It's a good idea to say: all of these things are happening; grab a broad comprehension of what these different applications do, and then if you can use them, you can use them. I think that's the best way. Otherwise, it's just too much and it's intense.
But this kind of thing is just accessible to everybody. If I was able to teach myself on YouTube, then, great, anybody can do that. I was doing a bit at the SODA school in Manchester where there was a year-two student using the same software stack that I was using to make animations. And it's like, well, this is great. Anybody can just get an idea, fiddle about on the computer and make something.
So yeah, that's it from me. I would say the future is going to be okay. And then that's a funny picture of an AI hugging a camera. Thank you very much.
Emma Turner
Thank you. That's absolutely brilliant. We've got a really nice little selection of questions to ask you actually, which takes us to the right time.
Simon Ball
Cool.
Emma Turner
So thank you to the people who've been posting. They're self-explanatory, so I don't know if you can see them as well. But the first one is: can you give an example of what a subjective prompt is, please? Just a quick example.
Simon Ball
The subjective prompt, yeah, okay, so… I would say an objective prompt, to contextualize, is like 'a table with four legs' or something like this. But a subjective prompt would be like 'the table with four legs appears to be subliminal' or something like that. Just something that the AI hasn't really been trained to be able to answer. The AI has been trained to go, like, a table with four legs. But if you're giving it words that it doesn't know the definition of, then it can provide a kind of glitch effect that maybe gives you something interesting. Or if you've got an emotion, like anger – 'the table is angry' or something. Then, perhaps, if you give it the right framework around it, so you don't just get an animated table with a cross face or something like this, you might be able to see anger in a different way within the image that the AI gives you.
Emma Turner
Brilliant. And now completely going the other way, because I'm doing this in order, into a bit of politics.
Simon Ball
Okay.
Emma Turner
Curious about your opinion on this never-ending cycle of better and better apps, especially as companies like OpenAI and Google continue to lobby the US government to allow copyrighted material to be reused, while others lobby against it. What are your thoughts on that?
Simon Ball
I'm not a huge fan of OpenAI, because if they are a corporation that has got a stock price or whatever, and they've sold out to Microsoft, then actually the applications that they develop are for the enrichment of Microsoft stockholders – or shareholders or whatever – so it's not fundamentally interesting. They're not really, like, protagonistic heroes. They're just a company that's trying to make money.
However, the same lobbying is probably going on by Apple and all of these types of companies. Meta will be doing the same with their own version of the thing. It's just the realities of living in a market economy, where all of these companies are trying to sell a thing as quickly as possible, and they try and get as many favors from government as they can. I mean, I don't think this is new. I'm sure Microsoft, when they invented Microsoft Windows back in the day, were lobbying the government to get it installed on as many computers as possible, and we end up with literally every single government computer running an instance of Windows. So time progresses and the apps change.
Emma Turner
And I think that sort of answered the second bit of Josh's question. If we have time, Josh, I'll come back to the second part of your question. And then somebody's also asked, where does AI get its images? So this is going back to the more nitty gritty. Where does AI get its images from? Simon said it's good to prompt to exclude things like watermarks. Does AI create that image from scratch, or does it source it from the internet?
Simon Ball
So it will have a thing called a data set, which is a very, very large file, which is what it's been trained on. And then it's a massive job of categorization, so that the AI can identify what it is trying to pull from, because it is ultimately responding to text. Within Stable Diffusion, for example, if I'm using image-to-image, the process is that if I give it an initial image, it makes the image cloudy – so, blurry or whatever – and you can choose how much blur you apply to the image, and it will then apply the prompting to this blurring, so it remakes the image like that. When it's generating an image text-to-image… I mean, it is generating an image based on your words, but it is also pulling from its vast archive of images, and it has a seed number. For example, if you choose to regenerate your image with a different seed, you'll get a completely different image based on exactly the same input. But if you call back the same seed number, then you will see exactly the same image. So it's kind of more like a library or something like this, but a creative interpretation of it.
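That seed behaviour is easy to demonstrate in code. A small sketch, again assuming the diffusers text-to-image pipeline from the earlier examples (prompt and seed values are illustrative):

```python
# Demonstrating seed determinism: the same prompt, settings and seed
# reproduce the identical image; a new seed gives a different one.
# "pipe" is the StableDiffusionPipeline from the earlier sketch.
import torch

def render(seed: int):
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe("a table with four legs", generator=generator).images[0]

img_a = render(42)
img_b = render(42)  # pixel-identical to img_a
img_c = render(43)  # same prompt, completely different image
```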
Emma Turner
Brilliant. Okay. In your short – this is the next question – were you editing your film's timeline in Stable Diffusion, or were you using SD – Stable Diffusion – to get the AI-generated footage and then editing in Premiere, Final Cut, etc.?
Simon Ball
We shot normally and then I used Premiere to edit. So we just shot an action in real life, and then I took the footage into Premiere and cut the video. Then I took it into After Effects, then took the individual frames into Stable Diffusion, and then went all the way back. I then went all the way back again, because I needed to upscale the image, and Stable Diffusion is good at that as well.
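To make that round trip concrete, here is a hedged sketch of the batch step in the middle – stylising an exported image sequence frame by frame with image-to-image. This is our illustration of the workflow Simon describes, not his actual pipeline; the paths, prompt and settings are hypothetical:

```python
# Stylise an exported frame sequence with img2img, one frame at a time.
# The results go back into After Effects / Premiere as an image sequence.
from pathlib import Path

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

out_dir = Path("stylised_frames")
out_dir.mkdir(exist_ok=True)

for i, frame_path in enumerate(sorted(Path("exported_frames").glob("*.png"))):
    frame = Image.open(frame_path).convert("RGB").resize((512, 512))
    out = pipe(
        prompt="imaginary prison, Piranesi etching, dramatic light",
        image=frame,
        strength=0.45,  # low enough to keep the actor's movement readable
    ).images[0]
    out.save(out_dir / f"frame_{i:04d}.png")
```

Each frame is processed independently, which is exactly why the output flickers – the inconsistency Simon chose to fold into the aesthetic rather than fight.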
Emma Turner
Great. Another really good question. Do you feel that doing so well in these AI festivals has brought you more queries – basically more work and more interest in your work? And have companies who haven't really been grasping tech shown more interest in you? Has it been good for your career, Simon?
Simon Ball
I think it's been good for my career. I mean, I've not been submitting it to AI film festivals, because they tend to want something that's been generated… like, I don't know. The politics of AI film festivals is very long and uninteresting, but… we've been getting it into, let's say, normal film festivals, which has been exciting. That was what I wanted to see: could I benchmark this against normal production, and is there value to this? I think that's more useful for people, to understand if there is something to it or not.
In terms of my personal creative career, yeah, great. I've been able to go to film festivals and have my work displayed and things like this. I get to do this type of webinar or whatever. Does it lead to corporate clients? Not really. I'm kind of busy looking after my daughters, and that takes up more time than I have. I've tried to drum up business or whatever, but the plight of being an independent film director is just not going to change from being one of fiscal annoyance.
Emma Turner
Cool. Good question again. What are your concerns about the environmental impact of AI, i.e. how much energy consumption it takes to generate an idea/prompt?
Simon Ball
It depends how you're generating your ideas, I suppose. I was making my film entirely on my own computer, which was pulling energy from – it was Ecotricity at the time. Yeah, a supplier I wouldn't necessarily go back to, for a variety of reasons, but that's just my own computer plugged into a main socket on the wall. Open source, installed locally. So that's exactly the same kind of power draw as if I was doing some sort of mad render on any other project using any other technology.
In terms of, yeah, going to an internet thing where it's video generation and it's distributed across a lot of different servers – I don't see how that's sustainable. If you're going to do, like, an OpenAI Sora that's distributed to literally every person in the whole entire world, then that's a lot of compute for not a great result. I think people, once the hype has died down a little, will realize that maybe there are more efficient ways of looking at the issue.
Emma Turner
Cool. Penultimate question, I think. And it's a great question: for a new AI user, which application would you suggest having a play around with? What would be your entry point?
Simon Ball
Midjourney is probably the 'funnest' one, or maybe the one with the easiest interface. With Midjourney, you just type in the prompt and you get the image, and then you can upscale. It gives you a set of four and you go, I like that one, and then you upscale that one, and then you can take that into whatever application you want to use next. Like, I can make posters with the images and stuff like this – stuff I wouldn't be able to do otherwise – and I think that's good. ElevenLabs is pretty good if you need audio things. Suno is good for music generation. But with a lot of these things, once you've used them for a while, you can see the patterns of how they generate things. The novelty wears off, and maybe they're of variable long-term use, but they're quite fun to mess about with. I will always be a proponent of Stable Diffusion, because it is free, and if you just have a bit of time to spend on YouTube watching tutorial videos, then you can install it on your own computer and do whatever it is that you want, without having to pay for a subscription elsewhere.
Emma Turner
Brilliant. And in fact, that is the last question, and it's a great one to end on. So look, I'll thank Simon properly in a minute. I just want to do a little bit of a wrap-up first.
For those that don't know ScreenSkills, or maybe it's the first time for them: please go on to our – well, you're on our platform, kind of – but if you go back onto our main platform, have a look at what else we have on offer. I would suggest signing up to all our newsletters. Then you'll find out about our training, our programs, our e-learning modules. Nearly everything – like 99.9% of what we offer – is entirely free, and we're there for you. So please, please, you know, make the most of it. As I said right at the beginning, we're here to make sure that we're the best screen workforce in the world.
Anybody on this webinar will get a feedback form – you might have had it already, or it will come as soon as we've finished. Please fill it in. It really, really helps us to be able to feed back to the industry – its production companies making films and shows, who give us money to be able to do all this sort of stuff. So it's really important to us that we know what you think. And, you know, if there's free text, please make suggestions of what else you'd like to hear.
And in saying that, I'm just going to say a massive thank you to Simon and wish everybody a really sunny weekend. Fingers crossed. Well, sunny Saturday, I think Sunday might be a bit meh. But anyway, thank you so much, Simon. It's been a real pleasure and thank you to Matt and Katie who have been in the background making the wizardry work. So, thank you everyone.
Simon Ball
Yeah, thanks for having me and thanks everybody for listening to me bang on instead of being out in the sun. Thank you.
Emma Turner
Yes, exactly. Bye, bye everyone.
About our speaker
Simon Ball is a filmmaker from Clapham, London. Since graduating from university he has worked at major motion picture studios, small production houses, his own studio, national political campaigns, farmers' markets, film festivals, disaster response and everything in between. He managed to beat the BFI and Lucasfilm in an Oscar-qualifying short film competition with a short film made using a newly invented technological process, whilst juggling his 6-month-old daughter. His follow-up short is now doing the rounds at other festivals while he produces videos for national health organisations and cultural bodies in between trips. He also organises the Clapham International Film Festival, and will produce a feature during the summer to pass the time.