Put simply, Artificial Intelligence (AI) is a goal. It’s an ambition to program machines and software to behave in a way that seems human-like or ‘intelligent’.
Rather than simply obeying instructions, AI systems aim to reason, learn, communicate and make decisions – mimicking the kind of traits we associate with humans.
The concept of a machine behaving like a person has been around for a very long time, but the term ‘AI’ was coined in the 1950s. The original AIs were simply hard-coded logic or rules. They could play games like chess or perform simple tasks. They followed fixed instructions and didn’t learn or improve themselves.
In the 1990s, machine learning concepts started to be applied, and they soon flourished (Fradkov, A. L., ‘Early History of Machine Learning’, 2020). Instead of programming every rule, machines were given sets of data and programmed to train on (or learn some aspect of) that data. They did this by looking for and remembering patterns in a dataset, then comparing new data against the patterns they had learned to recognise. For example, they became able to detect speech in audio or spot a face in an image.
In the 2010s, access to the internet and large swathes of ‘deep’ data coincided with developments in hardware, delivering computer memory and processing capabilities like never before. This allowed AI training datasets to grow to a huge scale. With this increased capacity, researchers developed more sophisticated training methods and algorithms, such as deep neural networks.
Today, in the creative industries, we are seeing systems that behave in increasingly human-like ways. The current challenge for technology developers lies in more complex problems and reasoning: even with the world’s best computing, greatest algorithms, largest training datasets and plenty of time, these systems still don’t perform like humans in a general sense. The pace of progress is rapid and accelerating, and helping freelancers in high-end television develop new skills is key.
There are two processes worth understanding before reading about the AI skills in HETV and film: Training (learning) and Inference (doing).
Training is when the creators of the AI teach the program how to carry out a task. They will give the AI access to large amounts of relevant training data, like a complete location photo library and all of the tags and metadata associated with the photos. Normally, the larger and more accurate the dataset, the better the AI will be at learning and performing the future task (inference). Some AI systems can learn every time they perform their task, as they add to their dataset and their training is updated as they go.
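To make this concrete, here is a minimal sketch of the training step in Python, using the scikit-learn library. The photo ‘features’ and tags below are invented for illustration; a real system would extract features from the images themselves.

```python
# A toy 'training' step: the model learns to associate patterns in
# photo features with the tags humans gave those photos.
# Feature values and tags are invented for this sketch.
from sklearn.neighbors import KNeighborsClassifier

photo_features = [
    [0.9, 0.1],    # e.g. brickiness, greenery (made-up measures)
    [0.2, 0.8],
    [0.85, 0.15],
    [0.1, 0.9],
]
photo_tags = ["urban", "rural", "urban", "rural"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(photo_features, photo_tags)   # the training (learning) step
```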
Inference is when you give a trained AI a specific task to carry out, and the program will try to complete the task, giving back some kind of result. So, for example, you might show the trained AI a new location and say ‘find me all the other locations in this library that are similar’ – that is inference. Typically, AI inference is run on cloud servers hosted by tech companies (like OpenAI or Google). However, if you run an AI inference process on your own hardware, and the app doesn’t upload anything to the internet, it can be completely confidential.
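Continuing the toy example, inference might look like the self-contained sketch below: a nearest-neighbour search in scikit-learn that finds the library photos most similar to a new location. All values are invented.

```python
# A toy 'inference' step: find the library photos most similar
# to a new location. All feature values are invented.
from sklearn.neighbors import NearestNeighbors

library = [
    [0.9, 0.1],    # photo 0
    [0.2, 0.8],    # photo 1
    [0.85, 0.15],  # photo 2
]
index = NearestNeighbors(n_neighbors=2).fit(library)

new_location = [[0.8, 0.2]]                          # the new photo
distances, matches = index.kneighbors(new_location)  # inference
print(matches[0])   # indices of the two most similar library photos
```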
Using AI as a time-saving tool has the potential to be very helpful, but it also carries some risks.
Risks to be aware of:
Production policy – Some studios and broadcasters have banned the use of all AI in the creation of a production. This normally means creative and generative AI, the kind that could create an entire video clip. It is always best to check with your production team what you are and are not allowed to do on this specific production, so you don’t get caught out. Most are now producing guidelines for acceptable use and some will give you a list of AI apps which are approved for use. Remember, this field is changing every month so ask for the latest version of guidelines on each new production.
Protection of data – If you ask an AI assistant a question, or send a script, images, photos or your location libraries to an AI, you might be giving it permission to learn from that data (AI training). As a freelancer, you might also be giving it permission to distribute that data or information to others. Even if you are allowed to use AI in a production, you might fall foul of confidentiality or IP protection clauses. So be careful what online tools you are using.
Some AI tools detail how they will use your data, and often, paid or premium plans have greater levels of confidentiality.
Although it is more complex to set up, you can also host completely private AI models on your own hardware. You can safely carry out custom training with your own datasets, and ask questions or set tasks in a confidential environment.
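As an illustration, the sketch below runs a small, openly available model entirely on your own machine using the Hugging Face transformers library (one possible tool among many; the model name is just an example). After the one-off model download, nothing is sent to the internet.

```python
# A minimal sketch of running an AI model locally. The model named
# here is a small, openly available summarisation model; swap in
# whichever local model suits your task.
from transformers import pipeline

summariser = pipeline("summarization",
                      model="sshleifer/distilbart-cnn-12-6")

notes = ("INT. WAREHOUSE - NIGHT. Rain hammers the skylights. "
         "Two unit vans are parked by the loading bay, and the gaffer "
         "is rigging practicals along the mezzanine.")

# Summarise confidentially: the text never leaves this machine.
print(summariser(notes, max_length=40, min_length=10)[0]["summary_text"])
```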
Thinking and reasoning – AI tools can be great at sifting and organising large quantities of data, but consider for a moment whether you need to understand ‘the why’ behind a decision. For example, an AI might be able to create a script breakdown for you, but what if things change rapidly or at the last minute? Will you have enough understanding to adapt? In these fast-moving situations, having done the ‘hard work’ with the script can be of great value, and enable you to solve the problem quickly.
AI can be very helpful in reorganising and presenting information in different ways, but to make great decisions in the moment, it is important to have the reasoning behind the breakdown clear in your mind.
Errors and bias – AI systems can make mistakes, just like we do. You will still need to check your work carefully for errors. The data an AI is trained on, and the design of its algorithm, will also skew the results. For example, if you trained an AI only on images of locations with one style of red brick, it will always produce red-brick buildings in its answers.
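Here is a toy sketch of how one-sided training data forces one-sided answers (all values invented):

```python
# A toy sketch of data bias: if every training example carries the
# same label, the model can only ever answer with that label.
from sklearn.neighbors import KNeighborsClassifier

features = [[0.7], [0.8], [0.75]]   # made-up image features
labels = ["red brick"] * 3          # one-sided training data

model = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(model.predict([[0.1]]))       # -> ['red brick'], regardless
```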
If you haven’t trained an AI yourself, you might not be aware of the kind of bias it has. AIs are now trained on datasets so large and ‘deep’ that no one really understands exactly what they are learning from. So again, check to make sure you’re happy with the result.
If an AI hasn’t been given the right information, it can’t give you the right answer.
Training sources – Most trained AI systems available for common use have been trained through ‘deep learning’ on large sets of data scraped from the internet. In the past, this data has been taken rather indiscriminately. There is now a UK law covering this (see text and data mining, or TDM) for non-commercial research, while commercial use of copyrighted material is protected with an ‘opt out’. However, abuse of this is very difficult to prove, and there are many ongoing lawsuits around the world on this topic. It’s also worth noting that Europe, China and the US take slightly different approaches. Expect regular changes in these areas as governments and legislation catch up with rapidly changing technology capabilities.
The EU AI Act is a significant piece of legislation that affects many of the big tech companies and has a global impact. It is coming into force gradually, in phases, with the next phase happening right now (Aug 2025). Some of the large tech businesses, such as Meta, are pushing back against parts of it.
It is worth considering whose voices, visuals and words we are reusing with AI. Are you respecting the original creators whose work trained this AI?
Machine learning vs deep learning – Nearly all of the concerns focus on AIs trained with the ‘deep learning’ approach, using massive datasets. The concerns centre on where that data came from and whether it was taken with permission. But there are also many systems trained on completely acceptable datasets, used or given with full consent.
Some of these smaller, permitted datasets can be quite modest. The algorithms that work with them are typically more ‘niche’, perhaps targeting only one specific capability, and often use ‘machine learning’ techniques. These kinds of systems (although technically still under the ‘AI’ banner) are far less controversial; they are used for things like face detection and object tracking, and have been part of film and TV workflows for decades without concern.
Examples of deep learning AI tools: Firefly, Runway, Midjourney and Google Veo 3 (via Flow)
An example of a machine learning AI tool: Adobe After Effects Content-Aware Fill
Tracking AI usage
Whilst there is no unalterable way of proving that a camera captured a specific piece of content, it is now possible to track recorded content with embedded metadata through the life cycle of a production. Systems like the C2PA standard (Coalition for Content Provenance and Authenticity), supported by the Content Authenticity Initiative (CAI), are enabling this to happen.
We have a responsibility to be clear about how something was created, and complying with requests from broadcasters, streamers, studios and distributors in this way is crucial.
This can be challenging when working across multiple productions: one might happily use every tool available and be content with generative AI being used across the board, while another might want the original content preserved wherever possible, with generative AI tools banned in every instance.
Freelancers must ensure they comply with policy guidelines. Trust is very important, and how something was made or done could become the focus of a future court case, with a gross misconduct charge and/or your job on the line.
There are many ways to try and stop AI systems from successfully training on your data. But the best, and only sure, way is to keep your content private, or behind some kind of digital security, restricting access to trusted people only.
If you feel the need to share some content publicly, there are ways to try and protect it, but they all have vulnerabilities. You could consider ‘stacking’ lots of techniques together, and that can often be more powerful.
Although it is very visually obvious, you could simply watermark the content with visual barriers, or only publish thumbnails or low-resolution/highly compressed versions for public display.
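For example, here is a minimal sketch, using the Python Pillow library, of preparing a low-resolution, watermarked preview before sharing an image publicly (file names are placeholders):

```python
# A minimal sketch of making a watermarked, low-resolution preview
# of an image. File names are placeholders for this illustration.
from PIL import Image, ImageDraw

img = Image.open("location_photo.jpg")

# Shrink to a thumbnail so the full-quality original stays private.
img.thumbnail((640, 640))

# Stamp a simple visible watermark across the preview.
draw = ImageDraw.Draw(img)
draw.text((10, 10), "PREVIEW - NOT FOR AI TRAINING", fill="white")

# Save with heavy JPEG compression to further reduce its usefulness.
img.save("location_photo_preview.jpg", quality=30)
```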
There are various tools available to apply a ‘poison’ that obscures the style or content of images or video from AI scrapers. This is a bit like a watermark for AI: the poison subtly adjusts the image in a way that is imperceptible to humans but very misleading to AI training. It is technically possible to undo this poison, so it isn’t foolproof (although currently removing the poison is very difficult to do!). Examples of this are Nightshade, Glaze and DIY image degradation.
You could also consider adopting workflows like the C2PA standard and CAI, which allow creators to embed metadata into content. These enable teams to work on and edit images and video whilst keeping track of changes, to prove authenticity and the source of the original content. You can attach or embed metadata in other ways as well, like ‘NoAI’ tags. You can also use a robots.txt file and ‘noindex’ directives to ask search engines and AI crawlers not to index or scrape your website, as shown below.
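As an illustration, a robots.txt file like the sketch below asks some well-known, documented AI crawlers not to scrape a site. Note that compliance is voluntary: this is a request, not a lock.

```
# Example robots.txt entries blocking known AI crawlers.
# GPTBot (OpenAI), Google-Extended (Google AI training) and
# CCBot (Common Crawl) are real, documented bot names.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```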
And finally, you can make the law work in your favour. Some countries have opt-out clauses for AI training data collection. Make sure you visibly state alongside any content that you do not permit its use for AI training, and don’t forget to clearly assert your copyright.
Quick AI checklist:
A quick ‘checklist’ summary of things you might want to consider before you use AI in a project.
- Do I have permission to use the data I’m sending to an AI?
- Who owns the output that will be created?
- Is my input data stored or reused?
- Was this AI trained ethically (with consented or licensed material)?
- How might this use impact creative jobs?
- Is the output accurate, verifiable or likely to cause reputational risk?
- Could the output unintentionally imitate someone else’s work?
- Am I being transparent about how this AI was used?
- Does this use align with my employer's or customers' policies?
Links to other ScreenSkills resources
Explore more AI-related training, events and opportunities with ScreenSkills
About this document
These AI sheets were written by a human, Phil Adlam. Phil is the Chief Technology Officer at Production Park. They were written alongside research and collaboration with the high-end television industry in various departments. ChatGPT was used for grammar amendments.