Synthetic Media and the Death of Film

Four years ago, I wrote a short piece titled “Synthetic Media and The Future of Film”. The term “synthetic media” might seem odd now, but 2022 was so long ago that we hadn’t yet agreed what to call this stuff. Something else we most certainly hadn’t agreed on was what the near-term future looked like, let alone anything multiple years away. Many people confidently predicted that AI in its various forms (LLMs took their fair share of the scorn as well) was nothing but a fad. They believed it would only ever be useful as a tool for people who were conveniently already established in whatever field was being discussed, in this case the arts. They went online and cried about soul and AI art’s supposed lack thereof, which has since developed into histrionics about “slop” everywhere you look on the internet. Many were absolutely sure that AI wouldn’t turn out to be anything more impactful than NFTs. Meanwhile, I tried my own hand at predicting the future - with far better results.

Before we continue, it might be a good idea to quickly look over the history of image models. Obviously, we’re not going to cover every important model (or we’d be here all day), but a brief overview will be handy. To most people, it probably feels like this technology popped up out of nowhere sometime in 2023. In truth, what we’ve come to call generative media has been around for a while in various forms.

In 2014, GANs (Generative Adversarial Networks) were invented. Computers had been able to identify and categorize images for many years before then (a 2012 model called AlexNet was especially important to that part of the story), but this was different. By pitting two neural networks against each other - one generating images, the other judging whether they look real - GANs were able to create novel images for the first time. They were grainy and small, frankly unimpressive if you didn’t already know what you were looking at, but they were new images. This was the moment the research community realized the potential of the technique and decided to push.
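
The adversarial trick is simple enough to sketch in a few dozen lines. Here’s a minimal, illustrative PyTorch version - the random “real” data is a placeholder (an actual run would load real photographs), and the tiny networks bear no resemblance to a serious architecture:

```python
import torch
import torch.nn as nn

# Placeholder "real" data: 64 flattened 28x28 images.
# (An actual run would load a real image dataset here.)
real_images = torch.rand(64, 28 * 28)

# Generator: turns random noise into a fake image.
G = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Sigmoid(),
)
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(1_000):
    fake_images = G(torch.randn(64, 100))

    # Discriminator step: learn to call real images "1" and fakes "0".
    d_loss = bce(D(real_images), real_label) + bce(D(fake_images.detach()), fake_label)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator call fakes "1".
    g_loss = bce(D(fake_images), real_label)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two networks improve in lockstep: every time the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones.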

One year after that, in 2015, Google released something called DeepDream. You might even remember this if you’re old enough - horrifying images with dogs hallucinated into every square inch of the frame. The technique worked by running a trained image classifier in reverse, of sorts: nudging the input image to amplify whatever patterns an internal layer already detected in it (dogs are heavily represented in the ImageNet training data, hence all the dog faces). Some people even applied it to videos, producing some truly insane visuals. This was when the technology went viral, capturing the attention of the entire internet. Of course, the vast majority of people dismissed it as a toy - and perhaps they were justified in doing so. The artistic potential of AI wasn’t exactly obvious at the time, but the researchers knew what was possible and kept pushing.
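
The whole thing boils down to gradient ascent on the image itself. Here’s a rough sketch using torchvision’s pretrained GoogLeNet (the same family of network the original used) - the layer choice, step size, and step count are arbitrary picks of mine:

```python
import torch
from torchvision.models import googlenet, GoogLeNet_Weights

model = googlenet(weights=GoogLeNet_Weights.IMAGENET1K_V1).eval()

# Capture the activations of one intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(out=output)
)

# Start from noise; starting from a photo gives the classic "dream" look.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for step in range(50):
    model(image)
    # Gradient *ascent* on the image: amplify whatever the layer sees.
    activations["out"].norm().backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0.0, 1.0)
        image.grad.zero_()
```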

2017 was the year of deepfakes, and you almost certainly know what these are. Popularized by a Redditor who called himself “u/deepfakes”, this technology gave everyone the power to face-swap any image or video they wanted. It was met with incredible backlash, hinting at the arguments over AI art that would arrive only a few years later, yet the underlying technique was soon embraced by Hollywood and smaller creators alike.

StyleGAN was released by Nvidia in 2018, and it broke the internet. Websites like “This Person Does Not Exist” went insanely viral, with people struggling to believe the faces on their screens didn’t actually belong to anyone. Image editing was also now possible, with websites offering the ability to upload an image and move sliders to edit the details. Hair color, eye color, masculinity, emotion, and nearly every other point of control you could imagine - all possible. Not to oversell things - the quality wasn’t phenomenal by any stretch of the imagination - but it’s fair to say this might have been the true moment everything changed. From this point on, there was no going back.
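
Those sliders, by the way, are just vector arithmetic in the model’s latent space: each attribute corresponds to a direction, and dragging the slider moves the latent along it. A toy sketch of the idea - the generator stub and the “smile” direction below are placeholders, since real attribute directions are learned from labeled examples:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def generator(latent: np.ndarray) -> str:
    # Stand-in for StyleGAN's synthesis network, which maps a
    # 512-dimensional latent vector to a photorealistic face.
    return f"<face #{hash(latent.tobytes()) % 10_000}>"

latent = rng.standard_normal(512)            # one random, nonexistent person
smile_direction = rng.standard_normal(512)   # placeholder; learned in practice
smile_direction /= np.linalg.norm(smile_direction)

# The "smile slider": same identity, nudged along one direction.
for amount in (-2.0, 0.0, 2.0):
    print(amount, generator(latent + amount * smile_direction))
```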

Then, in 2021, it happened. OpenAI published their report on DALL-E, the first major transformer-based text-to-image model. Looking at the images now, they’re laughably quaint. At the time, however, they were nothing short of incredible. There was no true moment of virality for DALL-E - the internet had bigger things to pay attention to - but I remember exactly where I was when I saw those images for the first time. I considered the pace of image models over the preceding few years and realized it would only be a few more until the technology was actually useful. When I tried to explain this to people, I was told that CGI already exists, that computers have been doing this since the 1990s, and that I shouldn’t obsess over it. Needless to say, such sentiments would not age well.
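
The report’s central move was to treat pictures as just more language: compress an image into a grid of discrete tokens, then have a transformer generate the caption tokens and the image tokens as one long sequence. Here’s a toy version of the decoding loop - the model is a random stub, and the sizes only roughly follow the paper:

```python
import torch

TEXT_VOCAB = 16_384   # sizes roughly follow the DALL-E report
IMAGE_VOCAB = 8_192   # discrete VAE codebook entries
IMAGE_TOKENS = 1_024  # a 32x32 grid of image codes

def model(sequence: torch.Tensor) -> torch.Tensor:
    # Stub for the autoregressive transformer: given the sequence so
    # far, return next-token logits over the image codebook.
    return torch.randn(IMAGE_VOCAB)

sequence = torch.randint(0, TEXT_VOCAB, (32,)).tolist()  # the tokenized caption

# Generate the picture one token at a time, exactly like predicting
# the next word of a sentence.
for _ in range(IMAGE_TOKENS):
    probs = torch.softmax(model(torch.tensor(sequence)), dim=-1)
    next_code = torch.multinomial(probs, 1).item()
    sequence.append(TEXT_VOCAB + next_code)  # image ids live after text ids

# The final 1,024 image codes would then be decoded back into
# pixels by the discrete VAE's decoder.
```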

Now we come to 2022, the year AI image generation caught fire. DALL-E 2 was announced on April 6th of that year, and nothing has been the same since. For the first time, it was plainly obvious that AI models were no longer pitiful at their jobs - they now posed a real threat to human artists. The internet promptly split into two camps, pro-AI and anti-AI, and the artists found themselves squarely in the latter. They confidently declared that AI art was low quality, born from theft, and a fad like NFTs. Some even proclaimed that AI art would be dead by 2024. The internet was quite beside itself, having to deal with something truly new for the first time in many years. Meanwhile, I was focusing on the future.

(It’s going to be hard to write this section without seeming too braggadocious, and I completely acknowledge that I was hardly the first person to have any of these ideas, but I think I’ve earned a little bit of “told you so” by now.)

On May 24th, 2022, I posted the aforementioned “Future of Film” article on LessWrong. I won’t dwell on it too long, since it isn’t terribly important in itself and serves mostly as an artifact of my thinking at the time, but it’s worth going over briefly.

My thesis was simple: image models had followed a predictable path of increasing quality and were likely to keep improving until they achieved perfection. If we assumed the same growth path for video models (which were in their infancy at the time), we were likely to see photorealistic video generation within several years. I had been paying close attention, and that was the trajectory that made the most sense to me. My most conservative guess was that it would take 6 years for video to effectively be solved, which put the date at 2028. As of this writing, that is still my prediction.

To clarify, I fully believe people will be making their own films by either later this year or early next year, 2027. The questions of “can I make a movie” and “is this literally perfect” are not the same. Even in 2026, when you can use an image model to make essentially whatever picture you want, the image problem itself has not been completely “solved”. Funnily enough, I would guess that images get solved around the same time videos do.

Over the following few years, image and video models continued to ship and improve. The internet arguments continued, and the world kept spinning. I spent those years repeating myself about the path AI was on and what would soon be possible, but very few people were convinced. For a long time, it was an oddball opinion that AI models would take over the arts completely. “Maybe in 50 years” was the most generous take you could commonly find online. Everyone was so sure that it simply couldn’t be done, that the world would remain “normal” for the rest of their lives. Things couldn’t actually change, right? Nothing ever happens, right? It’s just a nothingburger, that must be it!

Of course, now we find ourselves in 2026 - looking down the unsympathetic barrel of a loaded gun that promises to completely upend everything we hold dear about artistic expression. Ten-year cinematic universe plans and TV show release schedules reaching into the 2030s seem almost comical now. The mainstream opinion has shifted from “maybe in 50 years” to “AI is going to kill Hollywood”. This shift was always going to happen - people typically aren’t actually delusional, and most can accept something as true once the evidence is right in front of them - but it is oddly frustrating.

When I said this would happen 4 years ago, I was dismissed. Now, if you ask around, people will tell you that they always knew it was going to happen and that it was super obvious to them. Suddenly, everyone seems to have been under the impression the whole time that AI movies were inevitable before the decade was out. The only issue is that I’m old enough to remember 4 years ago, and they’re lying.

Perhaps this seems like I’m angry. “Why didn’t they listen? Why don’t they praise me?” In truth, I don’t actually care for acknowledgement or anything like that. The reason I bring this up, and indeed the reason I decided to write this post at all, is to explain why this behavior is concerning.

The conversation about AI movies naturally tends to shift into what comes next. I don’t know for sure what’s ahead of us, but video games being the next thing to fall seems certain enough. Of course, when this is brought up, the usual argument begins again - it will never happen. Nothing ever happens, right? But as sure as the sun will rise tomorrow, when AI video games take over, everyone will simply say they knew all along and that it was always obvious.

The reason this is concerning is that it shows the general public has very poor prediction skills. It highlights how badly people process exponential change. They retcon their beliefs and easily convince themselves they knew all along, which instills a false sense of confidence in their forecasting ability - after all, the future they “knew” was coming did indeed come to pass. But that confidence is built on a fiction, and it leaves them unable to meaningfully engage in conversations about what happens next. Real predictions about the future are hard to make, and they cannot make them.

And of course, they didn’t see it coming. They’re remembering a fictional version of themselves - one who paid attention and made sense of things. They take this imagined track record, a list of perfect predictions if there ever was one, and look at the world of today. “If I was right before, I must be right now!” They look at their jobs, the economy, their own role in society, and feel safe - for they don’t feel threatened today, and their track record is perfect! They don’t currently predict there will be danger, so there surely will not be.

Talk to people. Ask them about AI. Ask them to tell you about the future of jobs and the impact AI will have on the market. You will get the same dismissive shrugs I got 4 years ago. “That sounds a little silly,” they assert, confident that the world never really changes all that much anyway. They’ll say we’ll always need a human touch for medical fields, and nurses are safe. That we’ll always need human judgment for law, and attorneys are safe. That we’ll always need CEOs and janitors and everyone in-between. That nothing will change. It hasn’t happened yet, so it won’t.

I wrote about this very phenomenon in my post “Update All The Way”:

You can think of it like two men wanting to walk across a street, but seeing a car heading toward them. The first man says, “Just because the car has moved 50 feet in the time since I first looked at it doesn’t mean it’s going to reach the crosswalk! It would have to go at least ten times further than that to get there! That seems unlikely to me, as I am a very smart person and know that cars are heavy and hard to move. It will probably stop after another two or three feet, so I’m all good to walk into the street.”

The second man says, “Oh wow, that car sure covered that 50 feet rather quickly. If it keeps that pace up, it’ll be here in ten seconds or so – even faster if it speeds up. I think it’s probably smart to not step into the street.” And so, the second man stays on the sidewalk while the first man proudly and confidently steps out onto the street, only to be struck dead by the speeding car. His inability to see the issue at hand and update all the way turned out to be a fatal mistake. Unfortunately, it seems to me as though most people would find themselves squarely in the position of the first man – unable to see the inevitable outcome.
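
For the pedants: the second man’s math is nothing fancier than constant-velocity extrapolation. In code, with the one-second observation window being my own assumption, chosen to reproduce his “ten seconds”:

```python
feet_covered = 50.0     # how far the car moved while he watched
seconds_watched = 1.0   # assumed observation window (the parable doesn't say)
feet_remaining = 500.0  # "ten times further" than the 50 feet already covered

speed = feet_covered / seconds_watched   # feet per second
eta = feet_remaining / speed             # seconds until the crosswalk
print(f"The car arrives in ~{eta:.0f} s - sooner if it's accelerating.")
```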


We have already lived through the last normal year; 2024 has come and gone. We are drifting toward the most transformative and important event in the entire history of our species - and almost nobody can see it coming. The small group online who try to warn everyone else are mocked and ridiculed, only for their positions to become mainstream opinion years later, once it’s too late to stop anything. Unfortunately, it doesn’t do much good to agree that the car stood a good chance of hitting you after you’re already bleeding out on the pavement.

The world you know is coming to an end, in more ways than one, and it doesn’t care whether you believe that. Kick and scream and bury your head in the sand, desperately reassuring yourself that the world will always make perfect sense - it doesn’t matter. Reality itself is about to fracture under the weight of the exponential, and I told you so.

2/15/2026