“There are things known and there are things unknown and in between are the doors of perception.” — Aldous Huxley
I’m Huxley Westemeier (’26), and welcome to “The Sift,” a weekly opinions column focused on the impacts and implications of new technologies.
______________________________________________________
OpenAI would like you to believe that Sora 2 is the next big leap in imaginative storytelling. But is it?
You simply enter a few words, and it generates a ten-second cinematic video complete with lighting, movement, and synchronized sound. Entering “SpongeBob wandering through Times Square at night” spits out footage that is convincing at first glance. Take a look at this official YouTube video from OpenAI if you’re curious.
Once the novelty wears off (and trust me, it does quickly), harder questions surface about the training data and the copyright implications. The clips are smoother and much sharper than Sora 1’s, and OpenAI says it has built safety layers to promote “responsible” use, but I don’t buy it. Its press release promises the tool will help creators “push creative boundaries.” That’s wonderful press-kit language, but it’s hard to take the vision seriously when the product immediately starts churning out copyright violations.
Within hours of last Tuesday’s launch, the invite-only Sora app was flooded with every format of AI-generated content imaginable. I’ve seen clips of SpongeBob wearing a Nazi uniform, Pikachu robbing banks, an unsettling amount of Epstein and Michael Jackson content, and alarming clips of Stephen Hawking being forced into a wrestling arena (extremely inappropriate and disrespectful).
How is this allowed?
404 Media called Sora 2 a “copyright infringement machine,” and I agree. OpenAI’s safety filters have clearly failed to stem the flood of celebrity deepfakes, and the fact that the tool will generate graphic, extreme content is a significant red flag. It’s worth noting that OpenAI has not disclosed its training data sources. In my opinion, it likely scraped public video platforms such as YouTube and other social media sites. Your face might already be in the model somewhere.
This is where the “responsibility” argument completely falls apart. OpenAI didn’t start by asking permission. It’s using “opt-out” logic: your work or likeness might already be included unless you proactively blocked it. The company has since backtracked (as of Oct. 7), promising more control for copyright holders, but the underlying approach still strikes me as deeply unethical.
The Motion Picture Association has already condemned Sora 2, accusing OpenAI of “systematic disregard” for ownership, and various celebrity estates (including Robin Williams’s) are rightfully distressed that their loved ones’ likenesses are being digitally resurrected for heinous content.
I find Sora 2 to be an unsettling product. The videos are already convincing enough that it’s hard to remember they’re fake. As I’ve said about AI image-generation tools like Midjourney in the past, they replace imagination with fake, repetitive content of their own. Sora CAN make anything. But if stolen data and entirely artificial content are the future of storytelling, it might not be a story worth telling.
Once fake looks real, does the truth even matter?