“There are things known and there are things unknown and in between are the doors of perception.” — Aldous Huxley
I’m Huxley Westemeier (’26), and welcome to “The Sift,” a weekly opinions column focused on the impacts and implications of new technologies.
______________________________________________________
Two weeks ago, during the height of the Palisades and Sunset fires in Los Angeles, California, my social media feed (mainly Instagram Reels) was filled with videos and photos from the fires’ aftermath. But amid the heartbreaking footage, photos and reels of the famous Hollywood sign in the Hollywood Hills engulfed in flames appeared repeatedly. It is unmistakably a powerful visual, but also a completely fake, AI-generated one. That becomes evident once you look at the details: the flames seem oddly fluid and unnatural, and the letter spacing and font don’t match the actual sign. According to maps of the fire’s spread, the Hollywood sign was never on fire and never at risk of bursting into flames. Yet if you go on Instagram now, two weeks later, and search ‘Hollywood Sign,’ you’ll see posts commenting on the tasteless fake ‘burning’ images. Scroll past the first few results, however, and the AI-generated videos are still available.
Why do I mention this? Because on January 7th, 2025, Meta (the company behind Instagram, Facebook, Quest VR headsets, and messaging services like WhatsApp) announced via its website that, over the next few months, it would end its third-party fact-checking program in the United States and begin moving to a community-based program called “Community Notes.”
What are Community Notes, you might ask? They rely on other Meta users to flag content that might be incorrect or manipulative, essentially forcing YOU, the user, to be your own fact-checker. It’s also important to note that Meta’s documentation on how Community Notes work states that, similar to X’s policies, “Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.” This part is worrisome: Meta has billions of users across Instagram and Facebook, and bias is inevitable. In the case of the Hollywood Sign misinformation, agreement between people with a range of perspectives would probably be reached. But what will happen with political or other more controversial content? Plus, that Hollywood Sign content is still on Meta’s platforms. While I found a few examples of the ‘Community Notes’ tag being applied to some of the more egregious posts, there are still hundreds of posts presenting the burning sign as authentic content.
It’s no secret that Meta’s announcement came just weeks before the inauguration of a president who has consistently shared misinformation about the 2020 election, and even AI-generated images of Taylor Swift wearing his merchandise. Hours after being sworn in on January 20th, our 47th president ordered that “no federal officer, employee or agent may unconstitutionally abridge the free speech of any American citizen,” which the Associated Press calls “an early step toward his campaign promise to dismantle what he called government ‘censorship’ of U.S. citizens.” We are in the beginning stages of the new administration, and it terrifies me to see policy changes that point toward even less social media fact-checking in the coming months.
As AI content generators improve and the guardrails against misinformation disappear, I recommend turning to reputable news sources known for fact-checking their content. Social media has never been a reliable source of political or general information, and that is truer now than ever.