Terrell Jones
Annalisa💓
I believe this conspiracy theory holds a partial truth, but it doesn't get the full picture right. We live in a world where social media keeps advancing rapidly, yet there are still tools that let us detect when content has been generated by AI or manipulated with specialized software. Take Photoshop as an example: edited images started appearing online in the early 2000s, and even today there are publicly accessible apps and websites, some of them completely free, that can analyze a suspicious image and tell you whether it has been edited in Photoshop.

The same applies to AI-generated content. Some companies embed invisible watermarks in their images, as Google does with its Nano Banana model. These watermarks aren't obvious to users, but certain tools can still detect them. Even in the Gemini app itself, newly generated images carry a visible watermark, regardless of whether you have a Pro subscription. When you use the Nano Banana Pro model on other platforms, though, that visible watermark doesn't appear.

As a blind user, I can often tell immediately when a video has been generated by AI, even without seeing it, simply because of the voice. I've come across many AI-generated videos on social media, recognized them instantly every time, and the comments always confirmed it. So even though AI keeps advancing, there are still reliable ways to identify it. These signs may go unnoticed by regular or casual users, who can easily be fooled, but someone with more experience or specialization can usually tell the difference.