
Don’t Fall for AI Deepfakes—Check for These 7 Telltale Signs


AI videos are flooding the internet, thanks to the launch of high-quality, mainstream video generation models, such as OpenAI’s Sora 2 and Google’s Veo 3, which can also spin up semi-realistic audio. Unfortunately, that means it’s increasingly difficult to determine what is real and trustworthy online. A foolproof way to identify an AI video every time simply doesn’t exist, but the seven red flags we’ve identified should help you spot them more often than not.


1. Is There a Watermark?

This might sound obvious, but it’s the best indicator around. Sora-generated videos, for example, include an easy-to-spot watermark, usually at the bottom left. Unless somebody’s really trying to trick you by editing the watermark into a real video, chances are any video with the Sora watermark isn’t real. However, not every AI-generated video carries a watermark; Veo 3 clips, for instance, don’t always include one. Furthermore, there are a variety of ways to remove watermarks, even from Sora clips.

Beyond basic watermarks, Google uses its SynthID tech. SynthID is, essentially, a digital watermark Google embeds in AI content you generate with Gemini. Humans can’t see this watermark, but a machine can detect it. Although this seems like a great solution that doesn’t distract from the content itself, you can find various apps that strip these markers, as well as strategies to get around them. Even if a particular tool isn’t effective at removing SynthID right now, internet users will inevitably find a way.


2. Can You Easily Find the Source Material?

Nothing exists in a vacuum on the internet. Accordingly, you can capture a still from a video and search for it, such as with Google Lens. If you can easily trace the clip back to its source material, that’s strong evidence it’s real. If you can’t, there’s a chance the video in question is AI-generated. This check is especially reliable for animated videos and video game gameplay: Creating animations and making games is expensive, labor-intensive work, so you won’t find many sourceless videos of this type unless they’re AI-generated.

For example, if I search up a still of the Veo 3 onions video I linked above, the first result is the original video. If I Google a still of this AI-generated animation, I find a plethora of news stories about it and how it was generated. Considering how viral AI videos often become, you might not even need to rely on source material, as you will simply be able to find the video in a story or on a site that clearly demonstrates it’s AI-generated. However, not all AI videos go viral, so sometimes, a lack of source material is noteworthy in and of itself.


3. How Uncanny Is the Audio?

The inclusion of audio in AI-generated videos is a big step toward realism, but it still isn’t flawless. Listen closely to a video’s audio, and pay special attention to timbre, which is the character of a sound independent of its pitch or volume. AI-generated voices, in particular, often have a vaguely robotic timbre. And even if you can’t describe exactly why they sound off, it’s usually easy to hear when something isn’t quite right. However, this tip is less reliable with ambient noises or sound effects.

Timbre aside, syncing issues also plague AI-generated videos. Often, you might hear a footstep just before somebody starts walking or a faucet running for just a hair longer than someone is actually using it in the video. If you notice anything like these examples where the audio of a video doesn’t perfectly match up with what you see, that’s another telltale sign it’s AI-generated. Of course, this isn’t a smoking gun since normal videos can also have syncing issues. But in conjunction with other red flags, such as uncanny timbre, inconsistent audio sync stands out.



4. Is the Text Legible?

Text remains a problem area for AI video generation models. Text on the pages of a book or on a whiteboard is almost never perfect in an AI-generated video, especially across different scenes. If a video includes text, pay special attention to it: It will often devolve into nonsense or appear out of nowhere while a character is supposedly writing it. A suspicious lack of text can also be a sign: If you encounter a video about something that would typically include text but doesn’t, somebody might be trying to mask this shortcoming of AI videos.

For example, take a look at this video. “The Price Is Right” text in the background looks great, but if you look at the contestant’s name tag, which reads Marge, it’s distorted for most of the video when it isn’t the clear focus of the virtual camera. Text doesn’t appear in every AI-generated video, but when it does, it’s often a dead giveaway, especially with finer details. You can test this for yourself by generating a text-heavy video with your favorite AI video generation model. Even with multiple generations, it’s often difficult (or even impossible) to get it perfect.


5. How Long Is the Video?

Most AI video generation models produce clips at a handful of fixed lengths. With Sora, for example, you can make 10- or 15-second videos, while ChatGPT Pro subscribers can make 25-second videos. So, if a suspicious clip going around happens to be exactly 10, 15, or 25 seconds long, that can be a good indicator it’s not real. Furthermore, it’s difficult with current tools to stitch many AI-generated clips together without it becoming obvious that it’s AI. Therefore, a 20-minute-long video very likely isn’t AI.
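If you’d rather not eyeball a progress bar, you can check a clip’s length programmatically. Below is a minimal Python sketch, assuming ffprobe (part of FFmpeg) is installed; the suspect.mp4 filename, the 10/15/25-second list, and the half-second tolerance are illustrative placeholders, not part of any official detection tool.

```python
import subprocess

# Read a clip's duration in seconds with ffprobe (ships with FFmpeg).
def video_duration(path: str) -> float:
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())

# Clip lengths mentioned above: Sora's 10- and 15-second tiers and the
# 25-second ChatGPT Pro tier. The half-second tolerance is arbitrary.
SUSPICIOUS_LENGTHS = (10.0, 15.0, 25.0)

duration = video_duration("suspect.mp4")  # placeholder filename
if any(abs(duration - target) < 0.5 for target in SUSPICIOUS_LENGTHS):
    print(f"{duration:.2f}s matches a common AI-generation clip length")
else:
    print(f"{duration:.2f}s doesn't match -- not conclusive either way")
```

A matching length is only one data point, of course; treat it as a prompt to look for the other red flags, not as proof on its own.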


6. What Resolution Is the Video?

In 2025, even a budget smartphone can record in 4K. And if you’re watching a streamer’s broadcast or video game gameplay, it’s unlikely the resolution is lower than 1080p. Resolution varies across video generation models, and AI video upscaling does exist, but if you find a suspicious video at, say, 720p, that can be a strong indicator it’s not real. On the flip side, if you’re watching a pristine 4K/60p video in HDR, it’s very likely real, as AI video models generally can’t yet produce clips at that quality level.
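The same ffprobe approach can read a clip’s resolution, as in the hedged sketch below; the 1080-pixel threshold simply mirrors the 1080p rule of thumb above and is an assumption, not a hard cutoff.

```python
import subprocess

# Read the first video stream's width and height with ffprobe.
def video_resolution(path: str) -> tuple[int, int]:
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height",
         "-of", "csv=s=x:p=0", path],
        capture_output=True, text=True, check=True,
    )
    width, height = result.stdout.strip().split("x")
    return int(width), int(height)

w, h = video_resolution("suspect.mp4")  # placeholder filename
if min(w, h) < 1080:  # below the 1080p rule of thumb discussed above
    print(f"{w}x{h}: lower than typical phone or stream footage, worth a closer look")
else:
    print(f"{w}x{h}: resolution alone doesn't flag this clip")
```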


7. Can the Video Pass an AI Detection Test?

The rise of AI video generation models coincides with the rise of AI-generated video detection tools, such as CloudSEK’s Deepfake Analyser app. This app examines a video you link to and estimates the probability that it’s genuine. If a video fails a test like this, that’s a strong indication it may be AI-generated. However, these tests can be hit or miss, so passing one can’t fully exonerate a video. For example, I gave the Deepfake Analyser one of Google’s Veo 3 demo videos, and it told me it was a real video.


Trust Your Gut, and Look Out For Multiple Offenses

Famously, Supreme Court Justice Potter Stewart described his metric for obscenity as, “I know it when I see it.” In many cases, you’ll know AI videos when you see them, too, even if you can’t think up a good rubric for spotting them in general. That said, the red flags above are all worth keeping an eye out for, particularly if a video has multiple. For example, a video that is exactly 10 seconds long might not raise any eyebrows by itself. But if it also has audio sync issues and distorted text, it’s probably an AI-generated video.

Unfortunately, AI gets better almost every day, so the obvious signs of AI generation will, inevitably, fade away. Furthermore, people will get better at working around the common shortcomings of these video models. My ultimate recommendation is for you to stick to reputable sources when it comes to any hard-to-believe videos, and trust your gut. If something seems off, and you can’t get a trustworthy confirmation of its veracity, it’s better to just consider it fake.
