How can you distinguish human-made content from AI-generated content?

AI-generated content is everywhere on the web. Videos, images, art, writing, and recently even music created by AI have become increasingly prevalent. How will we continue to distinguish real from fake?

The short answer is that it’s hard, and there’s both good news and bad news. The good news is that common flaws and deficiencies in AI-generated content still make much of it relatively easy to spot. The bad news is that progress is happening quickly, and this may not be true for long; in fact, some AI-generated or AI-assisted deepfakes are already nearly indistinguishable from the real people they are meant to imitate.

In this article, we’ll point out some common ways you can help tell real from fake, consider how this might become more challenging as technology progresses, and give tips about avoiding falling for deepfakes or other misleading AI-generated content.

How much AI-generated content is already on the web today?

It is difficult to determine how much AI-generated content has already seeped into the web, filling news feeds and search results. One of the more pernicious facts about AI-generated content is that it can be extremely difficult to determine with certainty whether AI was involved in creating any given piece of content. Common tools like “AI detectors” do not produce reliable results. 

However, there are signs that AI-generated content has grown significantly more prevalent since the release of products like ChatGPT. A recent analysis by researchers at Stanford and Georgetown found that Facebook pages sharing AI-generated spam receive high engagement from users, many of whom seem unaware that the images are synthetic; and because these posts succeed at generating engagement, they are then promoted to additional users.

But this problem isn’t limited to social media. Even in peer-reviewed medical research, where content should be held to the highest standards of authenticity, a recent rise in the use of terms like “delve” (a word AI models tend to overuse) points to a significant uptick in AI-generated writing.

This problem is likely to get worse before it gets better. One report from Europol, the EU’s law enforcement agency, estimated that as much as ninety percent of internet content could be AI-generated by 2026.

AI-generated content at its worst: Deepfakes

Deepfakes are synthetic media, typically created with AI, that depict a real person’s likeness or voice saying or doing things that never actually happened.

While this technology has been around since the 2010s, recent advances in AI have made it cheaper, easier to use, and more convincing. It’s easy to understand the risk this poses: the spread of deepfake technology has already fueled phone scams, altered videos, and non-consensual sexually explicit material.

Here, too, the situation may worsen before it improves. This isn’t to say that nothing can be done: there are groups, including The Midas Project, campaigning to ensure that deepfake technology is monitored and restricted. But the solutions are difficult, and the best thing people can do today is prepare themselves to identify and respond to AI-generated deepfakes.

How can I avoid falling for AI-generated content and deepfakes?

The biggest tell for AI-generated content today is subtle inconsistency in its details. Whether you’re looking at text, images, or videos, AI-generated content is far more likely to contradict itself, contain physical impossibilities, or make simple mistakes. Classic examples include the “too many fingers” issue that has plagued AI image generators for years and the lapses in logical reasoning that are still common in many AI language models.

But again, the bad news is that these inconsistencies are getting harder and harder to spot. AI progress is on an exponential curve: the rate of improvement isn’t constant but increases year by year. As deepfake technology improves, it will become increasingly difficult to tell whether something is real or fake, and even the aforementioned “tells,” like extra fingers or logical errors, are beginning to disappear in the newest generation of models.

But all hope is not lost! There are ways to get involved in raising awareness about deepfakes and demanding change. Below, we’ve linked the open letter “Disrupting the Deepfake Supply Chain,” signed by dozens of leaders in industry and civil society and calling for action to curb the proliferation and dangers of deepfake technology. Click the button to read the letter and add your signature.

Ready to take action?