AI generates article with ‘serious’ YMYL content issues

Factually incorrect AI content on a health topic made its way into Men's Journal. Meanwhile, Google warns of AI hallucination.


Men’s Journal is the latest publication to be called out for using AI to generate content that contained several “serious” errors.

What happened. Eighteen specific errors were identified in the first AI-generated article Men’s Journal published, titled “What All Men Should Know About Low Testosterone.” As Futurism reported:

Like most AI-generated content, the article was written with the confident authority of an actual expert. It sported academic-looking citations, and a disclosure at the top lent extra credibility by assuring readers that it had been “reviewed and fact-checked by our editorial team.” 

The publication ended up making substantial changes to its testosterone article. But as Futurism’s article noted, publishing inaccurate health content could have serious implications.

E-E-A-T and YMYL. E-E-A-T stands for experience, expertise, authoritativeness and trustworthiness. It is a concept – a way for Google to evaluate the signals associated with your business, your website and its content for the purposes of ranking.

As Hyung-Jin Kim, the VP of Search at Google, told us at SMX Next in November (before Google added “experience” as a component of E-A-T):

“E-A-T is a template for how we rate an individual site. We do it to every single query and every single result. It’s pervasive throughout every single thing we do.”

YMYL is short for Your Money or Your Life. YMYL is in play whenever topics or pages might impact a person’s future happiness, health, financial stability or safety if presented inaccurately.

Essentially, Men’s Journal published inaccurate information that could impact someone’s health. This could harm the E-E-A-T – and eventually the rankings – of Men’s Journal in the future.

Dig deeper: How to improve E-A-T for YMYL pages

Although in this case, as Glenn Gabe pointed out on Twitter, the article was noindexed – meaning it was excluded from Google’s search results.
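Noindexing is typically signaled by a robots meta tag in the page’s HTML. As an illustrative sketch (not how Gabe checked it, and the `has_noindex` helper name is hypothetical), a few lines of Python can detect that directive in a page’s markup:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if (a.get("name") or "").lower() == "robots":
                self.directives += [
                    d.strip().lower() for d in (a.get("content") or "").split(",")
                ]


def has_noindex(html: str) -> bool:
    """Return True if the HTML carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives


# A page carrying a noindex directive, like the Men's Journal article
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(has_noindex(page))  # True
```

Note this only covers the meta-tag route; a noindex can also be delivered via the `X-Robots-Tag` HTTP header, which a markup-only check like this would miss.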

While AI content can rank (especially with some minor editing), just remember that Google’s helpful content system is designed to detect low-quality content – sitewide – created for search engines.

We know Google doesn’t oppose AI-generated content entirely. After all, it would be hard for the company to do so at the same time as it is planning to use AI chat as a core feature of its search results.

Why we care. Content accuracy is incredibly important. The real and online worlds are confusing and noisy for people. Your brand’s content must be trustworthy. Brands must be a beacon of understanding in an ocean of noise. Make sure you are providing the helpful answers and accurate information that people are searching for.

Others using AI. Red Ventures brands, including CNET and Bankrate, were also called out previously for publishing poor AI-generated content. Half of CNET’s AI-written content contained errors, according to The Verge.

And there will be plenty more AI content to come. We know BuzzFeed is diving into AI content. And at least 10% of Fortune 500 companies plan to invest in AI-supported digital content creation, according to Forrester.

Human error and AI error. It’s also important to remember that, while AI content can be generated quickly, you need to have an editorial review process in place to make sure any information you publish is correct.

AI is trained on the web, so how can it be perfect? The web is full of errors, misinformation and inaccuracies, even on trustworthy sites.

Content written by humans can contain serious errors. Mistakes happen all the time, from small, niche publishers all the way to The New York Times.

Also, Futurism repeatedly referred to AI content as “garbage.” But let’s not forget that plenty of human-written “garbage” has been published for as long as there have been search engines. It’s up to the spam-fighting teams at search engines to make sure this stuff doesn’t rank. And it’s nowhere near as bad as it was in the earliest days of search 20 years ago.

AI hallucination. If all of this hasn’t been enough to think about, consider this: AI making up answers.

“This kind of artificial intelligence we’re talking about right now can sometimes lead to something we call hallucination. This then expresses itself in such a way that a machine provides a convincing but completely made-up answer.”

– Prabhakar Raghavan, a senior vice president at Google and head of Google Search, as quoted by Welt am Sonntag (a German Sunday newspaper)

Bottom line: AI is in its early days and there are a lot of ways to hurt yourself as a content publisher right now. Be careful. AI content may be fast and cheap, but if it’s untrustworthy or unhelpful, your audience will abandon you.


About the author

Danny Goodwin
Staff
Danny Goodwin has been Managing Editor of Search Engine Land & Search Marketing Expo - SMX since 2022. He joined Search Engine Land in 2022 as Senior Editor. In addition to reporting on the latest search marketing news, he manages Search Engine Land’s SME (Subject Matter Expert) program. He also helps program U.S. SMX events.

Goodwin has been editing and writing about the latest developments and trends in search and digital marketing since 2007. He previously was Executive Editor of Search Engine Journal (from 2017 to 2022), managing editor of Momentology (from 2014-2016) and editor of Search Engine Watch (from 2007 to 2014). He has spoken at many major search conferences and virtual events, and has been sourced for his expertise by a wide range of publications and podcasts.
