YouTube is facing backlash as creators discover undisclosed use of AI-powered visual enhancements.

By Tasneem Bandukwala

Without informing creators, YouTube has begun testing AI-powered visual enhancements on some Shorts, raising serious concerns about consent and transparency.

For months, creators such as Rhett Shull and Rick Beato noticed unsettling alterations in their content—from unnaturally smooth skin and warped facial features to sharper but oddly distorted textures on clothing and instruments. Shull uncovered the modifications by comparing his Shorts on Instagram and YouTube, concluding, “it just looks wrong.” 

In response to mounting evidence, YouTube confirmed the changes but downplayed their implications. According to Rene Ritchie, YouTube’s head of editorial, the experiment uses “traditional machine learning,” not generative AI, to unblur, denoise, and sharpen videos, much like the image processing modern smartphones apply to photos.

Still, creators argue this distinction fails to justify the lack of transparency or consent. Shull emphasized that such undisclosed enhancements not only misrepresent a creator’s original work but also erode trust with audiences. “Underlying all of that is this… trust that what I’m making… is truly me,” he remarked.

Digital ethics experts warn that YouTube’s experiment marks a troubling turning point: what a platform frames as benign “quality improvement” can quickly shade into editorial control, altering a creator’s work without their approval.

The episode feeds into a wider debate about AI in media, which has already drawn criticism over Netflix’s AI remasters and undisclosed edits on other services.

As trust in media integrity hangs in the balance, creators and experts alike are demanding that platforms like YouTube adopt clearer consent mechanisms and allow creators to opt out of such AI experiments.