A Popular Tech YouTuber Was Banned, And Fans Are Pointing Fingers At AI: 'Never Let Clankers Moderate'
Whether it's a woodworking YouTube channel or one focused on car repairs, one constant is the community of like-minded individuals that develops in comments sections and Twitch chats. Take Enderman, a YouTube channel dedicated to exploring Windows. It has a 390,000-strong subscriber base, which Enderman has carefully cultivated since starting out in November 2014. In November 2025, though, Enderman was hit with a channel ban that was allegedly unjust and administered by YouTube's AI tools. As a result, fans of the channel have been at the center of a wave of discourse surrounding so-called "clankers" and their influence on content moderation: the dystopian idea of AI making such decisions without sufficient human oversight.
The Reddit thread "Enderman's channel has sadly been deleted..." gets immediately to the heart of the issue, in my eyes, with u/CatOfBacon lamenting, "This is why we should never let clankers moderate ANYTHING. Just letting them immediately pull the trigger with zero human review is just going to cause more ... like this to happen." Of course, moderation errors can be made by humans and AI alike, and in such cases, I feel everything possible should be done to ensure creators can rectify the situation when they are penalized or even banned unfairly. That said, u/Bekfast59 added that the appeals process in such a case can be "fully AI as well," muddying the waters further.
Watching fans hurry to preserve the YouTuber's content on services like PreserveTube, it struck me just how vulnerable YouTube's processes can leave creators. A ban on one channel means that channels connected to it are banned too, and it isn't clear precisely how YouTube determines those connections. These things need to be made more transparent to users.
Enderman's response to the ban, and the channel's reinstatement
A November 3, 2025 upload from Enderman, simply titled "My channel is getting terminated," leaves no room for ambiguity. He immediately launches into the story of his second channel, Andrew, which had been banned for something seemingly random: being linked to another channel that had been hit by three copyright strikes, according to the YouTube Studio message the content creator received. With no apparent connection to the other channel in question, a bemused Enderman attributed the ban to a mistaken automatic AI flag. "I had no idea such drastic measures like channel termination were allowed to be processed by AI, and AI only," he said.
From the video and the YouTube Studio appeals process that the creator went through on camera, it isn't clear whether this was entirely the case or whether a human evaluated the channel after it was flagged. Enderman's claim, though, is far from unique among tech YouTubers. Other channels were targeted as well, including Scrachit Gaming (which has accrued 402,000 subscribers over almost 3,000 uploads), whose creator shared in a post on X that they had been banned for an alleged link to the very same channel that Enderman was flagged for.
The very same day, a follow-up post from TeamYouTube declared that it had restored the Scrachit Gaming channel after looking into the ban, and that it had followed up with other affected creators. As of this writing, Enderman's secondary channel Andrew has also been reinstated. The quick turnaround went a long way toward convincing me that this may have been a simple automated error by YouTube's systems, quickly corrected once a human assessed the situation.
The harsh reality of content creation
With a huge network of channels of all shapes and sizes, it's natural that there would be some bad actors among them, and that YouTube would need ways of responding to and combating that. Unfortunately, though, the AI systems that play a role in this seem to lack oversight, a problem the platform will have to resolve going forward. What is undeniable is that machine learning plays a significant role in how YouTube monitors and moderates its content.
YouTube explains that its systems draw on a wealth of data from previous reviews, and so "can offer a high level of accuracy in detecting violations." As is typically the case with AI, then, the time- and cost-saving benefits of this process are clear, but there's a big potential human cost. The platform also states that "automation is only used in cases where our systems have a high degree of confidence that content is violative," and that, in other cases, content is simply flagged for manual review by an employee.
My question, though, is what constitutes this "high degree"? Google's AI Assistant, after all, has very confidently told me to use gasoline in a recipe, so I think AI in all its forms needs appropriate oversight. Copyright strikes and other penalties are far from a rarity on YouTube, and they shouldn't be administered lightly. People's livelihoods are potentially on the line, and if things reach that point, it's vital to ascertain that a ban-worthy offense was actually committed. According to Enderman, this wasn't the case with his channel.
The AI and human influence on YouTube moderation
The issue is ensuring that humans assess YouTube bans, and that affected creators have the opportunity to make their case to them. That said, manually reviewing every piece of flagged content would be a monumental task, and AI inevitably has to play a part. YouTube explained in its email to Enderman (shared by the creator on X) that "we use a combination of automated systems and human reviews to process removal requests," but this is incredibly vague. If AI can automatically flag several prominent YouTubers for termination, as is allegedly the case here, that points to a significant problem. The consequences of a ban can also be severe, with YouTube policies preventing a terminated creator from running other channels or starting new ones.
YouTube needs a range of policies to protect both its users and itself. It has a three-strikes policy for copyright claims, but it also stipulates that creators can resolve a strike by completing Copyright School training or requesting that the issuer retract their claim. Getting three strikes warrants termination, and that also applies to channels linked to another channel that has three strikes.
This, it seems, is what happened to Enderman, and if AI is being used to determine whether a channel is "linked" to another, it's easy to see how such a tide of mistaken bans can occur. There's a risk of channels of all sizes turning their backs on YouTube as incidents like these pile up, and that could ultimately prove very costly. YouTube has plenty of advanced features and tools, but it could be much better at some of the most important basics.