5 Things You Should Never Use AI For
At this point, it's safe to say AI is almost everywhere, in almost everything. AI music generators have led to a wave of AI bands on Spotify. AI is baked into job-application tools, and it has flown F-16 fighter jets in dogfights against human pilots. People are using it in increasingly weird ways, creating some of the strangest AI-powered gadgets you can imagine. Our social media feeds are overrun with AI slop, and a lot of the accounts watching those videos may be bots, too. We're at the point where the average Joe needs to know how to detect AI-generated text, pictures, and video just to avoid being tricked. This state of affairs has certainly led many to throw up their hands. Might as well embrace AI everywhere if it's the new reality, right? We couldn't disagree more. No matter how ubiquitous AI becomes, there will always be some use cases where it shouldn't be touched.
We're going to skip the more obvious, controversial stuff. Tech companies are effectively stealing artists' work, without consent or proper compensation, to train their generative AI models, and then using those models to replace the very people whose work they took. Sadly, we're unlikely to convince big companies or the general public that practices like that are wrong. Instead, we're going to focus on clear-cut areas where using AI is morally and ethically fraught, or where there's a real risk, physical or otherwise, to the people using it. These are five ways we think you should never use AI.
Creating deepfakes of other people
When deepfakes rose to prominence, we thought the biggest risk was misinformation. People do use deepfakes to make politicians say things they never said, but the overwhelming majority of deepfakes, around 98% by some counts, are pornography featuring people who never consented. For years now, entire websites have hosted deepfaked, non-consensual pornography of celebrities and sometimes everyday people, and paid services will turn anyone's picture into pornography for the right price.
The stories of this technology being used for ill are gut-churning. Take the Telegram channels in South Korea (via the BBC) that were collecting pictures of students, some of them underage, and turning them into sexually explicit images. Or the woman who stumbled upon a site depicting her in multiple sexual assault fantasies, complete with her Instagram handle and location (via BBC). Deepfake pornography has been used to silence journalists and humiliate popular celebrities. You don't have to go far to find reprehensible stories like these because, unfortunately, they're a dime a dozen. There is some good news: major deepfake porn websites have been shut down, and some legislation has been introduced to combat certain uses of deepfake porn. Regardless, the tools to make deepfakes remain freely available.
Hopefully, we don't need to explain why deepfake pornography is wrong. Making deepfake pornography of another person without their consent, even if it's never posted online, is unacceptable. There are vanishingly few situations where using deepfakes is morally harmless, like de-aging an actor for a movie or making a celebrity say something silly as a joke. Beyond pornography, the sad truth is that this technology is most often used for fraud and political misinformation. If using a deepfake for a particular purpose gives you pause, you probably shouldn't do it.
Asking for (almost) any health-related information
Everyone remembers when Google's AI was telling people to put glue on pizza. It seemed like an open-and-shut example of why you should never trust a chatbot with health information. Despite that, some stats suggest over a third of Americans rely on chatbots for health advice and more. People use AI to plan meals, craft bespoke workouts, and, in some cases, even verify medical information they've heard elsewhere. It terrifies me to see how many of my friends, relatives, and acquaintances uncritically use AI for health-related purposes, and those fears are justified: people have already suffered serious consequences after taking chatbot health advice.
Most people are aware by now that AI has a propensity to hallucinate, and the evidence suggests the hallucination problem is getting worse. It bears repeating that LLMs are trained on vast amounts of internet text, some of it from reputable, fact-based sources and some of it flat-out hogwash. They cannot fact-check themselves to discern the truth, and they readily fabricate information to fill their knowledge gaps. Their primary purpose is to give the most likely answer to a question based on their training data, and to be genial and agreeable; we've all seen how an AI will change its answer the moment you push back. The rise of AI in healthcare could therefore have catastrophic consequences for society.
Admittedly, some health uses of AI are mostly benign. You'll probably be fine if AI tells you to do a couple of extra reps of a certain exercise. But asking a chatbot what to do with your body (what you ingest, what you put on your skin, how much you sleep) is gambling with your health in the worst way possible. Stick to medical professionals.
Doing your homework
When chatbots can write essays, answer questions, and solve problems — even if they do these things incorrectly or with false information — students take the path of least resistance and use them. AI is now commonplace in both schools and colleges. Educators can do almost nothing to stop it, and in many cases, are using AI themselves. Educational institutions are increasingly embracing AI and even redefining cheating to account for it. In my personal opinion, this is a reckless, society-wide experiment that could have lasting consequences for future generations.
The problem with AI in education is that, I believe, it undermines the entire purpose of education. Education is about teaching students invaluable life skills like how to think critically, how to research, how to solve problems. When a chatbot can do your homework for you instantly, you miss out on the difficult yet vital process that shapes you into an educated, intelligent person. Chatbots have only been around for a couple of years, but I'm convinced we're in for a brutal sucker punch when millions of students (who bypassed this learning experience from day one) graduate.
The other issue is the hallucinations that make chatbots untrustworthy. Imagine future doctors, pilots, architects, and other professionals whose mistakes could seriously harm people — individuals who have learned incorrect information or been deprived of the education necessary to do their jobs right. It's not unreasonable to assume that we will soon have professionals in many fields whose training has been negatively affected by AI. I don't think even the most ardent AI evangelist would be willing to undergo surgery with a ChatGPT-taught cardiologist. So doing your homework with AI isn't just wrong because it's academically dishonest, it could also put others at risk.
Getting any serious life advice
At first glance, it might seem that because AI excels at conversation, it would make a great confidant or therapist, or just someone to talk to. But then you read the horror stories: the teen whom ChatGPT failed to talk out of suicidal thoughts (via NBC News), or the "AI girlfriend" that encouraged her partner to take his own life (via Futurism). Lots of people are using ChatGPT as a therapist or digital companion and getting deeply disturbing advice on how to solve their problems. Chatbots like ChatGPT do have guardrails, such as refusing to recommend that you break up with someone, but those guardrails often arrive only after harm has been done, or come with easy workarounds.
Now, I don't want to be too hard on people who use chatbots for advice. Therapy is expensive, the cost of living keeps climbing, and we're in the midst of a loneliness epidemic. It's hard to blame someone who turns to a free chatbot that offers readily available, convincing conversation. However, this is yet another instance of people treating LLMs as oracles. Chatbots are fancy algorithms that predict the next word in a sequence, not trained professionals, much less sentient beings.
I don't think there's anything wrong with using a chatbot to bounce ideas off, as long as you take everything it says with a huge grain of salt. But don't ask it for mental health diagnoses, don't ask it for relationship advice, and don't treat it like a life coach. And definitely don't make the mistake of assuming that paying for a chatbot's premium version means you can trust it. At the end of the day, it's still just an algorithm.
Vibe-coding everything
I won't deny that AI has proved surprisingly useful for coding, too. The best AI coding tools can be indispensable for beginners and experts alike, helping programmers figure out how to write something just by describing it in their own words and produce code at a much faster clip. Letting AI generate code from plain-language prompts has come to be known as vibe coding. The trouble is that, as with so many other uses of AI, many vibe coders have the chatbot bang out all the code and then let it run wild, even if it's a hot mess.
Vibe coding is problematic for two reasons. First, as with AI in education, it robs professionals of important learning experiences. Programming is, at its core, a problem-solving exercise. Coders spend many careful hours behind the keyboard figuring out how to make code functional, efficient, and free of security flaws, improving their skills along the way. Vibe coders tend to take whatever the chatbot spits out and run it with minimal to no revision. Too much vibe coding could make you a worse programmer.
The second issue appears when that code is released into the wild. We've already seen instances where vibe coding may have harmed real people. You may remember the popular Tea app, which let women anonymously warn other women about problematic men on the dating scene, and which then suffered a huge data breach. There have been allegations that Tea's developers relied on vibe coding, but even if they didn't, the evidence suggests vibe coding produces shaky, flaw-ridden code. If programmers are going to vibe-code, they need to double- and triple-check the output, or else they risk their users' privacy and data safety.