Why You Probably Shouldn't Use ChatGPT At Your Workplace (Or At All)

When ChatGPT was first released, the internet was agog over the AI tool: "ooh, it can write in the voice of Edgar Allan Poe," "look, it can also troubleshoot code," "hey, this thing can even write coherent video scripts!"

But that allure soon gave way to anxiety as time and use began to reveal the tool's shortcomings. Some were minor hiccups, like server downtime; others were more serious flaws with greater implications, such as the risk of leaking sensitive personal or corporate information.

That last bit might not raise eyebrows at first glance, since personal data privacy on today's web already seems like a pipe dream. However, ChatGPT's privacy loopholes are particularly concerning because of its numerous use cases and the sheer size of its user base.

Still, the tool's potential for streamlining workflows and boosting productivity might make many turn a blind eye to its pitfalls. We'll share three major reasons why you should take these drawbacks seriously and probably not use ChatGPT at work (or at all). If you do choose to go ahead with the AI tool, at least you'll be aware of its dangers.

It could encourage cutting corners

According to Yahoo Finance, many companies have staff who now rely heavily on ChatGPT to carry out key tasks at their jobs, usually without their employers' knowledge. There's no doubt that the AI tool is useful as a workflow supplement, but depending on it to do the heavy lifting for you at work can be harmful, both for yourself and your employer.

Used skillfully, ChatGPT can produce some pretty impressive answers, and that can be a tempting excuse to put in less effort at your job than you otherwise would. This can quickly become a slippery slope to complacency, which in turn stifles innovation and originality.

There's also something to be said about the ethics of outsourcing work to AI: you're being paid for your specialized knowledge and experience, so delivering AI-generated work instead would be professionally dishonest. In the long run, that could damage your reputation and career.

Also, it's common knowledge that ChatGPT is far from 100% accurate. The AI tool can, and frequently does, produce false and unreliable information. If, through some oversight, this incorrect data ends up in your work, you could face grave consequences or, worse, damage your company's reputation.

Data leaks

ChatGPT is trained on user data, which means the AI tool refines its responses by collecting and analyzing the content of every query it receives. In other words, every piece of data you enter into ChatGPT is stored on OpenAI's servers and accessible to the tool's developers. This means sensitive information can be compromised, as was the case when Samsung employees mistakenly leaked confidential company data via ChatGPT queries.

Per Gizmodo, the leak happened when one Samsung employee fed source code from a faulty semiconductor database into ChatGPT and asked the AI tool to help fix it. In another instance, a Samsung employee reportedly fed an entire meeting transcript into the chatbot and asked it to draft minutes. Samsung banned employees' use of ChatGPT after the incidents, and several companies soon followed suit, including Apple, JPMorgan Chase, Deutsche Bank, Verizon, and Accenture.

There are other worrying privacy breaches in the AI tool's history, too. In March 2023, a bug briefly exposed some users' conversation titles to other active users. Although the bug was fixed, and OpenAI has since made changes to mitigate the risk of data leaks, there's no guarantee of data privacy with ChatGPT.

If you do decide to use ChatGPT, you can disable the chatbot's ability to use your conversations for training. You'll find the toggle in ChatGPT's settings under "Data Controls." Once you turn it off, OpenAI says it won't use data from subsequent conversations to train its models.
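If your workplace needs programmatic access rather than the chat interface, it's also worth knowing that OpenAI has said data sent through its API is not used for model training by default, unlike consumer chat conversations. Below is a minimal sketch using the official openai Python package (v1.x); the model name and prompt are placeholders, and sending genuinely sensitive data remains risky either way.

```python
# Minimal sketch: querying ChatGPT through the OpenAI API (openai>=1.0)
# instead of the consumer chat interface. Per OpenAI's stated policy,
# API-submitted data is not used for model training by default.
# The model name and prompt below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Rewrite this sentence in a formal tone."},
    ],
)

print(response.choices[0].message.content)
```

This doesn't make the tool safe for confidential material, but it does give a company one more lever of control than the default chat experience.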

Cybersecurity concerns

We've covered ChatGPT's privacy vulnerabilities above; now let's talk security. Before you ask: no, the two are not exactly the same thing, although they're closely related and often overlap. What threatens your privacy often endangers your security, and vice versa.

There's no clear indication that ChatGPT is vulnerable to security risks such as hacks or malware attacks, but until its security has been thoroughly vetted, it's not advisable to approve it for, or use it in, official work. This is especially important given ChatGPT's design and function: its capacity for human-like interaction is a gold mine for malicious actors looking to manipulate users.

All of that considered, incorporating ChatGPT in its current state into your workflow exposes you to risk if the AI tool turns out to have security flaws that attackers can exploit. OpenAI has been open about the fact that ChatGPT is, in many ways, a work in progress. Until the kinks have been worked out, it's advisable to keep it away from work.