ChatGPT Hasn't Been Banned From Giving Legal Or Medical Advice, But You Still Shouldn't Trust It

Karan Singhal, OpenAI's Head of Health AI, debunked a now-deleted post on X (formerly Twitter) claiming that "ChatGPT will no longer provide health or legal advice." The original poster was likely referring to OpenAI's late-October update to its usage policies, which includes a provision prohibiting its services from being used for "tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." However, Singhal said this isn't new; the rule has always been in OpenAI's terms of service, though it previously sat under a subsection aimed at developers.

Aside from that, the policy restricts the user, not the chatbot: it's you who isn't supposed to rely on ChatGPT's tailored advice without guidance from a licensed expert. This means the AI will still give medical or legal information, but it's up to you to have a licensed professional check its answers and help you understand them. Singhal says as much in his post: "ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."

ChatGPT can be a useful tool for helping you understand things, and its many features make it even more powerful. However, you shouldn't trust it unconditionally, because it still makes mistakes. That's especially true for something as important as your health or your freedom.

How to ensure ChatGPT isn't lying to you

AI models are getting more powerful as tech companies develop them further, but they're also hallucinating more at the same time. That's one of the reasons you shouldn't trust ChatGPT unconditionally, especially on matters that could have a material impact on your life.

In fact, there have already been several cases of lawyers letting hallucinations dreamt up by their AI tools slip into their legal briefs. It has gotten so bad that at least one judge is considering sanctions against lawyers who have improperly used ChatGPT or similar LLMs. And while AI can provide general medical information, it cannot replace a doctor, who can offer a more complete assessment backed by years of experience. On top of that, these chatbots will often tell you what you want to hear and sometimes omit information that is true but unpleasant.

Nevertheless, that doesn't mean you can't use ChatGPT or other AI chatbots for research. It's just a tool, though, so don't let it do the thinking for you. One way to check that it's giving you correct answers is to ask for its sources, then look them up yourself so you can see the data firsthand and gauge whether what ChatGPT says is true. Alternatively, find a reliable source first and ask the LLM to explain it to you; that way, you know the information you're working from is accurate to begin with.