How Legal Advice from AI Can Permanently Damage Your Solid Injury Case

Adam Werner

The worst part about getting legal advice from a chatbot is not that it gets things wrong. It is that you will not know it got things wrong until your case is already damaged beyond repair.

Personal injury cases are won or lost on decisions made in the first few days after an accident. What you say to the insurance adjuster. Which doctor you see. Whether you accept that first settlement offer. How you document what happened.

For many Americans, AI is now involved in those decisions. And not in a small way.

Thomson Reuters ran a survey in 2024, and the number that jumped out was this: a third of consumers said they had already turned to generative AI for legal research or advice. Among younger adults, the share was significantly higher. People are opening ChatGPT before they call a lawyer. Some of them never call a lawyer at all.

The Problem Is Not Bad Answers. The Problem Is Convincing Bad Answers.

A search engine gives you a list of links. You click, you read, you decide what to trust. There is friction in the process. That friction protects you.

A chatbot gives you a fully formed answer in confident, authoritative prose. Often, there are no links to check. No competing opinions to weigh. Just a clean paragraph that reads like it came from a lawyer who reviewed your case.

It didn't.

Large language models are not reasoning about your situation. They are assembling language patterns from training data. When the output happens to align with accurate legal guidance, that is a coincidence of pattern matching. When it doesn't, you have no way to tell the difference.

According to Pasadena, Texas car accident lawyer Charlie Gustin of Gustin Law, "The most dangerous thing about AI legal advice is that people trust it. If you use a chatbot as your lawyer before you talk to a real lawyer, you risk making mistakes that hurt your case before it ever begins. And unlike privileged conversations with your attorney, AI chats are more and more likely to be subpoenaed and used against you. Once that damage is done, you could lose what would otherwise have been a rock-solid case."

That last part matters more than anything else in this conversation. The damage is done before you know it is damage.

Your AI Conversation Is Evidence for the Other Side

Adam Greene is a partner and personal injury attorney at the Steinberg Law Firm in Charleston, South Carolina. "Every day, I handle worker injury cases, and their complexity cannot be overstated," Greene said. "If you are using AI for legal advice, proceed with great caution. There is no attorney-client privilege, meaning everything you type in is discoverable by the other side, including insurance companies and their legal counsel. Not everything is accurate, and the platforms will even make up information when they have no reliable source to answer a prompt."

Most people treat a chatbot like a private journal. They type in details about their accident, their injuries, their frustrations, their fears. They describe symptoms. They ask whether their case is strong or weak. They dump in details they wouldn't share with someone sitting across the table from them.

All of that is fair game in discovery.

As attorney Greene stated, there is no attorney-client privilege between you and an AI chatbot. When you tell a licensed attorney about your case, that conversation is protected by law. Type that same information into ChatGPT, Claude, Gemini, or any other chatbot, and it lands on a server that a tech company owns and controls. Defense attorneys can subpoena it. Insurance companies can request it in discovery. And they will.

You asked whether your prior medical history would hurt your claim. You spelled out that history, condition by condition. You typed out your version of events three different ways because you weren't sure which details mattered. An insurance company that does not want to cover the complex injuries you sustained can twist that honest search for answers into an accusation of fraud.

Treatment Decisions Based on AI Advice Can Tank Your Case and Your Physical Recovery

Here is a scenario that plays out more often than most people realize.

You get hurt. You ask a chatbot what kind of doctor you should see. It tells you to start with your primary care physician. That sounds reasonable. So you go to your regular doctor, who prescribes rest and ibuprofen.

Weeks later, you are still in pain. You finally see a specialist. The MRI shows a herniated disc. But the insurance company now has a gap in your treatment record. You waited weeks before seeing a specialist. Their argument writes itself. If the injury were really that serious, why did you wait?

The insurance company is not evaluating whether you are hurt. They are evaluating whether your medical records tell a story that supports the value of your claim.

AI does not understand that distinction. It gives you medically reasonable advice in a vacuum. Personal injury law does not operate in a vacuum.

The same problem shows up with treatment gaps, delayed imaging, and decisions to stop physical therapy early. Every one of those choices feels personal and private in the moment. In litigation, they become exhibits.

Insurance Companies Would Love to Face a DIY Lawyer Powered by ChatGPT

Insurance carriers figured out how to pay less a long time ago. They have had decades to build the infrastructure for it. They employ adjusters trained to spot claimants who don't have legal representation. They know the patterns. They know the language. And they know when someone walks in armed with chatbot-generated talking points instead of a litigation strategy.

A claimant who uses AI to draft a demand letter sends a signal. The letter reads like a generic template. It cites broad legal principles without applying them to the specific facts of the case. And it usually asks for a round number with no connection to the way a real attorney calculates damages: documented medical costs, projected future treatment, lost earning capacity, and pain and suffering.
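
For a sense of what that calculation involves, consider a simplified, hypothetical version of the multiplier method many attorneys use as a starting point. Say the medical bills total $30,000, projected future treatment adds $20,000, and lost wages come to $15,000. That is $65,000 in economic damages. Apply a multiplier of two for pain and suffering, chosen based on the severity and permanence of the injury, and the opening demand is $65,000 plus $130,000, or $195,000. A chatbot that suggests asking for a round $100,000 has skipped every one of those inputs.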

The adjuster reads that letter and the picture is immediately clear. No lawyer. Somebody who prompted their way through the process with a chatbot and now thinks they have it handled. Somebody who will likely take the first number thrown at them because they have no real sense of what the claim should be worth.

What AI Can Actually Do for You (and Where to Draw the Line)

AI is fine for looking up what a legal term means. Want to know what "comparative negligence" or "subrogation" refers to? Go ahead and ask. Want a general walkthrough of the claims process before your first consultation? That works too. And when it comes time to research lawyers in your area, AI can be genuinely useful for shortlisting firms with experience in your type of case.

Stop there. That is the line.

The second you start asking AI to weigh in on your specific situation, to draft a demand letter, to recommend which doctor to see, or to put a dollar figure on your injuries, you are in trouble. You are handing life-altering decisions to a system that has never read your medical records, will never face consequences for getting it wrong, and cannot offer you a shred of legal protection.

Personal injury attorneys work on contingency for a reason. You pay nothing unless they recover money for you. That cost barrier people worry about? It is not real. What is real is the price of navigating a complex legal fight on your own and discovering, months down the road, that the chatbot you relied on had no business advising you in the first place.

Any personal injury lawyer with a track record of success will give you a free consultation and guarantee that you won't spend a dime unless your case is won. There is zero risk in talking to an attorney about your injury. But you risk losing everything when you tell a chatbot about your injury.