How NOT to Use AI for Agency Legal Issues

Quick note before we start: we are very pro-AI at the Firm. We use it, we encourage it, and we build with it.
But using AI to process legal issues is a game of nuance. And, sometimes, of unrealistic expectations. And, many times, of very misleading (and potentially damaging) results. AI in agency legal affairs is one area where how you use it, what you tell it, and who uses it matters more than anything else.
We understand the appeal. AI is fast, accessible, and, at first glance, impressively articulate. It can feel like having a highly responsive assistant on demand. In the marketing and advertising world, you are already leveraging AI tools to move faster, ideate more effectively, and scale your services.
However, when it comes to legal advice, things begin to break down. The benefits of accessibility and speed are usually outweighed by the cost of checking, verifying, and (most of the time) correcting an LLM-generated legal product.
More importantly, relying on AI for legal analysis or drafting doesn’t just create the risks caused by hallucination, lack of context, or delegated judgment. It also makes your lawyer’s job more difficult, more time-consuming, and ultimately…more expensive (gasp) for your agency.
Some things to consider:
1. Confidentiality Isn’t Optional, And AI Is Still a Third Party
One of the biggest, and most overlooked, issues with using AI for legal questions is confidentiality. Even if a platform is described as “secure” or “closed,” you are still sharing information with a third party, which is an important distinction.
In short, if you are inputting contracts, deal terms, client information, campaign strategies, or dispute details, you may be disclosing your agency's confidential or even privileged information. You might also be disclosing your client's proprietary information without proper permissions. That could not only put you in breach of contract, it could cost you the trust of your clients.
Also, if a legal matter is sensitive, you can destroy the attorney-client privilege you would otherwise have if you consulted a lawyer about it. The law does not preserve privilege or confidentiality for information you willingly put into an LLM.
A good rule of thumb is simple. If you would not forward the information to someone outside your organization, it should not be going into an AI prompt without permission. And if the issue would require the protection of attorney-client privilege, it shouldn’t be going into an AI prompt at all.
2. AI Doesn’t Know Your Business (Or Your Industry)
Legal advice is not just about what the law says. It is about how the law applies to your specific situation.
When we review an agreement or advise on a deal, we are not just looking at the language. We are also considering your business model, industry norms, risk tolerance, negotiation leverage, and how these agreements actually play out in practice after years of being on the legal front lines of these situations.
AI, however, does not have that context.
The effectiveness of AI also depends heavily on the prompt. A prompt drafted by someone without legal training will produce a very different result than one drafted by someone who knows which issues to look for and how to frame them, just as an agency strategy or media planning expert would get different results than someone unfamiliar with your clients' businesses.
We see this frequently. The output sounds good, but it is answering a different question than the one that actually matters due to the prompt it was given. Sometimes, AI answers such a different question that the output is unhelpful at best, and damaging to your agency at worst.
3. Sending AI Summaries to Your Lawyer Increases Legal Fees
This is one of the more practical issues.
We know why agencies are using AI to process legal issues: the hope of saving time and legal fees.
When clients send AI-generated summaries or analyses, it can, but usually doesn't, streamline our work. In most cases, it doubles the work. This will change as LLM models evolve. It hasn't changed yet.
Why?
A lawyer’s ethical obligations require us to review underlying agreements, facts, negotiations, situations, details, etc. to advise you. If you send AI output and task us with reviewing it, we’re still going to have to perform our own review. We also then need to evaluate the AI output, identify any inaccuracies, and reframe the analysis where necessary. In other words, we are doing the original work and reviewing the AI work. Most of the time, we also have to explain to you why the AI output is incorrect or inapplicable because we are required as lawyers to do so.
The end result is the same, but it takes longer to get there.
If efficiency is the goal, it is almost always faster and more cost-effective to send the original documents, agreements, and campaign materials, then ask focused questions or give us a high-level summary of flagged issues. That allows us to give you direct, actionable advice without the extra steps. By the way, AI is excellent at helping you do this very thing well.
4. Accuracy Is Inconsistent, And That Shows Up in Negotiations
AI can be helpful, but it is not consistently reliable in legal contexts.
We regularly see outputs that misstate the law, apply the wrong standard, overgeneralize niche issues, or introduce concepts that are not relevant to the deal at hand.
This tends to create friction during negotiations. Opposing counsel may push back on points that are not legally grounded, or conversations may go down paths that are not actually productive.
The result is longer negotiations, more back-and-forth, and unnecessary confusion.
As an agency, this should sound familiar. Your clients cannot rely on a single prompt to generate a fully developed campaign. It takes experience and industry knowledge to execute effectively. Legal work operates the same way.
5. Your Prompts May Not Be As Private As You Think
There is also a developing issue around discoverability.
Courts are increasingly treating AI prompts and outputs as business records that may be subject to discovery in litigation. That means what you input, and what the tool generates, could potentially be reviewed by opposing counsel.
This aligns with broader discovery rules that allow access to electronically stored information when it is relevant to a dispute.
From a practical standpoint, that means your internal questions, concerns, and strategies could become part of the record.
This area is still evolving, but it is something to be aware of when using AI for sensitive matters.
6. So, Should You Avoid AI Entirely?
No.
We are strong advocates for using AI in your business. It can improve efficiency, support creativity, and help your team operate more strategically.
However, legal work is one of the areas where the value of AI depends heavily on the user’s training and experience.
We use AI in our practice, but we use it with legal training, industry-specific knowledge, and the ability to validate and apply the output appropriately.
That is what makes it effective.
From an agency perspective, AI works best as a support tool and a thinking partner. It can help you understand terminology, think through the business goals your legal foundation will need to support, brainstorm negotiation strategies, benchmark situations, and prepare questions for your lawyer.
It’s also fair to expect your professional advisors, including lawyers, to be proactive about using AI to make their work for you more streamlined and efficient. We’re here for it.
But when it comes to interpreting contracts (or, especially, writing them), assessing legal risk, evaluating the legal compliance of marketing materials, clearing a piece of intellectual property, or making decisions that impact your business, that is where your legal team should step in.
And if you ask a legal advisor to step in, a responsible lawyer will use AI appropriately, not as a shortcut, while giving your agency the support it deserves.