The AI-Ready Agency: 5 Steps to Reducing Your Agency’s AI Risks Right Now

Our firm talks with agency leaders daily about the impact of AI on their businesses.

Much of the conversation about implementing or integrating AI into your agency’s work right now focuses on two areas – opportunity and risk.

Your agency has likely spent a lot of time consuming information about the business opportunities AI can create, both in its operations and in creating client deliverables.

How does the agency balance that opportunity against the inevitable legal risks that integrating AI poses?

Start with the five steps you can take now to help minimize them.

Most of the risk we help agencies evaluate falls into one of two categories – intellectual property issues, and data confidentiality or privacy issues. Agencies have questions like:

  • Who owns the work product when AI is used to create it? (What if part came out of AI, and our agency also created part?)
  • Whose responsibility is it if IP infringement occurs in the delivered work?
  • What information of the Client can we use in our prompting?
  • And, again, whose responsibility is it if a confidentiality agreement or data privacy rule is accidentally breached when we use AI?

The five steps we’ve outlined can prepare your agency to avoid or manage these risks, especially as the pace of AI adoption continues to accelerate.

Step One – Have Conversations with Key Parties

One of the best strategies to manage risk is to communicate about the potential risks before they present a challenge to your agency or any of its crucial relationship partners.

Talk to the agency’s clients at the beginning of the relationship, or at the start of a project, about their point of view on, and risk tolerance for, AI. Some discussion prompts for these client conversations are:

  • Does your organization have an AI use policy that the agency needs to understand?
  • Are you on board with the agency’s potential use of AI for this work?
  • Are any of the assets you are providing to us AI-generated? Confidential or proprietary?
  • Does any of the data you have provided to us come from your customers or other third parties?

These questions are designed to accomplish two things – educating the client and minimizing risk (for both the client and your agency). Asking them also positions you as proactive and as a leader on AI use with your clients.

The agency should also have conversations with the vendors and contractors (especially freelance talent) who will contribute to its work. Before your agency incorporates or passes along any deliverables that include work a third party used AI to create, it should consider asking questions like:

  • Will you be using generative AI (GAI) to create any of the work you deliver to us?
  • Which AI tools do you use, and for what use cases?
  • Have you input, or will you input, any of the information the agency or the client has provided into your prompts?

And, most critically, talk to your agency’s internal talent to ensure they understand the risks, client concerns, and the agency’s policies around using AI to create delivered work. Your team should also be well-versed in the questions the agency needs to ask clients and third parties about their AI policies and their handling of confidential or sensitive information.

Step Two – Develop Agency Policies Around AI Use

Every agency needs a written policy that describes its risk profile, rules of use, approved use cases and tools, and the human guardrails that shape its AI practices.

And further to the point raised above about crucial conversations, it’s the job of agency leadership to make sure your team is trained and fluent in the policy, as well as confident about when to ask for help or clarification when new issues around AI use arise. Which they will. Team members should also be able to explain the policy to clients when necessary, and know when and how to ask third parties questions about their AI use and expectations as appropriate.

What kind of things should the agency address in its policy? Here’s a starter list:

  • Which AI tools are approved by the agency for use, and what is the approval process for adopting new tools?
  • What are the approved use cases for GAI – inspiration, first drafts, research, strategy?
  • What human checkpoints apply to final deliverables or work that consists of GAI-created content?
  • What information do we seek, and what questions do we ask, of our vendors, contractors, and clients?
  • What is our agency’s policy on inputs into GAI – who approves what information is fed into prompts?
  • What is our agency’s risk tolerance for using GAI to create work?

Your agency might also find value in creating an external-facing policy (or a summary of your policy) to share with clients and third parties to ensure you’re all on the same page about when and how AI is used to create client deliverables.

Step Three – Review Terms and Conditions of AI Tools

It’s important to be familiar with the user terms and conditions of the AI tools your agency regularly uses. This is fluid, admittedly – there are many tools, each with multiple levels of access or use, which can make it confusing to understand which version(s) of the terms apply. And the dominant tools themselves are continuously evolving. Start with the low-hanging fruit: the tools the agency uses most consistently.

Agencies are not law firms, so what should you look for when scanning the terms to understand where your risks lie?

Some significant terms of which the agency should be aware include:

  • What are the indemnification and liability limits in the tool’s terms of use? And do they change based on what kind of access the agency has or which version it uses?
  • What is the platform owner’s position on ownership of the created output?
  • What representations are made about training vs. non-training use of our inputs and prompts?

There are some patterns you will notice as you review the terms of the “major” AI platforms, and we’ve created a guide to those that you can access here.

Step Four – Address AI in YOUR Agency Services Agreements

The agency’s Master Services Agreement (MSA) is key to minimizing risk in many areas of your business, and addressing AI specifically in that contract is one of the most significant steps you can take toward that goal.

Your MSA should work hand-in-hand with your written policies and the crucial conversations the agency is having with clients about AI usage. In it, you should look to address these points:

  • Limitation of the agency’s liability for use of assets it created with GAI
  • Ownership of IP in deliverables the agency created using GAI
  • Client acknowledgement that GAI has been used by the agency to create deliverables
  • Client approval of work created before it gets shared publicly

What if the agency is already in the midst of a project with a client, working under a contract that is silent on the topic of AI use? Address it in your statement of work (SOW), in an addendum to the agency’s MSA with the client, or in a written acknowledgment, signed by the client, of the agency’s use of AI.

Step Five – Proceed With Caution When Inputting Client or Consumer Information

Your agency is typically given access to a great deal of information in order to execute its work for clients. Much of it is sensitive, proprietary, or otherwise confidential. And some of it doesn’t even belong to the client, as in the case of customer or consumer data.

The best risk-reduction practice for handling this information is to AVOID inputting confidential or proprietary business information into AI prompts if you truly want to maintain its confidentiality. And if you are going to input it, do so only after discussing it with the client and obtaining their (ideally written) acknowledgment.

Next, remember that applicable data privacy laws govern the use of customer, client, and consumer data. If your AI use case requires inputting this data to do research, build models, or develop insights, make sure the agency understands and is following those laws when you input the data (which often means: don’t input the data!). Again, it’s another crucial client conversation.

Time and technology move fast, and these best practices for agencies will likely evolve. Focusing your risk-reduction efforts here, today, will position your agency to adapt nimbly to whatever changes AI has in store for its work.
Questions? Reach out to us for help or to learn more about our AI Legal Toolkit for Agencies.
