Point of View

Protect your data, IP, and future: Negotiate these five terms in your AI contracts

Vendors are embedding AI into your products and services—often without telling you—and your old contract terms won’t protect you. Every new deal and renewal you sign risks leaking your data, losing control of your IP, and absorbing liability you never anticipated. Enterprise leaders who lock in these five contract terms today will decide how AI serves them, while those who don’t will serve the AI instead.

This document is part of our comprehensive service offering, AI-First Deal Lab, which provides an extensive review of agreements. It is for informational purposes only and reflects HFS Research’s views based on industry analysis and observed commercial trends. It does not constitute legal advice, and HFS Research is not a law firm. Buyers should consult with qualified legal counsel before drafting or negotiating any contract terms or making decisions based on the contents of this publication.

1. Force providers to reveal how and where they use AI in your solutions

You cannot control what you are not aware of, so your contract must require disclosure to keep risky technical capabilities out of your solutions. Providers will likely propose terms requiring disclosure only for material topics, while buyers will likely overcorrect and demand full disclosure and approval of every use of AI. We think the middle ground requires disclosure and approval only where there is significant use of AI or material deployment of AI in features: for example, the vendor notifies the client of significant AI use changes and obtains approval before deploying material AI features.

Example: Vendor shall disclose in writing any material use of artificial intelligence within the products, services, or processes provided to Client, including significant AI functionality changes. Vendor shall obtain Client’s prior written approval before deploying any AI features that materially alter service outputs or decision-making processes.

2. Own the outcomes your data creates

Closely tied to model training is the question of intellectual property ownership. A trained model will likely represent a mix of client and vendor content; after all, the vendor will position its product during sales efforts as more effective than others based on its training, while your company will expose business information and customer data to the systems. Vendors will open negotiations seeking to retain ownership of all models and improvements, including those trained on client data, a position that doesn't make good business sense. The reverse, where the client owns all models trained on its data and all outputs those models create, doesn't make sense either, as it sweeps in the vendor's IP. We think there is a better outcome: the client owns outputs and holds a perpetual license to models trained on its data, while the vendor retains the underlying model IP. In some cases, consider a "residuals model" or shared monetization rights if your data significantly enhances the vendor's commercial models.

Example: All outputs generated by AI models trained on Client data shall be owned exclusively by Client. Vendor retains ownership of its underlying models but grants Client a perpetual, royalty-free license to use any AI models trained in whole or in part on Client data for Client’s internal purposes.

3. Expose and fix AI bias before it damages your firm’s brand

Companies must control model bias. The media is awash with stories of autonomous driving and customer service models gone wrong. To manage this, enterprises must ensure models are tested for bias, that identified bias is mitigated, and that the vendor provides transparency. As models reason, they can produce documentation logs of that thinking. The ability to review and report on these logs provides insight into how models work through real-world situations and where they exhibit biases. Vendor contracts are likely to either disclaim or omit warranties on this point. We think the best position for clients is to ensure the vendor has functionality to track and report on model reasoning, while requiring the vendor to warrant the use of industry best practices for bias testing. Further, the client should require the vendor to provide reasonable explanations of model behavior upon request.

Example: Vendor warrants that it will implement and maintain industry best practices for bias detection, mitigation, and testing in all AI models used to deliver services. Vendor shall, upon request, provide Client with documentation of AI decision-making processes, including model reasoning logs and bias testing results, in a format reasonably usable by Client.

4. Make vendors shield you when AI creates legal disputes

Current court dockets show many pending cases in which content creators allege IP infringement by model creators. Furthermore, recent press stories have alleged that some companies train models by analyzing other models. And let's not ignore the long list of pending patents you likely have no visibility into, or situations where customers or employees may allege discrimination based on model bias. Vendors will likely disclaim all liability for AI outputs and provide tools as-is to limit general liability, including all IP liability. Clients will likely open by requesting that the vendor broadly indemnify them for all third-party claims arising from AI use and AI-generated outputs, including IP and discrimination. We think this is a good position, although the discrimination language may be substituted with requirements to address and resolve bias and discrimination issues.

Example: Vendor shall defend, indemnify, and hold harmless Client from and against any third-party claims, damages, liabilities, or expenses arising from: (a) intellectual property infringement in AI-generated outputs; and (b) failure of AI systems to comply with applicable anti-discrimination or data protection laws. This obligation shall survive termination of the Agreement.

5. Terminate your deals when AI fails

Termination clauses have long been contentious flashpoints. As much as we don't want to throw fuel on this fire, AI output can be unpredictable, and models may never work to the expected levels, despite sales claims otherwise. As such, we believe buyers should negotiate exit clauses that apply if AI systems fail to meet agreed-upon accuracy, compliance, and bias thresholds for a sustained period. Buyers should also ensure the vendor permits them to terminate if AI system use becomes unlawful under prevailing regulations. Vendors will point to their uncured material breach clause, which is nearly impossible to penetrate because the cure is constant training. We think the right balance of risk is to allow clients to terminate if the technology's use becomes unlawful or if material AI-related issues are not remediated within a reasonable period, such as 30 days.

Example: Client may terminate this Agreement, in whole or in part, without penalty upon thirty (30) days’ written notice if: (a) AI systems fail to meet agreed-upon accuracy, compliance, or bias thresholds for a consecutive ninety (90) day period; or (b) the use of AI systems becomes unlawful under applicable regulations.

The Bottom Line: You either control these five terms today, or your vendors will control your data, risk profile, and future.

AI isn’t just a feature or a Clippy icon dancing in a co-pilot screen. It’s a shift in how work gets done, who owns the outcomes, and who bears the risk. Providers are rewriting their contracts to take advantage of that shift. Buyers must respond, but be reasonable. Most importantly, you don’t need to become a legal expert, but you do need to insist on deal terms that reflect this new reality (and hire a great attorney, too). Your old terms simply don’t make sense in this new world. Companies must protect themselves, and if your terms don’t address the five topics above, you’re handing over control of your data, risk profile, and future.
