Ready for a bear hunt? A 9-step roadmap for getting through your Contract Data Migration

Building a new data repository reminds me of the children’s book: “We’re Going on a Bear Hunt.” You can’t go over it, you can’t go around it, you have to go through it.

Contract Data Migration is messy and hard. It’s rarely properly funded and it takes forever. However, you can’t get an effective operating environment until you harmonize and harness your underlying agreement data. Like the book says, there’s no getting around it.

What makes migrating contract data so difficult and how do you trek through the rough terrain to emerge victorious on the other side?

How did we get here?

First of all, chances are you have accumulated myriad entities and storage locations for contracts in your environment. It is also likely you lacked a policy to collect the agreements in a centralized location, or the discipline to enforce one. Or maybe the data wasn’t compatible, or there was no funding or suitable location for consolidation.

The result is your agreements ended up all over the place. Then, to make a clean start, maybe at some point you tried the easy fix: “Just use the new repository going forward.”

This shortcut felt simple but it was doomed to failure. Why? A significant percentage of your agreements are not “net new” activity but are modifications, extensions, or renewals of existing agreements. Without parents, child agreements lack context and meaning. To see the complete picture of a transaction or relationship, you have to straddle two or more environments. Without a complete view of your historical agreements, the value of your repository diminishes.

And, once people have access to more than one repository, no matter how much you insist they use the new one for all new activity, the pull of the familiar and the need for starting points are going to keep the legacy repositories active.

So you don’t want to just start from scratch. However, your existing agreements will not wander over to the new repository on their own. They must be tagged for value, cleansed and organized, and loaded logically into their new home. And the process to do this needs to address the idiosyncrasies of each of the existing source locations where your legacy agreements reside.

So how do we conduct a successful bear hunt and bring in the beast? Here’s a 9-step roadmap to guide you straight through your contract data migration:

(1) Envision the end state

For your first step, and this cannot be skipped, you must plan your end state. That means working backward from what a good repository looks like. Ask yourself: what do you want to search for and report on? Robust search capabilities and insightful reports are the holy grail of contract data management, but you have to go into the process knowing what you want to see on the front end so you know what to collect on the back end.

(2) Sort out the sorting

Establish your search engine rules and field capture requirements. How do you need to search: by business unit, industry, geography, agreement value, product or service line, date? Balance the number of tags to collect against their value, and collect flexible information that can be regrouped as the organization changes. For example, capture country and roll it up to region structures that morph over time, and capture products that can be grouped and regrouped to represent changing business units.
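
To make the idea of flexible, regroupable tags concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, countries, regions, and products are invented. The point is only that granular values are tagged once on each agreement, while groupings live in mappings you can change without re-tagging.

```python
# Hypothetical roll-up mappings: granular tags are stored on each
# agreement; groupings live in one place and can change freely.
COUNTRY_TO_REGION = {
    "Germany": "EMEA",
    "France": "EMEA",
    "Japan": "APAC",
    "Brazil": "LATAM",
}

PRODUCT_TO_BUSINESS_UNIT = {
    "Payroll Suite": "HR Solutions",
    "Ledger Pro": "Finance Solutions",
}

agreement = {"id": "A-1001", "country": "Germany", "product": "Ledger Pro"}

# Derived groupings are computed at search/report time, so a
# reorganization only means editing the mappings above.
region = COUNTRY_TO_REGION.get(agreement["country"], "UNASSIGNED")
business_unit = PRODUCT_TO_BUSINESS_UNIT.get(agreement["product"], "UNASSIGNED")
print(region, business_unit)  # EMEA Finance Solutions
```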

(3) Define your data’s attributes for high-level viewing and quick insights

Do I need to know who worked on the agreement, the parties’ notice addresses, entity names, expiration dates? These are the high-level elements that make up your attributes. Interview your users: what do they need, and how would they like to search?
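
If it helps to picture the attribute list as a schema, here is a hypothetical sketch using Python dataclasses. Every field name is an assumption; the actual list should come out of your user interviews.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical high-level attribute record; field names are
# illustrative and should be driven by user interviews.
@dataclass
class AgreementAttributes:
    agreement_id: str
    counterparty_entity: str          # legal entity name on the paper
    internal_owner: Optional[str]     # who worked on the agreement
    notice_address: Optional[str]     # where formal notices go
    expiration_date: Optional[date]   # None until extracted and validated
    tags: list[str] = field(default_factory=list)

record = AgreementAttributes(
    agreement_id="A-1001",
    counterparty_entity="Acme GmbH",
    internal_owner="j.doe",
    notice_address=None,
    expiration_date=date(2026, 3, 31),
)
```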

(4) Identify data gaps

Look across what you have already collected in your existing repositories and harmonize the data field names for a consistent reference going forward. End date, close date, and expiration date can all have different meanings; pick one and define it clearly for all users. Then find your gaps: which repositories collected which information? Chances are high they didn’t all collect everything you want to know going forward.
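
One way to operationalize the harmonization is a simple crosswalk from each source system’s field names to your canonical names, as in the sketch below. The source names and canonical choices are invented for illustration; unmapped fields are kept rather than dropped so nothing is silently lost.

```python
# Hypothetical crosswalk: per-source field names -> canonical names.
FIELD_CROSSWALK = {
    "legacy_crm":   {"close_dt": "expiration_date", "acct": "counterparty_entity"},
    "shared_drive": {"end_date": "expiration_date", "party": "counterparty_entity"},
}

def harmonize(source: str, raw_record: dict) -> dict:
    """Rename a raw record's fields to the canonical schema.

    Unmapped fields are kept under a 'source_' prefix so no data
    is silently dropped during migration.
    """
    mapping = FIELD_CROSSWALK[source]
    out = {}
    for key, value in raw_record.items():
        out[mapping.get(key, f"source_{key}")] = value
    return out

print(harmonize("legacy_crm", {"close_dt": "2026-03-31", "acct": "Acme GmbH"}))
# {'expiration_date': '2026-03-31', 'counterparty_entity': 'Acme GmbH'}
```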

(5) Reconcile data gaps

How are you going to fill in the missing data? Is there a need for an extraction project to tackle the missing pieces? If time and money won’t permit a full-population cleansing, then prioritize the agreements you need to cleanse. Find out when agreements expire, or whether they already have, and move expired ones to an archive state. Focus on your high-touch customers and suppliers; the smaller and one-off agreements can wait if you don’t have the time or funding to do it all.
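
If you want to formalize the triage, a rough rule like the hypothetical sketch below can sort agreements into archive, cleanse-now, and cleanse-later buckets. The bucket names and criteria are assumptions to adapt to your own priorities.

```python
from datetime import date

# Hypothetical triage: expired agreements go to archive; active ones
# are ranked so high-touch counterparties get cleansed first.
def triage(agreement: dict, high_touch: set[str], today: date) -> str:
    exp = agreement.get("expiration_date")
    if exp is not None and exp < today:
        return "archive"
    if agreement["counterparty_entity"] in high_touch:
        return "cleanse_now"
    return "cleanse_later"

high_touch = {"Acme GmbH", "Globex Corp"}
a = {"counterparty_entity": "Acme GmbH", "expiration_date": date(2026, 3, 31)}
print(triage(a, high_touch, date(2025, 1, 1)))  # cleanse_now
```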

(6) Develop a data storage strategy

The next step toward an efficient migration is to use a staging application that lets you load all your varying repository content side by side and apply bulk attribute tag overlays, harmonizing your storage attributes before migration into the new repository. Here are some example questions to help establish missing attribute values for large document populations. How can I generalize things I already know so they apply across the population? Can I run keyword searches on document titles to find all of my Masters, SOWs, and NDAs that are not labeled as such, and will that let me properly label the agreement type on 80-90% of my agreements? Do source system folders allow me to bulk-label country fields? Can my staging application identify foreign-language agreements by character recognition?
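
As a loose illustration of the kind of bulk keyword overlay a staging application might apply, here is a minimal Python sketch that labels agreement type from document titles. The keyword patterns and type labels are assumptions; real rules would be tuned against your actual population.

```python
import re

# Hypothetical keyword rules for bulk-labeling agreement type from
# document titles; order matters, first matching rule wins.
TYPE_KEYWORDS = [
    ("NDA",    re.compile(r"\b(nda|non[- ]disclosure)\b", re.I)),
    ("SOW",    re.compile(r"\b(sow|statement of work)\b", re.I)),
    ("Master", re.compile(r"\b(msa|master (services )?agreement)\b", re.I)),
]

def label_agreement_type(title: str) -> str:
    for label, pattern in TYPE_KEYWORDS:
        if pattern.search(title):
            return label
    return "UNLABELED"  # falls to manual or AI-supported review

for t in ["Acme MSA 2019 final.pdf", "SOW #4 - Ledger Pro.docx", "misc scan 0012.pdf"]:
    print(t, "->", label_agreement_type(t))
```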

These first 6 steps set the stage for your new repository design and reduce the quantity of agreements that will need manual or AI-supported reviews to plug the holes. Keep track of your methodology, as you will need to apply it one more time in a final true-up of any agreements created in legacy environments in the interim. Here you have options. If there are key data attributes you must know about your agreements that cannot be bulk-estimated with sufficient accuracy (such as expiration date), you can explore running all of your agreements, or at least your prioritized ones, through an AI-supported extraction process. I say AI supported because the data points typically needed at this stage often represent the most difficult content for an AI to find. Expect to require some level of human validation to supplement the AI reviews and ensure reliable responses. This is where prioritizing agreements for review, along with bulk-estimating responses where possible, helps bring your population down to a manageable number for pricing and time considerations.
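
One common pattern for blending AI extraction with human validation is confidence-threshold routing, sketched below. The threshold value and the idea that the extractor returns a value with a confidence score are assumptions, not a reference to any particular tool.

```python
# Hypothetical routing of AI-extracted values: accept high-confidence
# answers, queue the rest for human validation. The 0.9 threshold and
# the (value, confidence) extractor output are assumptions.
REVIEW_THRESHOLD = 0.9

def route_extraction(agreement_id: str, value, confidence: float,
                     accepted: list, review_queue: list) -> None:
    if confidence >= REVIEW_THRESHOLD:
        accepted.append((agreement_id, value))
    else:
        review_queue.append((agreement_id, value, confidence))

accepted, review_queue = [], []
route_extraction("A-1001", "2026-03-31", 0.97, accepted, review_queue)
route_extraction("A-1002", "2027-01-15", 0.62, accepted, review_queue)
print(len(accepted), "auto-accepted;", len(review_queue), "for human review")
```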

(7) Build agreement trees

Once you have your data lined up and cleansed, you can build your parent/child agreement trees. Having these relationships readily visible in your repository saves search time and provides enormous benefits in understanding the relationships with your customers or suppliers. A repository that can display parent/child hierarchies gives your users an at-a-glance view of each agreement and its context in the overall counterparty relationship.
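
Structurally, an agreement tree needs little more than a parent pointer on each record. The sketch below, with invented IDs, shows how a flat parent ID column can be turned into the at-a-glance hierarchy described above.

```python
from collections import defaultdict

# Hypothetical flat records: each child carries its parent's ID.
agreements = [
    {"id": "MSA-1", "parent_id": None,    "title": "Acme Master Agreement"},
    {"id": "SOW-1", "parent_id": "MSA-1", "title": "SOW #1"},
    {"id": "AMD-1", "parent_id": "SOW-1", "title": "Amendment to SOW #1"},
]

# Index children by parent so the tree can be walked top-down.
children = defaultdict(list)
for a in agreements:
    children[a["parent_id"]].append(a)

def print_tree(parent_id=None, depth=0):
    for a in children[parent_id]:
        print("  " * depth + a["title"])
        print_tree(a["id"], depth + 1)

print_tree()
# Acme Master Agreement
#   SOW #1
#     Amendment to SOW #1
```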

(8) Conduct a final cleanse

After you have done your bulk migration, do a true-up: a final load and cleanse of any incremental data entered into still-functioning repositories between your original pull and their freeze or retirement date. Re-run your methodologies on the incremental data and bring it over. Now the go-forward use of your new repository is supported by a firm foundation of information, and with the legacy repositories retired, everyone has to use the new application. Your bear is in the cage.
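
The true-up is essentially a delta load: pull anything created after your original extraction and run it through the same harmonization methodology. A minimal sketch, with invented record fields:

```python
from datetime import datetime

# Hypothetical delta pull: re-apply the same harmonization methodology
# (e.g., the crosswalk from step 4) to records created in a legacy
# source after the original extraction date.
def true_up(source_records: list[dict], original_pull: datetime,
            harmonize) -> list[dict]:
    delta = [r for r in source_records if r["created_at"] > original_pull]
    return [harmonize(r) for r in delta]

records = [
    {"id": "A-2001", "created_at": datetime(2024, 5, 1)},
    {"id": "A-2002", "created_at": datetime(2024, 9, 15)},
]
migrated = true_up(records, datetime(2024, 6, 30), harmonize=lambda r: r)
print([r["id"] for r in migrated])  # ['A-2002']
```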

(9) Put a plan in place to water and feed your captured bear

Finally, you have your wonderful new repository, but it will only be as good as the processes you put in place to maintain it. Post-signature contract management is the unsung value engine behind your repository. However, your business and Legal teams won’t feel the same urgency to invest in the upkeep of your repository that they feel in supporting the pressures of bringing in new transactions. This is where service providers or offshore in-house teams can offer options to collect and populate your executed agreements from low-cost centers. These teams make sure your data going forward is handled consistently and economically. All the hard work of capturing the bear is lost if you kill it in captivity. Your new repository is also now well positioned for analytics and managed services that track the performance and obligation fulfillment of your agreements. But bear care and taming is a topic for another article.

All of this can feel overwhelming. But if you take the shortcut of a “go forward load only” approach, you will wait a long time for the elusive critical mass needed to produce meaningful use and insights from your repository, and you will likely never achieve value. In the meantime, your efforts will be constantly derailed by the management of multiple environments.

Clients who have invested in a proper data migration are glad they did so. Those who haven’t are among the 80% of contract repository users who are disappointed in their deployments. Investing in proper contract data migration opens the door to achieving the value proposition of your repository and workflow tools, and sets the stage for even greater understanding of your agreement content. Don’t be among the disappointed.

You can’t go around a proper migration, but you can get through it. And, what’s on the other side is worth it: reduced costs, faster turnaround times and happier internal clients. You can capture and tame that bear.
