Our client, a global Tier 1 bank, had entered into a multi-year IT outsourcing agreement with a large IT services provider. After 5 years, the first status review was approaching, and our client was concerned: the bank had missed certain milestones, which would trigger a penalty fee of up to $20 million. Given the length of the contract, which consisted of multiple volumes, and the fact that the review was only 4 weeks away, an internal audit of the contract was impractical.
When we learned about the situation during an unrelated conversation, we offered to help with our AI-supported methodology. The client agreed and made the contract available to us.
Within our system, we modeled the rules governing the contract, including dates, assumptions, and penalties. As soon as the contract was available in this structured format, it was possible to:
- Point out inconsistencies between clauses spread out over the thousands of pages of the contract;
- Identify contradictions between contract sections;
- Highlight ambiguities that the lawyers had overlooked in a contract that relied heavily on boilerplate text, giving our client the opportunity to leverage them;
- Model a timeline of deliverables and penalties;
- Run a battery of simulations to find the best course of action for our client.
Based on our simulations of possible outcomes, the bank was able to propose small contract changes that served its financial goals. During the 5-year review meeting, the contract partner agreed to these seemingly trivial suggestions without understanding their financial consequences over the second period of the 10-year contract. In addition, the bank was able to reduce the penalty fee from $20 million to zero by resolving the discovered ambiguities in its favor.
The client, a large US-based government agency, had witnessed unexpected and seemingly random behavior in an aircraft. Unsure of the cause, they asked a few select companies, including us, to analyze the aircraft's erratic behavior. One complication: the source code was classified, so we could use only the sensor data from a few flights.
Applying our Induction System to the actual telemetry data revealed clusters of variables whose relationship had not been identified previously.
We informed the client of our findings and our hypothesis that a sign error in the source code (a minus sign where the value should have been positive) caused the problem. The client examined the source code and confirmed our hypothesis. After fixing the error, the aircraft behaved as expected.
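To give a flavor of how a sign inversion can surface in telemetry alone, here is a minimal sketch. It is purely illustrative, not the actual system: the variable names and data are hypothetical, and the idea is simply that two channels physics says should move together instead show a strong negative correlation.

```python
# Illustrative sketch (hypothetical data and names, not the actual system):
# a sign bug in control code shows up as a strong *negative* correlation
# between two telemetry channels that should be positively correlated.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical telemetry: commanded control input vs. measured response.
# A sign bug flips the measured response.
commanded = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
measured = [-c * 0.8 for c in commanded]  # flipped by the bug

r = pearson(commanded, measured)
if r < -0.9:
    print("strong negative correlation: possible sign inversion")
```

In practice such a check would run across many variable pairs; a pair whose correlation has the "wrong" sign relative to the physical model is a candidate for exactly the kind of source-code error described above.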
The Induction System uses examples to identify and extract the underlying rules. These rules are then verified against a second dataset and, if necessary, modified. The rules are expressed in a script that is visually represented as a flow diagram; the more complex the rule, i.e. the more decisions it involves, the more complex the representation. At this point in the process, incongruities, contradictions, and ambiguities are also identified so that they can be acted on. If needed, the rules can be further tested by running them against a large set of known data during backtesting.
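The induce-then-verify loop described above can be sketched in a few lines. This is a deliberately tiny stand-in for the real system: it induces a single threshold rule from labeled examples and then checks it against a separate verification dataset. All data and names are made up for illustration.

```python
# Minimal sketch of "induce rules from examples, verify on a second
# dataset" (hypothetical; the real system handles far richer rules).

def induce_threshold(examples):
    """Find the threshold t that best separates labels on one variable.
    examples: list of (value, label) pairs with boolean labels."""
    candidates = sorted({v for v, _ in examples})
    best_t, best_acc = None, -1.0
    for t in candidates:
        acc = sum((v >= t) == lab for v, lab in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def verify(rule_t, holdout):
    """Accuracy of the induced rule on a verification dataset."""
    return sum((v >= rule_t) == lab for v, lab in holdout) / len(holdout)

# Hypothetical training examples: (milestone delay in days, penalty due?)
train = [(0, False), (5, False), (12, True), (20, True), (8, False), (15, True)]
t = induce_threshold(train)  # learned rule: delay >= t -> penalty

holdout = [(3, False), (14, True), (25, True), (7, False)]
print("threshold:", t, "holdout accuracy:", verify(t, holdout))
```

If the holdout accuracy were poor, the rule would be modified and re-verified, mirroring the loop in the text; backtesting is the same check run against a much larger body of known data.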
Once the rules are known, they can be applied to new specific cases in an operational context, e.g. protocol translation, or used to run scenarios, e.g. different interpretations of a contract.
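Running scenarios over a modeled rule can be as simple as evaluating the same facts under competing readings of an ambiguous clause. The sketch below is hypothetical (the clause, figures, and parameter names are invented) but shows the shape of the exercise that let the bank compare interpretations.

```python
# Hypothetical scenario run: the same contract facts evaluated under two
# readings of an ambiguous clause about forgiven milestone misses.

def penalty(milestones_missed, per_miss_fee, grace_misses):
    """Penalty owed for missed milestones; grace_misses is the number of
    misses forgiven, which the ambiguous clause leaves open."""
    return max(0, milestones_missed - grace_misses) * per_miss_fee

scenario = {"milestones_missed": 4, "per_miss_fee": 5_000_000}

strict = penalty(**scenario, grace_misses=0)   # counterparty's reading
lenient = penalty(**scenario, grace_misses=4)  # client's reading

print(strict, lenient)  # compare outcomes under each interpretation
```

Each discovered ambiguity adds another axis to such a scenario run, and simulating combinations of readings is what surfaces the course of action most favorable to the client.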
In the final step, the system can codify the rules in a number of programming languages so that clients can apply the rules in their operations or as part of continuous analysis.
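The codification step amounts to generating source code from the rule structure. As a hedged illustration of the idea (the generator below is a toy, not the actual system, and the rule format is invented), here the target language happens to be Python itself:

```python
# Toy sketch of codifying an induced rule as source code a client could
# embed in their own systems (hypothetical rule format and names).

def codify_rule(name, variable, threshold, outcome_true, outcome_false):
    """Emit a standalone Python function implementing a threshold rule."""
    return (
        f"def {name}({variable}):\n"
        f"    if {variable} >= {threshold}:\n"
        f"        return {outcome_true!r}\n"
        f"    return {outcome_false!r}\n"
    )

src = codify_rule("penalty_due", "delay_days", 12, "penalty", "no penalty")
print(src)

# The generated code is itself runnable:
namespace = {}
exec(src, namespace)
print(namespace["penalty_due"](20))
```

The same rule structure could just as well be emitted in another language; only the code templates change, which is what makes targeting "a number of programming languages" straightforward.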