In February, when OpenAI services were temporarily blocked in Italy, we warned that:
Design for change.
The most effective way to prepare for change is to expect it. When selecting software, whether from a vendor or as part of internal development, consider whether it’s designed to support swapping out one model for another. This applies to all software, but as the pace of change is so rapid in AI, it’s especially true for software that relies on LLMs.
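What "designed to support swapping out one model for another" looks like in practice is a thin abstraction between your application logic and any particular vendor SDK. The sketch below is illustrative only - the class and function names are hypothetical, and the backends are stand-ins rather than real API calls:

```python
from abc import ABC, abstractmethod


class CompletionModel(ABC):
    """The only surface that application code is allowed to depend on."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAModel(CompletionModel):
    """Stand-in for a hosted model behind a vendor SDK (hypothetical)."""

    def complete(self, prompt: str) -> str:
        return "vendor-a: " + prompt


class LocalModel(CompletionModel):
    """Stand-in for a self-hosted open model (hypothetical)."""

    def complete(self, prompt: str) -> str:
        return "local: " + prompt


def summarize(model: CompletionModel, text: str) -> str:
    # Business logic sees only the interface, so swapping models is a
    # configuration change, not a rewrite.
    return model.complete("Summarize: " + text)
```

The design choice is the point: if `summarize` imported a vendor SDK directly, every workflow built on it would inherit that vendor's counterparty risk.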
While the focus today is on OpenAI's counterparty risk, it's also worth noting that change may be driven by market forces, such as the release of a superior model from a competitor like Google or Meta. If you need to change models to keep up with your peers, then you'll want to be able to do so quickly and easily.
It's no accident that we designed Kelvin to be modular, supporting the use of different components across the entire stack - most especially LLMs. This modularity has a cost, as it requires us to support multiple interfaces and implementations, but it also provides customers with the flexibility and peace of mind that no one component will be a single point of failure.
Have a backup plan - or maybe a better plan.
This is a corollary to the first point, but it's worth emphasizing. Regardless of whether your primary plan is to use GPT-4 or an open-source model like Llama 2, it's critical that you have a backup plan. As with any risk management strategy, business continuity is the goal, and the only real way to obtain it is to build in redundancy.
In addition to the obvious benefits of redundancy, there are also often cost savings to be had. For example, in testing alternatives to GPT-4, many organizations have found that they can achieve similar levels of quality with less expensive vendors or models. This is especially true for organizations that handle a range of task and document types, since different models may be better suited to different workloads.
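A backup plan can be as simple as an ordered list of models tried in sequence. The sketch below assumes nothing about any particular vendor - the model callables are placeholders for real API or local-inference calls:

```python
import logging


def complete_with_fallback(prompt, models):
    """Try each model in priority order; return the first success.

    `models` is a list of (name, callable) pairs, where each callable is
    a placeholder for a real API or local-inference call (hypothetical).
    """
    errors = []
    for name, call in models:
        try:
            return call(prompt)
        except Exception as exc:
            # Log and fall through to the next model in the chain.
            logging.warning("model %s failed: %s", name, exc)
            errors.append((name, exc))
    raise RuntimeError(f"all models failed: {errors}")
```

The same chain can double as a cost-optimization tool: put the cheaper model first for routine tasks and fall back to the more capable one only when needed.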
Avoid embedding lock-in.
Some choices are easy to change, like the tool used to extract text from a PDF. Other choices - like document embeddings in a large DMS - are much less so. One of the most painful lessons that organizations learn is that when they select a closed embedding model to support their RAG workflows, they are effectively locked in to that model for the life of their system.
Imagine a world where OpenAI’s models are no longer available. Just kidding; you don’t have to, since they’ve already deprecated a number of prior models that users relied on.
Now, imagine that you've spent millions of dollars indexing your entire document management system with a model like text-embedding-ada-002. What do you do when that model is no longer available? How do you compare a new document to the millions of documents that you've already indexed?
The answer is that you can’t. You’re stuck with the model that you’ve selected, and if it’s no longer available, your only choice is to start over and pay to re-index your entire system. For some organizations, this may be a small price to pay, but for others, it may be a material and painful cost.
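The root of the problem is that vectors from different embedding models live in different spaces, so similarity scores across models are meaningless. One modest mitigation is to record the model identifier alongside every stored vector, so a deprecation can at least be detected and re-indexing scoped precisely. A minimal sketch, with hypothetical names throughout:

```python
from dataclasses import dataclass


@dataclass
class StoredEmbedding:
    doc_id: str
    model_id: str       # e.g. "text-embedding-ada-002" or an open model name
    vector: list[float]


def compatible(query_model_id: str, stored: StoredEmbedding) -> bool:
    # A similarity score is only meaningful when both vectors came
    # from the same embedding model.
    return stored.model_id == query_model_id


def needs_reindex(store: list[StoredEmbedding], current_model_id: str) -> list[str]:
    """List documents whose vectors were produced by a retired model."""
    return [e.doc_id for e in store if e.model_id != current_model_id]
```

Tagging vectors this way doesn't remove the re-indexing cost, but it turns a silent correctness failure into an explicit, measurable migration.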
The best insurance policy is to avoid embedding lock-in in the first place. This is why we’ve not only designed Kelvin to support open embeddings, but also chosen to provide customers with traditional escrow and source code access to the Kelvin software and our proprietary models. Even if we were to go out of business, our customers would still be able to use our embedding models to support their existing workflows.
Own your own.
In the long run, we believe that the nature of competition in the industry will change as LLMs become more widely available. Not only will all firms have access to the same models through vendors like Thomson Reuters or LEXIS, but their clients will be able to generate the same work product in-house.
Relationships will always be important, but we expect that firms will increasingly compete on the basis of their private processes and knowledge. In such a world, it is critical that firms have the ability to build and maintain their own LLMs and LLM-driven workflows.
While we expected that this dynamic would play out over longer horizons, this weekend’s events have made it clear that it’s never too early to start planning for a future in which you own your own LLMs. In addition to the competitive benefits, this also provides a hedge against the risk of vendor discontinuance.
We are uniquely situated in the market to support organizations that decide to start on this journey, as we provide not just the software to build and maintain LLM workflows, but also the training data and consulting expertise to help you get started.
If you'd like to learn more about how we can help you de-risk your AI strategy, don't be shy - drop us a line at email@example.com.