When legal research company LexisNexis created its Protégé assistant, it wanted to find the best way to leverage its expertise without deploying a massive model.

Protégé aims to help lawyers, associates and paralegals draft and proofread legal documents, and to ensure that everything they cite in complaints and briefs is accurate. However, LexisNexis did not want a general-purpose legal AI assistant; it wanted to build one that learns a firm's workflow and is more customizable.

LexisNexis saw an opportunity to harness the power of large language models (LLMs) from Anthropic and Mistral and to find the best models to answer user questions, Jeff Riehl, CTO of LexisNexis Legal and Professional, told VentureBeat.
“We use the best model for the specific use case as part of our multi-model approach. We use the model that provides the best result with the fastest response time,” said Riehl. “For some use cases, that will be a small language model like Mistral, which we distill to improve performance and reduce cost.”

While LLMs still provide value when building AI applications, some organizations are turning to small language models (SLMs), or distilling LLMs into smaller versions of the same model.
Distillation, where an LLM “teaches” a smaller model, has become a popular method for many organizations.
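The "teaching" in distillation typically means training the small student model to match the larger teacher's temperature-softened output distribution rather than only the hard labels. A minimal illustration of that soft-target loss (the logits and models here are hypothetical stand-ins, not anything from LexisNexis or Mistral):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model logits into a probability distribution."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the temperature-softened teacher and student
    distributions -- the 'soft target' signal the student is trained on."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])  # student agrees -> ~0 loss
off = distillation_loss(teacher, [0.5, 1.0, 4.0])      # student disagrees -> larger loss
```

Minimizing this loss over many examples is what transfers the teacher's behavior into the smaller, cheaper student.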
Small models often work best for applications such as chatbots or simple code completion, which is what LexisNexis wanted for Protégé.

This is not LexisNexis's first foray into building AI applications; the company launched its LexisNexis + AI legal research hub in July 2024.

“We had used a lot of AI in the past, mostly around natural language processing, deep learning and machine learning,” said Riehl. “It really changed in November 2022 when ChatGPT launched, because before that, many AI capabilities were somewhat behind the scenes. But once ChatGPT came out, its generative capabilities, its conversational capabilities, were very, very intriguing to us.”
Fine-tuning small models and model routing
Riehl said LexisNexis uses different models from most of the major model providers when building its AI platforms. LexisNexis + AI used Claude models from Anthropic, GPT models from OpenAI, and a model from Mistral.

This multi-model approach helped break down each task users wanted to perform on the platform. To do this, LexisNexis had to architect its platform to switch between models.

“We decompose the task being performed into individual components, and then we identify the best large language model to support each component. One example is that we will use Mistral to assess the query the user has entered,” said Riehl.

For Protégé, the company wanted faster response times and models more tuned for legal use cases, so it turned to what Riehl calls “fine-tuned” versions of models — essentially smaller-weight versions of LLMs, or distilled models.

“You don’t need GPT-4o to assess a query, so we use it for more sophisticated work, and we switch models,” he said.

When a user asks Protégé a question about a specific case, the first model it pings is a fine-tuned Mistral “to assess the query, then determine the purpose and intent of that query,” before routing it to the model best suited to complete the task. Riehl said the next model could be an LLM that generates new queries for the search engine, or another model that summarizes the results.
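The routing Riehl describes can be sketched as a two-stage dispatch: a cheap "assessor" model labels the query's intent, and the request is then handed to whichever model is registered for that intent. Everything below — the intents, the keyword classifier standing in for the fine-tuned Mistral, and the handler names — is a hypothetical illustration, not LexisNexis's implementation:

```python
# Stage 1: a cheap model assesses the query's intent.
# Here a keyword heuristic stands in for the fine-tuned assessor model.
def assess_intent(query: str) -> str:
    q = query.lower()
    if "summarize" in q or "summary" in q:
        return "summarize"
    if "draft" in q or "write" in q:
        return "draft"
    return "search"

# Stage 2: each intent maps to the model best suited for that task.
# Lambdas stand in for calls to the larger, specialized models.
HANDLERS = {
    "search":    lambda q: f"[search-model] expanded queries for: {q}",
    "summarize": lambda q: f"[summary-model] summary of: {q}",
    "draft":     lambda q: f"[draft-model] draft based on: {q}",
}

def route(query: str) -> str:
    intent = assess_intent(query)   # cheap assessment first
    return HANDLERS[intent](query)  # then dispatch to the matching model

result = route("Summarize Smith v. Jones for me")
```

The design point is that the expensive model only runs on the work that needs it; the classification step itself stays small and fast.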
Currently, LexisNexis relies mainly on a fine-tuned Mistral model, although Riehl said it used a fine-tuned version of Claude “when it first came out; we don’t use it in the product today, but in other ways.” LexisNexis is also interested in other OpenAI models, particularly the reinforcement fine-tuning capabilities the company released last year, and is evaluating OpenAI's reasoning models, including o3, for its platforms.

Riehl added that the company may also consider using Google's Gemini models.

LexisNexis underpins all of its AI platforms with its own knowledge graph to provide retrieval-augmented generation (RAG) capabilities, especially since Protégé could help launch agentic processes later.
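RAG of the kind described grounds a model's answer in facts retrieved from a knowledge store before generation. A toy sketch of that retrieve-then-prompt step — the miniature "knowledge graph" and matching rule here are illustrative assumptions, not LexisNexis's actual graph:

```python
# Toy knowledge store: subject -> known facts. A real knowledge graph
# would hold typed entities and relations; a dict keeps the sketch small.
KNOWLEDGE = {
    "smith v. jones": [
        "Smith v. Jones (2019) concerned breach of contract.",
        "The court ruled for the plaintiff.",
    ],
    "statute of limitations": [
        "Limitations periods vary by claim type and jurisdiction.",
    ],
}

def retrieve(query: str, top_k: int = 3) -> list[str]:
    """Return facts whose subject appears in the query."""
    q = query.lower()
    hits = [fact
            for subject, facts in KNOWLEDGE.items() if subject in q
            for fact in facts]
    return hits[:top_k]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so the model answers from them, not memory."""
    context = "\n".join(retrieve(query)) or "(no matching facts)"
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What happened in Smith v. Jones?")
```

The grounded prompt is what gets sent to the generator model, which is what lets the system cite sources rather than hallucinate them.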
The AI legal suite
Even before the advent of generative AI, LexisNexis was testing the possibility of putting chatbots to work in the legal industry. In 2017, the company piloted an AI assistant to compete with IBM's Watson-based Ross. Those efforts now live in the company's LexisNexis + AI platform, which brings together its AI services.
Protégé helps law firms with tasks that paralegals or associates tend to do. It helps write briefs and complaints grounded in a firm's documents and data, suggests next steps in the legal workflow, suggests new prompts to refine searches, drafts questions for depositions and discovery, checks citations in filings for accuracy, generates timelines and, of course, summarizes complex legal documents.
“We see Protégé as the initial step in personalization and agentic capabilities,” said Riehl. “Think of the different types of lawyers: M&A, litigators, real estate. It will continue to become more and more personalized based on the specific task you do. Our vision is that every legal professional will have a personal assistant to help them do their job based on what they do, not what other lawyers do.”

Protégé now competes with other legal research and technology platforms. Thomson Reuters customized OpenAI's o1-mini model for its CoCounsel legal assistant. Harvey, which raised $300 million from investors including LexisNexis, also offers a legal AI assistant.