Hanzo

Part III: Optimizing eDiscovery with AI — Overcoming Scaling Challenges

9 July 2024


Recap and Introduction to Scaling Challenges

In previous blog posts, we discussed how large language models (LLMs) can be used in eDiscovery and the importance of data security, cost control, and transparency of data analysis. One of the novel risks introduced by LLMs is "hallucination," in which an LLM generates inaccurate or irrelevant text. In this post, we put everything together and tackle the challenge of scale.

Meeting Multiple Requirements with LLMs

As we have seen, Hanzo effectively meets several critical requirements in the deployment of LLMs for legal eDiscovery:

  • Data security: Datasets are securely segregated within customer environments under a single-tenant policy. This minimizes the risk of data breaches and unauthorized access and provides the robust security framework that is essential when handling sensitive legal information.
  • Cost: Hanzo uses LLMs in a targeted manner, deploying the smallest appropriate model for each specific task so that computational resources are used efficiently. In addition, machines remain active only for the time needed to complete the task, significantly reducing unnecessary spending on processing power and energy.
  • Transparency: To address hallucinations, where LLMs might generate inaccurate or irrelevant information, Hanzo has established stringent controls. The system either returns the AI-generated content to the user for thorough review or limits responses to simple yes/no answers (a minimal sketch of this yes/no constraint follows this list). This mitigates the risk of misinformation and makes the AI process transparent, so users can understand and trust the results the LLMs provide.
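
To make the yes/no constraint concrete, here is a minimal sketch. The `generate` callable stands in for whatever LLM backend is deployed, and the prompt template and helper name are illustrative assumptions, not Hanzo's actual implementation.

```python
# Illustrative sketch only: constraining a relevance check to a yes/no answer
# while still surfacing the raw model output for human review.
from typing import Callable, Tuple

PROMPT_TEMPLATE = (
    "You are assisting with legal eDiscovery.\n"
    "Question: {question}\n"
    "Document:\n{document}\n\n"
    "Answer strictly with 'yes' or 'no'."
)

def assess_relevance(
    generate: Callable[[str], str],  # assumed LLM backend: prompt in, text out
    question: str,
    document: str,
) -> Tuple[bool, str]:
    """Return (is_relevant, raw_output) so reviewers can audit every answer."""
    raw = generate(PROMPT_TEMPLATE.format(question=question, document=document))
    answer = raw.strip().lower()
    # Anything other than a clear "yes" is treated conservatively as not
    # relevant, and the raw output is kept for transparency.
    return answer.startswith("yes"), raw
```

Keeping the raw output alongside the boolean verdict is what makes the process auditable: the simple answer drives the workflow, while the full text remains available for review.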

Challenges of Scaling LLMs in eDiscovery

The main challenge we face with LLMs is scale. LLMs are expensive to run because they require high-capacity computing resources, and the necessary hardware is sometimes unavailable, delaying dataset analysis. Solving this scale issue is critical: a secure, cost-effective, and transparent solution is pointless without the hardware to analyze your datasets. Even though LLMs require more expensive hardware than the traditional machine learning models used in eDiscovery, the overall cost remains lower than continuous active learning (CAL) and technology-assisted review (TAR) workflows because human review costs are largely eliminated.

Strategic Scaling Solutions With the Right LLM

By choosing the best LLM for the task and tuning the deployments, Hanzo can engineer a solution that runs on affordable, abundant hardware. When extra capacity is needed, we can scale horizontally, adding more machines for as long as necessary. Understanding how datasets and tasks grow is key to predicting how workload and hardware demands grow: doubling the size of the dataset should double the workload, and so should doubling the number of questions posed, as the sketch below illustrates.
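
As a back-of-the-envelope illustration, the sketch below estimates machine-hours assuming one LLM call per document/question pair; the throughput figure is an illustrative assumption, not a Hanzo benchmark.

```python
# Back-of-the-envelope sketch of linear scaling. All figures are illustrative
# assumptions, not Hanzo benchmarks.

def estimate_machine_hours(num_documents: int,
                           num_questions: int,
                           calls_per_machine_hour: float) -> float:
    """Assume one LLM call per (document, question) pair, so the workload
    scales linearly in both the dataset size and the number of questions."""
    total_calls = num_documents * num_questions
    return total_calls / calls_per_machine_hour

base = estimate_machine_hours(100_000, 5, calls_per_machine_hour=2_000)
double_docs = estimate_machine_hours(200_000, 5, calls_per_machine_hour=2_000)
double_questions = estimate_machine_hours(100_000, 10, calls_per_machine_hour=2_000)
assert double_docs == double_questions == 2 * base  # both dimensions scale linearly
```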

Impact of Scaling on Data Processing

Being able to scale up data processing can be crucial when time is limited, datasets grow in size, or additional processing is needed later. By keeping LLM-based data processing within customer environments, we ensure that customers do not compete for shared resources and that processing can be scaled up for as long as resources are available. There are also no quotas, rate limits, or API tokens to worry about, and costs scale linearly with total processing time, as the example below shows.
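
Continuing the illustrative figures from the previous sketch, the example below shows why cost tracks total processing time rather than machine count: adding machines shortens the elapsed time while total cost stays proportional to machine-hours. The hourly rate is an assumed placeholder.

```python
# Sketch: horizontal scaling trades elapsed time for machine count, while total
# cost stays proportional to total machine-hours. The hourly rate is an assumption.

def elapsed_hours(total_machine_hours: float, num_machines: int) -> float:
    return total_machine_hours / num_machines

def total_cost(total_machine_hours: float, hourly_rate: float) -> float:
    # Cost depends only on total machine-hours, not on how many machines run
    # in parallel, so there are no quotas or rate limits to plan around.
    return total_machine_hours * hourly_rate

work = 250.0  # machine-hours for a hypothetical job (see the previous sketch)
print(elapsed_hours(work, 10), total_cost(work, hourly_rate=1.50))  # 25.0 hours, $375
print(elapsed_hours(work, 50), total_cost(work, hourly_rate=1.50))  #  5.0 hours, $375
```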

Wrapping Up: Key Takeaways from Our Journey Through AI-Enhanced eDiscovery

Throughout this series, we've examined the balance of cost, scale, and transparency in deploying AI for legal eDiscovery. Hanzo's approach to integrating LLMs puts data security at the heart of data processing, keeps the solution cost-effective and scalable, and makes the process as transparent as possible. This commitment to secure, cost-efficient, and transparent AI is essential in navigating the complexities of modern legal work. Through careful planning informed by the needs of legal service providers, and by prioritizing these requirements, Hanzo not only improves the efficiency of eDiscovery but also makes relevancy assessments straightforward, supporting the broader goal of advancing legal tech to meet contemporary needs.


Explore the value of automated relevancy assessment for eDiscovery analysis and review! 

Try our value calculator now and see your potential time and money savings with Hanzo Spotlight AI.