Myths Busted: How Syllo’s Agentic AI Calms 5 Fears About Automated Document Review


Litigators are using agentic AI in active litigation to gain a strategic edge. Using agentic AI to conduct factual investigations, a trial team at Quinn Emanuel completed review of more than 100,000 documents and won a bet-the-company trial in just six weeks. A Mayer Brown trial team used agentic AI to find 50 key documents, missed in previously conducted managed reviews, within a universe of more than 400,000 documents. A trial team at Ballard Spahr identified deficiencies in an opponent's 25,000-document production within days. Attorneys at these and other firms have found that an agentic approach to generative AI document review achieves rates of recall and precision that surpass traditional review methodologies while also offering unparalleled speed, quality, flexibility, and cost-effectiveness.

Despite its active use in litigation and the advantages it provides, the application of GenAI to litigation is still an evolving field, and the use of agentic AI is newer still. This post clears up some misconceptions we have heard from attorneys and eDiscovery practitioners about using GenAI and agentic AI in document review.

Before we dive into these misconceptions, let's explain how agentic AI is being applied to document review. Syllo's agentic AI document review system coordinates multiple LLMs that organize and delegate the work of the review among themselves and autonomously decide how to conduct the review within guidelines set by users. This methodology allows litigation teams to apply unlimited issue codes, dramatically increases the degree of complexity and nuance the system can accurately handle, and reduces the cost of the review.

This post explores five commonly held assumptions we have encountered about AI document review that do not hold water in view of Syllo's agentic approach.

Myth 1:  AI document review is too expensive.

To the contrary, Syllo's agentic approach to document review is far less expensive than human review, non-GenAI implementations of technology-assisted review (TAR), or linear approaches to generative AI that run each document sequentially through a large language model. Clients have saved 20–30% using Syllo's document review solution, and sometimes more. In addition to being less expensive on a rate basis, the superior performance of agentic AI review means clients save by avoiding re-review of document sets that were not correctly coded the first time.

The agentic approach differs from a linear approach, in which users provide an LLM with a single- or multi-pronged prompt that sets forth case context and describes which documents are responsive and/or the issues with which documents are to be coded. In a linear approach, the LLM considers the responsiveness of each document in the review population one by one.

The linear approach often results in higher expenses because only a limited number of issues can be included in each prompt. If the litigation team needs to research additional issues, a new set of prompts must be run over the data set. Further, stuffing issues into a single prompt diminishes the accuracy of the results and leaves larger buckets of documents to manually sort and review on the back end of the AI review process. Consequently, litigation teams employing a linear prompting approach often conduct multiple trial runs over a sample set of documents to optimize results. These repeated trial runs, and the limit on the number of issue codes that can be applied, add to the expense of a linear GenAI review.
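The cost dynamic described above can be sketched with simple arithmetic. The sketch below uses entirely hypothetical numbers (corpus size, issue count, per-document pass cost) purely to illustrate why multiple passes over the same corpus multiply expense:

```python
import math

def review_cost(num_docs, num_issues, issues_per_prompt, cost_per_doc_pass):
    """Cost of a prompting workflow where each batch of issue codes
    requires a full pass of every document through the LLM."""
    passes = math.ceil(num_issues / issues_per_prompt)
    return passes * num_docs * cost_per_doc_pass

# Hypothetical illustration: 100,000 documents, 12 issue codes,
# $0.01 per document per pass.
docs, issues, per_doc = 100_000, 12, 0.01

# All issue codes applied in one coordinated pass: 1 pass over the corpus.
single_pass = review_cost(docs, issues, issues_per_prompt=issues, cost_per_doc_pass=per_doc)

# Linear prompting limited to 3 issues per prompt: 4 passes over the corpus.
multi_pass = review_cost(docs, issues, issues_per_prompt=3, cost_per_doc_pass=per_doc)
```

Under these made-up numbers, the multi-pass linear workflow costs four times as much as a single coordinated pass, before accounting for the trial runs and back-end manual sorting discussed above.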

Myth 2:  AI document review isn’t ready to match the performance of human reviewers.

It is often an unstated assumption that human reviewers are the most capable of evaluating documents for relevance, and that providers of AI document review solutions must demonstrate that their technology compares favorably with human review. In reality, initial recall rates for contract attorney review teams typically top out at 80%. If the issues under review are numerous and complex, recall can suffer further still.

The performance of Syllo's agentic AI in live litigation, including head-to-head reviews against human reviewers, demonstrates that an agentic approach to automated document tagging outperforms human reviewers on large, complex datasets. Users of Syllo can expect to identify 95% or more of all relevant documents with a high degree of precision, and agentic reviews on Syllo have frequently yielded estimated recall of 100% in industry-standard elusion testing. Performance can be further improved by conducting quality control reviews and refining instructions in areas where GenAI tagging has been overinclusive or underinclusive.
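For readers less familiar with these metrics, here is a minimal sketch of how recall, precision, and elusion-based recall estimates are computed. The document counts are hypothetical and are not drawn from any actual Syllo review:

```python
def recall(true_pos, false_neg):
    """Share of all relevant documents that the review actually found."""
    return true_pos / (true_pos + false_neg)

def precision(true_pos, false_pos):
    """Share of documents tagged relevant that actually are relevant."""
    return true_pos / (true_pos + false_pos)

def recall_from_elusion(tagged_relevant, discard_pile_size, elusion_rate):
    """Elusion testing: sample the untagged 'discard pile', measure the
    fraction of relevant documents that eluded the review, and estimate
    overall recall from that rate."""
    estimated_missed = discard_pile_size * elusion_rate
    return tagged_relevant / (tagged_relevant + estimated_missed)

# Hypothetical review: 9,500 relevant documents found, 500 missed,
# 1,000 documents incorrectly tagged relevant.
r = recall(9_500, 500)        # 0.95
p = precision(9_500, 1_000)   # roughly 0.905
# If an elusion sample finds no relevant documents in a 90,000-document
# discard pile, estimated recall is 1.0 (i.e., 100%).
est = recall_from_elusion(9_500, 90_000, elusion_rate=0.0)
```

This is why an elusion test that surfaces no relevant documents in the discard pile supports an estimated recall of 100%, subject to the statistical confidence of the sample.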

Myth 3:  We can’t risk AI hallucinations.

When language models are used to apply issue codes to documents, the outputs are the tags applied to those documents. In Syllo's agentic review process, the tags are reviewed alongside the actual documents in the document set. There is no opportunity in this workflow for an LLM to invent a document.

Syllo's agentic system further mitigates the risk of errors by providing transparent, explainable AI outputs that allow legal teams to understand why specific tags were applied. Each AI tag comes with a detailed, accurate summary explaining why the tag was applied to a given document. Moreover, tags link to the particular excerpts of documents, so the user is taken directly to the part of the document that supports a given tag.

This built-in accountability and adaptability ensure that Syllo’s tagging enhances review accuracy while maintaining full user oversight and control.

Myth 4:  AI review will never hold up in court if challenged by our adversaries.

Courts have developed a well-articulated jurisprudence for evaluating technology-assisted review that is applicable for tagging documents with AI.  As an initial matter, producing parties are deemed to be in the best position to “evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.” Hyles v. New York City, No. 10-cv-3119 (AT) (AJP), 2016 WL 4077114, at *3 (S.D.N.Y. Aug. 1, 2016) (citing Principle 6 of the Sedona Conference).  To the extent an adversary challenges the methodology, courts look to the results produced by that methodology.  As Magistrate Judge Peck wrote in the landmark opinion in Da Silva Moore, “I may be less interested in the science behind the ‘black box’ of the vendor’s software than in whether it produced responsive documents with reasonably high recall and high precision.”  Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183-84 (S.D.N.Y. 2012).

The performance of agentic AI review in terms of recall and precision, as compared to traditional technology-assisted review, means it is well placed to overcome such challenges. A white paper co-authored by Syllo and 25 legal practitioners reports that Syllo obtained average estimated recall of 97.8% and average estimated precision of 79.7% across its ten most recent responsiveness reviews.

Myth 5:  Our clients wouldn’t want us using unproven AI technology.

Many in-house teams are actively surveying their outside counsel about AI adoption and how law firms are integrating GenAI tools into their workflows. At a recent roundtable in which Syllo participated, in-house counsel explained that they are looking to their outside counsel to take the initiative in adopting new technology for use in litigation. As AI tools become more commonly used, these expectations will only increase, and a lack of adoption will only become more glaring. The demonstrated performance of agentic AI review makes it an obvious candidate for complex litigation, where it can decrease costs and identify the most critical documents.

The Bottom Line

The legal industry has a transformational technology in agentic GenAI document review. Many of the perceived barriers to AI adoption are rooted in outdated assumptions that do not account for the advancements that have obviated those objections.

Syllo is proving that agentic AI produces superior results to traditional document review methodologies while resolving the challenges that formerly hindered the application of AI to document review. Forward-thinking firms are already seeing the benefits: faster document review, lower costs, and a competitive edge in litigation.

Ready to learn about how your attorneys can leverage agentic AI review in litigation? Find out more here or request a demo today.