Ryan Adams | February 28, 2025

Busting Seven Myths About GenAI Document Review

By Eric Wall

 

It’s not a hot take to call law firms ‘cautious,’ but sometimes those making decisions about technology adoption can use caution to mask inertia.

When it comes to adopting GenAI-powered tools like Syllo, careful due diligence is critical. Legal teams must fully understand a platform’s capabilities, rigorously evaluate security measures, and collaborate with providers to ensure seamless integration. However, some common objections to GenAI adoption are based on misconceptions rather than legitimate roadblocks.

It’s time to address those misconceptions head-on.

 

1. We don’t want large language models trained using client data.

Attorneys often assume that GenAI tools must be trained on their own data or their clients' data to work effectively. This isn't the case. The most sophisticated large language models can understand even the most complex subject matter, and the background they need about the documents at issue in the litigation can be supplied at review time without altering the weights of the model.
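As a purely illustrative sketch (not a description of Syllo's implementation), the snippet below shows how a generic LLM API can be given case background in the prompt itself, so the model is never retrained or fine-tuned on client data. The model name, prompt wording, and client setup are assumptions made for the example.

# Illustrative only: case context is passed in the prompt at review time,
# so the underlying model's weights are never updated with client data.
from openai import OpenAI

client = OpenAI()  # hypothetical setup; any comparable LLM API works the same way

CASE_BACKGROUND = (
    "This litigation concerns an alleged breach of a supply agreement. "
    "Documents are relevant if they discuss delivery schedules or pricing terms."
)

def tag_document(document_text: str) -> str:
    """Ask the model whether a single document is relevant, given the case background."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CASE_BACKGROUND},
            {"role": "user", "content": f"Tag this document as RELEVANT or NOT RELEVANT:\n\n{document_text}"},
        ],
    )
    return response.choices[0].message.content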

Syllo addresses this concern by providing a secure environment and ensuring that client data is not used to train underlying GenAI models. This approach allows legal teams to leverage the full power of advanced language models without compromising client confidentiality.

 

2. GenAI document review isn’t ready to match the performance of human reviewers.

It is often an unstated assumption that human reviewers are the gold standard for evaluating documents for relevance and that GenAI must prove it can match human review. In reality, initial recall rates for a human review typically top out at 80%. While reviewers can increase recall using traditional technology-assisted review, these technologies require iterative rounds of review to raise the percentage of relevant documents identified.

GenAI document tagging has proven to outperform human reviewers. Users of Syllo can expect to identify 95% or more of all relevant documents, with the initial review alone producing a high recall rate. Performance can be improved further by conducting quality control reviews and refining instructions in the areas where GenAI document tagging has been overinclusive or underinclusive.
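To make the recall comparison concrete, here is a small worked example with hypothetical document counts (the figures are illustrative, not drawn from an actual matter): recall is simply the share of truly relevant documents that a review pass actually identifies.

# Hypothetical example: a collection containing 10,000 truly relevant documents.
def recall(found_relevant: int, total_relevant: int) -> float:
    """Recall = relevant documents identified / all relevant documents in the collection."""
    return found_relevant / total_relevant

total_relevant = 10_000
human_first_pass = recall(8_000, total_relevant)   # ~80%, typical for an initial human review
genai_first_pass = recall(9_500, total_relevant)   # 95%+ in an initial GenAI tagging pass

print(f"Human first-pass recall: {human_first_pass:.0%}")   # 80%
print(f"GenAI first-pass recall: {genai_first_pass:.0%}")   # 95%
# In this hypothetical collection, the gap represents roughly 1,500 relevant
# documents that an initial human pass would miss.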

 

3. GenAI document review is too expensive.

On the contrary, GenAI document review is far less expensive than human review. Clients have saved 20–30% using Syllo's GenAI document review, with the savings coming predominantly from eliminating the need for contract review teams. In addition to being less expensive on a rate basis, the superior performance of GenAI document review (see above) means that clients save by avoiding repeated re-review of document sets that were not coded correctly the first time.
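A back-of-the-envelope comparison shows how those savings compound; every rate and volume below is a hypothetical assumption for illustration, not Syllo pricing or actual client figures.

# Hypothetical cost comparison for a 100,000-document review.
docs = 100_000

# Assumed figures for illustration only.
human_cost_per_doc = 1.00   # contract reviewer rate, per document
genai_cost_per_doc = 0.75   # 25% lower per-document rate (midpoint of the 20-30% savings range)
re_review_fraction = 0.20   # share of the set re-reviewed after miscoding in a human pass

human_total = docs * human_cost_per_doc * (1 + re_review_fraction)
genai_total = docs * genai_cost_per_doc

print(f"Human review (with re-review): ${human_total:,.0f}")   # $120,000
print(f"GenAI review:                  ${genai_total:,.0f}")   # $75,000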

 

4. We don’t have time to learn a new workflow or integrate GenAI document tagging into our tech stack.

Law firms don’t have to fly solo when figuring out how to leverage GenAI document tagging. In assisting law firms that use Syllo in litigation, we’ve learned that they want support in navigating the new workflow this technology introduces into the review process. Attorneys also find it more convenient to receive support migrating data to Syllo from an eDiscovery platform that doesn’t include GenAI document tagging functionality. We’ve accordingly assembled a team of seasoned professionals trained on Syllo who can deliver the benefits of GenAI document tagging quickly, without requiring the firm to reallocate resources to an immediate integration of Syllo into its existing tech stack. Firms can therefore use Syllo for pressing litigations while rolling it out to the firm more broadly over time.

 

5. We can’t risk AI hallucinations.

In GenAI document tagging, the model’s outputs are tags applied to documents in the review set, and those tags are reviewed alongside the actual documents. There is no occasion in this workflow for GenAI to invent a document. As noted above, GenAI document tagging is better than humans at identifying relevant documents, and where documents are not tagged correctly, performance can be improved through a combination of quality control review and refined instructions.

Syllo further mitigates the risk of errors by providing transparent, explainable AI outputs that allow legal teams to understand why specific tags were applied. Each document tag is accompanied by clear explanations and reasoning generated by the AI, enabling attorneys to easily trace and validate the logic behind the tagging. This built-in accountability and adaptability ensure that Syllo’s GenAI tagging enhances review accuracy while maintaining full user oversight and control.
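As a purely hypothetical illustration of what a transparent, reviewable tag might contain (not Syllo's actual output schema), a tagging record can pair the decision with the model's reasoning and a pointer back to the source document:

# Hypothetical tag record: every field traces back to an existing document in the set,
# so there is nothing for the model to "invent" -- reviewers check the tag and the
# explanation against the document itself.
example_tag = {
    "document_id": "DOC-000123",           # identifier of a document in the review set
    "tag": "Relevant - Pricing Terms",     # the applied tag
    "explanation": (
        "The email discusses revised unit pricing for Q3 deliveries, "
        "which falls within the pricing-terms relevance criteria."
    ),
    "reviewed_by_human": False,            # flipped to True once QC review confirms the tag
}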

 

6. AI review will never hold up in court if challenged by our adversaries.

Courts have developed a well-articulated jurisprudence for evaluating technology-assisted review that applies equally to GenAI document tagging. As an initial matter, producing parties are deemed to be in the best position to “evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.” Hyles v. New York City, No. 10-cv-3119 (AT) (AJP), 2016 WL 4077114, at *3 (S.D.N.Y. Aug. 1, 2016) (citing Principle 6 of the Sedona Conference). To the extent an adversary challenges the methodology, courts look to the results of the review. As the court observed in a seminal case on the subject, “I may be less interested in the science behind the ‘black box’ of the vendor’s software than in whether it produced responsive documents with reasonably high recall and high precision.” Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183-84 (S.D.N.Y. 2012).

Litigation teams have also found uses for GenAI document review in contexts in which defensibility isn’t an issue. For example, GenAI document tagging has been used to review incoming productions and is particularly useful for reviewing “document dumps,” where large quantities of documents are produced shortly before key deadlines, like depositions. GenAI document tagging also serves as a useful tool in high-value litigations where documents have already been produced following human review or more traditional technologies; GenAI review can reveal whether those older techniques missed important documents that could prove useful at summary judgment or trial.

 

7. Our clients wouldn’t want us using unproven AI technology.

Many in-house teams are actively surveying their outside counsel about AI adoption and how they are integrating GenAI tools into their workflows. At a recent roundtable in which Syllo participated, in-house counsel explained that they were looking for their outside counsel to take the initiative in adopting new technology for use in litigation. As GenAI tools become more commonly used, these expectations will only increase, and a lack of adoption of such tools will only become more glaring.

 

The Bottom Line

The legal industry is on the cusp of a technological transformation, and many of the perceived barriers to GenAI adoption are rooted in outdated assumptions.

Platforms like Syllo are proving that GenAI can be deployed securely, efficiently, and with greater accuracy than traditional review methods. Forward-thinking firms are already seeing the benefits—faster document review, lower costs, and a competitive edge in litigation.

It’s no longer a question of if GenAI will become standard in litigation workflows. It’s a matter of when. And firms that hesitate risk being left behind.

Ready to learn about how your attorneys can leverage GenAI in litigation? Find out more here or request a demo today.