At one time or another, most lawyers involved in eDiscovery have felt the unique pressure of a slow-moving document review. That pressure to speed up review is one of the reasons eDiscovery is effectively ground zero for today's exploding use of artificial intelligence (AI) in law.

According to a recent Altman Weil survey, when respondents were asked to choose among 10 options for improving law department efficiency, the most frequently cited (by 58% of respondents) was greater use of technology tools to aid in speed and accuracy.1

AI, one of the key technologies for increasing speed and accuracy in workflows in just about every industry, has been part of the litigation discovery process for nearly a decade. It started in 2010 with technology-assisted review (TAR), also known as predictive coding. The TAR workflow is essentially an iterative process: a subject matter expert (SME) reviews document samples, and the computer then applies coding to the full document set based on what it learned from those samples. That process is repeated until accuracy levels meet acceptable standards. The more documents you review, the more accurate the results.

While TAR has evolved considerably and steadily gained popularity, newer techniques such as active learning, or TAR 2.0, are now emerging. In TAR 2.0, the user can begin reviewing any set of documents and then use those tagging calls to predict tags for other documents in the database. This eliminates the need for an SME to conduct multiple reviews of random sample sets. To make this distinction clearer, let's take a more detailed look at TAR 1.0 versus TAR 2.0.

The legal marketplace offers many tools, methods, and protocols that claim to be TAR, including predictive coding, assisted review, advanced analytics, concept search, and early case assessment. All of these practices have gained traction since the publication of the first federal opinion approving TAR use (Da Silva Moore v. Publicis Groupe).

TAR in its original form, however, is a multi-step process. Depending on whether one is using simple active learning (SAL) or simple passive learning (SPL), TAR typically involves anywhere from 6 to 10 steps.2 These TAR 1.0 processes help cut the number of documents needed for review and thereby dramatically reduce the amount of time humans must spend on document review, which, of course, can drastically cut review costs in the form of attorneys' billable time to clients. The process also makes review more efficient by ensuring that contract reviewers are generally looking at only the most relevant documents.

But traditional TAR 1.0 also has drawbacks, chief among them the need to review randomly selected sample document sets, or seed sets, and to do so multiple times. These sample sets sometimes include documents with low value (e.g., vague text or text unrelated to any other documents), which means multiple iterations of sample review are needed to achieve the desired accuracy. The use of randomly selected samples also means that human reviewers don't have the opportunity to give the machine learning models feedback by introducing higher-value documents outside of the prescribed samples. In fact, a chief complaint from users of TAR 1.0 is that it takes too many iterations, and ultimately too much time, to reach accuracy targets. These stated drawbacks contribute to why TAR 2.0 is so promising.
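The iterative TAR 1.0 loop described above (SME codes a random sample, the system learns from that coding and predicts across the rest, repeat until accuracy is acceptable) can be sketched in a few lines. Everything below is an illustrative assumption, not any vendor's actual implementation: a made-up corpus in which a document is "relevant" only if it contains the word "contract", a trivial word-weight model standing in for the real machine learning, a 20-document sample size, and an arbitrary 95% accuracy target.

```python
import random

# Hypothetical toy corpus (an assumption for illustration): each "document"
# is a bag of words, and the SME's ground-truth call is "relevant" iff the
# document mentions "contract".
random.seed(0)
VOCAB = ["contract", "invoice", "memo", "lunch", "travel", "budget", "agenda", "hr"]
docs = [[random.choice(VOCAB) for _ in range(4)] for _ in range(200)]
truth = ["relevant" if "contract" in d else "not" for d in docs]

def train(labeled):
    """Stand-in 'model': average occurrences of each word in relevant
    samples minus average occurrences in not-relevant samples."""
    pos = [doc for doc, tag in labeled if tag == "relevant"]
    neg = [doc for doc, tag in labeled if tag == "not"]
    return {
        w: sum(d.count(w) for d in pos) / max(len(pos), 1)
           - sum(d.count(w) for d in neg) / max(len(neg), 1)
        for w in VOCAB
    }

def predict(weights, doc):
    return "relevant" if sum(weights.get(w, 0) for w in doc) > 0 else "not"

# The TAR 1.0 loop: review a random sample, retrain, measure accuracy on
# the unreviewed remainder, and repeat until the target is met (or we run
# low on unreviewed documents to measure against).
labeled, reviewed = [], set()
accuracy, rounds = 0.0, 0
while accuracy < 0.95 and rounds < 10:
    pool = [i for i in range(len(docs)) if i not in reviewed]
    if len(pool) < 40:                      # keep a holdout to measure against
        break
    for i in random.sample(pool, 20):       # SME hand-codes a random sample
        labeled.append((docs[i], truth[i]))
        reviewed.add(i)
    weights = train(labeled)                # model re-learns from all coding so far
    holdout = [i for i in range(len(docs)) if i not in reviewed]
    accuracy = sum(predict(weights, docs[i]) == truth[i] for i in holdout) / len(holdout)
    rounds += 1

print(f"stopped after {rounds} rounds at {accuracy:.0%} accuracy")
```

A TAR 2.0-style continuous active learning loop would differ mainly in the sampling step: instead of drawing random samples, it would rank the unreviewed documents by model score, send the highest-ranked ones to the reviewer, and retrain after every batch of coding calls.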