Advanced AI tools now under development promise to profoundly transform the legal landscape.
AI has a number of potential applications in the legal domain, especially those relating to the automation of repetitive and routine tasks. Conducting legal research can be tedious, monotonous and time-consuming, yet timely and comprehensive legal research is critically important for lawyers. AI systems can aid lawyers by researching relevant case law and applicable statutes faster and more thoroughly than most lawyers could on their own. Such systems are proving powerful enough to use data to predict the outcome of litigation, enabling lawyers to provide more impactful advice to their clients on dispute resolution issues.
Prior to 2016, when the use of “hired” robots to assist in bankruptcy cases was first publicly announced, lawyers had been using static pieces of software to navigate the law, tools that were limited in scope and required many hours of information retrieval on the part of each lawyer. The use of complex software in the practice of law had become commonplace (for, say, purposes of discovery), but AI tools, such as IBM’s Watson, assist legal professionals in making actual judgments. Even as they use Machine Learning (ML) technology to fine-tune their research methods, some legal research platforms—such as Bloomberg BNA, LexisNexis and Thomson Reuters—come with a steep learning curve based on their current functionality and offerings, requiring training that is not built into the billable hour model and making them less desirable from an operational perspective.
The days of manually poring through endless Internet and database search results are essentially gone for firms that use AI and ML tools, representing a transition from programming to learning and teaching. The more advanced AI tools do not merely translate words and syntax into search results; they are learning to understand the law. Weighing data, drafting documents and making arguments will remain the domain of humans, but by tackling the burdensome task of research, AI frees up lawyers to do what they do best.
Although it remains early days for artificial intelligence in the legal profession, cutting-edge AI tools under development should transform the legal landscape in a profound way. Such tools stand to save so much time and money that failing to find meaningful ways to utilize AI throughout a law firm’s operations will render some firms obsolete. Most law firms currently use AI primarily for electronic discovery, due diligence and contract review. The legal sector’s challenge remains to become forward-leaning and imaginative enough to embrace the inevitable and adopt AI on a broad basis.
Beyond helping to prepare cases, AI can also predict how they will hold up in court. Software can, for example, determine which judges tend to favor plaintiffs, summarize the legal strategies of opposing lawyers based on their case histories and determine the arguments most likely to convince specific judges. Critics rightly fear that AI could be used to game the legal system by third-party investors hoping to make a profit.
Until there is a significant society-changing breakthrough in AI, robolawyers will not be disputing the finer points of copyright law or writing elegant legal briefs, but chatbots could be very useful for other aspects of the practice of law. For example, bankruptcy, deportation and divorce disputes typically require navigating through lengthy and confusing statutes that have been interpreted in thousands of previous decisions. Chatbots could eventually analyze nearly every possible exception, loophole, or historical case to determine the path of least resistance.
AI is enabling lawyers to establish new boundaries for how and where the law applies, even as the law establishes the boundaries of what AI is permitted to do. Typically, the legal system’s interaction with software and robotics establishes liability where developers were negligent or could foresee harm. But certain aspects of AI make it difficult to prove fault by humans, and there may be no way to accurately foresee injury. Traditional tort law would stipulate that a software developer is not liable in such circumstances. The dangers posed by Terminator-like outcomes could therefore proliferate without anyone having to assume responsibility.
While it is unlikely that outcomes associated with AI will result in a permanent state in which no one is held responsible for its actions, it is unclear how the law will evolve to address every impact of AI. Concepts such as liability or a jury of peers appear meaningless unless AI’s developers and users can be proven to have intended to cause harm or damage. As is currently the case with cybercrime, in which actors operate in an anonymous, boundaryless and largely lawless world, identifying AI’s actors and proving their intent can be a rather difficult task.
A useful starting point would be to establish norms and standards to govern the AI world. Laws should be passed that require manufacturers and developers to abide by a general set of ethical guidelines—such as technical standards mandated by treaty or international regulation—to be applied when it is foreseeable that algorithms and data can cause harm. This could be achieved by convening globally recognized AI experts to create a framework that includes explicit creation standards for neural network architectures (containing instructions for training and interpreting AI models).
Clearly, enforcing such a standard would require government intervention, and some governments have already begun forming teams to explore the concept. It will be a long time before such laws are routinely adopted, however, and even longer before a multilateral organization is created and tasked with passing and enforcing laws governing AI on a global basis. That is probably what will be needed, however, to make universal enforcement and compliance a reality. As is the case with the cyber world, a large question mark hangs over the AI world, its various manifestations, and its future impact on the law.
Daniel Wagner is CEO of Country Risk Solutions. Keith Furst is Managing Director of Data Derivatives. They are the co-authors of the forthcoming book AI Supremacy, which will be published in September.
Editor's Note: A version of this article first appeared here in The National Interest on September 10, 2018.