Lawyering up: Can AI make legal more human?

Last month, I found myself giving a demonstration of Juro, our contract management software, on a Monday afternoon. Our sales team was tied up and I thought it would be fun. We sell to lawyers, and this prospect was an assistant general counsel at a mid-market corporate. I ran through the high-level messages we usually use to frame these meetings, then, before jumping into the product, asked if they had any questions so far.

“Yeah, thanks Richard — can you tell me, does this do AI?”

Confusing. I’m not sure AI is something you “do,” but regardless: “Yes, we have machine learning models in the product — but can I ask, why? What are you trying to do?”

“Not sure — I just need to know if it does AI. OK, it does. Does it do blockchain?”

Needless to say, this is a pretty terrible way to tackle legal processes: putting the tech first and the people last. This approach, with the hype around technology completely disconnected from what people actually need, is as unhelpful as it is common.

The truth is that for most people, dealing with the law and legal processes is stressful. People and businesses deal with legal issues at key moments: buying a house, making a claim in court, hiring a new employee, signing a new customer. But instead of being delightful, friendly and human, these processes are often stressful, confusing and scary.

Everyone hates lawyers

Lawyers don’t always enjoy the trust of their clients, and I say this as a former lawyer. A Princeton study found that the average person rates lawyers highly for competence but poorly for trust and warmth. Legal matters remain abstruse and difficult for most people. And while technology is certainly transforming the legal industry at every level, particularly through applications of AI, it has so far done more to make work efficient and profitable for big law firms than to give people a better experience of law.

The legal industry has at least begun its journey with AI, with applications of machine learning springing up across almost the entire workflow within law firms. Contract review was the first category to really take off, with law firms using optical character recognition and named entity recognition to tell clients what’s in their contracts. Other applications of AI have addressed legal tasks like billing, time management, e-discovery and legal research. This is all encouraging, but it doesn’t do much for end users.
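
To make the contract review piece concrete, here’s a minimal sketch of the kind of entity extraction those tools perform. spaCy’s general-purpose English model stands in for the specialized models commercial vendors train on legal text, and the clause is invented:

```python
# A minimal sketch of contract entity extraction. spaCy's general-purpose
# English model stands in for the specialized models commercial review
# tools train on legal text; the clause is invented.
import spacy

nlp = spacy.load("en_core_web_sm")  # install: python -m spacy download en_core_web_sm

clause = (
    "This Agreement is made on 1 March 2021 between Acme Ltd and Bolt Inc, "
    "with a total fee of $250,000 payable within 30 days of signature."
)

for ent in nlp(clause).ents:
    # ORG entities approximate the parties; DATE and MONEY approximate
    # the key terms a reviewer would want surfaced.
    print(f"{ent.label_:>8}: {ent.text}")
```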

How AI will shape people’s experience of law

The good news is that AI, thoughtfully deployed, has the potential to change real people’s experience of law in a profound way, making arcane and impenetrable processes more accessible and more human for everyone.

Here are four factors we must consider to make this a reality.

1. Enter the chatbots

Joshua Browder created DoNotPay four years ago, originally as an app to contest parking tickets. It has since expanded to incorporate more than 1,000 chatbots, attracting investment from Andreessen Horowitz along the way. DoNotPay uses AI (supported by IBM’s Watson) to automatically generate documents that challenge various infractions and fines, empowering end users to navigate stressful legal processes on their phones.
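
To illustrate the pattern (this is not DoNotPay’s actual implementation), here’s a toy sketch of the template-filling approach such bots are built on; the questions and letter wording are invented:

```python
# A toy sketch of the template-filling pattern behind document-generating
# legal chatbots. Not DoNotPay's implementation; the questions and letter
# wording are invented for illustration.
from string import Template

QUESTIONS = {
    "ticket_id": "What is the reference number on the ticket?",
    "date": "On what date was the ticket issued?",
    "reason": "In one sentence, why was the ticket issued in error?",
}

APPEAL = Template(
    "Dear Sir or Madam,\n\n"
    "I am writing to contest parking ticket $ticket_id, issued on $date.\n"
    "$reason\n\n"
    "I respectfully request that the ticket be cancelled.\n"
)

def run_chatbot() -> str:
    # Each answer fills one slot in the appeal letter template.
    answers = {field: input(prompt + " ") for field, prompt in QUESTIONS.items()}
    return APPEAL.substitute(answers)

if __name__ == "__main__":
    print(run_chatbot())
```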

But while DoNotPay was celebrated in some circles, in others it was met with hostility — even lawsuits. The law is a regulated profession, with lawyers needing recognized qualifications to undertake certain “protected activities” — and those activities aren’t open to laypeople, never mind algorithms. Some lawyers don’t want software taking over their secret kingdoms, nor their billable hours.

If AI is really going to make the experience of law more human, this attitude can’t survive. If a chatbot can reproduce what you do faster, more cheaply, and with a better experience for the end user, then does your activity deserve to be “protected”? As a lawyer, wouldn’t you rather spend your time adding real value? Chatbots already help people with their banking, mortgages, shopping and utilities; there’s no reason legal should be any different, with AI giving people an accessible experience right at the point of need. Pioneers at Stanford and Suffolk law schools have embraced this challenge, and it’s only a matter of time before regulators catch up.

2. Contracts machines can read

At its core, a legal contract is just the written expression of a relationship: a series of promises governing how two parties treat each other. Contracts should be something to celebrate; sadly, most people hate them. Research from the IACCM found that 83% of people are dissatisfied with the contracting process. The routine of signing, scanning, posting and ultimately losing documents makes contracting fundamentally unfriendly and uncollaborative, damaging relationships between people and businesses from the outset.

This is partly down to contracts’ default format: static Word documents or PDFs. They’re made of unstructured data that’s difficult to search, and audit trails of activity are lost, along with negotiation histories. This archaic process is why a couple buying a house can arrange the mortgage online in a chat window with a digital broker, but when it comes to the actual sale contract, they’re likely to get a scanned, three-column, almost-illegible hard copy to decipher. Contract management software still struggles to overcome this problem, despite how far technology has advanced.

But if contracts are machine-readable, they can be collaborative and dynamic from the outset. Businesses won’t have to pay law firms to find out what’s in their contracts, because they’ll be searchable. If our homebuying couple don’t understand the changes in their conveyancing documents, they can scroll back through the version history and leave comments for their lawyers to answer.
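
As a sketch of what “machine-readable” could mean in practice, here’s a toy contract model in which clauses are structured records carrying their own version history, so search and review become trivial; the schema is invented for illustration:

```python
# A sketch of a machine-readable contract: clauses as structured records
# with a version history, so search and review no longer require a law
# firm. The schema is invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Clause:
    heading: str
    text: str
    history: list[str] = field(default_factory=list)  # prior wordings, oldest first

    def amend(self, new_text: str) -> None:
        self.history.append(self.text)
        self.text = new_text


@dataclass
class Contract:
    parties: tuple[str, str]
    clauses: list[Clause]

    def search(self, term: str) -> list[Clause]:
        # Structured text makes full-contract search a one-liner.
        return [c for c in self.clauses if term.lower() in c.text.lower()]


sale = Contract(
    parties=("Buyer", "Seller"),
    clauses=[Clause("Completion", "Completion shall occur within 28 days of exchange.")],
)
sale.clauses[0].amend("Completion shall occur within 14 days of exchange.")
print(sale.search("completion")[0].history)  # the homebuyers can see what changed
```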

Even better, machine learning models can begin to recognize the problem clauses that baffle readers, flag them during drafting, and suggest plain-language alternatives. If the wording of a clause is likely to leave readers scratching their heads and paying lawyers to explain it, AI can spot the problem in the draft and flag it for the author. If AI can support the resurgence of plain language in drafting, that can only help make legal more human.
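
A crude heuristic can stand in for such a model here: flag any clause whose sentence length or density of legalese crosses a threshold. A production system would learn these signals from reader feedback, but the idea is the same; the thresholds and word list below are invented:

```python
# A crude stand-in for a clause-complexity model: flag clauses whose
# average sentence length or legalese density crosses a threshold. A real
# system would learn these signals; thresholds and word list are invented.
import re

LEGALESE = {"hereinafter", "notwithstanding", "aforesaid", "whereof", "therein"}

def flag_clause(text: str, max_avg_words: int = 25, max_legalese: int = 1) -> bool:
    sentences = [s for s in re.split(r"[.;]", text) if s.strip()]
    words = [w.strip(".,;()").lower() for w in text.split()]
    avg_len = len(words) / max(len(sentences), 1)
    legalese_hits = sum(w in LEGALESE for w in words)
    return avg_len > max_avg_words or legalese_hits > max_legalese

clause = (
    "Notwithstanding anything to the contrary herein, the Buyer, hereinafter "
    "the aforesaid party, shall indemnify the Seller in respect of all claims "
    "and liabilities whatsoever arising therein."
)
if flag_clause(clause):
    print("Flagged: consider suggesting a plain-language alternative.")
```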

3. Predicting (and deciding) the future

Applied to big data, the predictive capability of AI could be transformative at every level of the legal system. If it can help people and businesses spot the risks that might arise from a given course of action, they can avoid costly litigation before it starts. If a legal matter is unavoidable, predictive analytics could tell us how long it’s likely to take and how much it’s likely to cost. Giving people a better understanding of the consequences of their legal strategies might help them avoid difficult, stressful outcomes.

Some companies have already started to use public datasets to make this happen. In New York, Premonition has compiled all the publicly available court data it can find, using it to predict win rates for a given legal issue before each court and judge. The aim is to guide potential claimants to the venue where their case is likely to secure the best outcome. If we can determine whether a claim is likely to succeed in a given court, could we determine whether it’s likely to succeed anywhere? And in doing so, aren’t we effectively rendering a predictive verdict through AI?
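
This is not Premonition’s method, but the underlying idea can be sketched simply: fit a classifier on historical case records, then compare predicted win probabilities across venues. The courts, features and outcomes below are invented:

```python
# A sketch of the basic idea behind venue analytics: fit a classifier on
# historical case outcomes and compare predicted win probability across
# courts. Not Premonition's method; the data is invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# (case features, outcome: 1 = claimant won)
history = [
    ({"court": "NY-South", "claim": "contract"}, 1),
    ({"court": "NY-South", "claim": "contract"}, 1),
    ({"court": "NY-South", "claim": "tort"}, 0),
    ({"court": "NY-East", "claim": "contract"}, 0),
    ({"court": "NY-East", "claim": "contract"}, 0),
    ({"court": "NY-East", "claim": "tort"}, 1),
]

vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in history])
y = [outcome for _, outcome in history]
model = LogisticRegression().fit(X, y)

# Which venue looks best for a new contract claim?
for court in ("NY-South", "NY-East"):
    p = model.predict_proba(vec.transform({"court": court, "claim": "contract"}))[0, 1]
    print(f"{court}: predicted win probability {p:.2f}")
```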

Whether this is possible or desirable is a different question entirely — which brings me to my next point.

4. Without reproducing the past’s biases

The use of AI to improve people’s experiences of the legal system also represents an opportunity to remove bias from justice. In theory, algorithmic decisions should be free from historical biases that affect minorities in their interactions with the justice system; AI should be agnostic when it comes to factors like ethnicity, gender and sexual orientation. But results so far paint a different picture.

Public authorities in the US have already begun to use AI to do the heavy lifting on legal processes that involve large volumes of calculations; for example, assigning arrested people risk scores that estimate how likely they are to reoffend. However, in a study by ProPublica, researchers found that the AI-generated scores assigned to more than 7,000 people in Florida were “remarkably unreliable”: across the full range of potential crimes, predictions were only slightly more accurate than a coin flip, at just 61%. When life and liberty are on the line, that isn’t good enough.

Worse, the system exacerbated racial disparities rather than removing them: it was around twice as likely to falsely label black defendants as future criminals as it was white defendants. This bias is a key risk accompanying the expansion of AI’s remit in legal. Bias can be overcome, but it will take work; it would be a huge step backward to carry the biases and inequalities of antiquated legal processes into our new, AI-powered world.
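
The disparity ProPublica reported is straightforward to audit for, given predictions and outcomes: compare false positive rates (people labeled high-risk who did not go on to reoffend) across groups. A sketch, with invented records:

```python
# A sketch of the kind of audit ProPublica performed: compare false
# positive rates (labeled high-risk but did not reoffend) across groups.
# The records here are invented for illustration.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, False), ("black", True, True),
    ("black", False, False), ("white", True, False), ("white", False, False),
    ("white", False, False), ("white", True, True),
]

fp = defaultdict(int)   # labeled high-risk but did not reoffend
neg = defaultdict(int)  # all who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if high_risk:
            fp[group] += 1

for group in sorted(neg):
    print(f"{group}: false positive rate {fp[group] / neg[group]:.2f}")
```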

The lack of visibility into decision-making is also a problem. In ProPublica’s study, it was impossible for defendants to understand the reasoning behind their scores: machine learning models rarely show their workings, which makes their rulings hard to appeal. A healthy justice system that respects the rule of law is one where justice is not only done, but seen to be done. Participants must be able to understand why a particular decision was rendered in a given legal process, or they have no guidance to help them avoid the dispute next time. If machine learning models are to become active participants in legal processes, there must be transparency into their reasoning. Without it, legal processes become even less friendly, not more human.
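
One form that transparency could take: with a simple linear risk model, each feature’s contribution to a score can be read off directly, giving a defendant concrete grounds for appeal. COMPAS itself is proprietary, so the features and weights below are invented:

```python
# One form the missing transparency could take: for a linear risk model,
# each feature's contribution to the score can be read off directly.
# COMPAS is proprietary; the weights and features here are invented.
weights = {"prior_arrests": 0.8, "age_under_25": 0.5, "employed": -0.6}
defendant = {"prior_arrests": 2, "age_under_25": 1, "employed": 0}

contributions = {f: weights[f] * defendant[f] for f in weights}
score = sum(contributions.values())

print(f"risk score: {score:.1f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.1f}")  # grounds for appeal become visible
```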

Less artifice, more intelligence

The encroachment of AI into every legal process, from high-value commercial deals to everyday parts of the justice system, seems unstoppable. But translating AI’s transformative power into a more humane legal system, while possible, isn’t inevitable. It will take conscious, intelligent effort from tech companies, vendors, regulators, public institutions and educators to keep end users at the center. AI can certainly make legal more humane, but only if humans actually force it to do so.

Source: https://unbabel.com/blog/artificial-intelligence-legal/
