Why Future Enterprise Paralegals Will Be More Powerful Than Future Law Firm Senior Partners

On May 17, 2018, I listened to a talk by Dr. Lewis Z. Liu, the CEO of Eigen Technologies. (His Ph.D. is in atomic and laser physics from the University of Cambridge.)

The point of Dr. Liu’s fine talk was to suggest why, with assistance from machines using artificial intelligence in the form of deep learning and natural language processing (NLP), the young associates of the law firms of tomorrow will know more, and be more efficient and more productive, than the highest-paid senior partners of the law firms of today.

In an equation of sorts, Dr. Liu argued that being AI-enabled means this (using > to mean “greater value than”):

young associates of the law firms of tomorrow > senior partners of the law firms of today

I believe that, but I’ll take Dr. Liu’s contention to an even higher level.

My contention is that, in the future, the paralegals in enterprise Law Departments (whether corporate or governmental) will, with the assistance of machines using the same tools, e.g., artificial intelligence in the form of deep learning and NLP, be more efficient and more productive than the most senior partners of the outside Law Firms.

Here’s my contention (where >> means “much greater value than”):

paralegals in Law Departments >> senior partners in Law Firms.

Now how am I going to convince you of that outlandish contention?

Let’s start with the paralegal in the enterprise Law Department of today. When a lawsuit is filed, custodians of potentially relevant documents receive litigation hold notices, and collections are assembled into a case-specific corpus of documents.

These collections need to be separated into documents that are irrelevant and need not be produced; documents that are potentially relevant; and documents that may be potentially relevant but are protected from production by an applicable privilege, e.g., the attorney work-product doctrine or the attorney-client privilege.

In addition, an early assessment of the relevant but non-privileged documents can be made. That assessment may reveal whether risky or “smoking gun” documents are contained in the set that must be produced to an adversary unless otherwise privileged from disclosure.

This information is useful in that such “smoking gun” documents, if they exist, may be weak and indicate that the lawsuit is defensible, or may be terrifying and suggest that the case is a looming disaster.

But the context is an already-filed lawsuit. In other words, if it poses a terrible risk, the enterprise is in big trouble. If that’s the nature of the lawsuit, it’s already too late.

Could a paralegal in the Law Department have seen such a risk coming, say by accessing and assessing the “smoking gun” or risky emails or other text-based documents before the situation devolved into litigation?

Before Intraspexion began applying artificial intelligence in the form of deep learning to internal enterprise communications such as emails, the answer was no.

And the reason was simple: the amount of data in yesterday’s emails was too large to read, and there were no tools available to do the job.

But suppose such a tool exists. We at Intraspexion have used artificial intelligence in the form of deep learning to create a patented software system for finding specific types of litigation risk in yesterday’s emails.

The system is currently trained for the risk of employment discrimination, and once the deep learning analysis “engine” has been trained, it is very sharp.

What do I mean by “sharp”? I mean this:

I’ve said that our first classification is “employment discrimination.”

In our white paper, I described our steps for training a machine to “understand” this litigation risk category.

For the “positive” training set, we drew our examples from the factual allegations in hundreds of previously filed discrimination complaints in the federal court litigation database called PACER, under the classification for “Civil Rights-Jobs,” which (in PACER) is Nature of Suit code 442.

Now, for our purposes, we didn't care about the legalese of "jurisdiction and venue," the names of the parties, or the specific claims being made. And it didn't matter whether the discrimination was for age, race, sex, or any other sub-category of discrimination; PACER has no sub-classifications for them.

Next, we created a “negative” set of examples that was “unrelated” to “employment discrimination.” This negative set consisted of newspaper and Wikipedia articles and other text, including emails.

But to the best of our knowledge, there were no "discrimination" articles or emails in these sources for our "negative" examples.
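To make the two training sets concrete, here is a minimal sketch in Python of the kind of binary "related vs. unrelated" text classifier this recipe describes. It is not Intraspexion's patented engine: the folder names, the load_texts helper, the 0/1 labeling, and the small Keras network are all illustrative assumptions.

```python
# Minimal, illustrative sketch of a "related vs. unrelated" text classifier.
# Not Intraspexion's patented system; assumes the positive and negative examples
# have been saved as plain-text files in two (hypothetical) folders.
from pathlib import Path
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def load_texts(folder):
    """Read every .txt file in a folder into a list of strings (assumed layout)."""
    return [p.read_text(errors="ignore") for p in Path(folder).glob("*.txt")]

positives = load_texts("pacer_nos442_allegations")  # factual allegations from complaints
negatives = load_texts("unrelated_text")            # news, Wikipedia, ordinary emails

texts = np.array(positives + negatives)
labels = np.array([1] * len(positives) + [0] * len(negatives))

# Shuffle so the validation split contains both classes.
order = np.random.default_rng(42).permutation(len(texts))
texts, labels = texts[order], labels[order]

# Map raw text to integer sequences, then learn an embedding and a simple classifier.
vectorizer = layers.TextVectorization(max_tokens=20000, output_sequence_length=300)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=20000, output_dim=64),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the document is "related" to the risk
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=5, validation_split=0.2)
```

A production pipeline would add deduplication, a proper held-out split, and a more capable network, but the shape of the recipe is the same: label the complaint allegations 1, label the unrelated text 0, and let the model learn what separates them.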

After that, we switched from PACER and looked at Enron emails and, to make a long story short, found four (4) examples of true risks for employment discrimination. We found them in the email subsets for Lay, Kenneth (Ken Lay was the Chairman and CEO of Enron); Derrick, J.; and a few other former Enron employees.

Now, having found four "true risks," we knew we had something special. Enron is known for fraud, not employment discrimination. And, as far as we know, no one before us had surfaced emails that were about "discrimination."

Thus, we had successfully trained our deep learning model to "learn" the pattern for "discrimination."

Our third step was a benchmarking project we can't discuss.

Then, after that "first light" pilot project, we added 10,000 Enron non-discrimination emails to the unrelated set, so the model could “understand” English in the context of emails.

Then we looked at a held-out set of 20,401 Enron emails that our system had never analyzed previously.

Result: Our "model" called out 25 emails as being "related" to discrimination, and our 4 "needles" were among the 25.

That's 25 out of 20,401 emails, a fraction of 0.001225, which is a little less than one-eighth of one percent.
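For readers who want to see that winnowing step in code, here is a hedged continuation of the earlier sketch (reusing its model and load_texts helper): score a batch of held-out emails, flag the ones the model calls "related," and report the fraction. The 0.5 threshold and the folder name are assumptions, not details of our system.

```python
# Continues the earlier sketch: score emails the model has never seen and keep
# only the ones it flags as "related" to the trained risk. Threshold is illustrative.
held_out = load_texts("enron_held_out")             # e.g., a held-out batch of Enron emails
scores = model.predict(np.array(held_out)).ravel()  # one "related" probability per email

THRESHOLD = 0.5                                     # assumed cut-off for calling an email "related"
flagged = [email for email, score in zip(held_out, scores) if score >= THRESHOLD]

fraction = len(flagged) / len(held_out)
print(f"Flagged {len(flagged)} of {len(held_out)} emails ({fraction:.4%})")
# With the article's numbers, 25 / 20,401 is about 0.1225%, a little under one-eighth of one percent.
```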

So if a machine can winnow 20,401 down to only 25 emails that it calls out as “related” to the risk of employment discrimination, that’s great, but then what?

What’s a machine to do? What’s next?

It doesn’t know.

Now I can conclude my argument in three short steps.

  1. Enterprise paralegals by themselves are closest to the risky data but don’t swim well in that ocean. They don’t even bother to look because the water is so murky.

  2. A computer trained using artificial intelligence in the form of deep learning can find the text related to the risks for which it’s been properly trained, but by itself cannot deal with the results.

  3. But paralegals who receive the results can assess them, open and conduct an internal investigation, and, perhaps after sharing the investigation’s findings with others, advise a control group executive about the potentially adverse situation.

Thus, the enterprise may be proactive and avoid the lawsuit altogether.

Senior partners in the very best outside law firms can never hope to be so helpful. They’re not close enough to the risky data when it’s only risky.

Conclusion: The enterprise paralegals of the Law Departments of the future, enabled by artificial intelligence in the form of deep learning, will be more powerful and more beneficial to the enterprise than the outside law firms’ most senior partners.