[Image: Intraspexion UI]

For the corporate legal departments of the future

Intraspexion Is the New Sonar

By Nick Brestoff, Founder and CEO, and Jagannath Rajagopal, Chief Data Scientist

Overview

The image above is Intraspexion's User Interface. One page. It's that simple. When we return to it at the end of this white paper, you'll understand what it reveals and why.

Introduction

On January 29, 2018, in Artificial Lawyer, the blog published by the UK's Richard Tromans, Ari Weinstein (CEO of Confident Contracts) reported on a panel at LTNY's Legal AI Bootcamp: "The 'kumbaya moment' of the session," he reported, "was when all agreed on the panel that AI not only helps legal teams to be more efficient, but it helps you to do what you could not do before." (Emphasis added.)

We'll guess that you care about what your Corporate Legal Department can do. So let's start by describing what it can't do now.

Consider your daily flow of internal communications. All that data used to be called a data lake. Now it's an ocean of data, and everyone in the Legal Department is closer to the data created yesterday than any outside counsel or eDiscovery vendor. But no one can see into that ocean. They don't know what's in those communications.

Why? Well that's simple: there's too much of it. But wouldn't you like to see the risky or "smoking gun" emails? You know, the ones lurking below the surface of your ocean; the ones that will show up as being material in some future lawsuit. Wouldn't you like to know about them before you have to manage the lawsuit?

Of course you would. If you could do that, then you could be proactive about the risks.

But you can't be proactive now, can you? You can't see these risks.

That's where we come in. Our system is like Sonar. Without Sonar, you can't see below the surface. But with Sonar, you can: you can locate fish and shipwrecks, as below.


For the Wikipedia article about Sonar, click here.

And with Sonar, you can also find threats like underwater mines, except that your underwater mines are the risky emails.

Our system is like Sonar. It's a litigation "early warning system."

To achieve this, we use a form of Artificial Intelligence. It's a class of machine learning that's technically described as a multi-layer neural network, but which has become more widely known as Deep Learning. 

For a great overview of Deep Learning, the 2016 Fortune article, "Why Deep Learning Is Suddenly Changing Your Life," is available here.

To better explain why Intraspexion is the New Sonar, we'll also use the analogy of finding the needles in the haystack. We know you've heard that one.

BACKGROUND

FIRST LIGHT

For starters, here's some of our history. The term "first light" refers, generally, to the first use of a new instrument. It's then that you see what needs further attention. In Q4 of 2017, we completed a pilot project with a company whose identity and confidential information we’re not permitted to disclose.

However, in a non-confidential telephone communication, a company attorney reported that our system had found, in a now-closed discrimination case, a risky email that the company already knew about. That was good news, but it was not exciting news.

But we were also told that our system had found a risky email that the company later determined was material, and that, previously, the company had not known about it.

Now that was compelling.

CREATING A MODEL

Deep Learning for text requires two basic ingredients: (1) a classification of data (i.e., a category or a label) and (2) lots of examples of the classification (a "positive" set) and, for contrast, examples of text we'd want the system to skip (a "negative" set).

When we put the pieces together, we've created a "model" of that classification.

Here's an analogy: Think of a model as if it were an archery target. We know what a bullseye is, but the target (if it were capable of knowing anything) needs examples of a classification (that "positive" set) to learn what we mean by "bullseye." So we grab some arrows that are positive for the risk. We don't shoot these exemplar arrows at the target. We walk right up to it and stuff them into the small center circle we call a bullseye. Now the target knows.

But better than that, when we use emails instead of arrows, our model will score the hits and tell us which arrows not only matched up with "bullseye," but also to what degree, e.g., 0.98, 0.94, 0.86, etc. 
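For the technically inclined, here is a minimal sketch of that kind of scoring model: a small multi-layer neural network that outputs a 0-to-1 score per document. The library (PyTorch), the vocabulary size, the layer sizes, and the toy batch below are our assumptions for illustration only, not a description of our production system.

```python
# Minimal sketch of a binary "risk" text classifier; all sizes and the toy batch are illustrative.
import torch
import torch.nn as nn

class RiskModel(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # averages the word vectors in a document
        self.hidden = nn.Linear(embed_dim, 64)
        self.out = nn.Linear(64, 1)

    def forward(self, token_ids, offsets):
        x = torch.relu(self.hidden(self.embed(token_ids, offsets)))
        return torch.sigmoid(self.out(x))  # degree of match with the "bullseye," e.g., 0.98

model = RiskModel()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A toy batch: two pre-tokenized documents packed into one flat tensor of word indices.
token_ids = torch.tensor([1, 5, 7, 2, 9, 4])
offsets = torch.tensor([0, 3])           # document boundaries
labels = torch.tensor([[1.0], [0.0]])    # 1 = related to the risk, 0 = unrelated

scores = model(token_ids, offsets)       # one score per document
loss = loss_fn(scores, labels)
loss.backward()
optimizer.step()
```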

TRAINING DATA IS THE KEY

Our first litigation risk classification is “employment discrimination.”

Now for "positive" and "negative" examples.

We created a “positive” set of examples from the factual allegations in hundreds of previously filed discrimination complaints in the federal court litigation database called PACER, under the classification for “Civil Rights-Jobs," which (in PACER) is Nature of Suit code 442.

Now, for our purposes, we didn't care about the legalese of "jurisdiction and venue," the names of the parties, or the specific claims being made. And it didn't matter whether the discrimination was for age, race, sex, or any other sub-category of discrimination; PACER has no sub-classifications for them. We wanted our model (which could have been trained on any business-relevant PACER category) to learn "discrimination." It did.

Second, we created a “negative” set of examples that was “unrelated” to “employment discrimination.” This negative set consisted of newspaper and Wikipedia articles and other text, including emails. But to the best of our knowledge, there were no "discrimination" articles or emails in these sources for our "negative" examples.
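Purely as an illustration of how those two sets become labeled training data, here is a short sketch; the folder names and file layout are our assumptions for this example, not our actual pipeline.

```python
# Illustrative only: turn the "positive" and "negative" document folders into labeled pairs.
from pathlib import Path

def load_texts(folder: str) -> list:
    """Read every .txt file in a folder into a list of strings."""
    return [p.read_text(errors="ignore") for p in Path(folder).glob("*.txt")]

positive_docs = load_texts("pacer_nos442_allegations")    # facts pled in discrimination complaints
negative_docs = load_texts("unrelated_news_wiki_emails")  # news, Wikipedia articles, ordinary emails

# 1 = "related to discrimination," 0 = "unrelated"
dataset = [(doc, 1) for doc in positive_docs] + [(doc, 0) for doc in negative_docs]
```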

After we used this data to train our discrimination model, we looked at the only publicly available set of corporate emails, the well-known Enron emails, and, to make a long story short, found four (4) examples of True Positives for employment discrimination.

We were surprised. We knew then that we were on the cusp of a breakthrough. Enron was known for fictitious entities and marketplace fraud, not discrimination.

As far as we know, no one before us had previously surfaced Enron emails that were about "discrimination." 

Thus, we had successfully trained our Deep Learning model to "learn" the pattern for "discrimination." Was our model perfect? Hardly. For one thing, our initial training for the positive set was generic and not company-specific. 

Our third step was the pilot project we can't discuss.

For our fourth step, after that "first light" pilot project, we added 10,000 Enron non-discrimination emails to the negative, unrelated set, so the model could “understand” English in the context of Enron emails.

Then we looked at a held-out set of 20,401 Enron emails that our system had not previously ingested.

Well the result was remarkable: Our "model" called out 25 emails as being "related" to discrimination, and our 4 True Positive "needles" were among the 25.

That's 25 out of 20,401 emails, a fraction of 0.001225, which is a little less than one-eighth of one percent.
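For anyone who wants to check the arithmetic:

```python
# The held-out screening result, reproduced as simple arithmetic.
held_out_emails = 20401
flagged_as_related = 25
fraction = flagged_as_related / held_out_emails
print(f"{fraction:.6f}")  # 0.001225 -- a little less than one-eighth of one percent
```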

Given that result, we knew we had a very sharp "model" for employment discrimination in the context of Enron emails.

But could we visualize the model we had built, the Enron model? The answer is that there is a standard way of visualizing data patterns. The technical name is “t-distributed stochastic neighbor embedding,” but the abbreviation is “t-SNE,” and it's pronounced "tee-snee."

In the image below, you’ll see a t-SNE visualization. Here, the whites are training documents unrelated to discrimination (e.g., news and Wikipedia articles, emails, etc.), while the reds (lower left-hand corner) are training documents related to discrimination, i.e., the factual allegations in previously filed and publicly available complaints in discrimination lawsuits.

[Image: t-SNE visualization of the training documents, related vs. unrelated to discrimination]

See the separation between the whites and the reds? That's a clear decision boundary. There are no red documents in the cluster of whites, and no white documents in the cluster of reds.

If the colors are mixed, the parameters in the Deep Learning "engine" need to be adjusted to eliminate the mixing. (This is called tuning. There's some "art" in this science, after all.)
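For the technically curious, a visualization like the one above can be produced with off-the-shelf tools. The sketch below uses scikit-learn's TSNE and random placeholder embeddings; the library choice and the dummy data are our assumptions for illustration, not the pipeline that produced our image.

```python
# Minimal t-SNE sketch: project document embeddings to 2-D and color by label.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(500, 128))  # placeholder document embeddings
labels = rng.integers(0, 2, size=500)      # 1 = related to the risk (red), 0 = unrelated (white)

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(doc_vectors)
plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="coolwarm", s=8)
plt.title("t-SNE of training documents: related vs. unrelated")
plt.show()
```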

Next, when a company-specific "model" is asked to assess the text in its emails, it can "read" the text in each email, indicate whether the email matches up with the pattern of reds (the documents related to the risk of interest, e.g., discrimination), and indicate, with scores, to what degree.

REVIEW

The "reds" are documents consisting of factual allegations that were drawn from hundreds of discrimination complaints after they were filed in PACER. It didn't matter who the defendant was. We think of this level of training now as "generic."

Later we realized that we can augment the "generic" training by using factual allegations in discrimination complaints that have been previously filed against a specific company. When we do that, the level of training is more "company-specific."

In addition, our system includes a patented feedback feature. Our software allows a user to accept a "related to the risk" email as a True Positive or reject it as a False Positive. After there's enough company-specific feedback data like that, we can augment both the positive and negative training sets. The more the model learns about your company, the better it gets.
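In code, the feedback loop amounts to something like the sketch below. Every name here is illustrative; the patented feature covers more than this, but the core idea is that reviewer decisions become new company-specific training examples.

```python
# Illustrative sketch of the reviewer feedback loop.
positive_set, negative_set = [], []

def record_review(email_text: str, is_true_positive: bool) -> None:
    """Fold a reviewer's accept/reject decision back into the training sets."""
    if is_true_positive:
        positive_set.append(email_text)   # confirmed "related to the risk"
    else:
        negative_set.append(email_text)   # a False Positive; teach the model to skip text like this

record_review("...email the reviewer accepted...", True)
record_review("...email the reviewer rejected...", False)
# Periodically retrain the model with the augmented positive_set and negative_set.
```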

To sum up so far: A model for employment discrimination is a simple binary filter. It splits out the emails "related" to the risk from the "unrelated" emails. We show you only the small number of emails related to the risk for which the model has been trained.

This filtering makes it possible for a human reviewer to see a relatively small subset of emails "related" to the risk, and then that person splits out the True Positives from the False Positives.

So the human reviewer--a paralegal or attorney--is the Gold Standard. A human decides whether to escalate the high-scoring emails to a second reviewer--if need be--or whether an internal investigation should take place.

And that is why AI, as we use it here, is not frightening in any way. It means Augmented (human) Intelligence.

That's a game changer. See the blog article, "Why Future Enterprise Paralegals Will Be More Powerful Than Future Law Firm Senior Partners," May 17, 2018, which you can jump to here.

Or, returning to our analogy of mines below the surface of your ocean of data, now you can see the underwater mines in time for the captain of the ship to take evasive action.

ARCHITECTURE

Now we'd like you to see one PowerPoint slide that depicts the architecture and, in a sense, the workflow of our system. We've covered the training step. It's the foundation and sits at the bottom of this image.

Once your Law Department deploys our system, the workflow begins at the top. We have Office 365 connectivity and start with your company's emails (from yesterday and then every day thereafter, one day at a time) and end with a User Interface (UI).

Using the Administrative Console, you can designate who gets an alert about what "use case," and you can schedule an automated review of daily emails or manually set up a date range for a special selection of emails to review. 
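Purely as an illustration of the kinds of settings the Administrative Console manages, a configuration might look something like the sketch below; none of these keys or values come from the actual product.

```python
# Hypothetical example of review settings; keys and values are invented for illustration.
review_config = {
    "use_case": "employment_discrimination",
    "alert_recipients": ["paralegal@example.com", "deputy_gc@example.com"],
    "schedule": "daily",                                # automated review of yesterday's emails
    "manual_date_range": ("2018-01-01", "2018-01-31"),  # or a one-off special selection
}
```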

Your emails then flow into and through one (or more) of our litigation risk models and then into the UI Viewer.

Here's the architecture / workflow slide:

[Image: Architecture / workflow diagram]

And here's a reprise of our UI:

[Image: Intraspexion User Interface]

What's all this really doing for you?

To the left of the above UI, you can see that the system is reporting 24 emails as positive for the risk after processing 4,942 emails, which is about one-half of one percent. And as you now know, we can do much better than that.

Now let’s consider a larger set of emails.

An attorney for an NYSE company once told us that it was typical for the company to handle two million emails per month.

Since we were going to analyze emails daily and report "early warnings" on a near real-time basis, we did the math:

Assume 2,000,000 emails per month.

  1. Divide 2,000,000 by 4.3 weeks per month; the result is 465,116 emails per week.

  2. Divide 465,116 emails per week by 5 days per week; the result is 93,023 emails per day.

At that point, we realized that we were being asked to look at 93,023 emails per day! Yikes. You'd need an army.

So, unless your company wants to hire an army, no one even bothers to look.

The size of the task makes the work just impossible.

Let's continue the calculation, if only in an informal way. Remember that when we ran our discrimination model against a held-out set of 20,401 Enron emails, it surfaced 25 emails related to the risk, a fraction of about one-eighth of one percent, i.e., about 0.0012.

So the number of emails the Intraspexion system would surface as being "related to the risk," when presented with 93,023 emails per day, is:

93,023 multiplied by 0.0012 = about 112 emails per day.
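Here is that back-of-the-envelope arithmetic in one place, so you can check it yourself:

```python
# Reproducing the volume arithmetic above.
emails_per_month = 2_000_000
per_week = emails_per_month / 4.3      # about 465,116 emails per week
per_day = per_week / 5                 # about 93,023 emails per day
surfaced = per_day * 0.0012            # about 112 emails surfaced per day at the Enron-era rate
print(round(per_week), round(per_day), round(surfaced))
```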

Now that's doable.

Our hypothetical is just an illustration, but it shows that our patented technology can turn the impossible into the possible.

Wait. How do we know?

We know because, in an "Email Statistics Report, 2011-2015," the Radicati Group reported (at p. 3) that business users sent and received 121 emails per day in 2014 (on average), and reported that the number would grow to 140 emails per day in 2018. 

So, for a reviewer, 112 emails per day is a slightly below-average amount, and, assuming a 7-hour workday, turns out to be about 16 “related” emails per hour, which is one email about every four (4) minutes.

And if the company is at the projected 2018 level of 140 emails per day, that's 20 emails per hour during a 7-hour workday, which would give each reviewer three (3) minutes per email.
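Again, for anyone who wants to verify the arithmetic (the 7-hour workday is the same assumption used in the text):

```python
# Reviewer workload implied by the figures above.
for emails_per_day in (112, 140):
    per_hour = emails_per_day / 7
    minutes_each = 60 / per_hour
    print(emails_per_day, round(per_hour), round(minutes_each))
# 112 -> 16 per hour, about 4 minutes each; 140 -> 20 per hour, 3 minutes each
```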

And we can tell you from experience that a reviewer can spot a False Positive in only a few seconds.

And we provide some help here. From the general training set (the factual allegations), we (Nick Brestoff and co-founder Larry Bridgesmith) built a database of words that are related to the discrimination risk. (Each of us had handled discrimination litigation.)

So after the Deep Learning engine / filter provides its output, we pass the email output through this database and it highlights those subject-matter words for the reviewers. In the UI, those words are highlighted in yellow. 
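A simplified sketch of what that highlighting step can look like appears below; the word list and the markup are illustrative assumptions on our part, since the actual database was hand-built from the training set by attorneys.

```python
# Illustrative keyword highlighting: wrap risk-related words so the UI can render them in yellow.
import re

RISK_WORDS = {"discrimination", "terminated", "retaliation", "hostile"}  # assumed examples only

def highlight(email_text: str) -> str:
    pattern = re.compile(r"\b(" + "|".join(RISK_WORDS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: f"<mark>{m.group(0)}</mark>", email_text)

print(highlight("She says she was terminated after reporting discrimination."))
```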

Accordingly, for companies generating two million emails per month, it may take only one (1) reviewer to decide which of yesterday's emails should be passed to a second reviewer or escalated to an investigation.

It's true, of course, that many companies generate more than two million emails per month, but that's no stopper. These models run on Graphics Processing Units (GPUs) and they're not only fast, they run in parallel.

Accordingly, computer processing capacity and speed are cost issues, but they are no longer system limitations.

Thus, with Intraspexion, a risky email might rise to the surface, and be visible to reviewers, only a day or so after it was written.

WHY OUR INNOVATION IS A GAME CHANGER

Well, let's start by admitting that neural networks have been around for decades. (For that history, click on "decades.") To make another long story short, there were winters (disappointments) and springs (hope and the hype that goes with it).

But perhaps beginning in 2012, Deep Learning started producing extraordinary results.

The results were so strong that Andrew Ng--whose resume includes teaching computer science at Stanford, co-founding Coursera, leading Deep Learning teams at Google and Baidu, and more--has said that Deep Learning “is the new electricity,” and that, as such, Deep Learning will be as broadly impactful today as electricity was during the Industrial Revolution.

(Prof. Ng made his "new electricity" observation in an October 2016 Fortune cover story, "Why Deep Learning Is Suddenly Changing Your Life," which you can access by clicking here.)

Thus, with Intraspexion, we’re working with today’s new electricity, and we were the first to invent its use for the legal profession, which is why we have patents.

IN CONCLUSION

Our patented Deep Learning system may result in less litigation for you, too.

Give us a try. Request a demo.