WHAT'S YOUR LITIGATION PROBLEM?

You’re in the Corporate Legal Department. As things stand now, you can only manage the litigation that comes in the door. Even when you try to manage your litigation as efficiently as possible, that’s all you can do. There’s no way for you to see the litigation risks in time to avoid the lawsuits.

So you’re stuck in reactive mode. Put it this way: you’d need an army of paralegals and attorneys — who’d have to act like robots — to look at all of yesterday’s emails in order to find the “smoking guns” in some future lawsuit.

SOLUTION

But what if a technology could surface for you only a few candidates for the risky or "smoking gun" emails and bring them to your attention before they show up in a lawsuit?

If a technology could do that, the game would change. The technology would be the robot, and you’d be using your education and training to make the calls. Then you could be proactive about the risks and drive the frequency (and cost) of litigation down.

That’s the service we provide. We help you to see the risks, and we use AI in the form of “deep learning” to do it. With deep learning, we make it possible for you to see only the emails that are “related” to each of the risks you care about.

Let’s go from precedents and patterns to…predictions

We’re going to show you that this technology exists. It’s a multi-layer neural network — a form of artificial intelligence — that’s called “deep learning.”

So settle in, there’s a lot to absorb.

To create a “deep learning” model based on text that typifies a specific classification of litigation, we’ll need two basic ingredients: (1) a classification and (2) lots of examples of text belonging to that classification. Together, those give us “classified data.”

With classified data, we’ll create a dataset that’s "positive" for the litigation classification; and then, for contrast, we’ll create a "negative" set with text that’s not related to that classification.

AN ANALOGY TO TARGETS

Think of the model we’re going to create as if it were an archery target. We know what a bullseye is, but the deep learning model starts out as a blank slate. So think of the target that way. To build an archery target, we’d draw a circle in the center as the bullseye. But with deep learning, we use the "positive" set to teach the model what we mean by "bullseye."

So we grab some arrows. We don't shoot them at the target from afar. They could land anywhere. Instead, we walk right up to the target and stuff them into the small area we think of as the center. With enough arrows in that small central area, the target now “understands” a pattern of the “positive” examples.

Let’s drop the analogy for the moment. For a classification, let’s choose a business-relevant classification in PACER, e.g., Civil Rights-Jobs, better known as “employment discrimination.” What about the examples, the “arrows”? They consist of the text in attorney-vetted documents.

With such documents, we can build a “positive” set for “training” the deep learning “model” for employment discrimination. On the other hand, the “negative” dataset consists of text-based articles that are not related to employment discrimination. 
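For the technically curious, here’s a minimal sketch of what training such a model can look like, in Python with the open-source Keras library. The folder layout, parameters, and file names are hypothetical, chosen only for illustration; this is not our production code.

```python
# Minimal sketch: train a binary "employment discrimination" text classifier.
# Assumes two hypothetical folders of plain-text training files:
#   data/positive/  - factual allegations from filed discrimination lawsuits
#   data/negative/  - text unrelated to employment discrimination
import tensorflow as tf

# Load the "classified data": label 1 = positive, label 0 = negative.
train_ds = tf.keras.utils.text_dataset_from_directory(
    "data", class_names=["negative", "positive"], batch_size=32)

# Turn each document's words into sequences of integer word IDs.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=20000, output_sequence_length=500)
vectorize.adapt(train_ds.map(lambda text, label: text))

# A small multi-layer (deep learning) model: word embeddings feeding dense layers.
model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(20000, 128),           # word embeddings
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # score between 0 and 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("discrimination_model.keras")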

Now let’s go back to our analogy. What we want the technology to do for us is to surface the discrimination risks from yesterday’s emails. So we step back and, instead of arrows, we toss darts: emails from the Enron dataset, standing in for your company’s emails.

So we toss 20,401 darts (Enron emails) at our model and it “understands” which darts are related to the risk of employment discrimination and which darts are not. Then, instead of 20,401 emails to review, your user interface populates with only 25 emails: the ones the model scored as “related” to the risk, each with a score indicating how closely it matches the model.

Which of the 25 emails would you call False Positives? Which of them would you call True Positives? Tag them for positive and negative feedback so that the “model” may learn — and improve — over time.
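Here’s a rough sketch of what that scoring-and-feedback step could look like, continuing the hypothetical Keras example above. The threshold, names, and in-memory feedback list are made up for illustration:

```python
# Sketch: score yesterday's emails and keep only those "related" to the risk,
# then record reviewer feedback for later retraining.
import tensorflow as tf

model = tf.keras.models.load_model("discrimination_model.keras")

def surface_related(emails, threshold=0.9):
    """Return (email, score) pairs the model scores as related to the risk."""
    scores = model.predict(tf.constant(emails)).flatten()
    return [(e, float(s)) for e, s in zip(emails, scores) if s >= threshold]

# Reviewer feedback: tagged examples are kept so the model can learn over time.
feedback = []

def tag(email_text, is_true_positive):
    label = "true_positive" if is_true_positive else "false_positive"
    feedback.append((email_text, label))
```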

Then focus on the True Positives. Now, based on yesterday’s emails, you have a reason for launching an internal investigation in an effort to nip the risk in the bud. So Intraspexion is not about helping you be more productive in doing any of the things you already do. With Intraspexion, you’re in the Prediction business.

Word Embeddings

But how does a computer process the words? Good question. As you know, computers process numbers, not words. So each word is turned into a number string using what data scientists call “word embeddings.”

Word embedding technology only became practical at scale around 2013, and it’s powerful because each word is kept in context as it’s turned into a number string. As the linguist J. R. Firth observed, you shall know a word by the company it keeps.

For example, the word “dog” takes on a completely different meaning as “hot dog.” The word “Ford” has different meanings if preceded by the words “President” or “Henry.” And “Ford” has different meanings if followed by the words “Motor” or “Mustang.” 

So today’s “word embedding” technology keeps each word in context by looking at the two or three words before and after the word for which a number string is being created. Let me say it again: each word’s number string is shaped by the words that appear just before and after it.
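To make that concrete, here’s a toy sketch using the open-source gensim library. The tiny corpus and the parameters (a 50-number string per word, a window of three words on each side) are made up for illustration:

```python
# Toy sketch: turn words into "number strings" (vectors) while keeping context.
# The window parameter controls how many words before and after are considered.
from gensim.models import Word2Vec

sentences = [
    ["the", "employee", "filed", "a", "discrimination", "complaint"],
    ["the", "manager", "denied", "the", "promotion"],
    ["henry", "ford", "founded", "ford", "motor", "company"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)

vector = model.wv["discrimination"]    # a 50-number string for this word
print(vector[:5])                      # the first few numbers of the embedding
print(model.wv.most_similar("ford"))   # words that keep similar company
```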

Visualizing our classifier

Now we have what’s called a “binary classifier.” Thinking back to our archery analogy, the classifier is a Pattern that acts like a filter: it catches only the emails that hit the bullseye. Emails that miss the bullseye pass on by; they’re neither saved nor forwarded.

Think of it this way: When a client explains a new matter, attorneys, having taken contracts, torts, and family law courses in law school, listen to the story that’s unfolding and ask themselves, “What’s the context?” So, right there, they engage in pattern-matching to a “classification.”

Now for the next step. Having listened to the story (the facts), the next step is legal research within the classification of interest. The first question is whether there are any appellate decisions with the same or similar facts. A follow-up question is whether those precedents say anything about how this new matter will turn out for either a plaintiff or a defendant.

But can a computer learn to pattern-match emails (and attachments) to a model of a specific risk classification? Yes.

And how do we know when our classifier is sharp? Well, there is a state-of-the-art way to visualize the “positive” and “negative” Patterns for each risk classification.

The technical name is “t-Distributed Stochastic Neighbor Embedding,” but the abbreviation is “t-SNE,” pronounced "tee-snee."

In the image below, you’ll see a t-SNE visualization of actual data at the “document” level. By “document,” we again mean an entire block of factual allegations extracted from a single lawsuit.

Here, the whites are training documents unrelated to discrimination, while the reds (lower left-hand corner) are training documents related to discrimination, i.e., the factual allegations in previously filed discrimination lawsuits.

[Image: t-SNE visualization of the discrimination training documents]

See the separation between the whites and the reds? There are no red documents in the cluster of whites, and no white documents in the cluster of reds. It’s a clear decision boundary for yes or no.

If the colors are mixed, the parameters in the Deep Learning model ("engine") need to be adjusted (“tuned”) to eliminate the mixing.  
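For the curious, a t-SNE plot like the one above can be produced with a few lines of Python using the open-source scikit-learn and matplotlib libraries. The file names here are hypothetical placeholders for the document vectors and their labels:

```python
# Sketch: project document vectors to 2-D with t-SNE and color them by label.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

doc_vectors = np.load("doc_vectors.npy")   # one vector per training document
labels = np.load("labels.npy")             # 1 = discrimination, 0 = unrelated

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(doc_vectors)

plt.scatter(points[labels == 0, 0], points[labels == 0, 1],
            c="white", edgecolors="gray", label="unrelated (whites)")
plt.scatter(points[labels == 1, 0], points[labels == 1, 1],
            c="red", label="discrimination (reds)")
plt.legend()
plt.title("t-SNE of training documents")
plt.show()
```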

REVIEW

This filter doesn’t make predictions; it enables Predictions — by you! — because it enables you as a human reviewer to see a relatively small subset of emails that are "related" to the risk. Then you can assess whether each email is a True Positive or a False Positive.

So the Gold Standard is still you, the attorney. You’re not replaced by AI; you’re empowered by it. After all, could you process 20,401 emails from yesterday and find the ones “related” to a specific litigation risk? Let a computer do that. That’s what we invented for you. Then you decide whether to escalate a True Positive to a second reviewer, if need be, or whether to just go ahead and launch an internal investigation.

That’s why AI is a game changer. It enables you to go from “I only manage the lawsuits that come in the door” (Reactive) to “I can see how this email might be a ‘smoking gun’ in some future lawsuit. Let’s investigate.” (Proactive).

ARCHITECTURE

Now we'd like you to see the confidential work flow of our system. It was designed with the work-product (WP) doctrine and the attorney-client (AC) privilege in mind. A “deep learning” model is the foundation for the Architecture, so it’s at the bottom.

Once your Law Department deploys our system “in anticipation of litigation” (and so satisfies an element of the WP doctrine), the work flow begins at the top. We have Office 365 connectivity and, with our Administrative Console, you can designate who gets an alert about which “use case,” and you can schedule an automated review of daily emails or manually set up a date range to review. If you launch an investigation, the WP doctrine applies again. If you decide to advise a “control group” executive, the AC privilege should apply. 

Here's the Architecture / work flow. Start with you, the Customer, in the upper left hand corner.

[Image: Architecture / work flow diagram]

NOTE 1: The feedback loop (center right) collects the emails you tag as False or True Positives. They’re stored in a database. With enough examples, your deep learning engine will better reflect the culture of your enterprise. 

NOTE 2: Immediately below these Notes, you’ll see an early test using Enron emails in our one-page User Interface (UI). There, we found one (1) discrimination needle in the haystack: the system reported 24 emails as “related” to the risk out of 4,942 emails, which is about one-half of one percent (a fraction of roughly 0.005).

NOTE 3: In the UI, the arrow points to a True Positive because it scored high. This is where you’d tag it as a True Positive. Then, if you clicked Open Email (to the left of True Positive), you’d get that email back in its native (.pst) format. That’s your starting gun for an investigation using internal resources, e.g., email threading, HR performance reviews, etc.

[Image: One-page User Interface (UI) screenshot]

WHAT’S THE “ACCURACY” NOW?

Well, with that held-out set of 20,401 emails, only 25 were “related” to the risk. That fraction, 25 out of 20,401, is about 0.0012, or roughly one-eighth of one percent, which is about a 4x improvement over the roughly one-half of one percent in the earlier Enron test.

Let’s see how this improvement plays out. An attorney for an NYSE company once told us that it was typical for the company to handle two million emails per month. OK, let’s start with 2,000,000 emails per month.

  1. Divide 2,000,000 by 4.3 weeks per month: the result is 465,116 emails per week;

  2. Now divide 465,116 emails per week by 5 days per week: the result is 93,023 emails per day.

At that point, without Intraspexion, you’d stop. You’d say, “Wait. You want me to look at 93,023 emails per day? Yikes. I'd need an army.”

But let's continue the calculation. Presented with 93,023 emails per day, the Intraspexion system would surface, as being "related to the risk," 93,023 multiplied by 0.0012, or about 112 emails per day.

Now that's doable.

But wait. How do we know that this many — 112 per day — is doable?

We know because, in an "Email Statistics Report, 2011-2015," the Radicati Group reported (at p. 3) that, on average, each business user was expected to send/receive 125 emails per day in 2015.

So, for a reviewer, 112 emails per day is slightly below what was average for a business user back in 2015.

Now, assuming a 7-hour workday, one reviewer could handle 16 “related” emails per hour, which is one email about every four (4) minutes; and that’s a lot of time to read and think about an email. And we can tell you from experience that a reviewer can spot a False Positive in only a few seconds.
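If you’d like to check the arithmetic, here it is in a few lines of Python:

```python
# The daily-volume arithmetic, spelled out.
emails_per_month = 2_000_000
per_week = emails_per_month / 4.3              # about 465,116 emails per week
per_day = per_week / 5                         # about 93,023 emails per day

related_fraction = 0.0012                      # roughly 25 / 20,401 from the Enron test
surfaced_per_day = per_day * related_fraction  # about 112 emails per day

per_hour = surfaced_per_day / 7                # about 16 "related" emails per hour
minutes_each = 60 / per_hour                   # roughly 3.75 minutes per email
print(round(per_day), round(surfaced_per_day), round(per_hour), round(minutes_each, 2))
```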

THE UI VISUALIZATION

And our UI will help. To help you spot a True Positive, we built a database of subject-matter words that are relevant to the risk. So after the deep learning model (filter) provides its output, but before it gets to the UI, we pass the emails through this database, and it highlights those subject-matter words for you. That’s why these words are highlighted in yellow in the UI, above. 
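Here’s a simplified sketch of how that highlighting step can work. The word list is hypothetical; in the product, the subject-matter words come from the database described above:

```python
# Simplified sketch: highlight subject-matter words before an email reaches the UI.
import re

# Hypothetical subject-matter words relevant to the discrimination risk.
RISK_WORDS = ["discrimination", "harassment", "retaliation", "hostile", "EEOC"]

def highlight(email_text):
    """Wrap risk-relevant words in <mark> tags so the UI can show them in yellow."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, RISK_WORDS)) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(r"<mark>\1</mark>", email_text)

print(highlight("HR ignored her hostile work environment complaint."))
```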

OUR INNOVATION IS A GAME-CHANGER

Now you know why LawGeex (in its In-House Counsel’s Legal Tech Buyer’s Guide for both 2017 and 2018) called Intraspexion a Prediction Technology. It enables you to do now, with our patented software system, what’s previously been impossible for you to do. With Intraspexion, you can see a risk in time to nip it in the bud.

So give us a try. Request a demo.