Prediction

“Prediction takes information you have … and uses it to generate information you don’t have.”

Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, April 2018), at 24 (italics added).

What’s Deep Learning?

Deep learning is just software that enables a computer to learn. It’s the informal name for two types of multi-layer neural networks. One type can learn from images: think Autonomous Vehicles. The other type can learn from audio and text: think Language Translation.

For the TED talk about deep learning (about 20 minutes), see The Wonderful and Terrifying Implications of Computers That Can Learn by Jeremy Howard (December 2014).

For a more technical treatment, see the Wikipedia article about deep learning (last edited on 28 January 2019).

Then go to the Applications section and the paragraph for “Natural language processing.” In that paragraph, LSTM is the acronym for Long Short-Term Memory and RNN is the acronym for Recurrent Neural Network, the type of deep learning that can learn from text.

from precedents to patterns to predictions

A deep learning model for a particular litigation risk requires (1) examples of text for (2) a specific classification of litigation.

Precedents. In law school, you learned the big classifications of precedents: contracts, torts, and so on. Now, when you’re facing a new situation, it’s second nature for you to classify it. The federal judiciary’s litigation database (PACER) is just like that. In PACER, there are many business-relevant classifications of litigation and each of them has an associated Nature of Suit (NOS) code.   

Patterns. With examples of each specific risk, we can create a dataset that’s “positive” for that classification. Then, for contrast, we can create a “negative” set of text that’s not related to the risk. With both sets, we can train a binary classifier. Think “Deal or No Deal.”
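For the technically inclined, here’s a minimal sketch of what such a binary classifier could look like, using a Keras-style LSTM in Python. The example emails, parameters, and file name are illustrative assumptions, not our production models or data.

```python
# A minimal sketch (not the production system): assemble "positive" and
# "negative" example sets for one litigation risk and train a binary classifier.
# The example emails below are illustrative placeholders, not real data.
import tensorflow as tf
from tensorflow.keras import layers

positive_texts = [  # text that relates to the risk (here, employment discrimination)
    "he keeps getting passed over for promotion because of his age",
    "she said the manager's comments about her pregnancy were discriminatory",
]
negative_texts = [  # ordinary business email, unrelated to the risk
    "the quarterly sales report is attached for your review",
    "can we move the vendor call to thursday afternoon",
]
texts = tf.constant(positive_texts + negative_texts)
labels = tf.constant([1] * len(positive_texts) + [0] * len(negative_texts))

# Convert raw text into integer sequences the network can learn from.
vectorize = layers.TextVectorization(max_tokens=20_000, output_sequence_length=200)
vectorize.adapt(texts)

# An LSTM, the recurrent type of deep learning that learns from text,
# ending in a single yes/no score: think "Deal or No Deal."
model = tf.keras.Sequential([
    vectorize,
    layers.Embedding(input_dim=20_000, output_dim=64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # probability the email relates to the risk
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=5)
model.save("nos_442_classifier.keras")  # hypothetical file name for the trained model
```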

Predictions. Now let’s look at yesterday’s batch of emails in your company’s ocean of email. That’s the information you have. Using Office 365 connectivity, plus open-source and third-party software along with our proprietary software, we index the emails and pass them through one or more pre-trained classifiers.

Our first classifier is a deep learning model for Civil Rights-Jobs (NOS code 442) aka “employment discrimination.” Our User Interface reports only the small fraction of emails that pattern-match to the model for employment discrimination. That’s the information you don’t have.
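As an illustration only, here’s a sketch of that prediction step: load a pre-trained classifier (the hypothetical file from the sketch above), score a batch of emails, and report only those above a cutoff. The example emails and the 0.5 threshold are assumptions, not our production settings.

```python
# A minimal sketch of the prediction step: score yesterday's emails with a
# pre-trained classifier and surface only those that pattern-match the risk.
import tensorflow as tf

# Hypothetical file name from the training sketch above.
model = tf.keras.models.load_model("nos_442_classifier.keras")

# Yesterday's batch of emails: the information you have (placeholders here).
yesterdays_emails = tf.constant([
    "please approve the attached purchase order by friday",
    "after i reported the harassment, my hours were cut and i was demoted",
])

scores = model.predict(yesterdays_emails).flatten()
THRESHOLD = 0.5  # illustrative cutoff, not a production setting

# Report only the small fraction above the cutoff: the information you don't have.
for email, score in zip(yesterdays_emails.numpy(), scores):
    if score >= THRESHOLD:
        print(f"{score:.2f}  {email.decode()}")
```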

With Intraspexion, our prediction software will help you drive down both the frequency and the cost of litigation.

what’s the “accuracy” of this pattern-matching?

We ran an experiment with a held-out set of 20,401 emails (from the Enron dataset), and the system output only 25 as being “related” to the risk. That fraction, 25 out of 20,401, is about one-eighth of one percent, i.e., 0.001225.

Let’s see how this level of pattern-matching “accuracy” plays out. An attorney for an NYSE company once told us that it was typical for his company to handle two million emails per month. Let’s start there: 2,000,000 emails per month. Divide 2,000,000 by 4.3 weeks per month. The result is 465,116 emails per week. Now divide 465,116 emails per week by 5 days per week. The result is 93,023 emails per day.

At that point, without Intraspexion, you’d stop and say, “You want me to look at 93,023 emails per day? Yikes. I'd need an army.”

But now let’s apply that accuracy fraction. The number of emails the Intraspexion system would surface to you as being related to the risk is 93,023 multiplied by 0.001225, or about 114 emails per day.
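Laid out as a short calculation, using only the figures quoted above:

```python
# The workload arithmetic, using only the numbers quoted above.
emails_per_month = 2_000_000
emails_per_week = emails_per_month / 4.3        # about 465,116
emails_per_day = emails_per_week / 5            # about 93,023

flag_rate = 25 / 20_401                         # about 0.001225, one-eighth of one percent
surfaced_per_day = emails_per_day * flag_rate   # about 114

print(round(emails_per_day), round(surfaced_per_day))  # 93023 114
```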

Now that's doable.

So give us a try. Request a demo.