Justine - Legal AI

Where there is injustice, we will send in an invincible AI attorney, who can balance the scales of justice for the wronged and make sure every voice is heard, and every voice matters. Without justice, how can we eliminate poverty, sickness, and other ills of society? Welcome our Hero of Justice: Justine Falcon!

 

Let's look at the old DL tech that went into the alpha of Justine Falcon:


This combines three separate technologies in a sequential pipeline, so I will break them down: the Autoencoder, the Predictor, and the Decoder. The first, Gen2 version of Justine uses DL autoencoders (the Gen3 version will use the more advanced ORBAI BICHNN enhanced neuromorphic autoencoder), which can take whole documents and reduce them to a compact state vector, such that the smaller state vector still holds all the 'meaning' the document did (and, with the BICHNN autoencoders, can be reconstituted by the reverse process into the full document again). This is fairly common practice in deep learning, and in addition I patented what could be a really slick new way to do it for large documents, so we can end up with a multi-level encoding if we want.
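To make the idea concrete, here is a minimal sketch of a document autoencoder in PyTorch. Everything in it (the DocAutoencoder name, the layer sizes, treating a document as a fixed 4096-dim feature vector) is an illustrative assumption, not ORBAI's actual architecture:

```python
# Minimal autoencoder sketch: squeeze a vectorized document down to a
# compact state vector, then try to reconstruct the original from it.
import torch
import torch.nn as nn

class DocAutoencoder(nn.Module):  # hypothetical, for illustration only
    def __init__(self, doc_dim=4096, state_dim=128):
        super().__init__()
        # Encoder: document features -> compact state vector.
        self.encoder = nn.Sequential(
            nn.Linear(doc_dim, 1024), nn.ReLU(),
            nn.Linear(1024, state_dim),
        )
        # Decoder: compact state vector -> reconstructed document features.
        self.decoder = nn.Sequential(
            nn.Linear(state_dim, 1024), nn.ReLU(),
            nn.Linear(1024, doc_dim),
        )

    def forward(self, x):
        state = self.encoder(x)
        return self.decoder(state), state

model = DocAutoencoder()
doc = torch.randn(1, 4096)                 # stand-in for a vectorized document
recon, state = model(doc)
loss = nn.functional.mse_loss(recon, doc)  # train so 'state' preserves the meaning
```

Driving the reconstruction loss down forces the 128-number state vector to retain whatever information the decoder needs to rebuild the document.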

The autoencoder gives Justine her ability to sort and categorize documents and quickly look them up, because she only needs to work on the compact state vectors. Some additional text search and question/answer techniques to fill in her functionality give her some amazing text research, search, recall, and statistics capability. This alone could give a human lawyer some formidable tools to aid in their work.
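As a rough sketch of how that lookup could work, here is cosine-similarity matching over state vectors in NumPy; the top_matches helper and the 128-dim state size are hypothetical:

```python
# Sketch: find archived documents whose state vectors are closest to a query's.
import numpy as np

def top_matches(query_state, archive_states, k=5):
    """Indices and scores of the k most similar archived documents (cosine)."""
    a = archive_states / np.linalg.norm(archive_states, axis=1, keepdims=True)
    q = query_state / np.linalg.norm(query_state)
    sims = a @ q                              # cosine similarity per document
    order = np.argsort(sims)[::-1][:k]
    return order, sims[order]

archive = np.random.randn(10_000, 128)        # 10k documents, already encoded
query = np.random.randn(128)                  # state vector of the document in hand
idx, scores = top_matches(query, archive)
```

Because the comparison happens in the small state space rather than over raw text, searching even a large archive stays cheap.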

The next part is the Predictor, which takes a sequence of these state vectors (each representing a document from one side, and what is in it) and trains to predict the sequence in which these documents occur, in a sort of very formalized legal conversation. In a legal case, each of these documents describes an action taken by one of the sides. The meaningful content is often surrounded by a lot of template legal language, so these compress well into state vectors, and similar documents get similar state vectors, so they don't have to be identical for the algorithm to work. It is the sequence of these documents or actions that makes up the litigation pattern, or legal conversation, that happens between the sides in each case.
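Conceptually, each case then becomes an ordered list of state vectors, and every prefix of that list becomes a training example for predicting the next filing. A toy sketch, with random vectors standing in for real encodings:

```python
# Sketch: a case as an ordered sequence of document state vectors,
# turned into (history -> next filing) training pairs for the Predictor.
import numpy as np

STATE_DIM = 128
case = [np.random.randn(STATE_DIM) for _ in range(12)]  # 12 encoded filings

pairs = [(case[:i], case[i]) for i in range(1, len(case))]
history, next_filing = pairs[-1]
print(len(history), next_filing.shape)        # 11 prior filings -> predict the 12th
```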

Like all humans, lawyers can't decide every action in every case from scratch every time, so they form habits, or patterns: once a case starts to go in a certain direction, they pull out their mental playbook and execute sequences of actions that they have learned worked well in previous cases. The more they use a pattern, and the more times it works, the more likely they are to use it again. Some mid-to-late-career attorneys are so set in their patterns that they are incredibly predictable, unable to break them even under the most compelling circumstances. That makes them vulnerable to an AI that is good at finding and predicting patterns.

Enter the Predictor, which can train on all of this attorney's past cases using an RNN/LSTM neural network. I won't go into too much detail, but it is a DL recurrent neural network that learns the attorney's patterns as it trains on their cases, running through them all multiple times until, when you run it on a subset of their cases set aside for testing, the prediction of their next move(s) has as high a probability as possible with the data you have.
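Here is a minimal sketch of such an LSTM predictor in PyTorch, assuming filings have already been encoded to 128-dim state vectors (the NextActionPredictor name and all hyperparameters are placeholders, not the actual Justine model):

```python
# Sketch: an LSTM that reads the filings in a case so far and predicts
# the state vector of the next filing.
import torch
import torch.nn as nn

class NextActionPredictor(nn.Module):  # hypothetical, for illustration only
    def __init__(self, state_dim=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, seq):                   # seq: (batch, steps, state_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])       # predicted next state vector

model = NextActionPredictor()
history = torch.randn(1, 11, 128)             # the 11 filings so far in one case
predicted = model(history)
target = torch.randn(1, 128)                  # the real 12th filing, encoded
loss = nn.functional.mse_loss(predicted, target)
```

The held-out test cases mentioned above are what tell you whether the learned patterns generalize rather than just memorize.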

The Predictor alone is a really potent tool in litigation: once it is trained on the opposing attorney's cases and patterns, you can input your case into it, and it will spit out what opposing counsel will likely do next, as a compact vector giving the probability of each and every potential next action. It is easy to narrow this down to the Top 5 most likely actions, with a percentage probability beside each. We can then compare these to the compact vectors produced by autoencoding all the documents in the older case files and find the closest matches, so you can see what they might file against you. When we have the Gen3 BICHNN autoencoders online, you will be able to pass these state vectors back up through the decoder, and if we have trained it REALLY well on the opposing attorney's other files, it will hand you the exact documents (motion, response, ...) they are likely going to file next, with all the information relevant to your case filled in properly, perhaps weeks before they actually write it, let alone file it. Yes, people are really THAT predictable, and we write with predictable structure and content, especially senior attorneys. Ask any attorney whether it would be advantageous to know opposing counsel's top 5 most likely actions up to a week in advance, and to be able to preview their motions.
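If the possible next actions come from a finite catalog of filing types, turning the model's output into a Top-5 list is just a softmax and a sort. A toy sketch; the action catalog and scores here are invented for illustration:

```python
# Sketch: convert raw next-action scores into a ranked Top-5 list with
# percentage probabilities.
import numpy as np

ACTIONS = ["motion to dismiss", "motion to compel", "discovery request",
           "settlement offer", "response brief", "continuance request"]

logits = np.random.randn(len(ACTIONS))         # stand-in for the model's raw scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probability per action

for action, p in sorted(zip(ACTIONS, probs), key=lambda t: -t[1])[:5]:
    print(f"{action}: {p:.0%}")
```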

As for the technology, it not only works for legal cases, where it follows, predicts, and decides actions in a 'conversation' between two attorneys litigating, but it can also do the same for insurance claims, or for regular e-mail, being able to read, follow, and respond to an e-mail thread like the people it was trained on. This technology is just as useful in chat or voice conversations trained on real people's conversations, and is truly able to master any form of human communication. It is the ORBAI Natural Language Engine, covered by claims 8, 9, and 10, with the AI Lawyer in claims 11 and 12, of our June 2018 Provisional Patent (USPTO application #62687179).