
Learn to Speak the (Large) Language (Model)

AI, am I right?

Seems like we all got thrown into the mix quite against our will last year, and we’re now scratching our heads and wondering what happened. The good news is that if you’ve done nothing to advance your knowledge in AI and machine learning yet, you’re not alone (especially in legal!).

The bad news? You soon will be.

Never fear – we’ve undertaken to break this complex subject into bite-sized pieces that even smart people can understand. So, pull up a chair, grab a snack, and let’s artificial this intelligence up!

If you prefer video, check out the YouTube recording of this blog’s sister session, “Setting the Foundation: Speaking the (Large) Language (Model).”

AI is a Senior Citizen

While it might seem like artificial intelligence just showed up on humanity’s collective doorstep unannounced, the concept of artificially intelligent robots was devised in the 1950s by a number of scientists and mathematicians, including a British gent named Alan Turing.

It's Good Trivia to Know Alan Turing

This is a bit of a detour, but Turing’s name is worth knowing because you’re likely to hear people talk about the Turing Test as this AI thing catches fire. The Turing Test is a method from the 1950s for assessing whether a machine can deliver responses indiscernible from a human’s. If it can, it passes the Turing Test and is said to be “thinking.” There’s debate about what the Turing Test really measures – genuine thinking, mere imitation, or not much of anything useful – and most of the new large language models could probably pass it today. Whether it’s still relevant or not, you now have some backup content for your next awkward silence.

Alright, let’s get back to the topic – AI is old. In 1956, leading scientists came together for the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) and agreed that artificial intelligence was achievable as a practical matter. The primary reasons they couldn’t get it done back then? Early computers could execute calculations but couldn’t store them, making it impossible to chain together complex algorithms the way we do today. And those behemoth machines had virtually no storage, limiting how complex things could get.

Fast forward to today, when we have mobile devices with more storage and processing power than Mr. Turing would ever have believed possible. And because we have all this awesome computing power, modern data scientists and evil geniuses are able to keep pushing the boundaries of… well, everything!

So, show some respect for your elders. Artificial intelligence is old.

Where Is It Actively (and Historically) Used?

How old is AI? It’s so old that humans haven’t been underwriting insurance policies for years – intelligent machines decide how much you pay. AI tells you how long your commute will take and identifies fraudulent banking activity. Very conservative industries like insurance and banking have been using this technology for a long time; they’re just using it in a very low-risk way. Maybe we learn from them? Crazy idea?

ChatGPT and Friends – What Do I Need to Know?

First, know that ChatGPT is the name of a product created by a company called OpenAI. In the simplest terms, you can think of it as a chatbot powered by AI. More broadly, it falls into the category of generative AI. You’ve probably heard of similar tools like Jasper.ai, Copilot, or Otter.ai. These tools, which have large language models (LLMs) at their foundation, have learned from (or “been trained on”) sources far and wide and use that knowledge to respond to your inquiries.

A few things are worth knowing here. First, it’s unclear what sources of information many of these models have been trained on, so you have to read the fine print and ask questions before using them in a legal setting. Second, a model only knows what it’s been taught. So if it’s not a model built for legal, it may miss nuances. And it doesn’t know what it doesn’t know. This is where it sometimes hallucinates – “makes stuff up,” in technical terms.

AI for Specific Domains

Also know that there are other options beyond broad, public tools like ChatGPT. For instance, there are companies building and selling large language models fine-tuned specifically for legal, and even for particular practice areas. There’s AI fine-tuned on the specifics of legal writing in BriefCatch, AI specific to legal operations workflows in Streamline.ai, and AI for contract drafting in Henchman.io. There are quite a few promising tools emerging, and we’re only at the first generation.

AI Does Not Really Know What It’s Saying

I’m never sure if this jumps out at people the same way it did for me, so I’ll put it out there. Even though generative AI tools may look like they know what they’re saying, they’re mostly doing math and making predictions. Through their “training” – remember those information sources we talked about earlier – they’ve learned the patterns of our language. So, when AI is writing, it’s looking at the last few words and predicting the next few words that seem to make the most sense. This is why you should at least give anything it produces a read, and if it’s high-visibility or high-importance, a deep review.
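To make that “predicting the next word” idea concrete, here’s a toy sketch in Python. Real LLMs use neural networks trained on billions of examples rather than simple word counts, and the tiny corpus below is invented purely for illustration – but the basic move is the same: look at what came before, pick the most likely continuation.

    from collections import Counter, defaultdict

    # A tiny made-up "training corpus" – real models train on vastly more text.
    corpus = ("the court granted the motion to dismiss "
              "the court denied the motion to compel").split()

    # Count which word tends to follow which.
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the most frequent follower of `word` in the training text."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    # "Write" by repeatedly predicting the most likely next word.
    word = "the"
    output = [word]
    for _ in range(5):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # e.g. "the court granted the court granted"

Notice the output is fluent-looking but has no idea what a court or a motion actually is. That’s the point: it’s pattern-matching, not understanding.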

Make Sure You Understand How the AI You Use Learns

Now here’s where things get tricky for the legal industry. These models need a continuous stream of new information in order to get better over time. Where do they get that information? You! And me!

Assume Everything You Put into AI Is Now Public

Yes, you should assume that every word you type into these systems is going back into their model to “improve” its language skills and the like. You should also assume that it’s learning from your material. So, if you’re putting your AEO work product or unpublished manuscript into a public tool, you should assume that portions of it may pop out for someone else one day. That’s why Sarah Silverman and many other artists are currently suing several of these companies for copyright (among other) violations. I suspect those claims will ultimately fail, but be mindful nonetheless of the value of the information you put into these tools for now.

Wait, You Mean I Want to Train the Model?

While this learning from your information may sound negative at first blush, I actually expect it will be one of the key differentiators in the future. You see, there’s a fun form of learning called “Reinforcement Learning from Human Feedback,” or RLHF, that can allow you to train an internally focused model to give you outputs that are closely aligned with ‘the way you do it’ at your firm or company. So instead of your ‘thumbs up’ in a chatbot teaching a public model what a good response looks like, you can train just your firm’s model. And that’s got legs, as they say.
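For a rough picture of what that ‘thumbs up’ becomes, here’s a minimal Python sketch of the very first step in an RLHF loop: capturing human ratings. The names here (FeedbackRecord, record_feedback) are hypothetical, invented for this example – and a real pipeline goes further, training a reward model from these ratings and then fine-tuning the firm’s model against it.

    from dataclasses import dataclass

    # Hypothetical record of one human judgment – the raw material of RLHF.
    @dataclass
    class FeedbackRecord:
        prompt: str
        response: str
        thumbs_up: bool  # "this answer matches how we do it here"

    feedback_log = []

    def record_feedback(prompt, response, thumbs_up):
        """Capture a rating so it can later train a firm-specific reward model."""
        feedback_log.append(FeedbackRecord(prompt, response, thumbs_up))

    # Each click of 'thumbs up' or 'thumbs down' lands here, in your own log,
    # instead of flowing back into a public model's training pipeline.
    record_feedback(
        prompt="Draft a mutual confidentiality clause.",
        response="Each party shall hold the other's Confidential Information...",
        thumbs_up=True,
    )

The design choice that matters: the feedback stays inside your walls, so the model that improves is yours.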

Is the Prohibition Worth the Effort?

One last thought on understanding how your AI learns. The simple approach we’ve seen some take is to technically block or prevent a model from being trained on your activity and information as a data privacy and protection measure. You might see that explained as, “None of our data is being sent back into the model.” And that does mitigate some risk. But you end up with a ‘dumb’ model that never gets better for you and your purposes. There’s so much value in training a model on your specific company’s use cases that I’d encourage you to find a model and infrastructure arrangement that lets you take advantage of the full scope of any AI you decide to work with, rather than limiting its utility.

Want to Learn More?

I’m hosting an ongoing series called AI for Smart People designed to send you into 2024 with all the knowledge you need to make good decisions for your organization. Join us live on Wednesday afternoons or watch the recorded sessions on the Legal Tech Consultants YouTube channel.


VIDEO: “Setting the Foundation: Speaking the (Large) Language (Model)”

I recently interviewed Meghan Anzelc, Ph.D., Chief Data and Analytics Officer, and Christina Fernandes-D'Souza, Director of Data Science, both at Three Arc Advisory, for our AI for Smart People series. 

It’s a broad discussion with some overlapping topics, but plenty of new content. You can watch that full interview on our YouTube channel here: https://youtu.be/etdEfXb5pBo?si=xIyM-Xs0qPvxTMD9

Terms to Know

AI (Artificial Intelligence): AI is a broad term for machines that can perform tasks that normally require human intelligence.

ML (Machine Learning): ML is a subset of AI focused on algorithms that learn from data rather than following explicitly programmed rules.

Generative AI: Generative AI is a type of AI that creates new content – text, images, audio, code – based on patterns learned from existing data.

Large Language Model (LLM): A large language model is a deep learning algorithm that can recognize, summarize, translate, predict, and generate content using very large datasets.

Prompt Engineering: Prompt engineering is the process of crafting and optimizing text prompts for an LLM to achieve a desired outcome. Think of it as designing the perfect question or request to get the best response from a smart computer program – making the computer understand what you want so it gives you the appropriate answer.
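As a quick illustration (the wording below is invented for the example), compare a vague prompt with an engineered one:

    # Hypothetical example: the same request, before and after prompt engineering.
    vague_prompt = "Summarize this contract."

    engineered_prompt = (
        "You are a commercial contracts attorney. Summarize the attached contract "
        "in five bullet points for a non-lawyer executive. Flag any indemnification "
        "or limitation-of-liability terms, and note anything unusual for a SaaS deal."
    )

The second version tells the model who it is, who the audience is, and what to look for – which is most of what prompt engineering amounts to.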

Reinforcement Learning from Human Feedback (RLHF): RLHF is a method in artificial intelligence where a computer program learns and improves its decision-making by receiving guidance and feedback from humans, much like a student learning from a teacher’s input.
