The AI wars have begun. Not the war where the machines enslave us and use our body heat to power their compute; that’s at least 18 months off. 🙂 No, this AI war is between the tech giants, and between closed, proprietary models and open source.
In one corner, we have OpenAI, supported by Microsoft, building out the strongest closed models with their GPT line. These models have set the benchmark for performance in natural language processing tasks, demonstrating remarkable capabilities in understanding and generating human-like text across a wide range of applications.
In the other, we have Zuck and his Llamas. And the Llamas are growing strong. The latest Llama model, Llama 3.1, represents a significant leap forward in open-source AI technology, offering comparable performance to the top proprietary models while providing the freedom to study, modify, and deploy the model without the restrictions and costs of closed-source alternatives.
So that’s what we’re going to do here: use the latest and greatest Meta has to offer to build a RAG application that augments the model’s responses with our own specific knowledge.
But first, we have to answer this question:
What is RAG?
RAG, or Retrieval-Augmented Generation, is an AI technique that combines the power of large language models (LLMs) with external knowledge retrieval. The idea is to fix one of LLMs’ key limitations: their inability to access or update their knowledge after training.
In a RAG system, when a query is received, it first goes through a retrieval step. This step searches a knowledge base to find relevant information. Once relevant information is retrieved, it’s fed into the LLM along with the original query. This allows the model to generate responses based not only on its pre-trained knowledge but also on the most up-to-date and relevant information from the external knowledge base.
There are a few ways to do this, but the most common technique today is using embeddings. An embedding is a mathematical representation of a piece of text: you run the text through a specific embedding model, which converts it into a dense vector of numbers.
Once text is represented as a vector, similarity search becomes a matter of comparing numbers. So the basic process goes:
- You have a lot of text or documents. After some preprocessing (splitting it and cleaning it up), you create an embedding for each element of the text.
- You store these in a vector database: a specialized (or not 😉) database that efficiently stores these long embeddings.
- When a user makes a query, you then run that query through the embedding model to create its own embedding.
- You then search your database for the most similar embeddings (there’s a toy sketch of this after the list).
- You return the text associated with the N most similar embeddings from the database, which are then passed to the LLM along with the original query.
- The LLM uses the query and the returned text to formulate a more specific, relevant, and up-to-date answer for the user.
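To make the “search for similar embeddings” step concrete, here’s a toy sketch with made-up three-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions):

```python
import numpy as np


def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; values near 0 mean unrelated text.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


# Pretend embeddings: one for the query and one for each stored document.
query = np.array([0.9, 0.1, 0.3])
documents = {
    "Chase your dreams": np.array([0.8, 0.2, 0.4]),
    "Quarterly tax filing deadlines": np.array([0.1, 0.9, 0.2]),
}

# Rank the stored documents by similarity to the query, most similar first.
ranked = sorted(documents.items(), key=lambda item: cosine_similarity(query, item[1]), reverse=True)
for text, vector in ranked:
    print(f"{cosine_similarity(query, vector):.2f}  {text}")
```

A vector database does exactly this ranking, just at scale and with an index instead of a brute-force loop.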
You can see the applications. You can add your documentation to a vector database and build a model that allows users to pull the exact way to use your API. You can add all your past customer service interactions to a vector database and build a model that understands the biggest problems for your customers. You can add all your company’s internal knowledge base articles to a vector database and build a model that provides accurate and up-to-date information to employees across different departments.
RAG is extremely useful. And the underpinning technology of RAG is the vector database. Many specific vector database tools are available for this application, but as the saying goes, “Just Use Postgres.”
Creating a vector database using Postgres is astonishingly easy, and we’ll do that here.
Building our AI app
OK, so what are we going to build? We’re going to create a fairly simple application that peps us up with inspirational quotes. We’ll create these quotes ourselves and store them for retrieval.
The tech stack
Here’s what we’re going to use:
- Llama 3.1 for our model. Llama 3.1 is Meta’s latest open-source large language model, providing state-of-the-art performance for natural language AI tasks without the restrictions of proprietary models.
- Neon for our vector database. You probably know this, but Neon is a serverless Postgres database that offers vector operations via pgvector. Its serverless compute and storage scaling makes it ideal for storing and querying our embeddings efficiently.
- OctoAI to stitch everything together. OctoAI is a platform that simplifies the deployment and management of open-source AI models, allowing us to easily integrate Llama 3.1 and our Neon database into an application.
Creating a vector database in Neon
Neon has a Free plan: to start, create an account here and follow these instructions to connect to your database.
Once you’re connected, turning Neon into a vector database takes just three words:
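```sql
CREATE EXTENSION vector;
```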
That’s it. Neon ships with pgvector, a Postgres extension that enables efficient storage and similarity search of embeddings. We can then create a quotes table that includes embeddings alongside the rest of our data:
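```sql
CREATE TABLE quotes (
    id BIGSERIAL PRIMARY KEY,
    quote TEXT,
    author TEXT,
    embedding VECTOR(1024)
);
```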
So this table has four columns:
- id: A BIGSERIAL PRIMARY KEY serving as the primary identifier for each quote.
- quote: A text column that stores the actual text of the inspirational quote.
- author: A text column that stores the name of the person who said or wrote the quote.
- embedding: A VECTOR(1024) column that stores the 1024-dimensional vector representation of the quote generated by an embedding model.
This vector is what allows for efficient similarity searches in the vector space later. Why 1024? Because that is the length of the output vector from the embedding model we’re going to use. If you were using OpenAI’s embedding models, this number could be up to 3072. This is a good place to be cautious: the longer the embedding, the more storage it uses and the higher the cost.
Now we have our vector database (yes, that was all you had to do). Let’s populate it.
Creating embeddings
In this case, we’ve created a few fake quotes and stored them in a CSV, purely to show that the model is pulling from our RAG data rather than grabbing the quotes from somewhere else. We can then work through the first steps from the list above: preprocessing, embedding, and storing.
Luckily, as we’ve faked the data, we don’t need to do any cleanup, and we can get straight into creating and storing our embeddings. Here’s the Python code:
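A sketch along these lines does the job. A few assumptions to flag: it talks to OctoAI through an OpenAI-compatible endpoint, and the base URL, model identifier, environment variable names, and CSV column names below are illustrative; adapt them to your own setup.

```python
import csv
import os

from openai import OpenAI
from psycopg2 import pool

# Assumptions: OctoAI exposes an OpenAI-compatible API at this base URL, and the
# API token and Neon connection string live in environment variables.
OCTOAI_BASE_URL = "https://text.octoai.run/v1"
OCTOAI_TOKEN = os.environ["OCTOAI_TOKEN"]
DATABASE_URL = os.environ["DATABASE_URL"]  # your Neon connection string


def load_csv(filename):
    # Parse the CSV (assumed to have "quote" and "author" columns) into two lists.
    quotes, authors = [], []
    with open(filename, newline="") as f:
        for row in csv.DictReader(f):
            quotes.append(row["quote"])
            authors.append(row["author"])
    return quotes, authors


def get_embeddings(quotes):
    # Generate a 1024-dimensional embedding for each quote via the OctoAI API.
    client = OpenAI(base_url=OCTOAI_BASE_URL, api_key=OCTOAI_TOKEN)
    embeddings = []
    for quote in quotes:
        result = client.embeddings.create(model="thenlper/gte-large", input=quote)
        embeddings.append(result.data[0].embedding)
    return embeddings


def insert_into_db(quotes, authors, embeddings):
    # Insert each quote-author-embedding trio into Neon, using a connection pool.
    connection_pool = pool.SimpleConnectionPool(1, 10, DATABASE_URL)
    conn = connection_pool.getconn()
    try:
        with conn.cursor() as cur:
            for quote, author, embedding in zip(quotes, authors, embeddings):
                # pgvector accepts the "[x,y,z]" text format for vector values.
                vector_literal = "[" + ",".join(str(x) for x in embedding) + "]"
                cur.execute(
                    "INSERT INTO quotes (quote, author, embedding) VALUES (%s, %s, %s)",
                    (quote, author, vector_literal),
                )
        conn.commit()
    finally:
        connection_pool.putconn(conn)
        connection_pool.closeall()


if __name__ == "__main__":
    quotes, authors = load_csv("quotes.csv")
    embeddings = get_embeddings(quotes)
    insert_into_db(quotes, authors, embeddings)
```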
What does each of these functions do?

- load_csv(filename): Parses our CSV of quotes and returns the list of quotes and authors.
- get_embeddings(quotes): The heart of the code. We generate an embedding for each quote using the “thenlper/gte-large” model via the OctoAI API and return the list of embeddings.
- insert_into_db(quotes, authors, embeddings): Adds everything to Neon. We use a connection pool for efficient database connections, execute an SQL INSERT statement for each quote-author-embedding trio, and commit the transaction at the end, making sure the database connections and the pool are properly closed.
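As a quick sanity check, you can also count the rows straight from SQL:

```sql
SELECT count(*) FROM quotes;
```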
With that done, we can see our data in the Neon tables page. This is also a good way to see what embeddings are: just long lists of numbers. These embeddings underpin everything in the AI revolution; it’s all about matching these numbers.
Now we can start using this data.
Building a RAG model with Llama 3.1
Now we’re at the business end. Let’s start with the code, and then we’ll walk through it:
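Here’s a sketch of the whole script. The same assumptions apply as before (an OpenAI-compatible OctoAI endpoint, illustrative model identifiers and environment variables), and the get_response helper name and system prompt wording are placeholders; get_input_embedding and get_quotes follow the steps described below.

```python
import os

from openai import OpenAI
from psycopg2 import pool

# Assumptions: OctoAI exposes an OpenAI-compatible API at this base URL, and the
# API token and Neon connection string live in environment variables.
OCTOAI_BASE_URL = "https://text.octoai.run/v1"
OCTOAI_TOKEN = os.environ["OCTOAI_TOKEN"]
DATABASE_URL = os.environ["DATABASE_URL"]  # your Neon connection string


def get_input_embedding(user_input):
    # Embed the user's input with the same model we used for the stored quotes.
    client = OpenAI(base_url=OCTOAI_BASE_URL, api_key=OCTOAI_TOKEN)
    result = client.embeddings.create(model="thenlper/gte-large", input=user_input)
    return result.data[0].embedding


def get_quotes(embedding_vector):
    # Find the two stored quotes whose embeddings are closest to the input embedding.
    connection_pool = pool.SimpleConnectionPool(1, 10, DATABASE_URL)
    conn = connection_pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT quote, author
                FROM quotes
                ORDER BY embedding <=> %(input_embedding)s
                LIMIT 2
                """,
                {"input_embedding": "[" + ",".join(str(x) for x in embedding_vector) + "]"},
            )
            return cur.fetchall()
    finally:
        connection_pool.putconn(conn)
        connection_pool.closeall()


def get_response(user_input, quotes):
    # Construct the system prompt, using the retrieved quotes as context.
    context = "\n".join(f'"{quote}" - {author}' for quote, author in quotes)
    system_prompt = (
        "You are a motivational assistant. Use the following quotes as context "
        "and weave them into an uplifting answer:\n" + context
    )
    # Initialize the client and call Llama 3.1 through OctoAI's chat completion endpoint.
    client = OpenAI(base_url=OCTOAI_BASE_URL, api_key=OCTOAI_TOKEN)
    completion = client.chat.completions.create(
        model="meta-llama-3.1-8b-instruct",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
        max_tokens=150,  # limit the response to 150 tokens
    )
    # Extract and return the generated response from the model's output.
    return completion.choices[0].message.content


if __name__ == "__main__":
    user_input = input("What do you need some motivation for today? ")
    embedding_vector = get_input_embedding(user_input)
    quotes = get_quotes(embedding_vector)
    print(get_response(user_input, quotes))
```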
This starts with the user input. We’re just using the command line here, but you can imagine this input coming from an app or other frontend. This is a motivational app, so let’s ask it about some dreams:
That text is passed to get_input_embedding. In this function, we’re doing exactly what we did above and creating an embedding for this string. Ultimately, we want a 1024-length vector we can check against the stored 1024-length vectors in our Neon vector database.
This embedding_vector is then passed to get_quotes. This sets up a connection to Neon again, but instead of inserting rows, this time it runs this SQL query:
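```sql
SELECT quote, author
FROM quotes
ORDER BY embedding <=> %(input_embedding)s
LIMIT 2;
```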
Here, input_embedding is the embedding of our user input. This query performs a similarity search in the vector space of our stored quotes. The <=> operator calculates the cosine distance between the input embedding and each embedding in the database. By ordering the results by this distance and limiting them to 2, we retrieve the two quotes most similar to our input. In this case, this function will output:
You can see the ‘similarity,’ with both of these quotes talking about dreams. Now, RAG’s second magic trick is adding these quotes to the context of our larger model. Let’s just show this code again and then step through it:
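Here’s that function again, as sketched above (same assumptions about the OctoAI endpoint and model identifier; OCTOAI_BASE_URL and OCTOAI_TOKEN come from the top of the script):

```python
def get_response(user_input, quotes):
    # Construct the system prompt, using the retrieved quotes as context.
    context = "\n".join(f'"{quote}" - {author}' for quote, author in quotes)
    system_prompt = (
        "You are a motivational assistant. Use the following quotes as context "
        "and weave them into an uplifting answer:\n" + context
    )
    # Initialize the client and call Llama 3.1 through OctoAI's chat completion endpoint.
    client = OpenAI(base_url=OCTOAI_BASE_URL, api_key=OCTOAI_TOKEN)
    completion = client.chat.completions.create(
        model="meta-llama-3.1-8b-instruct",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
        max_tokens=150,  # limit the response to 150 tokens
    )
    # Extract and return the generated response from the model's output.
    return completion.choices[0].message.content
```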
The function takes two parameters: the user’s input and the quotes we retrieved from our vector database. It then uses those to construct the system prompt, using the retrieved quotes as context.
We then initialize the OctoAI client. This is where we’re calling the Llama 3.1 model through OctoAI. We’re using the chat completion endpoint, providing our constructed system prompt and the user’s input as messages. We’re limiting the response to 150 tokens.
Finally, we extract and print the generated response from the model’s output. This is what we get for the above input:
This embodies the core of our RAG system: it takes the context we’ve retrieved from our vector database (the quotes) and uses it to inform the language model’s response to the user’s input. We get those quotes back in our response, adding more relevancy to the model’s output.
We’ve created a RAG model with Llama 3.1 and Neon. What is the cost of all the AI calls here?
One cent. And that’s including all the test calls made while building.
Your AI apps have a home: Postgres
Hopefully, you’ve learned three things from this post:
- RAG is extremely powerful. This is just a toy example, but imagine having thousands of documents in a vector database and being able to add all that knowledge to a regular LLM. It can make the applications you build much more relevant to your users.
- Open-source models are up for the fight. We’re obviously barely scratching the surface of what Llama 3.1 can do, but the benchmarks put it up against models from OpenAI, Anthropic, and Cohere, at a fraction of the cost.
- Postgres is a vector database. As with Llama 3.1, we’ve barely started exploring the possibilities of Postgres and vectors. You can learn more about optimizing Neon for embeddings, and this is a great read on how Postgres compares to specialized vector databases.
If you are using Neon, you already have a vector database. If you aren’t, sign up for free and start building your AI apps.