Sunday, September 8, 2024

Integrating vision into RAG applications

Retrieval Augmented Generation (RAG) is a popular technique for getting LLMs to provide answers that are grounded in a data source. But what do you do when your knowledge base includes images, like graphs or photos? By adding multimodal models into your RAG flow, you can get answers based on image sources, too!

Our most popular RAG solution accelerator, azure-search-openai-demo, now has support for RAG on image sources. In the example question below, the LLM answers the question by correctly interpreting a bar graph:  

This blog post will walk through the changes we made to enable multimodal RAG, both so that developers using the solution accelerator can understand how it works, and so that developers using other RAG solutions can bring in multimodal support. 

First let's talk about two essential ingredients: multimodal LLMs and multimodal embedding models. 


Multimodal LLMs 

Azure now offers multiple multimodal LLMs: gpt-4o and gpt-4o-mini, through the Azure OpenAI service, and phi3-vision, through the Azure AI Model Catalog. These models allow you to send in both images and text, and they return text responses. (In the future, we may have LLMs that accept audio input and return non-text outputs!) 

For example, an API call to the gpt-4o model can contain a question along with an image URL: 

{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "What's in this image?"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
      }
    }
  ]
}

Those image URLs can be specified as full HTTP URLs, if the image happens to be available on the public web, or as base64-encoded data URIs, which is particularly helpful for privately stored images. 
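Building a data URI from raw image bytes takes only the standard library; here's a minimal sketch (the helper name is ours, not from any SDK):

```python
import base64


def image_to_data_uri(image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URI for the image_url field."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"


# The resulting string goes in the same "image_url" slot as an HTTP URL:
# {"type": "image_url", "image_url": {"url": image_to_data_uri(png_bytes)}}
```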

For more examples of working with gpt-4o, check out openai-chat-vision-quickstart, a repo that deploys a simple Chat+Vision app to Azure and includes Jupyter notebooks showcasing additional scenarios. 

 

Multimodal embedding models 

Azure also offers a multimodal embedding API, as part of the Azure AI Vision APIs, that can compute embeddings in a multimodal space for both text and images. The API uses the state-of-the-art Florence model from Microsoft Research. 

For example, this API call returns the embedding vector for an image: 

curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2024-02-01-preview&model-version=2023-04-15" --data-ascii " { 'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png' }" 
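The same call can be sketched in Python using only the standard library. The helper names are ours, and we assume key-based authentication via the Ocp-Apim-Subscription-Key header; the endpoint path and version parameters match the curl call above:

```python
import json
import urllib.request

API_VERSION = "2024-02-01-preview"
MODEL_VERSION = "2023-04-15"


def build_vectorize_url(endpoint: str, mode: str) -> str:
    """Build the AI Vision multimodal embeddings URL.

    mode is either 'vectorizeImage' or 'vectorizeText'.
    """
    return (
        f"{endpoint}/computervision/retrieval:{mode}"
        f"?api-version={API_VERSION}&model-version={MODEL_VERSION}"
    )


def vectorize_image(endpoint: str, key: str, image_url: str) -> list:
    """Call the vectorizeImage API and return the embedding vector."""
    request = urllib.request.Request(
        build_vectorize_url(endpoint, "vectorizeImage"),
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": key,
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["vector"]
```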

Once we have the ability to embed both images and text in the same embedding space, we can use vector search to find images that are similar to a user's query. Check out this notebook that sets up a basic multimodal search of images using Azure AI Search. 
 

Multimodal RAG 

With those two multimodal models, we were able to give our RAG solution the ability to include image sources in both the retrieval and answering process. 

At a high level, we made the following changes: 

  • Search index: We added a new field to the Azure AI Search index to store the embedding returned by the multimodal Azure AI Vision API (while keeping the existing field that stores the OpenAI text embeddings). 
  • Data ingestion: In addition to our usual PDF ingestion flow, we also convert each PDF document page to an image, store that image with the filename rendered on top, and add the embedding to the index. 
  • Question answering: We search the index using both the text and multimodal embeddings. We send both the text and the image to gpt-4o, and ask it to answer the question based on both kinds of sources. 
  • Citations: The frontend displays both image sources and text sources, to help users understand how the answer was generated. 

Let's dive deeper into each of the changes above. 


Search index 

For our standard RAG on documents approach, we use an Azure AI search index that stores the following fields: 

  • content: The text content extracted by Azure Document Intelligence, which can process a wide range of file types and can even OCR images inside files. 
  • sourcefile: The filename of the document. 
  • sourcepage: The filename with page number, for more precise citations. 
  • embedding: A vector field with 1536 dimensions, to store the embedding of the content field, computed using the text-only OpenAI ada-002 model.

For RAG on images, we add an additional field: 

  • imageEmbedding: A vector field with 1024 dimensions, to store the embedding of the image version of the document page, computed using the AI Vision vectorizeImage API endpoint. 
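Put together, the index schema can be sketched as a simple fields list. The field names and dimensions are the ones described above; in the actual repo they are defined with the azure-search-documents SDK, which we've simplified here to plain dictionaries:

```python
# Simplified sketch of the index schema: the same fields as the text-only
# index, plus the new imageEmbedding vector field for AI Vision embeddings.
index_fields = [
    {"name": "content", "type": "Edm.String", "searchable": True},
    {"name": "sourcefile", "type": "Edm.String", "filterable": True},
    {"name": "sourcepage", "type": "Edm.String", "filterable": True},
    {"name": "embedding", "type": "Collection(Edm.Single)", "dimensions": 1536},
    {"name": "imageEmbedding", "type": "Collection(Edm.Single)", "dimensions": 1024},
]
```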


Data ingestion 

For our standard RAG approach, data ingestion involves these steps: 

  1. Use Azure Document Intelligence to extract text out of a document 
  2. Use a splitting strategy to chunk the text into sections. This keeps chunks at a reasonable size, since sending too much content to an LLM at once tends to reduce answer quality. 
  3. Upload the original file to Azure Blob storage. 
  4. Compute ada-002 embeddings for the content field. 
  5. Add each chunk to the Azure AI search index. 
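The splitting strategy in step 2 can be sketched as a simple character-based splitter with overlap. The real implementation splits on sentence boundaries and handles tables, so treat this as an illustration of the idea, not the repo's code:

```python
def split_text(text: str, max_chars: int = 1000, overlap: int = 100) -> list:
    """Split text into overlapping chunks of at most max_chars characters.

    The overlap keeps a sentence that straddles a boundary visible in
    both neighboring chunks, so retrieval doesn't lose it.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
        start += max_chars - overlap
    return chunks
```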

For RAG on images, we add two additional steps before indexing: uploading an image version of each document page to Blob storage and computing multimodal embeddings for each image. 


Generating citable images 

The images are not just a direct copy of the document page. Instead, they contain the original document filename written in the top left corner of the image, like so: 

 

This crucial step will enable the GPT vision model to later provide citations in its answers. From a technical perspective, we achieved this by first using the PyMuPDF Python package to convert documents to images, then using the Pillow Python package to add a top border to the image and write the filename there.


Question answering 

Now that our Blob storage container has citable images and our AI Search index has multimodal embeddings, users can start to ask questions about images. 

Our RAG app has two primary question-asking flows: one for "single-turn" questions, and one for "multi-turn" questions, which incorporates as much conversation history as can fit in the context window. To simplify this explanation, we'll focus on the single-turn flow.  

Our single-turn RAG on documents flow looks like: 

 

1. Receive a user question from the frontend. 

2. Compute an embedding for the user question using the OpenAI ada-002 model. 

3. Use the user question to fetch matching documents from the Azure AI search index, using a hybrid search that does a keyword search on the text and a vector search on the question embedding. 

4. Pass the resulting document chunks and the original user question to the gpt-3.5 model, with a system prompt that instructs it to adhere to the sources and provide citations with a certain format. 
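The four steps above can be sketched end to end. The embed, search, and complete callables below are hypothetical stand-ins for the ada-002, Azure AI Search, and gpt-3.5 calls (the real SDK signatures differ), and the system prompt is a simplified version of the repo's:

```python
def answer_question(question: str, embed, search, complete) -> str:
    """Single-turn RAG: embed the question, retrieve chunks, then ask the model."""
    question_vector = embed(question)                       # step 2: embed the question
    chunks = search(text=question, vector=question_vector)  # step 3: hybrid search
    system_prompt = (
        "Answer ONLY using the sources below. "
        "Cite each source by its name in square brackets."
    )
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question + "\n\nSources:\n" + "\n".join(chunks)},
    ]
    return complete(messages)                               # step 4: ask the model
```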

Our single-turn RAG on documents-plus-images flow looks like this: