Using an AutoAI RAG index to chat with documents
Last updated: Dec 05, 2024

If you created an AutoAI Retrieval-augmented generation (RAG) pattern by using a Milvus vector store, you can use the indexed grounding documents in the Prompt Lab to add context to answers generated with foundation models.

As part of running an AutoAI experiment for Retrieval-augmented generation, the grounding documents are vectorized and stored in a Milvus vector store. You can use the vectorized content to add context to answers that are generated from your prompts. For more information about the Chat with documents feature of the Prompt Lab, see Grounding foundation model prompts in contextual information.
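The retrieve-then-generate flow behind this feature can be sketched in plain Python. This is an illustrative toy, not the product implementation: the `embed` function below is a bag-of-words stand-in for a real embedding model such as slate-125m-english-rtrvr, and the in-memory `index` stands in for the Milvus collection. Each record mirrors the field schema that the connection steps use (`document_id` and `text`).

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real embedding model returns a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" the grounding documents; each record mirrors the Milvus
# field schema (document_id and text) used when connecting the index.
documents = [
    {"document_id": "faq.txt", "text": "AutoAI RAG stores vectorized chunks in Milvus."},
    {"document_id": "guide.txt", "text": "Prompt Lab chat mode supports grounding documents."},
]
index = [(doc, embed(doc["text"])) for doc in documents]

def retrieve(question: str, k: int = 1):
    # Rank indexed chunks by similarity to the question and return the top k.
    qvec = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(qvec, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Augment the prompt with the retrieved passage before it reaches the model.
question = "Where are vectorized chunks stored?"
context = retrieve(question)[0]
prompt = f"Context: {context['text']}\n\nQuestion: {question}"
```

In the real pattern, Milvus performs the similarity search over dense vectors at scale; the shape of the flow (embed the question, fetch the nearest chunks, prepend them as context) is the same.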

Before you begin

To use the vectorized documents from your AutoAI RAG pattern, confirm that you have met the following requirements:

  1. A connection to the Milvus database instance is available from the project that you are using to create prompts.
  2. You used a supported embedding model to vectorize the documents.

For details on working in the Prompt Lab with vectorized content, see Adding vectorized documents for grounding foundation model prompts.

Connecting your Milvus index to chat with documents

  1. From the Prompt Lab in chat mode, select a foundation model, and then specify any model parameters that you want to use for prompting.

  2. Click the Upload documents icon, and then choose Add documents.

  3. Select watsonx.data Milvus as the vector store.

  4. Define the details by entering a name for the grounding documents and selecting the connection to the index.

  5. Specify the embeddings model used to create the AutoAI RAG pattern: slate-125m-english-rtrvr.

  6. Select the collection to use as grounding documents.

  7. Define the Milvus collection field schema, specifying document_id as the Document name field and text as the Text field.

  8. Click Create.

  9. Submit questions about information from the document to see how well the model can use the contextual information to answer your questions.

    For example, ask about concepts that are explained in the grounding documents.
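The field schema mapping from step 7 can be sketched as the shape of record that Chat with documents reads back from a search hit. This is a simplified illustration: the `hits_to_passages` helper and the flat `sample_hits` dictionaries are hypothetical stand-ins, since real results come from the Milvus client rather than plain dicts.

```python
def hits_to_passages(hits, name_field="document_id", text_field="text"):
    # Map raw vector-search hits to named grounding passages using the
    # configured field schema (defaults match step 7: document_id and text).
    passages = []
    for hit in hits:
        passages.append({
            "name": hit[name_field],    # surfaced as the document name in chat
            "passage": hit[text_field], # injected into the prompt as context
        })
    return passages

# Hypothetical search result for illustration; "distance" is the
# similarity score a vector search typically returns alongside the fields.
sample_hits = [
    {"document_id": "policy.pdf", "text": "Refunds are issued within 30 days.", "distance": 0.12},
]
passages = hits_to_passages(sample_hits)
```

If your collection uses different field names, the mapping in step 7 is what tells Chat with documents which field holds the document name and which holds the passage text.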

Parent topic: Automating a RAG pattern with AutoAI
