Deprecated: This class will be removed in 0.3.0. See below for an example implementation using createRetrievalChain.

Class for conducting conversational question-answering tasks with a retrieval component. Extends the BaseChain class and implements the ConversationalRetrievalQAChainInput interface.

import { ChatAnthropic } from "@langchain/anthropic";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { BaseMessage } from "@langchain/core/messages";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { createRetrievalChain } from "langchain/chains/retrieval";

const retriever = ...your retriever;
const llm = new ChatAnthropic();

// Contextualize question
const contextualizeQSystemPrompt = `
Given a chat history and the latest user question
which might reference context in the chat history,
formulate a standalone question which can be understood
without the chat history. Do NOT answer the question, just
reformulate it if needed and otherwise return it as is.`;
const contextualizeQPrompt = ChatPromptTemplate.fromMessages([
  ["system", contextualizeQSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);
const historyAwareRetriever = await createHistoryAwareRetriever({
  llm,
  retriever,
  rephrasePrompt: contextualizeQPrompt,
});

// Answer question
const qaSystemPrompt = `
You are an assistant for question-answering tasks. Use
the following pieces of retrieved context to answer the
question. If you don't know the answer, just say that you
don't know. Use three sentences maximum and keep the answer
concise.

{context}`;
const qaPrompt = ChatPromptTemplate.fromMessages([
  ["system", qaSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

// Below we use createStuffDocumentsChain to feed all retrieved context
// into the LLM. Note that we can also use StuffDocumentsChain and other
// instances of BaseCombineDocumentsChain.
const questionAnswerChain = await createStuffDocumentsChain({
  llm,
  prompt: qaPrompt,
});

const ragChain = await createRetrievalChain({
  retriever: historyAwareRetriever,
  combineDocsChain: questionAnswerChain,
});

// Usage:
const chat_history: BaseMessage[] = [];
const response = await ragChain.invoke({
  chat_history,
  input: "...",
});
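To continue the conversation across turns, append each exchange to chat_history before the next call. A minimal sketch: the answer key on the result is produced by createRetrievalChain, and HumanMessage/AIMessage come from @langchain/core/messages.

import { HumanMessage, AIMessage } from "@langchain/core/messages";

// Record the previous turn so follow-up questions can be
// rephrased against it by the history-aware retriever.
chat_history.push(new HumanMessage("..."));
chat_history.push(new AIMessage(response.answer));

const followUp = await ragChain.invoke({
  chat_history,
  input: "Can you elaborate on that?",
});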

Hierarchy

  • BaseChain
      • ConversationalRetrievalQAChain

Properties

chatHistoryKey: string = "chat_history"
combineDocumentsChain: BaseChain<ChainValues, ChainValues>
inputKey: string = "question"
questionGeneratorChain: LLMChain<string, any>
retriever: BaseRetrieverInterface
returnGeneratedQuestion: boolean = false
returnSourceDocuments: boolean = false
memory?: any
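
These options are typically supplied when constructing the chain rather than set directly. A minimal sketch, assuming the legacy ConversationalRetrievalQAChain.fromLLM factory and reusing the llm and retriever from the example above:

import { ConversationalRetrievalQAChain } from "langchain/chains";

// Construct the legacy chain; option names mirror the properties above.
const chain = ConversationalRetrievalQAChain.fromLLM(llm, retriever, {
  returnSourceDocuments: true, // also return the retrieved documents
});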



Methods

apply

  • Call the chain on all inputs in the list.

    Deprecated: Use .batch() instead. Will be removed in 0.2.0.

  • Parameters

    • inputs: ChainValues[]
    • Optional config: any[]

    Returns Promise<ChainValues[]>
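
Since apply is deprecated, the equivalent fan-out uses .batch() from the Runnable interface. A sketch using the chain built above, assuming the default input keys "question" and "chat_history":

// Run the chain over a list of inputs in one call.
const results = await chain.batch([
  { question: "What is in the context?", chat_history: "" },
  { question: "Summarize the context.", chat_history: "" },
]);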

call

  • Run the core logic of this chain and add to output if desired.
    Wraps _call and handles memory.

    Deprecated: Use .invoke() instead. Will be removed in 0.2.0.

  • Parameters

    • values: any
    • Optional config: any
    • Optional tags: string[]

    Returns Promise<ChainValues>

invoke

  • Invoke the chain with the provided input and return the output.

  • Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: any

    Returns Promise<ChainValues>

    A promise that resolves with the output of the chain run.
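
For example, with the default inputKey ("question") and chatHistoryKey ("chat_history"), and assuming the default "text" output key:

// Single-turn invocation; chat_history may be a string, an array of
// messages, or an array of [human, ai] string pairs.
const res = await chain.invoke({
  question: "What does the context say about X?",
  chat_history: "",
});
console.log(res.text); // assumes the default "text" output key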

prepOutputs

  • Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

run

  • Deprecated: Use .invoke() instead. Will be removed in 0.2.0.

  • Parameters

    • input: any
    • Optional config: any

    Returns Promise<string>

Static getChatHistory

  • Static method that converts the chat history input into a formatted string.

  • Parameters

    • chatHistory: string | string[][] | BaseMessage[]

      Chat history input, which can be a string, an array of BaseMessage instances, or an array of string arrays.

    Returns string

    A formatted string representing the chat history.
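
A sketch of the three accepted input shapes, assuming the method is exposed as ConversationalRetrievalQAChain.getChatHistory:

import { HumanMessage, AIMessage } from "@langchain/core/messages";

// A preformatted string is returned as-is.
const a = ConversationalRetrievalQAChain.getChatHistory(
  "Human: Hi\nAssistant: Hello!"
);

// An array of [human, ai] string pairs is converted to a formatted string.
const b = ConversationalRetrievalQAChain.getChatHistory([["Hi", "Hello!"]]);

// An array of BaseMessage instances is converted to a formatted string.
const c = ConversationalRetrievalQAChain.getChatHistory([
  new HumanMessage("Hi"),
  new AIMessage("Hello!"),
]);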