A cache that uses Momento as the backing store. See https://gomomento.com.

Example

import { CacheClient, Configurations, CredentialProvider } from "@gomomento/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { MomentoCache } from "@langchain/community/caches/momento";

const cache = new MomentoCache({
  client: new CacheClient({
    configuration: Configurations.Laptop.v1(),
    credentialProvider: CredentialProvider.fromEnvironmentVariable({
      environmentVariableName: "MOMENTO_API_KEY",
    }),
    defaultTtlSeconds: 60 * 60 * 24, // Cache TTL set to 24 hours.
  }),
  cacheName: "langchain",
});
// Initialize the OpenAI model with the Momento cache for caching responses.
const model = new ChatOpenAI({
  cache,
});
await model.invoke("How are you today?");
const cachedValues = await cache.lookup("How are you today?", "llmKey");


Methods

  • lookup(prompt, llmKey): Lookup LLM generations in cache by prompt and associated LLM key.

    Parameters

    • prompt: string

      The prompt to lookup.

    • llmKey: string

      The LLM key to lookup.

    Returns Promise<null | Generation[]>

    The generations associated with the prompt and LLM key, or null if not found.

  • update(prompt, llmKey, value): Update the cache with the given generations.

    Note that this overwrites any existing generations for the given prompt and LLM key.

    Parameters

    • prompt: string

      The prompt to update.

    • llmKey: string

      The LLM key to update.

    • value: Generation[]

      The generations to store.

    Returns Promise<void>
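Taken together, `lookup` and `update` form a simple keyed cache contract: a miss returns `null`, an `update` stores generations for a `(prompt, llmKey)` pair, and a second `update` for the same pair replaces the previous value. The following is a minimal in-memory sketch of that contract, not Momento's implementation: the `Generation` shape is simplified to just `text` (the real type comes from LangChain core), and the composite key scheme is an illustrative assumption.

```typescript
// Simplified Generation shape; the real type comes from @langchain/core.
interface Generation {
  text: string;
}

// In-memory stand-in illustrating the lookup/update contract:
// a miss returns null, a hit returns the stored generations, and
// updating the same (prompt, llmKey) pair overwrites the previous value.
class InMemoryLLMCache {
  private store = new Map<string, Generation[]>();

  private key(prompt: string, llmKey: string): string {
    // Illustrative composite key; Momento's actual key scheme may differ.
    return `${prompt}\u0000${llmKey}`;
  }

  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  async update(prompt: string, llmKey: string, value: Generation[]): Promise<void> {
    this.store.set(this.key(prompt, llmKey), value);
  }
}

const demo = new InMemoryLLMCache();
const miss = await demo.lookup("How are you today?", "llmKey");
await demo.update("How are you today?", "llmKey", [{ text: "I'm well!" }]);
const hit = await demo.lookup("How are you today?", "llmKey");
await demo.update("How are you today?", "llmKey", [{ text: "Great!" }]);
const overwritten = await demo.lookup("How are you today?", "llmKey");
```

With `MomentoCache`, the same contract holds, except entries also expire after the client's `defaultTtlSeconds`.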

Generated using TypeDoc