• Deprecated by Google. Will be removed in 0.3.0.

An interface defining the input to the ChatGooglePaLM class.

interface GooglePaLMChatInput {
    apiKey?: string;
    cache?: boolean | BaseCache<Generation[]>;
    callbackManager?: CallbackManager;
    callbacks?: Callbacks;
    examples?: IExample[] | BaseMessageExamplePair[];
    maxConcurrency?: number;
    maxRetries?: number;
    metadata?: Record<string, unknown>;
    model?: string;
    modelName?: string;
    onFailedAttempt?: FailedAttemptHandler;
    tags?: string[];
    temperature?: number;
    topK?: number;
    topP?: number;
    verbose?: boolean;
}
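
A minimal usage sketch, assuming the @langchain/community package layout (import paths may differ between versions):

import { ChatGooglePaLM } from "@langchain/community/chat_models/googlepalm";
import { HumanMessage } from "@langchain/core/messages";

// Every field of GooglePaLMChatInput is optional; values here are illustrative.
const model = new ChatGooglePaLM({
  apiKey: process.env.GOOGLE_PALM_API_KEY,
  model: "models/chat-bison-001", // must follow the models/{model} pattern
  temperature: 0.7,
});

const response = await model.invoke([new HumanMessage("Hello!")]);
console.log(response.content);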

Hierarchy

  • BaseChatModelParams
    • GooglePaLMChatInput

Implemented by

  • ChatGooglePaLM

Properties

apiKey?: string

Google PaLM API key to use
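
If omitted, the key is typically read from the GOOGLE_PALM_API_KEY environment variable, so both of the following work:

// Pass the key explicitly...
const explicit = new ChatGooglePaLM({ apiKey: "your-api-key" });

// ...or set GOOGLE_PALM_API_KEY in the environment and omit it.
const implicit = new ChatGooglePaLM({});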

cache?: boolean | BaseCache<Generation[]>
callbackManager?: CallbackManager

Deprecated: Use callbacks instead

callbacks?: Callbacks
examples?: IExample[] | BaseMessageExamplePair[]
maxConcurrency?: number

The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

maxRetries?: number

The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
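
For example, to bound both in-flight requests and retry attempts (illustrative values, not the defaults):

const model = new ChatGooglePaLM({
  maxConcurrency: 2, // at most two calls in flight at once
  maxRetries: 3,     // up to three retries per call, with exponential backoff
});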

metadata?: Record<string, unknown>
model?: string

Model name to use

Note: The format must follow the pattern: models/{model}

modelName?: string

Model name to use. Alias for model.

Note: The format must follow the pattern: models/{model}
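
For example, the following are equivalent (models/chat-bison-001 was the PaLM chat model; substitute whichever model you have access to):

const byModel = new ChatGooglePaLM({ model: "models/chat-bison-001" });
const byModelName = new ChatGooglePaLM({ modelName: "models/chat-bison-001" }); // same effect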

onFailedAttempt?: FailedAttemptHandler

Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
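
A sketch of such a handler; the status field is an assumption about the error shape, but the contract is fixed: throw to abort retries, return normally to let the exponential backoff continue.

const model = new ChatGooglePaLM({
  onFailedAttempt: (error: any) => {
    // Assumed error shape: rethrowing marks the error as non-retryable.
    if (error?.status === 400) {
      throw error;
    }
    // Returning normally allows the next retry attempt.
  },
});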

tags?: string[]
temperature?: number

Controls the randomness of the output.

Values range from 0.0 to 1.0, inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in less surprising responses from the model.

Note: The default value varies by model

topK?: number

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).

Note: The default value varies by model

topP?: number

Top-p changes how the model selects tokens for output.

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and will not consider C.

Note: The default value varies by model
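
The three sampling parameters can be combined; a sketch with illustrative values (not the model defaults):

const creative = new ChatGooglePaLM({
  temperature: 0.9, // closer to 1.0: more varied, creative output
  topK: 40,         // consider only the 40 most probable tokens...
  topP: 0.95,       // ...then keep the smallest set covering 0.95 probability mass
});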

verbose?: boolean