apiKey (Optional): The Groq API key to use for requests.
cache (Optional)
callbackManager (Optional)
callbacks (Optional)
maxConcurrency (Optional): The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
maxRetries (Optional): The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
maxTokens (Optional): The maximum number of tokens that the model can generate in a single response. This limit ensures computational efficiency and resource management.
metadata (Optional)
model (Optional): The name of the model to use.
modelName (Optional): The name of the model to use. Alias for model.
onFailedAttempt (Optional): Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
stop (Optional): Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Alias for stopSequences.
stopSequences (Optional): Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
streaming (Optional): Whether or not to stream responses.
tags (Optional)
temperature (Optional): The temperature to use for sampling.
verbose (Optional)
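
For orientation, the sketch below shows how a few of these options can be combined when constructing the client with the @langchain/groq package. The model name and option values are illustrative placeholders, and the API key is assumed to be available in the GROQ_API_KEY environment variable.

    import { ChatGroq } from "@langchain/groq";

    // All values below are placeholders; pick a model and sampling settings
    // that match your use case.
    const llm = new ChatGroq({
      apiKey: process.env.GROQ_API_KEY, // falls back to the GROQ_API_KEY env var if omitted
      model: "llama-3.1-8b-instant",    // example model name
      temperature: 0.7,
      maxTokens: 1024,
      maxRetries: 2,
      stopSequences: ["\n\n"],
      streaming: false,
    });

    const response = await llm.invoke("Say hello in one short sentence.");
    console.log(response.content);

The remaining fields above (callbacks, metadata, tags, onFailedAttempt, and so on) are passed in the same options object.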