Optional
audio_end_at
The point in time, in milliseconds, to stop transcribing in your media file
Optional
audio_start_from
The point in time, in milliseconds, to begin transcribing in your media file
audio_url
The URL of the audio or video file to transcribe.
Optional
auto_chapters
Enable Auto Chapters, can be true or false
Optional
auto_highlights
Enable Key Phrases, either true or false
Optional
boost_param
The word boost parameter value
Optional
content_safety
Enable Content Moderation, can be true or false
Optional
content_safety_confidence
The confidence threshold for the Content Moderation model. Values must be between 25 and 100.
Optional
custom_spelling
Customize how words are spelled and formatted using to and from values (see the sketch after this list)
Optional
custom_topics
Enable custom topics, either true or false
Optional
disfluencies?: undefined | boolean
Transcribe Filler Words, like "umm", in your media file; can be true or false
Optional
dual_channel
Enable Dual Channel transcription, can be true or false.
Optional
entity_detection
Enable Entity Detection, can be true or false
Optional
filter_profanity
Filter profanity from the transcribed text, can be true or false
Optional
format_text
Enable Text Formatting, can be true or false
Optional
iab_categories
Enable Topic Detection, can be true or false
Optional
language_code
The language of your audio file. Possible values are found in Supported Languages. The default value is 'en_us'.
Optional
language_detection
Enable Automatic language detection, either true or false.
Optional
punctuate?: undefined | boolean
Enable Automatic Punctuation, can be true or false
Optional
redact_pii
Redact PII from the transcribed text using the Redact PII model, can be true or false
Optional
redact_pii_audio
Generate a copy of the original media file with spoken PII "beeped" out, can be true or false. See PII redaction for more details.
Optional
redact_pii_audio_quality
Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details.
Optional
redact_pii_policies
The list of PII Redaction policies to enable. See PII redaction for more details.
Optional
redact_pii_sub
The replacement logic for detected PII, can be "entity_type" or "hash". See PII redaction for more details.
Optional
sentiment_analysis
Enable Sentiment Analysis, can be true or false
Optional
speaker_labels
Enable Speaker diarization, can be true or false
Optional
speakers_expected
Tells the speaker label model how many speakers it should attempt to identify, up to 10. See Speaker diarization for more details.
Optional
speech_model
The speech model to use for the transcription. When null, the default model is used.
Optional
speech_threshold
Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.
Optional
summarization?: undefined | boolean
Enable Summarization, can be true or false
Optional
summary_model
The model to summarize the transcript
Optional
summary_type
The type of summary
Optional
topics?: undefined | string[]
The list of custom topics
Optional
webhook_auth_header_name
The header name to be sent with the transcript completed or failed webhook requests
Optional
webhook_auth_header_value
The header value to send back with the transcript completed or failed webhook requests for added security
Optional
webhook_url
The URL to which we send webhook requests. We send two different types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready if redact_pii_audio is enabled.
Optional
word_boost
The list of custom vocabulary to boost transcription probability for
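custom_spelling takes an array of rules rather than a boolean. Below is a minimal sketch of that shape, assuming each rule maps a list of "from" spellings to a single "to" replacement; the specific words are illustrative.

// Illustrative custom_spelling rules: each spelling listed in "from"
// is rewritten as the value of "to" in the transcript text.
const customSpelling = [
  { from: ["assembly ai", "assemblyai"], to: "AssemblyAI" },
  { from: ["sequel"], to: "SQL" },
];

The full example request body below shows most of these parameters together.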
{
"speech_model": null,
"language_code": "en_us",
"audio_url": "https://github.com/AssemblyAI-Examples/audio-examples/raw/main/20230607_me_canadian_wildfires.mp3",
"punctuate": true,
"format_text": true,
"dual_channel": true,
"webhook_url": "https://your-webhook-url/path",
"webhook_auth_header_name": "webhook-secret",
"webhook_auth_header_value": "webhook-secret-value",
"auto_highlights": true,
"audio_start_from": 10,
"audio_end_at": 280,
"word_boost": [
"aws",
"azure",
"google cloud"
],
"boost_param": "high",
"filter_profanity": true,
"redact_pii": true,
"redact_pii_audio": true,
"redact_pii_audio_quality": "mp3",
"redact_pii_policies": [
"us_social_security_number",
"credit_card_number"
],
"redact_pii_sub": "hash",
"speaker_labels": true,
"speakers_expected": 2,
"content_safety": true,
"iab_categories": true,
"language_detection": false,
"custom_spelling": [],
"disfluencies": false,
"sentiment_analysis": true,
"auto_chapters": true,
"entity_detection": true,
"speech_threshold": 0.5,
"summarization": true,
"summary_model": "informative",
"summary_type": "bullets",
"custom_topics": true,
"topics": []
}
The example above shows the parameters for creating a transcript.
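These parameters are sent as the JSON body of the create-transcript request. The sketch below assumes the v2 REST endpoint and an API key stored in an environment variable; the parameter names come from the list above, while the id and status fields read from the response are included for illustration.

// A minimal sketch of creating a transcript with a subset of these parameters.
const response = await fetch("https://api.assemblyai.com/v2/transcript", {
  method: "POST",
  headers: {
    authorization: process.env.ASSEMBLYAI_API_KEY ?? "",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    audio_url: "https://github.com/AssemblyAI-Examples/audio-examples/raw/main/20230607_me_canadian_wildfires.mp3",
    punctuate: true,
    format_text: true,
    speaker_labels: true,
    speakers_expected: 2,
    // Completed or failed notifications are POSTed here with the configured header.
    webhook_url: "https://your-webhook-url/path",
    webhook_auth_header_name: "webhook-secret",
    webhook_auth_header_value: "webhook-secret-value",
  }),
});

const transcript = await response.json();
console.log(transcript.id, transcript.status);

If webhook_url is omitted, the created transcript can instead be polled by its id until it completes or fails.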