POST /v1/calls

Headers

authorization
string
required

Your API key for authentication.

encrypted_key
string

A special key for using a BYOT (Bring Your Own Twilio) account. Only required for sending calls from your own Twilio account.

Body

phone_number
string
required

The phone number to call. Country code defaults to +1 (US) if not specified.

For best results, use the E.164 format.

task
string
required

Provide instructions, relevant information, and examples of the ideal conversation flow.

This is your prompt where you are telling the agent what to do.

Recommendations:

  • Include context and a background/persona for the agent like "You are {name}, a customer service agent at {company} calling {name} about {reason}."
  • Phrase instructions like you are speaking to the agent before the call.
  • Any time you tell the agent not to do something, provide an example of what they should do instead.
  • Keep the prompt under 2,000 characters where possible.
  • Want to easily test out exactly how your agent will behave? Try out Agent Testing!
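
For reference, a minimal request using only the required fields might look like this (the phone number and task text below are placeholders):

{
  "phone_number": "+11234567890",
  "task": "You are Jamie, a scheduling assistant at Acme Dental calling to confirm tomorrow's cleaning appointment. Confirm the time, and if it no longer works, offer to reschedule."
}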
pathway_id
string

This is the pathway ID for the pathway you have created on our dev portal. You can access the ID of your pathways by clicking the ‘Copy ID’ button of your pathway here

Note: Certain parameters do not apply when using pathways.

Example Simple Request body:

{
  "phone_number": "+1975934749",
  "pathway_id": "a0f0d4ed-f5f5-4f16-b3f9-22166594d7a7"
}
start_node_id
string

This is the node ID for the node you want the pathway to start from. You can access the ID of your nodes here

Note: This parameter is only used when pathway_id is provided.

Example Simple Request body:

{
  "phone_number": "+1975934749",
  "pathway_id": "a0f0d4ed-f5f5-4f16-b3f9-22166594d7a7",
  "start_node_id": "a0cd5fed-f3f3-5412-1339-13567921d7b8"
}

Agent Parameters (Body)

voice
string
default: "mason"

The voice of the AI agent to use. Accepts any form of voice ID, including custom voice clones and voice presets.

Default voices can be referenced directly by their name instead of an ID.

Usage example: voice: "maya"

Bland Curated voices:

  • maya
  • mason
  • ryan
  • adriana
  • tina
  • matt
  • evelyn

Use the GET /v1/voices endpoint to see a full list of your available voices.
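
For instance, a request selecting a curated voice by name might look like this (the phone number and task are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "voice": "maya"
}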

background_track
string

Select an audio track that you’d like to play in the background during the call. The audio will play continuously when the agent isn’t speaking, and is incorporated into its speech as well.

Use this to provide a more natural, seamless, engaging experience for the conversation. We’ve found this creates a significantly smoother call experience by minimizing the stark differences between total silence and the agent’s speech.

Options:

  • null - Default, will play audible but quiet phone static.
  • office - Office-style soundscape. Includes faint typing, chatter, clicks, and other office sounds.
  • cafe - Cafe-like soundscape. Includes faint talking, clinking, and other cafe sounds.
  • restaurant - Similar to cafe, but more subtle.
  • none - Minimizes background noise.
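
For example, to layer a subtle office soundscape under the conversation (the other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "background_track": "office"
}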
first_sentence
string

Makes your agent say a specific phrase or sentence for its first response.

wait_for_greeting
boolean
default: false

By default, the agent starts talking as soon as the call connects.

When wait_for_greeting is set to true, the agent will wait for the call recipient to speak first before responding.

block_interruptions
boolean
default: false

When set to true, the AI will not respond or process interruptions from the user.
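
These options are often combined. A sketch of a request where the agent waits for the recipient to answer, opens with a fixed greeting, and ignores interruptions might look like this (placeholder values throughout):

{
  "phone_number": "+11234567890",
  "task": "...",
  "first_sentence": "Hi, this is Jamie calling from Acme Dental.",
  "wait_for_greeting": true,
  "block_interruptions": true
}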

interruption_threshold
number
default: 100

Adjusts how patient the AI is when waiting for the user to finish speaking.

Lower values mean the AI will respond more quickly, while higher values mean the AI will wait longer before responding.

Recommended range: 50-200

  • 50: Extremely quick, back and forth conversation
  • 100: Balanced to respond at a natural pace
  • 200: Very patient, allows for long pauses and interruptions. Ideal for collecting detailed information.

Try to start with 100 and make small adjustments in increments of ~10 as needed for your use case.

model
string
default: "enhanced"

Select a model to use for your call.

Options: base, turbo and enhanced.

In nearly all cases, enhanced is the best choice.

temperature
float
default: "0.7"

A value between 0 and 1 that controls the randomness of the LLM. Lower values produce more deterministic outputs, while higher values produce more varied, random outputs.

Example Values: "0.9", "0.3", "0.5"

keywords
string[]
default: "[]"

These words will be boosted in the transcription engine - recommended for proper nouns or words that are frequently mis-transcribed.

For example, if the word “Reece” is frequently transcribed as a homonym like “Reese” you could do this:

  {
    "keywords": ["Reece"]
  }

For stronger keyword boosts, you can place a colon then a boost factor after the word. The default boost factor is 2.

  {
    "keywords": ["Reece:3"]
  }
pronunciation_guide
array

The pronunciation guide is an array of objects that guides the agent on how to say specific words. This is great for situations with complicated terms or names.

[
  {
    "word": "example",
    "pronunciation": "ex-am-ple",
    "case_sensitive": "false",
    "spaced": "false"
  },
  {
    "word": "API",
    "pronunciation": "A P I",
    "case_sensitive": "true",
    "spaced": "true"
  }
]
transfer_phone_number
string

A phone number that the agent can transfer to under specific conditions - such as being asked to speak to a human or supervisor.
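
For example, to let the agent hand off to a human when asked (the numbers and task are placeholders):

{
  "phone_number": "+11234567890",
  "task": "... If the caller asks to speak with a human, transfer the call.",
  "transfer_phone_number": "+19998887777"
}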

transfer_list
object

Give your agent the ability to transfer calls to a set of phone numbers.

Overrides transfer_phone_number if a transfer_list.default is specified.

During the call, the agent transfers to whichever number it selects from the list, falling back to transfer_list.default if none is chosen.

Example usage to route calls to different departments:

{
  "transfer_list": {
      "default": "+12223334444",
      "sales": "+12223334444",
      "support": "+12223334444",
      "billing": "+12223334444"
  }
}
language
string
default: "en-US"

Select a supported language of your choice. Optimizes every part of our API for that language - transcription, speech, and other inner workings.

timezone
string
default: "America/Los_Angeles"

Set the timezone for the call. Handled automatically for calls in the US.

This helps significantly with use cases that rely on appointment setting, scheduling, or behaving differently based on the time of day.

Timezone options are here in the TZ identifier column.
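
For example, to run an appointment-setting call on US Eastern time (the other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "timezone": "America/New_York"
}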

request_data
object

Any JSON you put in here will be visible to the AI agent during the call - and can also be referenced with Prompt Variables.

For example, let’s say in your app you want to programmatically set the name of the person you’re calling. You could set request_data to:

  {
    "phone_number": "+1...",
    "task": "...",
    "first_sentence": "Hello {{name}}! How are you doing today?", // also works in the prompt, tools, etc.
    "request_data": {
      "name": "John Doe",
    }
  }

For further information about how Prompt Variables work, check out the Custom Tools tutorial.

tools
array

Interact with the real world through API calls.

Detailed tutorial here: Custom Tools

dynamic_data
array

Make dynamic requests to external APIs and use the data in your AI’s responses.

Call Parameters (Body)

start_time
string

The time you want the call to start. If you don’t specify a time (or the time is in the past), the call will send immediately.

Set your time in the format YYYY-MM-DD HH:MM:SS -HH:MM (ex. 2021-01-01 12:00:00 -05:00).

The timezone is optional, and defaults to UTC if not specified.

Note: Scheduled calls can be cancelled with the POST /v1/calls/:call_id/stop endpoint.
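
For example, to schedule a call for a specific afternoon slot in Eastern Time (the date and other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "start_time": "2025-03-01 15:30:00 -05:00"
}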

voicemail_message
string

When the AI encounters a voicemail, it will leave this message after the beep and then immediately end the call.

Warning: If amd is set to true or voicemail_action is set to ignore, then this will still work for voicemails, but it will not hang up for IVR systems.

voicemail_action
enum
default: "hangup"

This is processed separately from the AI’s decision making, and overrides it.

Options:

  • hangup
  • leave_message
  • ignore

Examples:

  • Call is answered by a voicemail (specifically with a beep or tone):

    • If voicemail_message is set, that message will be left and then the call will end.
    • Otherwise, the call immediately ends (regardless of amd)
  • Call is answered by an IVR system or phone tree:

    • If amd is set to true, the AI will navigate the system and continue as normal.
    • If voicemail_action is set to ignore, the AI will ignore the IVR and continue as normal.
    • Otherwise, if voicemail_message is set then it’ll leave that message and end the call.
    • Finally, if none of those conditions are met, the call will end immediately.

Note: If voicemail_message is set, then the AI will leave the message regardless of the voicemail_action.
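
Putting these together, a sketch of a request that leaves a message when the call hits voicemail might look like this (placeholder values throughout):

{
  "phone_number": "+11234567890",
  "task": "...",
  "voicemail_action": "leave_message",
  "voicemail_message": "Hi, this is Jamie from Acme Dental. Sorry we missed you - we'll try again later."
}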

retry
object

If the call goes to voicemail, you can configure it to retry after a specified delay. You can also override voicemail_action and voicemail_message in the retry object for the retried call.

Takes in the following parameters:

  • wait (integer): The delay in seconds before the call is retried.
  • voicemail_action (enum): The action to take when the call goes to voicemail. Options: hangup, leave_message, ignore.
  • voicemail_message (string): The message to leave when the call goes to voicemail.

Example:

{
    "retry": {
        "wait": 10,
        "voicemail_action": "leave_message",
        "voicemail_message": "Hello, this is a test message."
    }
}
max_duration
integer
default: "30"

When the call starts, a timer is set for max_duration minutes. At the end of that timer, if the call is still active, it will be automatically ended.

Example Values: 20, 2

record
boolean
default: "false"

To record your phone call, set record to true. When your call completes, you can access the recording through the recording_url field in the call details or your webhook.
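
For example, to cap the call at 20 minutes and record it (the other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "max_duration": 20,
  "record": true
}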

from
string

Specify a phone number to call from that you own or have uploaded from your Twilio account. Country code is required, spaces or parentheses must be excluded.

By default, calls are initiated from a separate pool of numbers owned by Bland.

webhook
string

When the call ends, we’ll send the call details in a POST request to the URL you specify here.

The request body will match the response from the GET /v1/calls/:call_id endpoint.

webhook_events
string[]

Specify which events you want to stream to the webhook, during the call.

Options:

  • queue
  • call
  • latency
  • webhook
  • tool
  • dynamic_data

Example Payload:

  {
    "message": "LLM: 411ms",
    "call_id": "0fb3c518-e941-48fd-a32c-67d59c541336",
    "category": "latency",
    "log_level": "performance"
  }
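
To enable streaming, a request might set the webhook along with the events to stream, for example (the URL and other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "webhook": "https://example.com/bland-webhook",
  "webhook_events": ["latency", "tool"]
}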
metadata
object

Add any additional information you want to associate with the call. This can be useful for tracking or categorizing calls.

Anything that you put here will be returned in your webhook or in the call details under metadata.

Example:

{
  "metadata": {
    "campaign_id": "1234",
    "source": "web"
  }
}
summary_prompt
string

At the end of each call, a summary is generated based on the transcript - you can use this field to add extra instructions and context for how it should be summarized.

For example: "Summarize the call in French instead of English."

analysis_prompt
string

Guides the output and provides additional instructions and clarifications for the analysis_schema.

analysis_schema
object

When the call ends, the transcript and call details will be analyzed by the AI.

Define a JSON schema for how you want to get information about the call - information like email addresses, names, appointment times or any other type of custom data.

In the webhook response or whenever you retrieve call data later, you’ll get the data you defined back under analysis.

For example, if you wanted to retrieve this information from the call:

{
  "analysis_schema": {
    "email_address": "email",
    "first_name": "string",
    "last_name": "string",
    "wants_to_book_appointment": "boolean",
    "appointment_time": "YYYY-MM-DD HH:MM:SS"
  }
}

You would get it filled out like this in your webhook once the call completes:

{
  "analysis": {
    "email_address": "johndoe@gmail.com",
    "first_name": "John",
    "last_name": "Doe",
    "wants_to_book_appointment": true,
    "appointment_time": "2024-01-01 12:00:00"
  }
}
answered_by_enabled
boolean
default: false

If this is set to true, we process the audio from the start of the call to determine if it was answered by a human or a voicemail.

In the call details or webhook response, you’ll see the answered_by field with the value human, unknown or voicemail.

Notes for accuracy:

  • When answered_by is voicemail or human, that is nearly 100% accurate.
  • When it is unknown, try using text analysis by adding answered_by to your analysis_schema.
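
For example, a sketch that enables audio-based detection and adds the text-analysis fallback might look like this (the schema type shown for answered_by is an assumption; other fields are placeholders):

{
  "phone_number": "+11234567890",
  "task": "...",
  "answered_by_enabled": true,
  "analysis_schema": {
    "answered_by": "string"
  }
}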

Response

status
string

Can be success or error.

call_id
string

A unique identifier for the call (present only if status is success).

batch_id
string

The batch ID of the call (present only if status is success).

message
string

A message explaining the status of the call.

errors
array

For validation errors, a detailed list of each field with an error and its error message.

Example:

{
    "status": "error",
    "message": "Invalid parameters",
    "errors": [
        "Missing required parameter: phone_number.",
        "Missing required parameter: task.",
        "Phone number must be a string or number.",
        "Task must be a string."
    ]
}