VAPI Integration

There is a ready-made TTS endpoint for use with VAPI's custom TTS feature.

1. Add your Respeecher API key to VAPI

Go to https://dashboard.vapi.ai/settings/integrations/custom-credential and create a new custom credential with the following values:

  • Authentication Type: Bearer Token
  • Credential Name: Arbitrary, for example Respeecher API
  • Token: <Your API Key>
  • Header Name: X-Api-Key
  • Include Bearer Prefix: No
2. Prepare a JSON request body

Create a text file vapi-custom-tts.json with the following content, substituting <CREDENTIAL_ID> with the ID of the custom credential from the previous step (use the ID, not the name; the ID is displayed after the credential is created):

{
  "voice": {
    "provider": "custom-voice",
    "server": {
      "url": "https://api.respeecher.com/v1/public/tts/en-rt/tts/vapi",
      "credentialId": "<CREDENTIAL_ID>",
      "timeoutSeconds": 30,
      "headers": {
        "X-Voice-JSON": "{\"id\": \"vikram\"}"
      }
    }
  }
}
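
Before sending the file to VAPI, it can help to confirm that it parses as valid JSON, for example with jq (assuming it is installed; python -m json.tool works just as well):

$ jq . vapi-custom-tts.json
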
3. Make a request to VAPI

Run the following command, substituting <ASSISTANT_ID> with the ID of your assistant and <VAPI_API_KEY> with your private VAPI API key from https://dashboard.vapi.ai/org/api-keys (not the Respeecher API key):

$ curl -X PATCH https://api.vapi.ai/assistant/<ASSISTANT_ID> \
    -H "Authorization: Bearer <VAPI_API_KEY>" \
    -H "Content-Type: application/json" \
    -d "$(cat vapi-custom-tts.json)"
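To confirm the patch was applied, you can fetch the assistant back and inspect its voice configuration. The sketch below uses VAPI's GET assistant endpoint and assumes jq is available for filtering the response:

$ curl https://api.vapi.ai/assistant/<ASSISTANT_ID> \
    -H "Authorization: Bearer <VAPI_API_KEY>" | jq .voice
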
4. Test and customize

At this point your VAPI assistant should have a low-latency Respeecher TTS voice. Once the integration works, you can repeat steps 2 and 3 with another model (for example, substituting en-rt with ua-rt; see Models & Languages for more details) or another voice (the value of X-Voice-JSON is a JSON string describing a voice object in the same format used by the other TTS endpoints).
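
As a sketch of such a variation, the commands below write a second request body that points at the Ukrainian real-time model. The file name vapi-custom-tts-ua.json and the <VOICE_ID> placeholder are illustrative; substitute a voice ID that is actually available for that model:

$ # <VOICE_ID> below is a placeholder; replace it with a real voice ID
$ cat > vapi-custom-tts-ua.json <<'EOF'
{
  "voice": {
    "provider": "custom-voice",
    "server": {
      "url": "https://api.respeecher.com/v1/public/tts/ua-rt/tts/vapi",
      "credentialId": "<CREDENTIAL_ID>",
      "timeoutSeconds": 30,
      "headers": {
        "X-Voice-JSON": "{\"id\": \"<VOICE_ID>\"}"
      }
    }
  }
}
EOF

Then repeat the PATCH request from step 3 with this file in place of vapi-custom-tts.json.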

The assistant's firstMessageMode value assistant-speaks-first may not work correctly with custom TTS. In that case, try assistant-speaks-first-with-model-generated-message or assistant-waits-for-user instead.