Response format

A response can contain the following attributes:

text (string, required)
The text displayed to the user. If SSML is not provided in the same response, voice channels will also speak the text aloud as speech.

ssml (string, optional)
An SSML string. Voice channels will speak the SSML contents aloud as speech.

items (array, optional)
Lists of UI elements that users can interact with. They will be displayed as a card, carousel, or list depending on the channel.

actions (array, optional)
Buttons for calls to action and for driving conversions. On platforms without button support, they will be rendered as links or text.

override (dictionary, optional)
Some features are channel specific or not yet implemented by Delight. You can use override to return a channel-specific response.
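
To make the shape above easier to consume from typed code, here is a minimal TypeScript sketch of the response object. It is an illustration only: the DelightResponse name and the loose object[] element types are assumptions, and the item, action, and override shapes are described in their own sections below.

// A minimal sketch of the response shape described above, for illustration
// only. The interface name and the loose element types are assumptions.
interface DelightResponse {
  text: string;                        // required: printed on screen (and spoken if no ssml)
  ssml?: string;                       // optional: SSML spoken on voice channels
  items?: object[];                    // optional: interactive UI elements (see "Items")
  actions?: object[];                  // optional: call-to-action buttons (see "Actions")
  override?: Record<string, unknown>;  // optional: channel-specific raw responses (see "Override")
}

// Example value matching the sketch:
const response: DelightResponse = {
  text: "This is the text response.",
  ssml: "<speak>This is the spoken response.</speak>",
};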

Text

Text represents the textual content you wish to communicate to the user. It is a string and a required attribute.

On messaging channels, it will be printed as text on screen. On voice channels, it will be spoken aloud as speech.

Text Example:
{
  "text": "This is the text response."
}

SSML

When returning a response, you can use a subset of the Speech Synthesis Markup Language (SSML) in your response. By using SSML, you can make the conversation response seem more like natural speech by adding cues for pronunciation and pauses. You can also play audio files in SSML.

The textual content inside the SSML can differ from text: you may want a simple on-screen text experience, while the output speech provides a more detailed, read-aloud response.

When you use both the text and ssml fields, text will be printed on screen while the output speech will come from ssml.

SSML Example:
{
  "text": "This is the text response.",
  "ssml": "<speak>How do you spell Delight? It is <say-as interpret-as=\"verbatim\">delight</say-as></speak>"
}

Items

Items is an array of UI elements your users can interact with. They are rendered as cards, carousels, lists, or similar UI elements depending on the channel.

Because different channels support different UI elements, our system adaptively selects the most suitable element on each channel to render your list. In general, our selection heuristics are as follows:

  • If items contains only 1 object, it will be rendered as a card or the closest equivalent, depending on the channel.
Card Example:
{
  "text": "This is the text response",
  "ssml": "<speak>Assistant will speak according to SSML</speak>", // optional
  "items": [
    {
      "title": "This is the card title",
      "description": "Description",
      "image": {
        "url": "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png", // required
        "placeholder": "Image alternate text" // optional
      },
      "actions": [
        {
          "title": "Show me the map",
          "url": "https://maps.google.com"
        }
      ]
    }
  ]
}
  • If items contains more than 1 object but fewer than 10 objects (the exact limit varies by channel), it will be rendered as a carousel or the closest equivalent.
  • If items contains 10 or more objects (the exact limit varies by channel), it will be rendered as a list or the closest equivalent.
Carousel / List Example:
{
  "text": "This is the text response",
  "ssml": "<speak>Assistant will speak according to SSML</speak>", // optional
  "items": [
    {
      "title": "This is #1 card title",
      "description": "Description #1",
      "image": {
        "url": "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png", // required
        "placeholder": "Image alternate text" // optional
      },
      "actions": [
        {
          "title": "Show me the map #1",
          "url": "https://maps.google.com"
        }
      ]
    },
    {
      "title": "This is #2 card title",
      "description": "Description #2",
      "image": {
        "url": "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png", // required
        "placeholder": "Image alternate text" // optional
      },
      "actions": [
        {
          "title": "Show me the map #2",
          "url": "https://maps.google.com"
        }
      ]
    }
  ]
}

For platforms that do not support cards, carousels, or lists, we render items as text or speech. Users can then say or message the item title to select the item.
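
As a purely illustrative TypeScript sketch (not Delight's actual rendering code), such a text fallback could look roughly like this, with the item titles doubling as the accepted user replies:

// Hypothetical sketch of flattening items into plain text on a channel
// without card/carousel/list support. For illustration only.
interface Item {
  title: string;
  description?: string;
}

function itemsToText(prompt: string, items: Item[]): string {
  const lines = items.map((item, i) => `${i + 1}. ${item.title}`);
  return [prompt, ...lines, "Say or type the title of the item you want."].join("\n");
}

// The carousel example above would read roughly as:
//   This is the text response
//   1. This is #1 card title
//   2. This is #2 card title
//   Say or type the title of the item you want.
console.log(
  itemsToText("This is the text response", [
    { title: "This is #1 card title" },
    { title: "This is #2 card title" },
  ])
);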

Item Object Schema

Each object in items has the following schema:
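
Inferred from the card and carousel examples above, an item object looks roughly like the following TypeScript sketch; the optionality of fields other than image.url (which the examples mark as required) is an assumption.

// Item object shape inferred from the examples above. The type names are
// illustrative, not part of the official API.
interface ItemImage {
  url: string;           // required when an image is provided
  placeholder?: string;  // optional image alternate text
}

interface ItemAction {
  title: string;  // label shown on the button
  url?: string;   // optional link target
}

interface Item {
  title: string;           // card title
  description?: string;    // supporting text shown under the title
  image?: ItemImage;       // optional image
  actions?: ItemAction[];  // optional buttons attached to this item
}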

Actions

Actions is an array of action objects. An action can serve as:

  • A text hint for users (what to say or type)
  • A call-to-action button for users to click

Actions help guide users on what their next interaction could be, and drive the conversation forward.
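
Summarised as a TypeScript sketch (the type names are illustrative assumptions), the three action shapes documented below are:

// The three action shapes shown in the button examples below, expressed as
// a TypeScript union for illustration only.
type TextAction     = { title: string };                   // text hint / quick reply
type UrlAction      = { title: string; url: string };      // opens the link when clicked
type PostbackAction = { title: string; payload: string };  // channel sends back a postback event

type Action = TextAction | UrlAction | PostbackAction;

// Example: the three kinds mixed in one actions array.
const actions: Action[] = [
  { title: "Option 1" },
  { title: "Open google", url: "https://google.com" },
  { title: "Postback button", payload: "THIS_IS_PAYLOAD_KEY" },
];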

Text button

Text button full response
{
  "text": "What do you want?",
  "actions": [
    {
      "title": "Option 1"
    },
    {
      "title": "Option 2"
    }
  ]
}

URL button

Users will be redirected to the link when they click the button.

URL button full response
{
  "text": "Just put some vinegar on it",
  "actions": [
    {
      "title": "Open google",
      "url": "https://google.com"
    }
  ]
}

Postback button

The channel will send back a postback event when the user clicks the button.

Postback button full response
{
  "text": "Just put some vinegar on it",
  "actions": [
    {
      "title": "Postback button",
      "payload": "THIS_IS_PAYLOAD_KEY"
    }
  ]
}

These buttons can be used together, as shown below:

Action buttons used together
{
  "text": "What do you want?",
  "actions": [
    {
      "title": "Option 1"
    },
    {
      "title": "Open google",
      "url": "https://google.com"
    },
    {
      "title": "Postback button",
      "payload": "THIS_IS_PAYLOAD_KEY"
    }
  ]
}

Override

The override field allows developers to customize the response in two scenarios:

1. Different responses for different channels

""
{
  "text": "This is the Normal Response",
  "override": {
    "facebook": {
      "raw": { "text": "hello facebook user" }
    },
    "alexa": {
      "raw": {
        "version": "1.0",
        "response": {
          "outputSpeech": {
            "type": "PlainText",
            "text": "This is a message from alexa raw mode"
          },
          "shouldEndSession": false
        }
      }
    },
    "bixby": {
      "raw": { "text": "This is a message from bixby raw mode" }
    },
    "telegram": {
      "raw": { "text": "This is a message from telegram raw mode" }
    },
    "line": {
      "raw": [
        {
          "quickReply": {
            "items": [
              {
                "action": {
                  "label": "Option 1",
                  "text": "Option 1",
                  "type": "message"
                },
                "type": "action"
              },
              {
                "action": {
                  "label": "Link Button",
                  "text": "Link Button",
                  "type": "message"
                },
                "type": "action"
              },
              {
                "action": {
                  "label": "Postback button",
                  "text": "Postback button",
                  "type": "message"
                },
                "type": "action"
              }
            ]
          },
          "text": "This is a message from line raw mode",
          "type": "text"
        }
      ] // LINE supports sending multiple messages in one request, so the raw value is wrapped in an array
    },
    "wechat": {
      "raw": {
        "msgtype": "text",
        "text": {
          "content": "This is a message from wechat raw mode"
        }
      }
    },
    "googleaction": {
      "raw": {
        "expectUserResponse": true,
        "expectedInputs": [
          {
            "possibleIntents": [
              {
                "intent": "actions.intent.TEXT"
              }
            ],
            "inputPrompt": {
              "richInitialPrompt": {
                "items": [
                  {
                    "simpleResponse": {
                      "textToSpeech": "Here's an example of override google action response. Which type of response would you like to see next?"
                    }
                  }
                ]
              }
            }
          }
        ]
      }
    }
  }
}

2. Work with a channel-specific feature that Delight does not support yet

For example, Delight does not currently support sending attachments to Facebook. You can find the raw response format in the Facebook documentation and send it with the override field.

In this example, we are going to send an attachment from a URL. The response format is documented at https://developers.facebook.com/docs/messenger-platform/send-messages#url.

code snippet from documentation
curl -X POST -H "Content-Type: application/json" -d '{
  "recipient": {
    "id": "1254459154682919"
  },
  "message": {
    "attachment": {
      "type": "image",
      "payload": {
        "url": "http://www.messenger-rocks.com/image.jpg",
        "is_reusable": true
      }
    }
  }
}' "https://graph.facebook.com/v10.0/me/messages?access_token=<PAGE_ACCESS_TOKEN>"

All we need is the content within the message field.

So the Delight response will be like this:

Delight response with a Facebook attachment from a URL
{
  "text": "This is the Normal Response",
  "override": {
    "facebook": {
      "raw": {
        "attachment": {
          "type": "image",
          "payload": {
            "url": "http://www.messenger-rocks.com/image.jpg",
            "is_reusable": true
          }
        }
      }
    }
  }
}