The Oliver Developer Hub

Using Oliver, our emotionAI API, developers can directly benefit from this growing emotional intelligence, measure emotions and behaviors in conversations, and use our continuously evolving, robust analytics in their own applications.

Whether that involves developing a VA for a business, an interactive game for children, a voice-controlled speaker for the home, or a social robot designed to assist the elderly, incorporating emotion-aware spoken language understanding will supercharge your users’ experience.

Get Started    API Reference

Retrieve results

Results at the file/call level

When a processing request has completed successfully, you can fetch the results. The results are returned as a JSON object and are described in more detail on this page. You can also find the JSON schema of the result on another page. The endpoint to call is:

The GET results method requires the client ID (long: {cid}) and the process ID (long: {pid}) to be passed as path parameters. On invocation, it returns the result of the processing in JSON format. If the specified cid or pid is not found, or the status of the job is not set to completed, a corresponding error response is returned. For example, calling the GET results method with an invalid pid, or for a process that is not in completed status, triggers one of the following error responses:

{"code":1,"type":"error","message":"Process with pid 35235 not found!"}

{"code":1,"type":"error","message":"Cannot send results for Process with pid 11089. The reason is: Waiting for result"}
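A minimal sketch of calling the method and handling these error responses. The exact URL path is an assumption for illustration (the real endpoint is given in the API Reference); only the `cid`/`pid` path parameters and the error-body shape shown above come from this page.

```python
# Sketch only: BASE_URL and the path layout are assumptions, not the documented endpoint.
BASE_URL = "https://api.example.com"


def results_url(cid: int, pid: int) -> str:
    """Build the GET results URL from the client ID and process ID path parameters."""
    return f"{BASE_URL}/clients/{cid}/processes/{pid}/results"


def check_response(body: dict) -> dict:
    """Raise if the parsed JSON body is an error response; otherwise return it.

    Error bodies look like:
      {"code": 1, "type": "error", "message": "Process with pid 35235 not found!"}
    """
    if body.get("type") == "error":
        raise RuntimeError(f"code {body['code']}: {body['message']}")
    return body
```

A "Waiting for result" error simply means the job has not reached completed status yet, so a client would typically retry after a delay rather than treat it as fatal.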

WARNING: When it is not possible to estimate one or more of the outputs, e.g., due to a very short or corrupted audio file, the associated JSON field will not be returned. These undefined fields should be excluded when estimating call-level statistics in your analytics module.

The GET results API call produces responses only with the media type application/json, setting the Content-Type response header accordingly.

There are four types of outputs:
  1. basic metrics that contain call voice activity detection (VAD) metrics, agent/customer demographics (age, gender) and language information (English vs. Spanish),
  2. core metrics that contain diarization and behavioral data on emotional strength and positivity (continuous emotions), discrete emotions (happy, angry, sad, frustrated), and engagement/empathy/politeness for both the agent and customer,
  3. KPI metrics that contain higher-level information such as agent performance score, customer satisfaction, propensity to buy, call success, resolution or escalation, and
  4. events that contain the temporal boundaries for a number of behavioral and call-flow events detected during the call, e.g., when the agent or the customer were angry, when they were disengaged, or when a beep or a ring were observed during the call.
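As an example of consuming the fourth output type, the sketch below collects the temporal boundaries of one behavioral event. The event field names (`events`, `name`, `start`, `end`) and the event label `agent_angry` are illustrative assumptions; the real names are defined in the JSON schema page.

```python
def agent_anger_spans(result: dict) -> list:
    """Collect (start, end) boundaries for agent-anger events, if any.

    Assumes an illustrative shape like:
      {"events": [{"name": "agent_angry", "start": 1.5, "end": 4.0}, ...]}
    Real field names come from the documented JSON schema.
    """
    return [
        (e["start"], e["end"])
        for e in result.get("events", [])
        if e.get("name") == "agent_angry"
    ]
```

The same pattern applies to other events, e.g., disengagement or a beep/ring detected during the call.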
