Introduction

Welcome to Hume AI

Hume AI builds AI models that enable technology to communicate with empathy and learn to make people happy.

So much of human communication, whether in person or through text, audio, or video, is shaped by emotional expression. These cues allow us to attend to each other's well-being. Our platform provides the APIs needed to ensure that technology, too, is guided by empathy and the pursuit of human well-being.

Empathic Voice Interface API

Hume's Empathic Voice Interface (EVI) is the world's first emotionally intelligent voice AI. It is the only API that measures nuanced vocal modulations and responds to them using an empathic large language model (eLLM), which guides language and speech generation. Trained on millions of human interactions, our eLLM unites language modeling and text-to-speech with better EQ, prosody, end-of-turn detection, interruptibility, and alignment.
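For a sense of how an integration might look, here is a minimal Python sketch of a client streaming one chunk of audio to EVI over a WebSocket. The endpoint URL, authentication scheme, and message shapes are illustrative assumptions for this example, not confirmed API details.

```python
# Minimal sketch of streaming audio to EVI over a WebSocket.
# The endpoint URL, auth scheme, and message shape below are
# illustrative assumptions, not confirmed API details.
import asyncio
import base64
import json

import websockets  # pip install websockets

EVI_URL = "wss://api.hume.ai/v0/evi/chat"  # assumed endpoint


async def chat(api_key: str, audio_chunk: bytes) -> None:
    # Authenticate via a query parameter (assumed scheme).
    async with websockets.connect(f"{EVI_URL}?api_key={api_key}") as ws:
        # Send one chunk of user audio, base64-encoded (assumed format).
        await ws.send(json.dumps({
            "type": "audio_input",
            "data": base64.b64encode(audio_chunk).decode("ascii"),
        }))
        # EVI responds with interleaved messages: expression measures,
        # generated text, and synthesized speech.
        reply = json.loads(await ws.recv())
        print(reply.get("type"), reply)


asyncio.run(chat("YOUR_HUME_API_KEY", b"\x00" * 3200))
```

In a real application the client would stream microphone audio continuously and play back audio replies as they arrive; the single send/receive round trip above is only to show the shape of the exchange.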

EVI will be generally available in April 2024. Sign up to be notified when public access opens.

Expression Measurement API

Hume's state-of-the-art expression measurement models for the voice, face, and language are built on more than ten years of research and advances in semantic space theory pioneered by Alan Cowen. These models capture hundreds of dimensions of human expression in audio, video, and images.
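As a sketch, a batch measurement job might be submitted over REST as shown below. The endpoint path, auth header, request payload, and response field are assumptions made for illustration.

```python
# Minimal sketch of submitting a batch expression-measurement job.
# Endpoint path, header name, payload, and response field are assumed.
import requests  # pip install requests

API_KEY = "YOUR_HUME_API_KEY"
BASE_URL = "https://api.hume.ai/v0/batch/jobs"  # assumed endpoint


def start_job(media_url: str) -> str:
    """Submit a media file by URL for face, voice, and language measurement."""
    response = requests.post(
        BASE_URL,
        headers={"X-Hume-Api-Key": API_KEY},  # assumed auth header
        json={
            "urls": [media_url],
            # Request one model per modality (assumed config shape).
            "models": {"face": {}, "prosody": {}, "language": {}},
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["job_id"]  # assumed response field


job_id = start_job("https://example.com/interview.mp4")
print("Started job:", job_id)
```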

Custom Models API

Our Custom Models API builds on our expression measurement models and state-of-the-art eLLMs to bring custom insights to your application. Developed using transfer learning from our expression measurement models and eLLMs, the Custom Models API can predict almost any outcome, whether it's toxicity, depressed mood, driver drowsiness, or any other metric important to your users, more accurately than models based on language alone. The sketch after this paragraph illustrates the underlying idea.
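To make the transfer-learning idea concrete, the sketch below treats expression measurement scores as input features for a downstream classifier. The data is synthetic and the feature dimensions are hypothetical; it only illustrates the concept, not Hume's actual training pipeline.

```python
# Conceptual sketch of the transfer-learning idea behind custom models:
# expression-measurement scores become features for a downstream
# classifier. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row holds one sample's expression scores, e.g. dimensions
# such as amusement, distress, or calmness (synthetic stand-ins here).
X = rng.random((200, 48))
# Pretend labels mark an outcome of interest, e.g. "depressed mood".
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Because the expression features already encode rich signals about vocal, facial, and linguistic behavior, even a simple downstream model can pick up outcomes that text features alone would miss.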

API Reference

For detailed documentation of every endpoint, parameter, and response, see the API Reference.

Get support

If you have questions or run into challenges, we're here to help!