Our goal is to produce software that can recognize emotion from a human voice. To build such software, we need:
- databases of human emotional speech recordings, annotated for the intended emotions;
- an analysis of these recordings in terms of acoustic (or auditory) features;
- an algorithm that relates these features to the emotions.
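The three ingredients can be sketched in miniature. The following is only an illustration, not the system discussed here: the tiny synthetic "database", the two acoustic features (RMS energy and zero-crossing rate), and the nearest-centroid rule are all simplifying assumptions chosen to fit in a few lines.

```python
import math

def features(samples):
    """Two simple acoustic features: RMS energy and zero-crossing rate."""
    n = len(samples)
    energy = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return (energy, zcr)

def centroids(database):
    """Average feature vector per annotated emotion label."""
    sums, counts = {}, {}
    for samples, label in database:
        f = features(samples)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(samples, cents):
    """Assign the emotion whose centroid is nearest in feature space."""
    f = features(samples)
    return min(cents,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, cents[lab])))

# Synthetic "recordings": loud, fast oscillation stands in for "angry";
# quiet, slow oscillation for "sad".
def tone(amplitude, freq, n=1000):
    return [amplitude * math.sin(freq * t) for t in range(n)]

database = [(tone(0.9, 1.0), "angry"), (tone(0.2, 0.1), "sad")]
cents = centroids(database)
print(classify(tone(0.8, 0.9), cents))  # a loud, fast probe -> "angry"
```

A real system replaces each piece with something far richer (annotated corpora of human speech, hundreds of prosodic and spectral features, a trained statistical model), but the division of labor is the same as in the three requirements above.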