
PROPOSED FRAMEWORK

In this paper we propose an emotion detection framework that uses the Microsoft Emotion API, which allows us to build more personalized apps with Microsoft's cutting-edge, cloud-based emotion recognition algorithm and to play music accordingly. The Emotion API takes a facial expression in an image as input and, using the Face API, returns the confidence across a set of emotions for each face in the image, as well as a bounding box for each face. The emotions detected include anger, fear, happiness, neutral, and sadness. These emotions are communicated cross-culturally and universally via the same basic facial expressions, which are what the Emotion API identifies.
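
As an illustration, a call to the API could look like the following Python sketch. The endpoint region and the subscription key are placeholders that depend on the Azure subscription; they are not values from this paper.

import requests

# Placeholders: the region ("westus") and the key depend on the subscription.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def recognize_emotions(image_url):
    """Send an image URL to the Emotion API and return the face entries."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(ENDPOINT, headers=headers, json={"url": image_url})
    response.raise_for_status()
    # A successful call returns one entry per detected face;
    # an empty array means no faces were detected.
    return response.json()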

Basic: If a user has already called the Face API, they can submit the face rectangle as an input and use the basic tier (see the sketch following this list).

Standard: If a user does not submit a face rectangle, they should use the standard tier, as in the sketch above.
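
For the basic tier, the rectangle already returned by the Face API is forwarded with the request. The sketch below assumes the rectangle is passed as a "left,top,width,height" query parameter and reuses ENDPOINT and SUBSCRIPTION_KEY from the previous sketch.

def recognize_with_rectangle(image_url, rect):
    """Basic-tier sketch: reuse a face rectangle obtained from the Face API.

    rect is assumed to be a dict with keys left, top, width and height.
    """
    params = {"faceRectangles": "{left},{top},{width},{height}".format(**rect)}
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(ENDPOINT, headers=headers,
                             params=params, json={"url": image_url})
    response.raise_for_status()
    return response.json()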

The music library in the database contains music for every type of facial expression detected by this framework. When a person's expression or mood is detected, the framework automatically plays music matching that expression, as sketched below.
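
A minimal sketch of this selection step follows; the emotion-to-track mapping and the file names are hypothetical and serve only to illustrate the idea.

# Hypothetical mapping from detected emotions to tracks in the music
# library; the track names are illustrative only.
EMOTION_PLAYLIST = {
    "anger": "calm_track.mp3",
    "fear": "soothing_track.mp3",
    "happiness": "upbeat_track.mp3",
    "neutral": "ambient_track.mp3",
    "sadness": "uplifting_track.mp3",
}

def track_for(emotion):
    """Return the track mapped to the detected emotion."""
    return EMOTION_PLAYLIST.get(emotion, "ambient_track.mp3")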

CONCLUSION

The framework presented in this paper recognizes the emotions expressed by one or more people in an image and returns a bounding box for each face. The emotions detected include anger, fear, happiness, neutral, and sadness. A successful call to the Emotion API returns an array of face entries and their associated emotion scores, ranked by face rectangle size in descending order. An empty response indicates that no faces were detected. An emotion entry contains the following fields:

faceRectangle - Rectangle location of the face in the image.

scores - Emotion scores for the face.
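
For illustration, a successful response has roughly the following shape; the numbers here are fabricated for the example.

# Illustrative face entry from a successful Emotion API call;
# all values are made up for demonstration.
sample_response = [
    {
        "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
        "scores": {
            "anger": 0.001,
            "fear": 0.002,
            "happiness": 0.950,
            "neutral": 0.030,
            "sadness": 0.017,
        },
    }
]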


When interpreting results from the Emotion API, the detected emotion should be taken to be the one with the highest score, since the scores are normalized to sum to one.
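
In code, this interpretation step reduces to taking the argmax over the score dictionary, as in this sketch:

def dominant_emotion(face_entry):
    """Return the emotion with the highest score for one face entry.

    Because the scores are normalized to sum to one, the maximum
    score identifies the detected emotion.
    """
    return max(face_entry["scores"], key=face_entry["scores"].get)

# With sample_response above, dominant_emotion(sample_response[0])
# returns "happiness".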
