The project is about the detection of the emotions conveyed by a speaker while talking [2]. For example, speech produced in a state of fear, anger, or joy tends to be loud and fast, with a higher and wider pitch range, whereas emotions such as sadness or tiredness produce slow, low-pitched speech. Detecting human emotions through voice- and speech-pattern analysis has many applications, such as improving human-machine interaction. In recent years, voice-driven conversational technologies have become widespread, including voice-first devices such as Amazon Echo (Alexa), Google Assistant, Cortana, and HomePod (Siri); yet these devices still cannot recognize the speaker's emotional state. This project is an approach to classify an audio sample by the emotion it conveys.

I am looking to write a paper based on my MTech thesis, which compares model accuracy for automatic sentiment analysis of speech using 1D-CNN and 2D-CNN models trained with and without a data-augmentation method. The paper will present a brief literature review (which I will provide) along with my analysis and results, to share with the academic community the better results that can be achieved with a 2D CNN built on VGG-19 with ImageNet weights.
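
For concreteness, here is a minimal sketch of such a 2D-CNN pipeline in Python, assuming librosa and Keras/TensorFlow are available: waveform-level augmentation (random pitch shift and time stretch), conversion of the audio to a log-mel-spectrogram "image", and a VGG-19 backbone initialized with ImageNet weights. The file name, number of emotion classes, and augmentation ranges are illustrative assumptions, not details taken from the thesis.

import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # assumption: seven emotion labels, as in common emotion corpora

def augment_waveform(y, sr):
    """Waveform-level augmentation: random pitch shift and time stretch."""
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=np.random.uniform(-2, 2))
    y = librosa.effects.time_stretch(y, rate=np.random.uniform(0.9, 1.1))
    return y

def to_mel_image(y, sr, size=(224, 224)):
    """Convert a waveform to a 3-channel log-mel 'image' for VGG-19."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Normalize to [0, 1] and resize to the VGG-19 input resolution.
    mel_db = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    img = tf.image.resize(mel_db[..., np.newaxis], size)
    return tf.repeat(img, 3, axis=-1)  # replicate the single channel to 3

def build_model():
    """VGG-19 backbone with ImageNet weights, frozen, plus a small classifier head."""
    base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    y, sr = librosa.load("sample.wav", sr=22050)  # placeholder file name
    x = to_mel_image(augment_waveform(y, sr), sr)
    model = build_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    print(model.predict(tf.expand_dims(x, 0)).shape)  # (1, NUM_CLASSES)

In practice, each training clip would be augmented several times to enlarge the dataset, and the frozen VGG-19 base could later be partially unfrozen for fine-tuning once the classifier head has converged.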
