International Journal of Science and Research (IJSR)


ISSN: 2319-7064


M.Tech / M.E / PhD Thesis | Electronics & Communication Engineering | India | Volume 4 Issue 7, July 2015 | Rating: 6.9 / 10

Distributed Speech Recognition HMM Modelling

Gauri A. Deshpande | Pallavi S. Deshpande

Abstract: Speech recognition performed over a network is referred to as Distributed Speech Recognition (DSR). It enables access to the full range of computer services and communication systems without the need to type or be near a keyboard, using a client-server architecture to distribute the processing. Modelling the acoustic parameters of speech with Hidden Markov Models (HMMs) delivers price/performance levels that are acceptable, practicable and affordable. As just one example of a spectrum of possible new applications, a user could dictate meeting notes directly into an enhanced cellular handset immediately after a meeting, and the draft text would already be on a personal computer, ready for editing, upon returning to the office (or hotel room, or home). The performance of speech recognition systems receiving speech transmitted over mobile channels can be significantly degraded compared to using an unmodified signal; the degradation results from both low-bit-rate speech coding and channel transmission errors. A DSR system overcomes these problems by eliminating the speech channel and instead sending a parameterized representation of the speech, suitable for recognition, over an error-protected data channel. The Julius engine, which also supports a client-server model, is used for the core speech-processing functionality. A detailed understanding of all the modules, along with Baum-Welch re-estimation and the Viterbi algorithm, is achieved.
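To make the decoding step mentioned above concrete, the following is a minimal sketch of the Viterbi algorithm for a discrete-observation HMM in Python/NumPy. It is only illustrative: the Julius engine referenced in the paper is a C implementation with a far more elaborate search, and the matrices `pi`, `A`, and `B` here are generic placeholders, not parameters from the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM.

    obs: sequence of observation indices
    pi:  initial state probabilities, shape (N,)
    A:   transition matrix, A[i, j] = P(next state j | state i)
    B:   emission matrix, B[i, k] = P(observation k | state i)
    """
    N, T = len(pi), len(obs)
    # Work in log space to avoid underflow on long utterances
    with np.errstate(divide="ignore"):  # log(0) -> -inf is acceptable here
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    # delta[t, i]: log-probability of the best path ending in state i at time t
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (N, N): from-state x to-state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

A full recognizer would run this over per-frame acoustic likelihoods rather than a small discrete emission table, but the dynamic-programming recursion is the same.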

Keywords: Distributed Speech Processing, Speech Recognition, Client-Server Model, HMM, Baum-Welch Re-estimation, Viterbi Decoder

Edition: Volume 4 Issue 7, July 2015

Pages: 773 - 777
