Guru Prakash Arumugam

from Campbell, CA

Guru Arumugam Phones & Addresses

  • Campbell, CA
  • Mountain View, CA
  • Sunnyvale, CA

Publications

US Patents

Utterance Classifier

US Patent:
20200349946, Nov 5, 2020
Filed:
Jul 21, 2020
Appl. No.:
16/935112
Inventors:
- Mountain View CA, US
Gabor Simko - Santa Clara CA, US
Maria Carolina Parada San Martin - Boulder CO, US
Ramkarthik Kalyanasundaram - Cupertino CA, US
Guru Prakash Arumugam - Sunnyvale CA, US
Srinivas Vasudevan - Mountain View CA, US
Assignee:
Google LLC - Mountain View CA
International Classification:
G10L 15/22
G06F 3/16
G10L 15/16
G10L 15/18
G10L 15/30
Abstract:
A method includes receiving a spoken utterance that includes a plurality of words, and generating, using a neural network-based utterance classifier comprising a stack of multiple Long Short-Term Memory (LSTM) layers, a respective textual representation for each word of the plurality of words of the spoken utterance. The neural network-based utterance classifier is trained on negative training examples of spoken utterances not directed toward an automated assistant server. The method further includes determining, using the respective textual representation generated for each word of the plurality of words, that the spoken utterance either is or is not directed toward the automated assistant server, and, when the spoken utterance is directed toward the automated assistant server, generating instructions that cause the automated assistant server to generate a response to the spoken utterance.
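The pipeline the abstract describes can be sketched with toy stand-ins for the neural parts. This is a minimal illustration, not the patented method: `score_word`, `classify_utterance`, `handle`, the cue list, and the threshold are all invented here, and a simple cue-word lookup stands in for the per-word output of the stacked-LSTM classifier.

```python
# Illustrative sketch only; all names and values below are hypothetical,
# and a keyword lookup stands in for the trained stacked-LSTM classifier.

ASSISTANT_CUES = {"assistant", "play", "set", "remind", "weather"}

def score_word(word: str) -> float:
    """Toy per-word score; the patent generates a representation per word
    using stacked LSTM layers, which this cue lookup merely mimics."""
    return 1.0 if word.lower().strip(".,?!") in ASSISTANT_CUES else 0.0

def classify_utterance(utterance: str, threshold: float = 0.2) -> bool:
    """Decide, from the per-word scores, whether the utterance is
    directed toward the assistant."""
    words = utterance.split()
    if not words:
        return False
    scores = [score_word(w) for w in words]  # one score per word
    return sum(scores) / len(scores) >= threshold

def handle(utterance: str) -> str:
    """Only generate a response when the classifier says the utterance
    was directed at the assistant; otherwise ignore it as side speech."""
    if classify_utterance(utterance):
        return f"RESPONDING to: {utterance}"
    return "IGNORED (side speech)"
```

The gating step (`handle`) reflects the last clause of the claim: instructions to generate a response are produced only for utterances classified as assistant-directed.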

Utterance Classifier

US Patent:
20190304459, Oct 3, 2019
Filed:
May 2, 2019
Appl. No.:
16/401349
Inventors:
- Mountain View CA, US
Gabor Simko - Santa Clara CA, US
Maria Carolina Parada San Martin - Boulder CO, US
Ramkarthik Kalyanasundaram - Cupertino CA, US
Guru Prakash Arumugam - Sunnyvale CA, US
Srinivas Vasudevan - Mountain View CA, US
International Classification:
G10L 15/22
G10L 15/18
G10L 15/30
G06F 3/16
G10L 15/16
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classification using neural networks. One method includes receiving audio data corresponding to an utterance; obtaining a transcription of the utterance; generating a representation of the audio data; generating a representation of the transcription; and providing (i) the representation of the audio data and (ii) the representation of the transcription to a classifier that, based on a given representation of the audio data and a given representation of the transcription, is trained to output an indication of whether the associated utterance is likely directed to an automated assistant or is likely not. The method further includes receiving, from the classifier, an indication of whether the utterance corresponding to the received audio data is likely directed to the automated assistant, and selectively instructing the automated assistant based at least on that indication.
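The two-input design in this abstract (one representation from the audio, one from the transcription, combined by a trained classifier) can be sketched as follows. Everything here is a hypothetical stand-in: the feature functions, the hand-set weights, and the bias are invented for illustration and replace what would be a trained neural classifier.

```python
# Illustrative sketch only; hand-set features and weights stand in for the
# trained classifier described in the abstract.

def audio_representation(frames):
    """Stand-in acoustic features: mean frame energy and frame count."""
    mean_energy = sum(frames) / len(frames)
    return [mean_energy, float(len(frames))]

def text_representation(transcript):
    """Stand-in text features: word count and presence of a wake cue."""
    words = transcript.lower().split()
    has_cue = 1.0 if "assistant" in words else 0.0
    return [float(len(words)), has_cue]

def classify(frames, transcript, bias=-1.0):
    """Combine both representations with a linear scorer; a positive
    score indicates the utterance is likely directed at the assistant."""
    feats = audio_representation(frames) + text_representation(transcript)
    weights = [0.5, 0.0, 0.1, 2.0]  # hand-set, illustrative only
    score = bias + sum(w * f for w, f in zip(weights, feats))
    return score > 0.0
```

The "selectively instructing" step of the claim would then branch on the boolean returned by `classify`, invoking the assistant only for utterances judged to be directed at it.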

Utterance Classifier

US Patent:
20190035390, Jan 31, 2019
Filed:
Jul 25, 2017
Appl. No.:
15/659016
Inventors:
- Mountain View CA, US
Gabor Simko - Santa Clara CA, US
Maria Carolina Parada San Martin - Boulder CO, US
Ramkarthik Kalyanasundaram - Cupertino CA, US
Guru Prakash Arumugam - Sunnyvale CA, US
Srinivas Vasudevan - Mountain View CA, US
International Classification:
G10L 15/22
G10L 15/16
G10L 15/30
G10L 15/18
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classification using neural networks. One method includes receiving audio data corresponding to an utterance; obtaining a transcription of the utterance; generating a representation of the audio data; generating a representation of the transcription; and providing (i) the representation of the audio data and (ii) the representation of the transcription to a classifier that, based on a given representation of the audio data and a given representation of the transcription, is trained to output an indication of whether the associated utterance is likely directed to an automated assistant or is likely not. The method further includes receiving, from the classifier, an indication of whether the utterance corresponding to the received audio data is likely directed to the automated assistant, and selectively instructing the automated assistant based at least on that indication.