Passing context to voice search
As far as I can tell, it goes through the same mechanism: create a normal search
override in the back end, then annotate your dialog or widget with voice search functionality as described here.
Using their example, something like this should go into your searchable XML configuration:
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:hint="@string/search_hint"
    android:voiceSearchMode="showVoiceSearchButton|launchRecognizer" >
</searchable>
When you request a voice search, the recognized text is passed through the search framework to your callback, where you can handle it as needed.
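For illustration, here is a minimal sketch of what that callback side looks like in a searchable activity. The activity name and the doSearch helper are hypothetical; SearchManager.QUERY is the standard extra the framework uses for both typed and voice-recognized queries:

```java
import android.app.Activity;
import android.app.SearchManager;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical searchable activity: typed and voice-recognized
// queries both arrive through the same ACTION_SEARCH intent.
public class SearchResultsActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        handleIntent(getIntent());
    }

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        handleIntent(intent);
    }

    private void handleIntent(Intent intent) {
        if (Intent.ACTION_SEARCH.equals(intent.getAction())) {
            // The query string looks identical regardless of whether it
            // was typed or spoken.
            String query = intent.getStringExtra(SearchManager.QUERY);
            doSearch(query); // hypothetical helper
        }
    }

    private void doSearch(String query) {
        // Run the actual search here.
    }
}
```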
Edit: the real problem here is differentiating a query produced by voice search from one typed into the search widget.
Unfortunately, Android does not seem to expose that distinction unless you roll your own recognizer, or try to read voice-related extras out of the search intent. The latter approach is undocumented and, as far as I can tell, unsupported.
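If you do go the roll-your-own route, the usual mechanism is RecognizerIntent: launch the recognizer yourself, and then you know by construction that the resulting query came from speech. A sketch (the activity name, request code, and doSearch helper are my own; the RecognizerIntent constants are the standard ones):

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceQueryActivity extends Activity {
    private static final int VOICE_REQUEST = 1234; // arbitrary request code

    // Launch the platform speech recognizer directly instead of relying
    // on the search widget's voice button.
    private void startVoiceRecognition() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, VOICE_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                // Because we launched the recognizer ourselves, this query
                // is known to have originated from voice input.
                doSearch(matches.get(0)); // hypothetical helper
            }
        }
    }

    private void doSearch(String query) {
        // Run the search, tagged as voice-originated if desired.
    }
}
```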