Thank you very much, you saved my day. I was failing to get compatible encoding types between expo-av and the Google API.
Great Job! I really enjoyed watching your tutorial. 🎉🎉🎉
This is some great work 👏🏻
Excellent one thanks
Do you need to create a service account? I have no option to generate an API_KEY now.
Great video, thanks. I have a question - why do you have a server? Since the server is pretty simple, couldn't you just call fetch from the Expo project? Is it because of the Google authorisation?
EXPO_PUBLIC_ variables are visible in plain text in your compiled Expo application. It is a fairly simple application, yes, but you still don't want your Google API token visible on the client side. The server will presumably serve authenticated routes anyway. I would also like to make another video on an audio streaming implementation using the Google Node.js client - which you'll definitely need a server for.
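For anyone curious, here is a rough sketch of that idea - a tiny proxy that keeps the key in a server-side environment variable. The route name, GOOGLE_API_KEY variable, and error handling are placeholders, not the exact code from the video:

```ts
// server.ts - minimal Express proxy so the Google API key never ships in the app bundle.
// Requires Node 18+ for the global fetch.
import express from "express";

const app = express();
app.use(express.json({ limit: "10mb" })); // base64 audio payloads can be large

app.post("/speech-to-text", async (req, res) => {
  const { audioBase64 } = req.body;
  try {
    // The key stays on the server, never in an EXPO_PUBLIC_* variable.
    const response = await fetch(
      `https://speech.googleapis.com/v1/speech:recognize?key=${process.env.GOOGLE_API_KEY}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          // encoding/sampleRateHertz may also be needed here, matching your recording options.
          config: { languageCode: "en-US" },
          audio: { content: audioBase64 },
        }),
      }
    );
    res.json(await response.json());
  } catch (err) {
    res.status(500).json({ error: "Transcription failed" });
  }
});

app.listen(4000);
```

The Expo app then only ever talks to your own endpoint, so rotating or restricting the Google key never requires shipping a new app build.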
Excellent
Hi, is there any reason why, when running on an Android emulator, the results wouldn't be included in the server response - just the totalBilledTime and the requestId?
Double-check that your Android emulator is actually picking up your speech audio. There is an option in the Android emulator settings -> Microphone -> "Virtual microphone uses host audio input." Make sure that's enabled.
It says "recording must be prepared prior to unloading".
Make sure your recording is prepared prior to triggering stopAndUnloadAsync. If _canRecord is false, then it won't work. Check out the source code in the video description for a working setup.
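For reference, the expected flow looks roughly like this - the expo-av calls are real, but the surrounding structure is just a sketch, and on older expo-av SDKs the preset constant is named RECORDING_OPTIONS_PRESET_HIGH_QUALITY instead:

```ts
// Rough expo-av flow: the recording must be created/prepared before stopAndUnloadAsync is called.
import { Audio } from "expo-av";

let recording: Audio.Recording | null = null;

async function startRecording() {
  await Audio.requestPermissionsAsync();
  await Audio.setAudioModeAsync({ allowsRecordingIOS: true, playsInSilentModeIOS: true });
  // createAsync prepares and starts the recording in one step, so _canRecord ends up true.
  const result = await Audio.Recording.createAsync(Audio.RecordingOptionsPresets.HIGH_QUALITY);
  recording = result.recording;
}

async function stopRecording() {
  if (!recording) return null; // nothing was prepared, so don't call stopAndUnloadAsync
  await recording.stopAndUnloadAsync();
  const uri = recording.getURI();
  recording = null;
  return uri;
}
```

The "must be prepared prior to unloading" error usually means stopAndUnloadAsync ran before createAsync (or prepareToRecordAsync) finished, or ran twice on the same recording object.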
@AviMamenko I have the same issue even if I just copy your repo :/
When calling the API, the result doesn't contain the transcript and I get the error: No transcript found.
Double-check that your encoding config is correct. You can debug by pasting the base64 data URL (data:audio/<type>;base64,...) into the browser when transcribing to make sure the audio is actually being recorded. Check out the source code for a working implementation as well.
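As a rough illustration of what "matching encoding" means - the recording options on the Expo side and the config you send to Google have to describe the same format. The specific values below (AMR_WB on Android, LINEAR16 on iOS, 16 kHz) are only an example of one combination Google Speech-to-Text accepts, not necessarily what the video uses, and the enum names are from newer expo-av SDKs:

```ts
// Sketch: the recording format must match the `encoding` / `sampleRateHertz`
// you declare in the Google Speech request.
import { Audio } from "expo-av";
import * as FileSystem from "expo-file-system";

const recordingOptions = {
  android: {
    extension: ".amr",
    outputFormat: Audio.AndroidOutputFormat.AMR_WB,
    audioEncoder: Audio.AndroidAudioEncoder.AMR_WB,
    sampleRate: 16000,
    numberOfChannels: 1,
    bitRate: 128000,
  },
  ios: {
    extension: ".wav",
    outputFormat: Audio.IOSOutputFormat.LINEARPCM,
    audioQuality: Audio.IOSAudioQuality.HIGH,
    sampleRate: 16000,
    numberOfChannels: 1,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
  web: {},
};

// Read the finished recording as base64 for the API request body.
async function toBase64(uri: string) {
  return FileSystem.readAsStringAsync(uri, { encoding: FileSystem.EncodingType.Base64 });
}

// The matching Google request config for the Android recording above would be:
const googleConfig = {
  encoding: "AMR_WB",
  sampleRateHertz: 16000,
  languageCode: "en-US",
};
```

If the declared encoding doesn't match the actual file, the API tends to bill the request but return no results - which is exactly the "only totalBilledTime and requestId" symptom.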
@AviMamenko Thanks for the reply. I am using the source code and the API response comes back without a result or transcript.
That is odd - it's working fine on my end. Again, definitely check your setup and the error response from the Google API. Also, console out the base64 audio URL and view it in the browser to make sure your audio is actually being recorded.
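Concretely, something like this - the "audio/wav" mime type here is an assumption, so use whatever matches your recording options:

```ts
// Debugging helper: log a data URL you can paste into the browser to hear the recording.
function logAudioDataUrl(base64Audio: string) {
  console.log(`data:audio/wav;base64,${base64Audio}`);
  // If the browser can't play the pasted URL, the problem is the recording itself,
  // not the Google API request.
}
```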