Build a Speech-to-Text App with React Native, Expo, Node.js/Express & Google API | iOS, Android, Web

  • Published: 7 Jan 2025

Comments • 18

  • @raygdhrt 2 months ago +1

    Thank you very much, you saved my day. I was struggling to find compatible encoding types between expo-av and the Google API.
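
A minimal sketch of what compatible recording options can look like, assuming expo-av's per-platform recording options and enum names from recent SDK versions (the constant names have changed across expo-av releases, so treat them as illustrative); the encoding sent to Google has to match whatever each platform actually records:

```ts
import { Audio } from 'expo-av';

// One options object per platform; the Google `encoding` value sent with the
// recognize request must match the container/codec the platform recorded.
const recordingOptions = {
  android: {
    extension: '.amr',
    outputFormat: Audio.AndroidOutputFormat.AMR_WB,  // pair with Google encoding 'AMR_WB'
    audioEncoder: Audio.AndroidAudioEncoder.AMR_WB,
    sampleRate: 16000,                                // AMR_WB is a 16 kHz codec
    numberOfChannels: 1,
    bitRate: 128000,
  },
  ios: {
    extension: '.wav',
    outputFormat: Audio.IOSOutputFormat.LINEARPCM,    // pair with Google encoding 'LINEAR16'
    audioQuality: Audio.IOSAudioQuality.HIGH,
    sampleRate: 16000,
    numberOfChannels: 1,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
  web: {
    mimeType: 'audio/webm',                           // pair with Google encoding 'WEBM_OPUS'
    bitsPerSecond: 128000,
  },
};

// Usage: pass the options when preparing the recording.
// const recording = new Audio.Recording();
// await recording.prepareToRecordAsync(recordingOptions);
```

With that setup, the server-side RecognitionConfig would use encoding 'AMR_WB' with sampleRateHertz 16000 for the Android file, 'LINEAR16' for the iOS .wav, and 'WEBM_OPUS' for the web blob.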

  • @KDD30 4 months ago

    Great job! I really enjoyed watching your tutorial. 🎉🎉🎉

  • @Daianagee 4 months ago

    This is some great work 👏🏻

  • @Events4MuslimsVideos-kv4je 3 months ago

    Excellent one, thanks

  • @everybodyguitar5271 21 days ago

    Do you need to create a service account? I no longer have an option to generate an API key.

  • @s1nc3r1ty 3 months ago

    Great video, thanks. I have a question - why do you have a server? Since the server is pretty simple, couldn't you just call fetch from the Expo project? Is it because of the Google authorisation?

    • @AviMamenko 3 months ago

      EXPO_PUBLIC_ variables are visible in plain text in your compiled Expo application. It is a fairly simple application, yes, but you still don't want your Google API token visible on the client side. The server will presumably serve authenticated routes anyway. I would also like to make another video on an audio streaming implementation using the Google Node.js client, which you'll definitely need a server for.
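
For anyone wondering what that server boils down to, here is a minimal sketch of such a proxy, assuming Express on Node 18+ (for the global fetch) with the key read from a server-side GOOGLE_API_KEY environment variable; the /transcribe route and request body shape are illustrative rather than taken from the video:

```ts
import express from 'express';

const app = express();
app.use(express.json({ limit: '10mb' })); // base64 audio payloads can be large

// The key stays on the server; the compiled Expo app never contains it.
const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;

app.post('/transcribe', async (req, res) => {
  const { audioBase64, encoding, sampleRateHertz, languageCode } = req.body;
  try {
    const googleRes = await fetch(
      `https://speech.googleapis.com/v1/speech:recognize?key=${GOOGLE_API_KEY}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          config: { encoding, sampleRateHertz, languageCode },
          audio: { content: audioBase64 },
        }),
      }
    );
    const data = await googleRes.json();
    res.status(googleRes.status).json(data);
  } catch (err) {
    res.status(500).json({ error: 'Transcription request failed' });
  }
});

app.listen(4000, () => console.log('Listening on port 4000'));
```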

  • @nadetdevfullstack7041 3 months ago

    Excellent

  • @jackmcintyre1255 3 months ago

    Hi, would there be any reason why, when running on an Android emulator, the results wouldn't be included in the server response, but simply the totalBilledTime and the requestId?

    • @AviMamenko 3 months ago

      Double-check that your Android emulator is actually picking up your speech audio. There is an option in the Android emulator settings -> Microphone -> “Virtual microphone uses host audio input.” Make sure that's enabled.

  • @zaheermemon5938 4 months ago +1

    It says "recording must be prepared prior to unloading".

    • @AviMamenko 3 months ago

      Make sure your recording is prepared before you call stopAndUnloadAsync; if _canRecord is false, it won't work. Check out the source code in the video description for a working setup, and see the lifecycle sketch after this thread.

    • @aymericsandoz8709 1 month ago +1

      @AviMamenko I have the same issue even if I just copy your repo :/
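
A minimal sketch of the lifecycle that reply describes, assuming expo-av (RecordingOptionsPresets.HIGH_QUALITY is the preset name in recent versions; older SDKs used RECORDING_OPTIONS_PRESET_HIGH_QUALITY):

```ts
import { Audio } from 'expo-av';

let recording: Audio.Recording | null = null;

async function startRecording(): Promise<void> {
  // Microphone permission and the iOS audio mode must be set up before preparing.
  await Audio.requestPermissionsAsync();
  await Audio.setAudioModeAsync({ allowsRecordingIOS: true, playsInSilentModeIOS: true });

  recording = new Audio.Recording();
  // prepareToRecordAsync must succeed before startAsync/stopAndUnloadAsync are valid.
  await recording.prepareToRecordAsync(Audio.RecordingOptionsPresets.HIGH_QUALITY);
  await recording.startAsync();
}

async function stopRecording(): Promise<string | null> {
  if (!recording) return null;

  const status = await recording.getStatusAsync();
  if (!status.canRecord) {
    // Never prepared (or already unloaded): calling stopAndUnloadAsync here is
    // what throws "recording must be prepared prior to unloading".
    recording = null;
    return null;
  }

  await recording.stopAndUnloadAsync();
  const uri = recording.getURI(); // file URI to read as base64 and send to the server
  recording = null;
  return uri;
}
```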

  • @igorronaldo 3 months ago

    When calling the API, the result doesn't contain the transcript and I get the error: "No transcript found."

    • @AviMamenko 3 months ago

      Double-check that your encoding config is correct. You can debug the base64 audio when transcribing by pasting the data:audio/;base64 string into the browser to make sure the audio is actually being recorded. Check out the source code for a working implementation as well.

    • @igorronaldo 3 months ago

      @AviMamenko Thanks for the reply. I am using the source code and the API result comes back without results or a transcript.

    • @AviMamenko 3 months ago

      That is odd; it's working fine on my end. Again, definitely check your setup and the error response from the Google API. Also, log the base64 audio URL and view it in the browser to make sure your audio is actually being recorded; there's a debugging sketch after this thread.
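
A minimal sketch of that debugging flow on the client, assuming expo-file-system for reading the recorded file and the illustrative /transcribe route from the server sketch above (the encoding values must match how the audio was actually recorded):

```ts
import * as FileSystem from 'expo-file-system';

async function debugAndTranscribe(uri: string): Promise<string | null> {
  // Read the recorded file as base64 so it can be inspected and sent to the server.
  const audioBase64 = await FileSystem.readAsStringAsync(uri, {
    encoding: FileSystem.EncodingType.Base64,
  });

  // Paste the logged data URL into a browser tab: if it does not play,
  // the problem is the recording itself, not the Google config.
  console.log(`data:audio/wav;base64,${audioBase64}`);

  const response = await fetch('http://localhost:4000/transcribe', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      audioBase64,
      encoding: 'LINEAR16',      // must match the recorded format
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    }),
  });
  const data = await response.json();

  // A response with only totalBilledTime/requestId (no `results`) usually means
  // Google could not decode the audio or heard only silence.
  if (!data.results?.length) {
    console.warn('No transcript found', data);
    return null;
  }
  return data.results[0].alternatives[0].transcript;
}
```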

  • @Events4MuslimsVideos-kv4je 2 months ago