Learn to build real-world AI apps with my entertaining video tutorials! Access 100+ more videos from me at ai-for-devs.com.
Love this - I created this same project last year, but set it aside because of latency issues. I'll see how Groq changes the game now.
I find that GPT-3.5 has much better latency than Groq. In my own voice assistant I use GPT-3.5 for most of the interactions and use OpenAI function calling to defer tasks to other models; for me that's just GPT-4, but it could be Groq if the workload is large enough that raw speed matters more than latency.
OMG I did the same exact thing. lol
To be honest, I have absolutely no clue why it took YT so long to recommend this channel... the algo is losing its touch if you ask me. I get videos about cats and stupid crap, yet a coding channel, the kind of content I watch for several hours a day while working, not so much! Jesus wept... rant over, subscribed.
Timing and Step by Step explanations = Well done! 👍
Thanks a lot.
this is great!...you explain it clearly and it's easy to understand as you go along!...thanks ...subbed!
This is very impressive. It's given me an idea for a better way to implement AI into my company's workflow.
I have built my personal AI Assistant using Neural AI and Chatterbot but this one is Amazing!
Woah, that's cool! Thanks for checking out my AI assistant.
@@ai-for-devs 😁 I will be building another one using this
Your lecture is not for beginners; it's at a pro level.
Awesome tutorial! I'm gonna try it...
Will give a follow up. In Sha Allah
You sir are a coding machine. Was a pleasure to watch a master such as yourself. Learning a little python myself.
Awesome tutorial. Thank you for sharing!
🎉 Thank you! Great job, you inspired me. I’ve subscribed to your channel now.
Good job, very clear and informative.
very nice, now create a self-contained version that doesn't rely on internet resources.
Absolutely, that's a fantastic suggestion! Implementing a self-contained version using local models for text-to-speech (TTS) and speech-to-text (STT), along with a Groq alternative like Mistral or Llama 2 running in LM Studio or Ollama, could indeed run on my Mac Studio. However, achieving the same level of quality and performance might require a substantial investment in new GPUs.
@@ai-for-devs Awesome video! WhisHper might be an option to keep things local. Online vendors that offer Privacy* are still hacked too often (see NordVPN). I'd rather run my own security than trust a third party to be meticulous and honest.
The thing that makes me laugh in this video is seeing Adam Savage trying to remove the sheet by pushing it up from the front instead of pulling it from behind.
That is funny af XD Of all people, surely he would understand that concept >.>
It happens to the best of us, even Adam Savage. Glad you enjoyed the video!
You gained another subscriber, fantastic video!!!
Thank you so much! I'm glad you enjoyed the video. Welcome to the community!
Nice. Lost me at using a wav file from a previous video, since this was a newly recommended channel.
Sorry to hear that
@@ai-for-devs Just means I have to go watch that other video and see your take on it.
Great tutorial, easy to understand and fast.
Thanks! I made sure to speed up the tutorial so you wouldn't fall asleep halfway through! 😄
thanks for sharing dear Sebastian!
Oh wow, this tutorial is awesome! I will try it step by step :) Thank you!
I loved your video and was inspired. I wanted to get the code, so I joined... and you have some interesting courses, but I do not see J.A.R.V.I.S. anywhere among the courses on the site.
@PaulyWollyUTube You can find it here: www.ai-for-devs.com/products/real-time-ai-mastery-voice-smart-assistants
Thanks a lot, sir. I finally got to understand how a full-stack web application basically works. Thank you so much 💌🌟
... and you also have mastered the AI part. Congrats 🙂
@@ai-for-devs Thanks to you!!...🙃
Very interesting. Would love to see how to create J.A.R.V.I.S. using local LLMs via Ollama.
Awesome tutorial. Thank you. ♥
wow man love how you explain. Subbed!
Where is the beautiful woman from the thumbnail? She is the Jarvis we all need!
Maybe she could indeed be the avatar for J.A.R.V.I.S. in the next video! Who knows, it might just bring a whole new level of charm and interaction to our AI assistant!
You should try Langchain or transformers to make it even more powerful
Completely Right 👍 We actually added some more stuff in our tutorials at ai-for-devs.com
It is a 314B model (certainly not a reference to pi).
I would have a hard time running it on my laptop using LM Studio because of memory restrictions, whereas I can run Llama 2 70B, Mixtral, Miqu, and other models on my CPU and offload some parts to the GPU. My laptop has a 7th-gen i7 and a GTX 1070; it's not great, and some tasks take up to half an hour.
I would love a 70B version of Grok.
In this video he is using Groq's (with a Q) API. He is not running X's recently open-sourced model Grok (with a K) on his local machine. I was confused too when I first heard about Groq after only knowing about Grok.
@@actuallyaceit I know, but I would love to see a model by X that is 70b, so I can run it locally
Great video, very clean, straight to the point 🙂
Thank you!
How close are we to having a Jarvis-type AI assistant that can be used from computer to smartphone? It seems like we would have had it by now with all the AI stuff coming out.
Totally
Awesome channel. Subbed!
Thank you, Mathew! Welcome to the channel!
It would be nice if you used the real voice of Jarvis.
The more I think about AI, the more I realise how much it will change the way games are played. That said, it also risks making humans more stupid; like auto-correct, we end up relying on it for information.
Either way, awesome project.
Absolutely. I can imagine that assessing the quality of AI-generated work will likely become a major aspect of human jobs in the future.
Pizza Funghi with mushrooms... 😂
Hey! Where can I get a copy of the code for the web app?
I saw the link to the course, but I'm just wondering if you provide a GitHub repo for the YouTube videos.
Can this be set up on a Raspberry Pi 4/5 and built with C++/C#/PHP or anything other than Python/Java/JavaScript? Some language that everyone can understand and program in?
This video is great, but I don't know how to operate Visual Studio Code and do the things you do. Where do I learn that?
Clicking a button to record and stop recording does not feel like JARVIS. It would be more real if it was in terminal and the audio is processed in realtime without the need of clicking any button.
I concur. My initial implementation was designed to listen for any sounds exceeding a predefined threshold rather than requiring a manual button press. However, I aimed to keep the code as straightforward as possible. I'll be sharing the alternative JS source code on ai-for-devs.com.
@@ai-for-devs this could probably be embedded into an Alexa Echo skill or other assistant platforms without too much trouble, I imagine. Figure out a good function library or agent framework... now we're cooking.
this is EXACTLY what I have been waiting to do for a year now! @@roody_io - reply back if you figured this out already!
@ai-for-devs You can implement a wake-word library (like "Hey Siri") using Porcupine by Picovoice, for example.
I made a program in Python which does this. It records in real time using PyAudio and a recorder: recording starts once the volume crosses a silence threshold and stops when it falls back below it, then the audio is converted to text using STT and I receive an answer from a GPT-2 model from Hugging Face. The UI is a 3D brain (mesh converted to points) built with VTK and embedded in the Python code, so while the user is speaking, the points that make up the brain change color according to the audio chunk level times a constant that alters the shaders of each point. I use GPT-2 because my computer is old and only has a CPU. I have already made it into an APK, but I haven't uploaded it to GitHub.
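For anyone who wants to try the threshold-triggered recording described in this thread instead of a button, here is a minimal sketch with PyAudio (the THRESHOLD value and chunk sizes are arbitrary and would need tuning for your microphone):

import audioop
import wave
import pyaudio

RATE, CHUNK, THRESHOLD = 16000, 1024, 500   # THRESHOLD is an arbitrary starting point

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

frames = []
recording = False
silent_chunks = 0

while True:
    data = stream.read(CHUNK)
    volume = audioop.rms(data, 2)            # RMS volume of this 16-bit chunk
    if volume > THRESHOLD:
        recording = True
        silent_chunks = 0
    elif recording:
        silent_chunks += 1
    if recording:
        frames.append(data)
    if recording and silent_chunks > int(RATE / CHUNK):   # roughly one second of silence ends the take
        break

sample_width = pa.get_sample_size(pyaudio.paInt16)
stream.stop_stream()
stream.close()
pa.terminate()

with wave.open("utterance.wav", "wb") as wf:  # hand this file to whatever STT step you use
    wf.setnchannels(1)
    wf.setsampwidth(sample_width)
    wf.setframerate(RATE)
    wf.writeframes(b"".join(frames))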
for some reason my API key constantly says invalid credentials.
It's over 9000!
Thank you
Cool concept
The MythBusters demo was not at all accurate, but it was a show.
Thank you. Also a native speaker ;-)
You're welcome!
The question here is: why exactly Deepgram instead of another solution? And why not choose a free package like fast-whisper?
Great question! Choosing between Deepgram and solutions like Fast Whisper often comes down to specific needs and preferences. While Deepgram is renowned for its high speed, making it one of the fastest solutions currently available, it's true that free alternatives like Fast Whisper can be very appealing, especially for those on a budget or with less urgent speed requirements. Each option has its strengths, and there's no one-size-fits-all answer.
@@ai-for-devs Thank you for this clarification, and I can add that Deepgram does not support all languages.
Hey! Great content, but I couldn't reach the source code for the web app part. The links below have expired or don't work. Can you provide a new link?
Just a question: why didn't you use the pyttsx3 library for generating the spoken responses and handling the user's audio input?
Using Deepgram and creating temporary files for the audio input and output can be a heavier task and make the response slower than using the pyttsx3 module for audio output without temporary files.
That would be more suitable, I guess, and make the program faster.
Great approach 🙌
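For anyone curious, the pyttsx3 route suggested above is fully offline. A minimal sketch of the speech-output side (note that pyttsx3 only does TTS, so the audio input would still need something like speech_recognition):

import pyttsx3

engine = pyttsx3.init()                  # uses the OS speech engine (SAPI5, NSSpeechSynthesizer, espeak)
engine.setProperty("rate", 180)          # speaking rate in words per minute
engine.say("Hello, I am your assistant.")
engine.runAndWait()                      # blocks until the sentence has been spoken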
Awesome video! Is there a way to adjust settings in Groq to just answer the questions directly instead of adding friendly fluff? I'm following your example and everything works great, but when returning the translations I get "Sure, here is a translation of promp_here in German:..."
Glad you liked the video! To get direct answers without extra fluff, try adjusting your prompt to explicitly request a straightforward response:
{
"role": "system",
"content": "Provide a concise, one-sentence answer without unnecessary details."
}
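For a concrete illustration, here is roughly how that system message slots into a call with the official groq Python client (the model name is just an example, not necessarily the one used in the video):

import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama3-70b-8192",   # example model name; use whichever Groq model you prefer
    messages=[
        {"role": "system",
         "content": "Provide a concise, one-sentence answer without unnecessary details."},
        {"role": "user", "content": "Translate 'Good morning' into German."},
    ],
)
print(completion.choices[0].message.content)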
Excellent tutorial... but am I missing something? You say "We paste the prepared code for the index page", but I do not see where this prepared code exists?
This is a video from my platform, ai-for-devs.com. Each section includes a Download Section. However, you don’t need to join just to access the code; you can simply send me a PM on Discord at discord.gg/xPBHz9tP, and I’ll provide you with access to the source code.
@@ai-for-devs I'm also looking for the index.html file. I'm looking to join your course as well, but just trying this out tonight.
What's next, show your skills?
1. CodeCraft Duel: Super Agent Showdown
2. Pixel Pioneers: Super Agent AI Clash
3. Digital Duel: LLM Super Agents Battle
4. Byte Battle Royale: Dueling LLM Agents
5. AI Code Clash: Super Agent Showdown
6. CodeCraft Combat: Super Agent Edition
7. Digital Duel: Super Agent AI Battle
8. Pixel Pioneers: LLM Super Agent Showdown
9. Byte Battle Royale: Super Agent AI Combat
10. AI Code Clash: Dueling Super Agents Edition
I have more of a Terminator vision in mind. Stay tuned.
Not sure if you'd find it useful, but I've made something similar, though much slower, using Anthropic with function calls. I use ElevenLabs at the moment, but after your video I'm seriously looking at Groq.
@bradleybrown8428 ElevenLabs has much better voices but a slower response.
PROBLEM: I seem to keep getting the same error: "Exception: DeepgramApiError: Invalid credentials. (Status: 401)". I tried to create a new key and still got the same error. The variable definitely exists in my environment (I had to create it manually using "conda env config vars set my_var=value" since I use conda to manage my virtual envs).
SOLUTION: I ended up creating a '.env' file and setting the API key in there (DG_API_KEY="keyValue"). For conda users, I installed dotenv using "conda install -c conda-forge python-dotenv". It seemed to work that way. I decided to comment this in case anyone runs into the same errors as I did.
BTW, 'export' is for Mac users; Windows users should use 'set'. If you're using conda like me, to set a variable in the environment use "conda env config vars set my_var=value" and then restart the environment.
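To make the .env approach above concrete, loading the key in Python looks roughly like this (a sketch; it assumes python-dotenv is installed and the DeepgramClient class from the v3 deepgram-sdk):

import os
from dotenv import load_dotenv
from deepgram import DeepgramClient   # class name assumes deepgram-sdk v3

load_dotenv()                          # reads DG_API_KEY from the .env file next to the script
api_key = os.getenv("DG_API_KEY")
if not api_key:
    raise RuntimeError("DG_API_KEY is not set - check your .env file")
deepgram = DeepgramClient(api_key)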
Have you tried to set the key directly in the code?
Hi, I've got the same problem; generating a key is now different from the video because of the permissions. Have you solved it? If yes, please tell me. Thank you.
Hayden Panettiere look-alike
So, can we use the Groq "Jarvis" interface to access a real LLM like Claude 3 or GPT-4?
Absolutely, the shown interface can be used with LLMs like Claude 3 or GPT-4. Just exchange the Groq call for a call to GPT or Claude.
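As a sketch of that swap, only the client and model name change (example with the openai Python package; the model name is illustrative):

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4",   # example model name
    messages=[{"role": "user", "content": "Hello Jarvis, what's on my calendar?"}],
)
print(response.choices[0].message.content)

Claude would go through Anthropic's own client instead, but the pattern is the same.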
It is amazing!
Wow your source code of 28 lines is amazing, I will not be using it, thanks
Excellent video. Can this be modified to have an avatar speak the answers when asked?
Maybe with solutions like www.heygen.com/streaming-avatar. Let me check and come back with a new video ;-)
Ah, too bad they don't have a Hungarian Text2Speech model. I hope they'll make it soon. :(
Oh, that's unfortunate. I know that Fast Whisper supports Hungarian, which might be useful for you. You can find more details here: replicate.com/vaibhavs10/incredibly-fast-whisper. We have used it in ruclips.net/video/BBnWExtRd4A/видео.html
@@ai-for-devs Thank you so much! :)
Which solution for tts and stt would you recommend ? What do you think about Whisper and WhisperSpeech ?
Edit:
I just tried Whisper, it works pretty well. Could save some stt money.
Didn't figure out how to make WhisperSpeech work yet.
That's pretty cool! Yes, you're on the right track. For both text-to-speech (TTS) and speech-to-text (STT), there are local alternatives that could replace cloud-based solutions, potentially saving costs in the long run.
Thank you very much ;)
You're welcome! It was my pleasure.
J.A.R.V.I.S. from Iron Man??
the command "export DG_API_KEY={key}" this is the command for mac , i tried "stex DG_API_KEY {key}" and also tried "ste DG_API_KEY={key}" after both the commands to i got the same "Exception: DeepgramApiError: Invalid credentials. (Status: 401)" Error i tried searching solution but didn't got , how to solve this one?i have made the api key as a member also but still facing this error.......
Please try first to set the key directly in the code. If this works you know that the key is correct.
@@ai-for-devs Yeah, I tried setting it manually in the code first and still got the same error. Then I created a new environment variable and set the API key manually there, and the issue was solved!!
You mean you typed the API key letter by letter instead of copying and pasting? @@adityatiwari3646
Do I need to install LLaMA on my PC, or does it run a LLaMA model on Groq?
You don't need to install LLaMA on your PC; it runs LLaMA directly on Groq.
In your video at 08:38 you mention pasting the prepared code, for the index.html template but I'm not seeing where to get that code?
Edit: Nvm. I'll just type it manually.
www.ai-for-devs.com/pl/2148299694
@@ai-for-devs page not found
I have paid for a membership but cannot find where to get the GitHub access and Discord, as well as the extra video lessons. Please help.
Please send a short email to sebastian@ai-for-devs.com to receive your Discord invitation. Additionally, include your GitHub username in the email to secure access to the ai-for-devs GitHub organization. We look forward to your participation.
@@ai-for-devs Thanks, have done so and look forward to participating
Bro is moving fast
So does it also automatically recognize speech spoken in German?
That should generally be possible. I would also adjust the prompt accordingly so that the output is in German as well.
Nah, If it doesn't sound like Paul Bettany, I don't want it.
In that case, we'll need to utilize ElevenLabs instead of Deepgram. 😉
@@ai-for-devs Do it
Hello sir, I want to point out that it can only chat. I want to make it so that it can access current information, open apps, surf the web, capture a screenshot and analyze it, and much more. How can we do that? Please help.
To expand the AI's abilities to include real-time information access, app interactions, web surfing, and image analysis, you can leverage Groq's function calling capability together with other LLMs. We have built something similar (with GPT instead of Groq) in the past. Have a look at ruclips.net/video/XJDdzwH6BRo/видео.html
@@ai-for-devs thanks for the reply and information
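For anyone curious what that function-calling setup looks like, here is a minimal sketch against Groq's OpenAI-compatible API (the take_screenshot function and the model name are illustrative assumptions, not from the video):

import json
from groq import Groq

client = Groq()   # reads GROQ_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "take_screenshot",   # hypothetical helper you would implement yourself
        "description": "Capture the current screen and return the file path",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

response = client.chat.completions.create(
    model="llama3-70b-8192",         # example model with tool-use support
    messages=[{"role": "user", "content": "Take a screenshot and describe it."}],
    tools=tools,
)

for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))

Your code then runs the matching local function and feeds the result back to the model as a tool message.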
This is really cool. Can you buy a Groq chip yet?
Currently, the pricing for Groq's cards is around $20,000 each. Given this price point, direct purchase and deployment of Groq hardware might be a significant investment 😅
Can anyone find the index.html code anywhere?
Not me; I've been trying to find it, but in vain. Without it we cannot test anything.
Is it possible to use this code on a Raspberry Pi with an ALU core, without the need for an internet connection?
It's probably possible to generate similar answers, but not with the same inference speed.
You do not need Deepgram. You can install and use speech_recognition, and with a bit of Python you do not need any button; you can interact directly with any Ollama model. Once you get an answer, you can use text-to-speech, which is built into any Linux/Mac machine as "say". It might not be amazing, but it's not very different from what you've got. So in the end there's no money spent on online services, and with Ollama on your PC you have everything locally, for zero money.
Yes, I agree too. Being independent of any third-party online services for TTS or STT is more important, even if it means a little more code.
Absolutely. While we've explored various alternatives to Deepgram (see our last videos), its simplicity and speed were key reasons for its inclusion in the tutorial. As you've insightfully noted, the beauty of technology, much like Lego, lies in its modularity and the freedom it offers to interchange components.
I think he's just trying to educate on the service and show what you can do with it as an example. He's not saying it's the best way or the only way to do it. I get what you're saying, but why don't you create a video tutorial? I swear I'll watch it, because all these tools are good to have in your tool belt, and so is doing it the way you mentioned. So go make a video so I can do it too!
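For reference, a minimal sketch of the fully local loop suggested a few comments above (it assumes the SpeechRecognition and ollama Python packages, a running Ollama server with a pulled model, and macOS for the built-in say command):

import subprocess
import speech_recognition as sr
import ollama

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)          # waits for speech, stops on silence

text = recognizer.recognize_whisper(audio)     # offline; needs the openai-whisper package installed
reply = ollama.chat(model="llama3",            # example model name, pull it first with `ollama pull`
                    messages=[{"role": "user", "content": text}])
answer = reply["message"]["content"]

subprocess.run(["say", answer])                # macOS built-in text-to-speech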
Hello, I am a programmer too and I am learning web development. I found your video after 4 weeks and subscribed to your channel. Is there code already, and how do I start?
🙏 On ai-for-devs.com we have an AI Fundamentals Course.
Precise military operation is much overrated. Most of the time it would be chaos and unexpected surprises :)
Very good. GitHub link, please?
www.skool.com/ai-for-devs
But the translation into German was not correct at all.
The pronunciation wasn't entirely clean, but "This is a test" => "Das ist ein Test" should actually be right, shouldn't it?
But look at the size of the cotton.
Isn't it cheating ?
Not very secure... Still decent for a tutorial!
I agree! In a production environment, it's crucial to spend much more time on hardening security.
Funny, but the pronunciation is a bit bumpy xD
Deepgram is not free forever, however.
Indeed, Deepgram isn't free indefinitely. It offers robust features for speech-to-text and text-to-speech conversion, making it a valuable tool, but its cost becomes a factor for long-term use.
@@ai-for-devs I have the impression your replies are generated automatically by an LLM 🤣
Sure, a real human would not work on Sunday.
For @raquel #familia porcides