This was absolutely excellent. Your style is simple and clear. Don't stop, you're a rare breed.
Thank you William, that means a lot!
@DevelopersDigest Hello, can you explain why we had to pass the image in separately instead of using a URL for the image, like when we give a src to an img tag?
Keep pumping out these incredible videos my man!! Thank you!
Thank you Ryan!
Excellent and straight to the point, thank you.
Thanks for watching!
Do you have a video that explains how to add ready-made templates to my WordPress site, for example?
I don’t but I can absolutely build some Wordpress examples. Stay tuned!
Thanks! Great explanation! 😁
short and simple, luv it
Cheers thanks 🙏
thank you for your hands-on JS AI programming videos
Thank you for watching!!
Nice, exactly what I needed, gonna watch it next w/e
Great 👍 😊
What is the advantage of using the Inference API as opposed to a pipeline?
Thanks for the video! For scalability, I would need to deploy a Hugging Face inference API from a model of my choosing, right? I think in this case, it's using free resources for playing around?
Correct ✅
Great video to kick off with HFJS, can you do one for using MPT-7B with Node.js?
That’s a great idea, let me dig into this model. It sounds pretty compelling!
helpful! thank you
Thank you for watching!
@DevelopersDigest :) Are you using Groq for your projects? How is your experience with it so far?
Thanks for sharing this. 😊
Thanks Hemant!
This is definitely helpful.
Great! Glad I could help!
help!!!
transformer.js error SyntaxError: Unexpected token '
What's unclear for me at the moment is, is the HF library just downloading the models at runtime and running them locally? Are all the models already embedded in the HF library? If I use an LLM from HF, does it download it and run it locally? Does HF provide any sort of hosting or is it just a model repo?
1. You can set up Hugging Face to download models and run them locally. If you're interested in doing that with JS, check out transformers.js on HF.
2. Not all models are embedded in the HF library; it hosts mostly open-source models, with varying permissions on how you can use them in your applications. For example, if you want to build a commercial app, make sure to check the license first.
3. As for whether an LLM from HF downloads and runs locally: you can choose to do that if you have the hardware to run the selected model, or you can deploy to the cloud, which in some cases may be your only option. With that said, some models are available for inference through an API, which is what I demonstrate here with Huggingface.js (a minimal sketch follows below). You can also choose to host a model from Hugging Face with billing directly through HF; those models are hosted on cloud infrastructure like AWS, Azure, or GCP: huggingface.co/inference-endpoints
I hope that helps!
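To make that concrete, here's a minimal sketch of calling the hosted Inference API with @huggingface/inference, assuming an ES-module Node project and an HF access token in the HF_TOKEN environment variable (the model and prompt are illustrative choices):

import { HfInference } from "@huggingface/inference";

const hf = new HfInference(process.env.HF_TOKEN);

// Calls the hosted Inference API; nothing is downloaded or run locally.
const result = await hf.textGeneration({
  model: "gpt2",
  inputs: "The answer to life, the universe, and everything is",
});

console.log(result.generated_text);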
It's running a cloud-hosted instance of the models. But it's still as fast as if you'd run it on your server, maybe even faster depending on your GPU.
Thanks for such an amazing video, I love it.
Have you faced this issue before?
`Error: Authorization header is correct, but the token seems invalid`
Do I need to register the billing information first?
This is the full error:
file:///Users/usr/Documents/hugging_face/node_modules/@huggingface/inference/dist/index.js:169
  throw new Error(output.error);
        ^
Error: unknown error
    at request (file:///Users/froggy/Documents/hugging_face/node_modules/@huggingface/inference/dist/index.js:169:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async textGeneration (file:///Users/froggy/Documents/hugging_face/node_modules/@huggingface/inference/dist/index.js:655:15)
    at async file:///Users/froggy/Documents/hugging_face/huggingface.js:16:1
Great video. How would you train the model for a few specific tasks? If I use the API, they will also charge, yes?
There are free tiers as well as paid tiers for using their inference APIs. Training a model would take a bit longer to explain than a comment allows. I will be making a video on training models in the future! Stay tuned
How do I display the result in the HTML? I keep getting the error `document is not defined`, since it's Node and not the browser.
Sir, it's a request: please upload the code to the repo you provided previously. It would be very helpful, sir.
Hi Hemant, which video are you referring to? 🙂
I have tried this for img2img models but I am unable to get back an image result. Can you tell me what I could do to get the image?
You may have to write the result of the image to disk or to something like an S3 bucket or equivalent. I would have to take a look at what the particular model returns! Hopefully this helps!
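For what it's worth, here's a minimal sketch assuming the model's output comes back as a Blob; the imageToImage helper and the model name are illustrative choices to verify against what your particular model returns:

import fs from "node:fs/promises";
import { HfInference } from "@huggingface/inference";

const hf = new HfInference(process.env.HF_TOKEN);

// Run an img2img model; many image tasks in this client resolve to a Blob.
const blob = await hf.imageToImage({
  model: "lllyasviel/sd-controlnet-depth", // illustrative model choice
  inputs: new Blob([await fs.readFile("input.png")]),
});

// Write the binary result to disk (or push it to S3 instead).
await fs.writeFile("output.png", Buffer.from(await blob.arrayBuffer()));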
Hello, I would like the text generated by Hugging Face, when I do image to text, to go directly into a cell of my Google Sheet.
Can we copy and paste our API keys into my Google Sheet like I do with ChatGPT?
Anyway, thank you very much for your videos.
You could leverage the Google Sheets API to do this!
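A minimal sketch with the official googleapis client, assuming service-account credentials that can edit the sheet (the credentials file name and spreadsheet ID are hypothetical placeholders):

import { google } from "googleapis";

const auth = new google.auth.GoogleAuth({
  keyFile: "service-account.json", // hypothetical credentials file
  scopes: ["https://www.googleapis.com/auth/spreadsheets"],
});

const sheets = google.sheets({ version: "v4", auth });

// Write generated text (e.g. an image caption from HF) into cell A1.
await sheets.spreadsheets.values.update({
  spreadsheetId: "YOUR_SPREADSHEET_ID", // hypothetical placeholder
  range: "Sheet1!A1",
  valueInputOption: "RAW",
  requestBody: { values: [["generated caption text"]] },
});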
@DevelopersDigest OK, thank you very much for your answer. Can you suggest a tutorial for someone like me who doesn't know anything about code?
Otherwise, I saw a tutorial on how to use ChatGPT directly in Google Sheets, without touching the code, just with the ChatGPT API and a Google Sheets module,
but all of that is with text.
Do you think I can ask ChatGPT to describe the images that are in my Google Sheet?
Anyway, thank you very much for your answer.
I need a model that can convert text into MongoDB queries. I have searched Hugging Face but could not find such a model. Could anyone help me?
Check out python.langchain.com/docs/use_cases/sql/
Langchain might be able to help. I am not sure if there is a MongoDB equivalent, but using a framework like Langchain + a model could help you accomplish what you are looking for - cheers 🥂
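In the meantime, here's a rough sketch of the idea without Langchain, using the same @huggingface/inference client from the video to prompt a model into emitting a MongoDB filter. The model choice and prompt are illustrative assumptions, and the output would still need validation before you run it:

import { HfInference } from "@huggingface/inference";

const hf = new HfInference(process.env.HF_TOKEN);

const question = "Find all users older than 30, sorted by name";
const prompt =
  "Translate the request into a MongoDB find() filter as JSON.\n" +
  `Request: ${question}\nJSON:`;

// Ask an instruction-tuned model to emit the query text.
const { generated_text } = await hf.textGeneration({
  model: "mistralai/Mistral-7B-Instruct-v0.2", // illustrative model
  inputs: prompt,
});

console.log(generated_text);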
One day you'll have 10 million subs
I won’t believe it if I make it to 10,000! Thank you 🙏
This gives me an error for the first import line. Plz help me with this...
Did you import the dependencies? Cheers!
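For reference, a minimal sketch of the import, assuming an ES-module project ("type": "module" in package.json) with the package installed via npm i @huggingface/inference:

// If this line throws, check that the package is installed and that your
// project is set up for ES modules (or use require with CommonJS).
import { HfInference } from "@huggingface/inference";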
Can we use Hugging Face embeddings?
Yes!
Lots of options for this :)
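One quick option is the featureExtraction task in @huggingface/inference; here's a minimal sketch, with the sentence-transformers model as an illustrative choice:

import { HfInference } from "@huggingface/inference";

const hf = new HfInference(process.env.HF_TOKEN);

// Returns a numeric vector you can store in a vector DB like Pinecone.
const embedding = await hf.featureExtraction({
  model: "sentence-transformers/all-MiniLM-L6-v2",
  inputs: "Hello world",
});

console.log(embedding.length);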
Please do a video on image to image
Thank you for the suggestion. What sort of image to image generation did you have in mind?
Upload two images and combine them... also, how can I deploy to a server? @DevelopersDigest
Love your videos. The format of starting with the outline and dropping the code in underneath is brilliant in its simplicity. I like the JavaScript examples. Have you used Langchain with Huggingface to retrieve information from a vector DB? I used this ruclips.net/video/vpU_6x3jowg/видео.html to create embeddings in Pinecone and I'm trying to convert the client to query it. I'd be happy to share the working code once it's complete. Any pointers to code or advice are welcomed! Have fun!
Cheers thank you. If you are interested, I have used pinecone in my video here: m.ruclips.net/video/CF5buEVrYwo/видео.html
@@DevelopersDigest Thanks! I'll go check it out...
In case anybody comes back to read this thread. I had to create a HuggingFaceInferenceEmbeddings for the model: "sentence-transformers/multi-qa-mpnet-base-dot-v1" to match the pinecone index. Then I used Langchain's Pinecone wrappers and did a vectorStore.similaritySearch, and everything worked great. (haha - like that explanation? now you see why we like the way you explain things ;) )
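For anyone who wants the shape of that, here's a hedged sketch using the Langchain JS classes as they existed around this time; package paths and the Pinecone SDK change between versions, and the index name is a hypothetical placeholder:

import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";
import { PineconeStore } from "langchain/vectorstores/pinecone";
import { PineconeClient } from "@pinecone-database/pinecone";

// Embeddings must match the model used to build the Pinecone index.
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: process.env.HF_TOKEN,
  model: "sentence-transformers/multi-qa-mpnet-base-dot-v1",
});

const client = new PineconeClient();
await client.init({
  apiKey: process.env.PINECONE_API_KEY,
  environment: process.env.PINECONE_ENVIRONMENT,
});
const index = client.Index("my-index"); // hypothetical index name

// Wrap the existing index and run a similarity search.
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex: index,
});
const results = await vectorStore.similaritySearch("my query", 4);
console.log(results);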
hello
I did everything like you did in the video, but I can't use many of the models available. When I try to use the summarization model, I get the following error: Error: Could not load model facebook/bart-large-cnn with any of the following classes: (, ).
at request (file:///C:/Users/user/Desktop/hugging/node_modules/@huggingface/inference/dist/index.mjs:106:15)
The Inference API free tier does have limits. With that said, if you are trying to run models locally, it is a bit more involved - you'll have to make sure everything is installed and downloaded accordingly! Cheers!
Is it possible to get "returnTimestamps: true" from the Whisper model with the HuggingfaceJS Inference API? I'm transcribing speech and I want it to return timestamps. It seems to be supported in Python, so I thought it would be in JavaScript too. This is what I have now:
const transcriptionResult = await hf.automaticSpeechRecognition({
  model: 'openai/whisper-base',
  data: fs.readFileSync(chunkPath),
  // @ts-ignore
  parameters: {
    return_timestamps: true // this does not work; it says parameters does not exist in automaticSpeechRecognition
  }
});
I can't switch to Python as my backend heavily depends on Node.js. I can't figure out how to do this; I've looked in the documentation, and it seems like the Inference API is not that good yet. Thank you!
I am not familiar with whether whisper-base supports timestamps. If you have a working example of it through Huggingface, my thought would be to log out the payload in your Python request and ensure the same payload is being sent. I have only used Whisper directly through OpenAI's API, so I am not familiar with the HF implementation, sorry! 😞
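If you do confirm the payload shape from Python, one hedged workaround is to bypass the typed helper and POST to the Inference API yourself. Whether whisper-base honors return_timestamps this way is an assumption to verify, and the JSON-with-base64 body is just one form some tasks accept:

import fs from "node:fs";

const audio = fs.readFileSync(chunkPath); // chunkPath from your snippet above

// Mirror whatever payload works from your Python client.
const res = await fetch(
  "https://api-inference.huggingface.co/models/openai/whisper-base",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      inputs: audio.toString("base64"),
      parameters: { return_timestamps: true },
    }),
  }
);
console.log(await res.json());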
Too technical for me 🫤
I can absolutely appreciate that - I think it should only be getting easier in time! Cheers