Thanks for uploading this. It is quite well explained.
It looks like you are running this on your local machine. What is the compute power of your machine? I am getting a timeout error (Error raised by inference API: Model google/flan-ul2 time out) when I run the last cell in the notebook. Any idea about this?
Nice Video
There is no repository link in the description; if you could please add it, that would be very helpful. Cheers and thanks!
Note: the token has been reset, so I did not mask it :)
Amazing video!
nice video man
awesome can you link your notebook?
Hugging Face uses "sentence-transformers/all-MiniLM-L6-v2" by default. If your text is in a non-English language, this must be changed.
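For context, here is a minimal stdlib-only sketch of the pattern described above: the embedding wrapper falls back to a default model name unless you override it. `resolve_embedding_model` is a hypothetical stand-in, not the real LangChain/Hugging Face API; only the model-name strings are real sentence-transformers checkpoints.

```python
# Toy illustration of default-vs-override for the embedding model name.
# resolve_embedding_model is hypothetical; the model-name strings are
# real sentence-transformers checkpoints on the Hugging Face Hub.
DEFAULT_MODEL = "sentence-transformers/all-MiniLM-L6-v2"

def resolve_embedding_model(model_name=None):
    """Return the caller's model name, or the English-only default."""
    return model_name or DEFAULT_MODEL

print(resolve_embedding_model())
# → sentence-transformers/all-MiniLM-L6-v2

# For non-English text, pass a multilingual checkpoint instead:
print(resolve_embedding_model(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"))
```

In the real library you would pass the multilingual model name to the embeddings class instead of relying on the default.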
Excellent!
I have a doubt: how does LangChain generate the embeddings? Or does it need to connect to OpenAI to generate embeddings with the free token? Please explain this part to me. Thank you.
We are using flan-ul2 from Hugging Face for tokenization and inference. If you replace that with OpenAI(), it should still work.
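A minimal sketch of why that swap works: the framework treats the LLM as an interchangeable callable, so replacing the Hugging Face backend with OpenAI leaves the rest of the chain untouched. The classes below are stdlib-only stand-ins, not the real LangChain wrappers; only the repo id google/flan-ul2 comes from the reply above.

```python
# Hypothetical stand-ins for two LLM backends sharing one interface.
class HuggingFaceHubLLM:
    def __init__(self, repo_id):
        self.repo_id = repo_id  # e.g. "google/flan-ul2"

    def __call__(self, prompt):
        return f"[{self.repo_id}] answer to: {prompt}"

class OpenAILLM:
    def __call__(self, prompt):
        return f"[openai] answer to: {prompt}"

def run_chain(llm, question):
    # The chain depends only on the callable interface, not the backend,
    # which is why one backend can be swapped for the other.
    return llm(question)

print(run_chain(HuggingFaceHubLLM("google/flan-ul2"), "What is FLAN-UL2?"))
print(run_chain(OpenAILLM(), "What is FLAN-UL2?"))
```

The same design shows up in the real library: both backends expose a common invocation method, so the surrounding chain code does not change.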
great !
Can you share the code/jupyter notebook?
Thanks! Just added to description. I will update that repo with new methods soon :)
@AIology2022 I don't see the Colab/Jupyter notebook or GitHub link in the description; are you planning to share it later?
@AIology2022 Hey, I don't think the repo/notebook got added to the description; I would really appreciate the code, thank you!
Can you share your Colab or any GitHub link, bro?
Thanks! Just added to description. I will update that repo with new methods soon :)
Great work, sir! It would have been better if you had added a few cuts in the middle, though.