Excellent video. Very practical and down to earth for non-CS folks. Great job!
That was the best tutorial I've watched so far. Thanks a lot.
Great tutorial, thanks!
Excellent video. I followed the exact same procedure, except I did the indexing in AI Search and used that index in Prompt Flow. When I run it and ask a question, the bot loads forever without replying. Any idea or suggestion?
Not sure, I'll have to look into it.
Thank you Mr. Lino
Thank you for the nice video!
Very informative
Thank you!
Thanks @LinoTV for this great tutorial. The default chunk size is 1024 tokens. Do you have any idea how to change this value, for example to 256 tokens?
Chunking itself does not consume any tokens; the embedding step does.
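If the Studio UI doesn't expose the chunk size, one workaround is to chunk the documents yourself before indexing. A minimal sketch in Python using tiktoken; the 256-token size and the cl100k_base encoding are my assumptions, not something from the video:

import tiktoken

def chunk_text(text, max_tokens=256):
    # Encode to tokens, slice into fixed-size windows, decode each back to text.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

chunks = chunk_text(open("document.txt").read())
print(len(chunks), "chunks")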
Nice video!
Glad you enjoyed it
I noticed you had to wait a while for the indexing to finish. Do you recommend indexing the documents separately on Azure Search Service, which should be faster, and then connecting the index? Thanks!
You can, if you'd like. I found the speed sporadic, depending on the state of things in Azure at the time. There is no guarantee that running the indexing directly will be faster.
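If you do run the indexing directly in Azure AI Search, you can poll the indexer status instead of waiting blindly. A rough sketch with the azure-search-documents SDK; the endpoint, key, and indexer name are placeholders:

import time
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient

client = SearchIndexerClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)

# Poll until the most recent indexer run reports success.
while True:
    status = client.get_indexer_status("<your-indexer>")
    result = status.last_result
    print("indexer status:", result.status if result else "pending")
    if result and result.status == "success":
        break
    time.sleep(15)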
Very nice. I have one question: what should I do if I change the chunk size after the AI Search index has been created?
You will have to recreate the index, as the 1536 floats that represent each chunk will come out different at the embedding stage.
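To unpack that reply a bit: the embedding model (likely text-embedding-ada-002, which produces 1536-dimensional vectors) maps each chunk to a vector, so new chunk boundaries produce new vectors and the old index no longer matches. A hedged sketch of the re-embedding step with the current openai SDK; the endpoint, key, API version, and deployment name are placeholders:

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

chunks = ["first chunk of text...", "second chunk of text..."]
resp = client.embeddings.create(model="<ada-002-deployment>", input=chunks)
vectors = [d.embedding for d in resp.data]
print(len(vectors[0]))  # 1536 floats per chunk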
Excellent video! Are there any limits on upload file size, for example a maximum number of MB per file?
Based on the documentation for the OpenAI assistant, the limit is 20 files, and each cannot be more than 512 MB. In testing, I could not even upload a file much smaller than that, so I think it is a work in progress.
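If you want to catch that before uploading, a pre-flight check is easy to script. A sketch using the limits quoted above; treat the numbers as provisional, since they seem to be in flux:

import os

MAX_FILES = 20
MAX_BYTES = 512 * 1024 * 1024  # 512 MB per file, per the docs quoted above

def validate_upload(paths):
    if len(paths) > MAX_FILES:
        raise ValueError(f"{len(paths)} files exceeds the {MAX_FILES}-file limit")
    for p in paths:
        size = os.path.getsize(p)
        if size > MAX_BYTES:
            raise ValueError(f"{p} is {size / 2**20:.0f} MB, over the 512 MB limit")

validate_upload(["report.pdf", "notes.docx"])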
Thank you for sharing!
Please upload more videos about Prompt Flow.
Subscribed🎉🎉
Please upload more videos
Azure AI is really a pain in the ass. Errors, synchronization issues, and warnings everywhere. And super expensive. Does Microsoft know that this product is garbage?
Truth be told, it is unstable. But they are working on it very hard to bring it to a level where it can be solid and productive. Some days I see us moving forward, and some days it goes backward. That's the price we pay when GenAI is moving at the speed of light and everyone is trying to catch up, even Microsoft.