Great video! Clear, straightforward, and right to the point!
Great video, Jason, thanks for putting it together. Would you be able to share the notebooks as well?
Great video, Jason. It helped me understand the chatbot's technical architecture in much greater detail.
In the very last step of the last notebook, when you go to run the demo, I get the following error:
"py4j.security.Py4JSecurityException: Method public java.lang.String com.databricks.backend.common.rpc.CommandContext.toJson() is not whitelisted on class class com.databricks.backend.common.rpc.CommandContext"
How do you fix this?
It was a clear and well-organized demo of RAG. Thanks!!
Great video, Jason, thank you! I think you forgot to add the link in the description for how to set up the access key and secret scope.
@jason What changes need to be made to this solution in order to take advantage of Databricks Apps?
Awesome demo!
One thing I wanted to do, and could not quite figure out if it was possible, is to add the serving endpoint you created to the list of available models in the AI Playground section. It would be nice to chat directly with our custom model without having to build a separate GUI.
Hey, thanks a lot. This was a great, clean demo of RAG that I was able to run end to end. Excellent job!
Getting an issue when trying to create the serving endpoint. It says: "Served entity creation aborted for served entity `audio_transcription_chatbot_model-1`, config version 1, since the update timed out." I have not been able to figure out why.
Do you have the PAT set up, stored in a secret scope, and referenced in the advanced properties of the endpoint?
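For reference, a minimal sketch of the served-entity config the reply above is asking about, with the PAT referenced via Databricks' `{{secrets/<scope>/<key>}}` syntax. The model, scope, and key names here ("rag_scope", "rag_token", etc.) are placeholders, not the ones from the video:

```python
def build_endpoint_config(model_name: str, model_version: int,
                          scope: str, key: str) -> dict:
    """Build the config payload for a model serving endpoint whose served
    entity reads a Databricks PAT from a secret scope."""
    return {
        "served_entities": [{
            "entity_name": model_name,
            "entity_version": str(model_version),
            "workload_size": "Small",
            "scale_to_zero_enabled": True,
            # The endpoint resolves this reference at startup; if the secret
            # scope or key does not exist, served-entity creation can hang
            # and eventually abort with a timeout like the one above.
            "environment_vars": {
                "DATABRICKS_TOKEN": f"{{{{secrets/{scope}/{key}}}}}",
            },
        }],
    }

config = build_endpoint_config(
    "main.rag.audio_transcription_chatbot_model", 1, "rag_scope", "rag_token")
print(config["served_entities"][0]["environment_vars"]["DATABRICKS_TOKEN"])
```

If the reference prints as a literal `{{secrets/...}}` string, that is expected: the substitution happens server-side when the endpoint starts, not in your notebook.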
Very well put together and super informative. Thanks a lot!
Thank you for the video. One question: the Gradio UI wasn't working for me. Are there any other potential changes one needs to make besides the ones mentioned?
@jason I made it to the end of the video, but when trying to launch Gradio (the "4 Chatbot GUI" notebook in your solution), I get an issue with whitelisted Java methods. Do you have any workaround for that?
You need to use a single-user cluster; you'll get that error with shared clusters. I updated the README just now.
@@jasondrew2087 That worked! Thanks, Jason!
Great video, thanks so much! Quick question:
when we add new documents to the volume, all we have to do is resync the vector index? We don't have to create a new version of the model to access the data from the new files?
Anyone else getting the error raise ValueError("Must specify a chain Type in config")?
It looks like the LangChain version I'm working with doesn't like "stuff".
Got the same error when serving the model. It worked fine a month ago, but I'm not sure how to fix it now.
@@HaoyuWang-t1i I fixed it by installing the exact same LangChain Python modules he did.
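Since the fix above is version pinning, here is a small sketch of generating exact-version pip pins. The version numbers are placeholders, not the ones from the video; match them to whatever the tutorial notebooks actually install:

```python
def pin_requirements(reqs):
    """Turn (package, version) pairs into exact pip version pins, so a
    LangChain upgrade can't silently change chain_type handling."""
    return [f"{name}=={version}" for name, version in reqs]

args = pin_requirements([
    ("langchain", "0.1.16"),            # placeholder version
    ("langchain-community", "0.0.34"),  # placeholder version
])
print(args)
# In a Databricks notebook you would then run:
#   %pip install langchain==0.1.16 langchain-community==0.0.34
# and restart the Python process before re-running the chain.
```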
At 09:18, "Create vector search index" has been moved to the Create button instead of the three-dots menu.
I got stuck because I'm only receiving 3 rows of responses, but my table has 800.
I followed your tutorial and get stuck at 9:38. I type databricks-bge-large-en as the embedding model, but the Create button is disabled. Not sure why.
You shouldn't have to type it in; rather, it should be an option in the dropdown. If you go to Serving, do you see it listed as a foundation model?
@@jasondrew2087 It's neither in the dropdown options nor listed under Serving. Nothing is showing in either place.
Great video! I was able to get a RAG model working with the first document I added; however, when I went back later to add another PDF, the model did not seem to acknowledge any new documents to reference. They were added to the docs_track table, but none of the text chunks ended up in the docs_text table... Anyone know why this may be?
Did you re-sync the index?
@@jasondrew2087 Not sure. I added some additional PDFs and reran all the code.
@@bryancrigger Rerunning the code updates the docs_text table, but you still need to go to the index in Unity Catalog (docs_idx, or whatever you named it) and, from the Overview tab, click the Sync Now button under Data Ingest. Once you do that, your chatbot should work with your new data. If your issue is that the docs_text table isn't updating, then check that the name of the new file is not the same as a previous file's, that your docs_text table's id column is GENERATED BY DEFAULT AS IDENTITY (to automatically create the unique key), that Change Data Feed is enabled, and that cell 5 in the 2A notebook shows new rows being inserted.
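To make the table requirements in that reply concrete, here is a hedged sketch of a docs_text definition with both the identity key and Change Data Feed enabled. The table and column names are guesses based on the thread, not the exact ones from the video:

```python
def docs_text_ddl(table: str) -> str:
    """Return the DDL for a chunk table that a Delta Sync vector index can
    track: an auto-generated identity key plus Change Data Feed."""
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n"
        "  id BIGINT GENERATED BY DEFAULT AS IDENTITY,\n"
        "  file_name STRING,\n"
        "  chunk_text STRING\n"
        ") TBLPROPERTIES (delta.enableChangeDataFeed = true)"
    )

ddl = docs_text_ddl("main.rag.docs_text")
print(ddl)
# In a notebook: spark.sql(ddl)
```

Without the identity column, inserts need an explicit unique id; without Change Data Feed, the Delta Sync index has no change log to ingest, so Sync Now picks up nothing.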
Thank you, it really is a great video. But one question I have: you are manually tracking the changed files. Can you help me understand the use of `enableChangeDataFeed = true`, and also the use of a Delta Sync Index if we were to add new files to the catalog? Could you make a video on this, please?
Anyone having issues importing the .dbc notebook into Databricks?
Me as well.
Wow, you can't do this on a free trial. C'mon, guys. Guess I'll have to go the local LLM route.
Great video and very, very useful! While implementing, I got stuck uploading the PDF to a Volume in Unity Catalog. I am the "Owner" of my Databricks workspace and Azure account, yet I don't seem to have the option to add a Volume to a catalog, and thus can't add the PDF to a Volume. This seems to have to do with permissions, and possibly with setting up a metastore between Databricks and Azure Blob Storage? Might you have any insights, ideas, solutions, or workarounds? Thanks again for a great video and all the resources to implement this super useful technology!
A couple of things: you need USE SCHEMA and CREATE VOLUME permissions on the schema and USE CATALOG on the catalog. You also need CREATE EXTERNAL VOLUME permission on the external location you plan to use for your Volume.
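Those grants can be issued from a notebook as Spark SQL. A hedged sketch, with the principal, catalog, schema, and external location names all placeholders:

```python
def volume_grants(principal: str, catalog: str, schema: str,
                  external_location: str) -> list[str]:
    """Build the GRANT statements listed in the reply above: catalog and
    schema access plus volume creation on an external location."""
    return [
        f"GRANT USE CATALOG ON CATALOG {catalog} TO `{principal}`",
        f"GRANT USE SCHEMA, CREATE VOLUME ON SCHEMA {catalog}.{schema} "
        f"TO `{principal}`",
        f"GRANT CREATE EXTERNAL VOLUME ON EXTERNAL LOCATION "
        f"`{external_location}` TO `{principal}`",
    ]

stmts = volume_grants("user@example.com", "main", "rag", "rag_ext_loc")
# In a notebook, a metastore admin or object owner would run:
#   for s in stmts:
#       spark.sql(s)
```

The last grant only matters if the Volume is external; a managed Volume needs just the catalog and schema privileges.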
Is it possible for a Yank NOT to start a sentence with "so..."? I challenge you.