The most productive 14 minutes of my day watching and learning from this video :)
Great! Thanks for your comment
Best 15 mins of my day! You explained every single component in the code clearly and crisply! Excited to check out your other videos. Thanks a bunch
UniqueList = list(set(ListWithDuplicates)) to replace those nested for loops. Love your content!
That probably doesn't work for complex objects, though ;)
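A minimal sketch of the complex-object case (assuming the duplicates are LangChain Document objects, which are not hashable, so list(set(...)) raises a TypeError; the helper name is illustrative):

from langchain_core.documents import Document

def unique_documents(docs: list[Document]) -> list[Document]:
    seen: set[str] = set()
    unique = []
    for doc in docs:
        if doc.page_content not in seen:  # deduplicate on the text content
            seen.add(doc.page_content)
            unique.append(doc)
    return unique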
Very informative.👍 Love the UMAP visualization to see the query and the embeddings.
Thanks for the video. Perfect timing…. Need this for tomorrow.
really really good video. best I've seen
thank you man :)
This channel is a gem 💎
Your vids are insanely good. I doubt there is a better AI programming YouTuber
Thank you so much :)
This video is terrific, I'll give it a try!
Thank you!
This is GREAT!!!
I didn't know the umap library; it's very interesting. Good explanation of advanced RAG techniques, success to you!
thank you :)
Thank you for the great video:)
Thanks for your comment. Glad you enjoyed it :)
Thank you!!
You're welcome, andreij :)
@@codingcrashcourses8533 I cannot run the code in VSCode.
When running the import:
from langchain_community.document_loaders import TextLoader, DirectoryLoader
I get this error:
File c:\Python311\Lib\enum.py:784, in EnumType.__getattr__(cls, name)
    782         return cls._member_map_[name]
    783     except KeyError:
--> 784         raise AttributeError(name) from None
AttributeError: COBOL
I have installed the langchain-community library.
Awesome video. So glad I found this channel. Long-shot question:
After testing several chunk size/overlap combinations, my experiments indicate an optimal chunk_size=1000 and overlap=200. My RAG corpus contains about 10 medical textbooks (~50,000 pages). However, in every RAG video I see, nobody uses chunks anywhere near that large. Does it seem improbable that my ideal chunk size is 1,000, or is there likely another variable at play?
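For reference, a minimal sketch of that configuration (assuming LangChain's RecursiveCharacterTextSplitter, which the comment doesn't name):

from langchain_text_splitters import RecursiveCharacterTextSplitter

# chunk_size/overlap as reported above: 1000 characters with 200 overlap
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(documents)  # 'documents' = the loaded textbooks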
Did you find anything?
At least in my experience so far, with a fixed-chunk methodology (whatever the chunk size or overlap), it's easier to build a POC but not to reach production-grade quality. Did you try semantic chunking, or chunking based on sections/headings and then capturing the relationships between chunks via a graph database?
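A minimal sketch of the heading-based variant (assuming the source can be converted to Markdown and using LangChain's MarkdownHeaderTextSplitter; the graph-database step is omitted):

from langchain_text_splitters import MarkdownHeaderTextSplitter

# Each resulting chunk carries its chapter/section headings as metadata,
# which could later become nodes/edges in a graph database.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "chapter"), ("##", "section")]
)
chunks = splitter.split_text(markdown_text)  # 'markdown_text' assumed to exist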
This isn't backed by any data I've found, but through brute-force trial and error I found that I'm better served by different chunk sizes for different document types. Something like sentiment is fine at rather large chunk sizes; something like a spec sheet I will actually index multiple times with different chunk sizes. I'm not saying this is the way, but I certainly found an improvement on finer details and critical information when I do that. My sweet spot has been 1k/1.5k/2k depending on the document type. I'm sure smaller works, but with most context windows I don't need it, and the greater context of a larger chunk does have a quality aspect. You have to temper that idea by not going too large when you need more than a general pointing direction from your chunk; otherwise you start to get sentiment and not the finer details.
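A minimal sketch of that multi-size indexing idea (assuming LangChain splitters and an existing vector store; 'spec_sheet_docs' and 'vectorstore' are illustrative names):

from langchain_text_splitters import RecursiveCharacterTextSplitter

all_chunks = []
for size in (1000, 1500, 2000):  # the 1k/1.5k/2k sweet spots mentioned above
    splitter = RecursiveCharacterTextSplitter(chunk_size=size, chunk_overlap=200)
    for chunk in splitter.split_documents(spec_sheet_docs):
        chunk.metadata["chunk_size"] = size  # tag chunks so hits can be traced
        all_chunks.append(chunk)
vectorstore.add_documents(all_chunks)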
@@sivi3883 How much latency do the added layers add? Are you running locally or using API calls?
Thanks, always nice videos!
Do you have a favorite German cross-encoder?
No, I don't! I haven't worked that much with cross-encoders, to be honest
In the images you complain that the similarity search returns dots too far away from the red cross. The problem, IMHO, is the UMAP projection; maybe it would look different had you calculated the UMAP projection with the queries included. The projection down from 1024 components to two might lose some important details, so have you manually inspected the allegedly incorrect similarity search results?
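A minimal sketch of fitting the projection with the queries included (assuming umap-learn and NumPy; 'doc_embeddings' and 'query_embedding' are illustrative names):

import numpy as np
import umap

# Fit UMAP on document and query embeddings jointly, so the query's 2D
# position comes from the same projection instead of a later transform.
all_embeddings = np.vstack([doc_embeddings, query_embedding.reshape(1, -1)])
projected = umap.UMAP(n_components=2).fit_transform(all_embeddings)
doc_points, query_point = projected[:-1], projected[-1]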
Can you please make a video on retrieving data from SQL using SQL agents & Runnables with LCEL? If not possible here, could you add the same to the Udemy course? It helps a lot
I would rather do it here than in my Udemy course, since it's quite specific. Give me some time to do something like that, please ;-)
Looking for a similar video with LangChain templates. A production-level SQL/Ollama app. Greatly appreciated 🙏❤
@@Sonu007OP I have not worked with Ollama yet; I am afraid my 7-year-old computer won't get it running ^^
Video about this topic will be released on 03/25 and 03/28 :)
How much compute (specifically GPU) is required to run this cross-encoder model?
It also works on a CPU
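A minimal sketch of running a cross-encoder on CPU (assuming sentence-transformers; the model name is a common reranker, not necessarily the one from the video):

from sentence_transformers import CrossEncoder

# device="cpu" forces CPU; small MiniLM cross-encoders run fine without a GPU
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", device="cpu")
scores = model.predict([(query, doc) for doc in candidate_docs])  # rerank scores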
Thanks for the video.
But while generating queries using llm_chain.invoke(query), I'm facing an exception related to the output parser:
OutputParserException: Invalid json output:
I resolved it temporarily by removing the parser altogether and formatting the output in the next step. Thank you again for the video. It is helpful.
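A minimal sketch of that workaround (assuming an LCEL pipeline where 'prompt' and 'llm' already exist; the string output is split into queries as a separate step):

from langchain_core.output_parsers import StrOutputParser

chain = prompt | llm | StrOutputParser()  # no JSON parsing, just the raw string
raw = chain.invoke({"question": query})
queries = [line for line in raw.strip().split("\n") if line.strip()]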
Weird. Normally I never have problems with that parser
What's the best way to evaluate this RAG?
Difficult topic. Performance or output quality?
@@codingcrashcourses8533 Well, it should be as advanced as possible, since I've got an advanced RAG. I've seen many cases where people used RAGAS, TruLens, etc. I'm indecisive
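For reference, a minimal sketch of a RAGAS evaluation (assuming the classic ragas API with Hugging Face datasets; the column names follow its expected schema and the data here is purely illustrative):

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

data = Dataset.from_dict({
    "question": ["..."],    # the user query
    "answer": ["..."],      # the RAG pipeline's answer
    "contexts": [["..."]],  # retrieved chunks per question
})
result = evaluate(data, metrics=[faithfulness, answer_relevancy])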
Is this open source/free?
You mean the cross encoder? Yes
LLMChain() is deprecated, and the output_parser in the examples also causes the JSON output error.
Would be nice if you could update the GitHub code. Thank you.
If anyone is having the JSON output issue, here is a fix (BaseOutputParser takes no pydantic_object argument, so the custom __init__ is dropped, and the split must be on a newline):

from langchain_core.output_parsers import BaseOutputParser

class LineListOutputParser(BaseOutputParser[list[str]]):
    # Parse the LLM output into one generated query per line.
    def parse(self, text: str) -> list[str]:
        lines = text.strip().split("\n")
        return [line for line in lines if line.strip()]
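A minimal usage sketch for plugging the parser into multi-query retrieval (assuming LangChain's MultiQueryRetriever; 'QUERY_PROMPT', 'llm', and 'vectordb' are illustrative names):

from langchain.retrievers.multi_query import MultiQueryRetriever

llm_chain = QUERY_PROMPT | llm | LineListOutputParser()  # LCEL replaces LLMChain
retriever = MultiQueryRetriever(
    retriever=vectordb.as_retriever(), llm_chain=llm_chain
)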