If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
Man, a couple of weeks ago I was fighting this PDF chaos. Thanks for your video.
Glad it was helpful.
How do you do it? I failed as a non-coder. How do I extract tables as well?
@rizkiananda352 Why are you expecting to pass?
🎯 Key points for quick navigation:
00:00 *📄 Introduction and Challenges with PDFs*
- Introduction to the video topic,
- Challenges of extracting data from PDFs for LLM applications,
- Different elements and structures in PDFs complicating extraction.
01:09 *🔧 Existing Approaches to PDF Conversion*
- Overview of methods to convert PDFs to plain text,
- Use of machine learning models and OCR for extraction,
- Comparison of PDFs to Markdown for ease of processing.
02:17 *🛠️ Introduction to Marker Tool*
- Introduction to the Marker tool for converting PDFs to Markdown,
- Comparison with other tools like Nougat,
- Performance and accuracy benefits of using Marker.
03:36 *📚 Features of Marker*
- Supported document types and languages,
- Removal of headers, footers, and artifacts,
- Formatting of tables and code blocks, image extraction,
- Limitations and operational capabilities on different systems.
05:00 *📝 Licensing and Limitations*
- Licensing terms based on organizational revenue,
- Limitations in converting equations and formatting tables,
- Discussion on practical limitations noticed in usage.
05:54 *💻 Setting Up and Installing Marker*
- Steps to create a virtual environment for Marker,
- Instructions for installing PyTorch based on OS,
- Detailed steps to install Marker and optional OCR package.
07:31 *🧪 Example Conversion Process*
- Steps to convert a single PDF file to Markdown,
- Explanation of command parameters and process flow,
- Initial example with a scientific paper.
10:10 *📊 Reviewing Conversion Output*
- Review of the output structure and accuracy,
- Metadata extraction and image handling,
- Preview of converted Markdown and comparison with the original PDF.
12:13 *📜 Additional Examples and Output Review*
- Example with Andrew Ng’s CV and another paper,
- Review of the extracted content and any noticed issues,
- Importance of secondary post-processing for accuracy.
13:34 *🎥 Conclusion and Future Content*
- Summary of Marker tool’s utility and performance,
- Announcement of future videos on related topics,
- Invitation to subscribe for more content.
Made with HARPA AI
Brilliant vid - it is a godsend. OCRing a PDF is just not workable, period. I had given up on trying to parse PDFs. This new information is amazing and I am once again excited.
Glad it was helpful!
If anyone is having trouble where it's running but not actually placing the new files in the output directory: if you followed the GitHub example command, the "--min_length 10000" option is what's doing it. Marker goes through the whole process and then decides the document is too short. Either reduce that number to a much lower character count or remove the option entirely. It took me 30 minutes of hunting through TMP folders before I figured it out.
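A minimal sketch of the filtering behavior described above (hypothetical logic, just mirroring what the commenter observed — not Marker's actual source): documents whose extracted text falls below the `--min_length` character threshold are skipped, so no output file ever appears for them.

```python
# Hypothetical sketch of the --min_length skip behavior described above:
# a document whose extracted text is shorter than the threshold is silently
# skipped, so no output file appears for it.

def should_convert(extracted_text: str, min_length: int = 10000) -> bool:
    """Return True only if the document's text meets the length threshold."""
    return len(extracted_text) >= min_length

# A short document is skipped with the GitHub example's threshold of 10000...
skipped = not should_convert("short scanned memo", min_length=10000)
# ...but converted once the threshold is lowered (or the option is removed).
converted = should_convert("short scanned memo", min_length=0)
```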
Thanks for the video. I tried it, but it made a bit of a mess with the tables in my PDF, so it's not working really well for me. The rest of the text gets resolved properly, but only some of the tables come out nicely structured.
If you do make a video about scraping data, please go over content that requires javascript to load. It’s been difficult to find a clear guide specifically for capturing this data for LLM usage. I loved this video, thank you!
I haven't looked into it before, so let me see what I can come up with.
Thanks for covering Marker, this is brilliant!!
Would love to see batch processing of PDFs using Marker.
Also, for the web scraping projects, can we include one where we scrape apartment rental data (which keeps changing/evolving) from websites like Craigslist, store it persistently in a vector store or DB, and then run queries on that info?
let me see what I can do.
Amazing, can't wait to test it. Converting math from PDF to LaTeX used to cost thousands of dollars; now it's free.
I actually tried it out today before seeing this video, and sadly it produced quite messed-up results for a not-so-complicated document. Some sections and tables were parsed perfectly, but even a few scrambled parts make the results useless :/
I am getting a float error. I have installed the CUDA version. Any suggestions?
Great video, waiting for scraping video content....
This sounds very interesting, but I will need to learn some Python environment basics before I can put this to the test. A solution like this could help me a lot!
Very interesting. Once you have converted the pdf file, how can we give all this info to the vectordatabase for RAG ?
You would chunk it similarly to text files. I will put together a video on it.
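As a rough illustration of that chunking step (a naive character-based splitter, purely for sketching — a real pipeline would typically split on Markdown headings or use a library splitter):

```python
# Naive sketch of chunking converted Markdown before embedding it into a
# vector store. `markdown_text` is assumed to hold the output of the
# PDF-to-Markdown conversion. Overlapping chunks help preserve context
# across chunk boundaries.

def chunk_markdown(markdown_text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks (illustration only)."""
    step = chunk_size - overlap
    return [markdown_text[start:start + chunk_size]
            for start in range(0, len(markdown_text), step)]

chunks = chunk_markdown("x" * 1200, chunk_size=500, overlap=50)
# Each chunk is at most 500 chars; consecutive chunks share a 50-char overlap.
```

Each chunk would then be embedded and stored with metadata (source file, page) in the vector database.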
Excellent explanation. Many thanks!
Marker only used 4 GB of VRAM out of an A6000. Can you increase the batch size and get some more speed gain, or is it stuck at that speed regardless of the batch size? 100 seconds per page is a huge improvement over Nougat, but still very slow 😢
I love the video tho, I struggled with this one time for hours making a custom script to scrape this one pdf. Definitely gonna use marker sometime soon.
I think you want to run multiple files in a batch; that will give you the best performance. I also came across MegaParse (github.com/QuivrHQ/MegaParse), which is built on top of LlamaParse. That is not 100% local, though.
@@engineerprompt awesome!!!!! Thank you❤️❤️❤️❤️❤️!!!!!!!!!!!
Have you tried scanned documents instead of digital PDFs? And handwritten text as well?
Comparison with unstructured?
I have to mark the different regions in a scanned question-paper PDF (subjective or MCQ with sub-questions). Can it do this accurately?
Tabula-py or this? Which is better when it comes to extracting tables?
An amazing video. Many thanks, dude!
This is really helpful for preparing the PDFs before adding them to RAG. But is there any way to install this Marker application as a Docker container?
I am not sure about that.
@@engineerprompt After some research I found a Docker image for Marker by dibz15 on Docker Hub, but I don't have any idea how to set up the container. A video on it would be helpful.
@@engineerprompt There is an image by dibz15 for Marker on Docker Hub; could you make a video on installing it?
Docker version please?
Hi, great video. Can you also explore ways to extract information from PowerPoints and MS Word docs? I'm asking because most corporate information is stored in these formats.
Check out this library; it uses LlamaParse, but I think it will do what you are looking for. Will create a video on it if there is interest:
github.com/QuivrHQ/MegaParse
Are there ways I can also convert comments/annotations into a markdown format?
Does Marker require a GPU installed on the system?
Can a chatbot send images along with this data?
Normally we embed only text, but what about images?
def MODEL_DTYPE(self) -> torch.dtype:
AttributeError: module 'torch' has no attribute 'dtype'
I am getting this error while running the marker_single... command. Please help me resolve it.
Is this suitable for over 250 1mb PDFs?
I see how to run this from the terminal, but how do we import and run it in a Python file? I have had some issues.
Hi, do you know how much RAM is required for this application? I tried, but it said that it was out of memory. My laptop has 16 GB RAM w/o Nvidia GPU. Thanks a lot
Does it support PDF files with AMS-TeX / AMS-LaTeX math notation (amsmath)?
I am not sure. If you have a reference PDF, I can try it for you.
@@engineerprompt This pdf, for example, contain huge amount of amsmath notation - www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20III.pdf
Nice video!!! Can we use this in commercial applications?
Yes, if your revenue is less than $5M ARR. If it's more, you need to get in touch with the author for a license.
Great contribution, thank you very much!
Glad it's helpful.
The main challenge I faced when using nlm-ingestor was parsing SVG images from PDFs. I'm curious whether it will handle this case well.
Not sure, you might want to look into unstructuredio as well.
Thank you for sharing this excellent tool!
Anything to convert to EPUB? Getting rid of headers and footers.
This is very useful! Now, in the next video, show how to fine-tune any model (with some long context length, like "Phi-3-mini-128k-instruct") with this Markdown data 😍😍
let me see what i can do :)
Could you add the description of each image to the text, with the aim of having a single Markdown file similar to the original PDF? This way, it would be possible to pass the language model a single readable file that maintains the PDF's content.
Yes, that is possible. I am going to create a video on multi-modal RAG which will cover this topic.
When I try to convert an entire folder of PDFs using the command "marker /data/inputs /data/formatted_inputs --workers 3", I get the error "ImportError: libGL.so.1: cannot open shared object file: No such file or directory". Any ideas on how to fix it?
not sure, is the path correct?
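For what it's worth, that `libGL.so.1` error usually comes from OpenCV (which Marker depends on) needing system OpenGL libraries that minimal Linux images lack. On Debian/Ubuntu, something like the following typically resolves it (package names are an assumption and vary by distro):

```shell
# Install the system libraries OpenCV links against (Debian/Ubuntu).
# On other distros the package names differ (e.g., mesa-libGL on Fedora).
sudo apt-get update
sudo apt-get install -y libgl1 libglib2.0-0
```

Alternatively, swapping in the `opencv-python-headless` wheel avoids the GUI library dependency entirely.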
Awesome! Thanks for sharing this!
Very informative video. Could you try to build a system that can run on a large number of PDFs, convert them to .md files, and then let an LLM query them or generate specific prompts through a UI?
Yeah, I am thinking about it. Will post something.
If you're going to use OCR, then just use images of each page. Any LLM with vision can deal with it.
Did you find any solution?
I have a PDF of scanned documents, so it is basically images. Using Tesseract, I am converting it into images and then reading it. But my PDF also has some tables, and when I generate embeddings of this and pass them to an LLM, it is unable to answer the questions I ask about the table...
My recommendation would be to run them through a multimodal model like Claude Haiku, if cost is not a big concern. You can use that directly to answer questions from scanned docs. Here is a video on how to do that: ruclips.net/video/a5OW5UAyC3E/видео.html
Would this work for annotated PDFs? Would it be advantageous to use Marker for NotebookLM and Anthropic, or is it not necessary?
If you have PDFs, I would suggest sending them to Gemini Flash to convert them into Markdown and then feeding that to Anthropic.
@@engineerprompt thank you
Around 7 minutes in, having installed a conda environment, you select pip, not conda, when installing PyTorch - any reason why? If there's a working conda option, doesn't it make sense to keep using conda and only use pip when you absolutely have to? Just wondering. (Thanks for the video, btw - I had just been wondering about effective ways of making content reliably available to RAG, and the video is super useful.)
I usually use pip because it has most of the Python packages available; conda is somewhat limited in which packages it offers. Conda will also work in this case, but it's more of my own habit at this point :)
Do you think this could be better than unstructuredIO?
I think this gives you some of the features that are in the premium version of unstructuredio.
It showed some promise, except it flaked out with an overflow error. On the pages it did convert, it scrambled the data and lost some of it. These pages are primarily transactions in a table with columns separated by whitespace. The pages with plain text worked a bit better.
Interesting. I think it has some limitations; hope the creator continues working on it. For the tables case, it might be good to use a multimodal model.
Hello. Thank you for your efforts and the very good training. Does it work in other languages?
According to the repo creator, it should.
I would be interested to learn how to parse data from website URLs and also query the parsed data using open-source methods. We could call it a web article/news research tool.
Great idea. Will do that.
What if we want to do it for many PDFs at once and then build a chatbot?
Yes, there is a batch version. I am going to create an end-to-end tutorial on it.
I always like your content. Thank you, bro.
I appreciate that, thank you!
Thank you very much!!! I was looking for something like this for a long time. I work for a large bank but with a very small budget for my project. Due to the budget crunch we cannot afford to buy third-party tools, so this sounds like a perfect fit, but since there is a $5M revenue limit we may not qualify to use it for free. Would you suggest going with Nougat, or do you have a better alternative for my use case? Really appreciate your content!
Nougat can be an option, or look into unstructuredio. I would also recommend looking into Claude or GPT-4o with vision if data privacy is not a big issue; some of these proprietary tools have good data privacy terms based on their TOS.
@@engineerprompt Thanks for the prompt response!!
Thank you so much! great job !!
Thanks so much bro 🙏🙏
My question is: when your PDF has some images inside and you want to embed the PDF for RAG, how can you pass the info from the images to the vector DB? Is there any way to do multimodal RAG? In the case of scientific papers, those images contain a significant amount of useful information.
Many thanks in advance 🙏
If you have images, you want to run them through a vision model (such as LLaVA) to generate a text description of each one, and then embed that description in the vector store along with the metadata. You can then use it directly with RAG.
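A rough sketch of that caption-then-embed flow, with the vision model and embedder stubbed out (`caption_image` and `embed_text` are placeholder names standing in for real model calls, not an actual API):

```python
# Sketch of the caption-then-embed flow for images extracted from a PDF.
# `caption_image` and `embed_text` are placeholders for real model calls
# (e.g., a LLaVA caption endpoint and an embedding model).

def caption_image(image_path: str) -> str:
    # Placeholder: a vision model would return a rich text description here.
    return f"Figure extracted from PDF: {image_path}"

def embed_text(text: str) -> list[float]:
    # Placeholder: a real embedding model returns a dense vector.
    return [float(len(text))]

def index_images(image_paths: list[str]) -> list[dict]:
    """Turn each image into a (vector, metadata) record for a vector store."""
    records = []
    for path in image_paths:
        description = caption_image(path)
        records.append({
            "vector": embed_text(description),
            "metadata": {"source_image": path, "description": description},
        })
    return records

records = index_images(["figure_1.png"])
```

At query time, the retrieved description (plus the stored image path) lets the application surface the original figure alongside the text answer.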
@@engineerprompt Thanks, bro, for your guidance! For us in academia, using RAG on scientific papers is fruitless without incorporating the figures, as most of the time the figures contain far more information than their mere descriptions in the paper. I will give your suggestion a try and see how I can resolve this problem. Thanks again, bro!
Is it better than Unstructured ?
It really depends on the use case and on the ability to run this completely locally.
Can you please share the GitHub URL of the solution you are talking about?
Perfect timing thanks!
Glad it's helpful :)
Really amazing project, testing today!!
Would love to see how your experience with it is.
@@engineerprompt Bro, it's not giving good output when there are 3 columns on a page. Can we do something about this?
thanks for sharing.
The dumb part of Python tools is that, OK, you'll install Marker, but it will want python=3.10, while LangChain and CrewAI work with python=3.11, and as a result you cannot automate the process because each tool resides in its own conda env. So yeah, I like what I saw; it really looks good, but I'll have to create the Markdown separately from all the other stuff I have, and that's annoying.
The real question is how it compares to the current SOTA, unstructured.
I am uncertain about Marker. It is for scientific use, but it says it removes footers, and that is where you normally put your sources and appendix links... so?!
Looks like a great tool
Fantastic, thanks!
amazing video thx so much
I have been dealing with this PDF garbage for 10 years. This is a good thing, but my only question is: is this better than Adobe Acrobat?
What I want is for LLMs to cook my next meal.
I had another question, are you also on Udemy?
I am not on Udemy, but I am just launching my RAG course here: prompt-s-site.thinkific.com/courses/rag
Yeah! 👏👏👏👏👏👏
Great content
A very tidy tool
Hey, can we use this offline?
Yes
This is not completely open source, so don't report it as such with only a clarification midway through the vid.
what do you mean it's not completely Open Source?
@@sobeck6900 If there are restrictions on who can use the software (e.g., no commercial use), then it is not open source. Check the OSI definition of open source or the FSF definition of free software.
I wonder how far away we are from arbitrarily high accuracy on tasks like this.
To be honest, when it comes to voice models, open source models are lagging behind!
Docker version please
Markdowns for PDF for LLM😁
:)
Dude, I wanna deploy this on Hugging Face as an API. Make a tutorial on this.
A deployment series is coming soon; it will give you an idea of how to do this.
"all languages" perhaps means the various languages we speak.
👑
thanks
Cool
OK, as usual, the lack of a GUI destroys it for me..😢
Grow out of that and a world will open up
GPL 3.0 No thanks.
On my M1 Mac I have tried this out, installing
dependencies = [
"torch>=2.3.0",
"torchvision>=0.18.0",
"torchaudio>=2.3.0",
"marker-pdf>=0.2.13",
]
Then when I try out just a single PDF, it fails on a simple Python import:
marker_single 26572517.pdf OUTPUT --max_pages 2 --langs English
Traceback (most recent call last):
File "marker/.venv/bin/marker_single", line 5, in
from convert_single import main
File "marker/.venv/lib/python3.12/site-packages/convert_single.py", line 5, in
from marker.convert import convert_single_pdf
ModuleNotFoundError: No module named 'marker.convert'
Is anyone getting the same? Tried with Python 3.10 and 3.12.
Are you using a virtual environment? Use this command:
python -m pip install marker-pdf
This will ensure the package is installed in the current virtual env.
Using Rye, and yes, it is there in my virtual env.
The Marker scripts are there to be called.
@@engineerprompt OK, the problem seems to be with the way Rye handled the imports; sorry about that. Creating the virtual env normally, I can run the commands. Thanks for the video; I have been looking for how to do this for a long time.