Marker: This Open-Source Tool Will Make Your PDFs LLM-Ready

  • Published: 26 Sep 2024

Comments • 131

  • @engineerprompt · 3 months ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @ernestuz · 3 months ago +23

    Man, a couple of weeks ago I was fighting this PDF chaos. Thanks for your video.

    • @engineerprompt · 3 months ago

      glad it was helpful.

    • @rizkiananda352 · 3 months ago

      How do you do it? I failed as a non-coder. How do I extract tables as well?

  • @kineticraft6977 · 28 days ago +1

    If anyone is having trouble where it runs but never actually places the new files in the output directory: if you followed the GitHub example command, the "--min_length 10000" option is what's doing it. It simply goes through the whole process and then decides the document is too short. Either reduce that number to a much lower character count or remove the option entirely. It took me 30 minutes of hunting through TMP folders for the files before I finally figured it out.
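A sketch of the workaround described above; the input/output paths, worker count, and lowered value are illustrative, not from the video:

```shell
# The example command skips any document shorter than --min_length characters,
# which can silently leave the output directory empty. Lower the threshold:
marker ./pdfs ./markdown --workers 4 --min_length 100
# or omit the option entirely:
marker ./pdfs ./markdown --workers 4
```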

  • @greymooses · 3 months ago +2

    If you do make a video about scraping data, please go over content that requires javascript to load. It’s been difficult to find a clear guide specifically for capturing this data for LLM usage. I loved this video, thank you!

    • @engineerprompt · 3 months ago

      I haven't looked into it before, so let me see what I can come up with.

  • @gregsLyrics · 3 months ago +1

    Brilliant vid - it is a godsend. OCRing a PDF is just not workable, period. I gave up on attempting to parse PDFs. This new information is amazing and I am once again excited.

  • @Alvaro-cs7zs · 3 months ago +3

    Thanks for the video. I tried it, but it made a bit of a mess of the tables in my PDF. Not working really well. The rest of the text gets resolved properly, but tables not really; only some of them come out nicely structured.

  • @ai-whisperer · 3 months ago +6

    Thanks for covering Marker, this is brilliant!!
    Would love to see batch processing of PDFs using Marker.
    Also, for the web scraping projects, can we include one where we scrape apartment rental data (that keeps changing/evolving) from websites like Craigslist, store it persistently in a vector store or DB, and then run queries on that info?

  • @synthclub · 3 months ago

    Amazing, can't wait to test it. Converting math from PDF to LaTeX used to cost thousands of dollars; now it's free.

  • @VenkatesanVenkat-fd4hg · 3 months ago +2

    Great video, waiting for the scraping video content....

  • @fabriciot4166 · 3 months ago +1

    Great contribution, thank you very much!

  • @cristian_palau · 3 months ago +1

    Thank you for sharing these excellent tools!

  • @stanTrX · 3 days ago

    Tabula-py or this? Which is better when it comes to extracting tables?

  • @leomeza9396 · 2 months ago

    Awesome! Thanks for sharing this!

  • @ilianos · 3 months ago +4

    🎯 Key points for quick navigation:
    00:00 *📄 Introduction and Challenges with PDFs*
    - Introduction to the video topic,
    - Challenges of extracting data from PDFs for LLM applications,
    - Different elements and structures in PDFs complicating extraction.
    01:09 *🔧 Existing Approaches to PDF Conversion*
    - Overview of methods to convert PDFs to plain text,
    - Use of machine learning models and OCR for extraction,
    - Comparison of PDFs to Markdown for ease of processing.
    02:17 *🛠️ Introduction to Marker Tool*
    - Introduction to the Marker tool for converting PDFs to Markdown,
    - Comparison with other tools like Nougat,
    - Performance and accuracy benefits of using Marker.
    03:36 *📚 Features of Marker*
    - Supported document types and languages,
    - Removal of headers, footers, and artifacts,
    - Formatting of tables and code blocks, image extraction,
    - Limitations and operational capabilities on different systems.
    05:00 *📝 Licensing and Limitations*
    - Licensing terms based on organizational revenue,
    - Limitations in converting equations and formatting tables,
    - Discussion on practical limitations noticed in usage.
    05:54 *💻 Setting Up and Installing Marker*
    - Steps to create a virtual environment for Marker,
    - Instructions for installing PyTorch based on OS,
    - Detailed steps to install Marker and optional OCR package.
    07:31 *🧪 Example Conversion Process*
    - Steps to convert a single PDF file to Markdown,
    - Explanation of command parameters and process flow,
    - Initial example with a scientific paper.
    10:10 *📊 Reviewing Conversion Output*
    - Review of the output structure and accuracy,
    - Metadata extraction and image handling,
    - Preview of converted Markdown and comparison with the original PDF.
    12:13 *📜 Additional Examples and Output Review*
    - Example with Andrew Ng’s CV and another paper,
    - Review of the extracted content and any noticed issues,
    - Importance of secondary post-processing for accuracy.
    13:34 *🎥 Conclusion and Future Content*
    - Summary of Marker tool’s utility and performance,
    - Announcement of future videos on related topics,
    - Invitation to subscribe for more content.
    Made with HARPA AI

  • @gauravkumargupta9622 · 12 days ago

    I have to mark the different regions in a scanned question-paper PDF (subjective, or MCQ with sub-questions). Can it do this accurately?

  • @mariongully4087 · 3 months ago +2

    Very interesting. Once you have converted the PDF file, how can we feed all this info to the vector database for RAG?

    • @engineerprompt · 3 months ago +2

      You would do similar chunking as with text files. I will put together a video on it.
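The "similar chunking" mentioned in the reply can be sketched as a plain fixed-size splitter with overlap; a minimal sketch only, with the function name, sizes, and input text illustrative (real pipelines often split on Markdown headings instead):

```python
def chunk_markdown(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split converted Markdown into overlapping chunks for embedding."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        # Step forward, keeping `overlap` characters of context between chunks.
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and written to the vector database,
# ideally alongside metadata such as the source PDF and page number.
chunks = chunk_markdown("# Paper Title\n" + "Converted PDF text. " * 200)
```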

  • @someoneelse4195 · 3 months ago +1

    Comparison with unstructured?

  • @tedp9146 · 3 months ago +3

    I actually tried it out today before seeing this video, and sadly it produced quite messed-up results for a not-so-complicated document. Some sections and tables were parsed perfectly, but even with just a few scrambled parts the results are useless :/

  • @tetraocean · 5 days ago

    Can a chatbot send images along with this data?
    Normally embedding is text-only, but what about images?

  • @samarthmath2952 · 3 months ago +1

    I am getting a float error. I have installed the CUDA version. Any suggestions?

  • @mzimmerman1988 · 3 months ago +1

    thanks for sharing.

  • @nuluai · 3 months ago

    Thank you so much! great job !!

  • @paulmiller591 · 3 months ago

    Perfect timing thanks!

  • @Nick_With_A_Stick · 3 months ago +2

    Marker only used 4 GB of VRAM on an A6000; can you increase the batch size and get some more speed, or is it stuck at that speed regardless of the batch size? 100 seconds per page is a huge improvement over Nougat, but still very slow 😢
    I love the video though. I once struggled with this for hours, making a custom script to scrape a single PDF. Definitely gonna use Marker sometime soon.

    • @engineerprompt · 3 months ago +2

      I think you want to run multiple files in a batch; that will give you the best performance. I also came across MegaParse (github.com/QuivrHQ/MegaParse), which is built on top of LlamaParse. That is not 100% local, though.

    • @Nick_With_A_Stick · 3 months ago

      @@engineerprompt awesome!!!!! Thank you❤️❤️❤️❤️❤️!!!!!!!!!!!

  • @anandu06 · 1 month ago

    Have you tried scanned documents instead of digital PDFs? And handwritten text as well?

  • @jamalnuh8565 · 3 months ago

    I always like your content. Thank you, bro.

  • @drmetroyt · 3 months ago +2

    This is really helpful for preparing PDFs before adding them to RAG. But is there any way to install this Marker application as a Docker container?

    • @engineerprompt · 3 months ago

      Yes, though I am not sure about that.

    • @drmetroyt · 3 months ago

      @@engineerprompt After some research I found a Docker image for Marker by dibz15 on Docker Hub, but I don't have any idea how to set up the container. A video on it would be helpful.

  • @dezigns333 · 3 months ago

    If you're going to use OCR, then just use images of each page. Any LLM with vision can deal with it.

  • @DavidJNowak · 3 months ago

    What I want is for LLMs to cook my next meal.

  • @DanielHomeImprovement · 3 months ago

    amazing video thx so much

  • @chauyuhin5013 · 1 month ago

    Are there ways I can also convert comments/annotations into Markdown format?

  • @neurojitsu · 11 days ago

    Would this work for annotated PDFs? Would it be advantageous to use Marker for NotebookLM and Anthropic, or is it not necessary?

    • @engineerprompt · 10 days ago +1

      If you have PDFs, I would suggest sending them to Gemini Flash to convert them into Markdown and then feeding that to Anthropic.

    • @neurojitsu · 10 days ago

      @@engineerprompt thank you

  • @jimlynch9390 · 3 months ago

    It showed some promise, except it flaked out with an overflow error. On the pages it did seem to convert, it scrambled the data and lost some of it. These pages are primarily transactions in a table, with columns separated by whitespace. The pages with plain text worked a bit better.

    • @engineerprompt · 3 months ago

      Interesting. I think it has some limitations; I hope the creator continues working on it. For the tables case, it might be good to use a multimodal model.

  • @JanBadertscher · 3 months ago

    The real question is how it compares to the current SOTA, "unstructured".

  • @MdAffan-ux2kf · 3 months ago

    I am getting this error while running the marker_single..... command:
    def MODEL_DTYPE(self) -> torch.dtype:
    AttributeError: module 'torch' has no attribute 'dtype'
    Please help me resolve this.

  • @ignaciopincheira23 · 3 months ago

    Could you add the description of each image to the text, with the aim of having a single Markdown file similar to the original PDF? That way it would be possible to pass the language model a single file that is readable and maintains the content.

    • @engineerprompt · 3 months ago

      Yes, that is possible. I am going to create a video on multi-modal RAG, which will cover this topic.
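One way to realize the single-file idea above is to swap each image link in the converted Markdown for a text description. A minimal sketch: `inline_image_descriptions` and the `captions` dict are hypothetical names, the captions are assumed to come from a vision model, and the regex assumes standard `![alt](path)` Markdown image syntax.

```python
import re

def inline_image_descriptions(markdown: str, captions: dict[str, str]) -> str:
    """Replace Markdown image links with text descriptions so the whole
    document stays a single, model-readable file."""
    def _sub(match: re.Match) -> str:
        path = match.group(2)
        desc = captions.get(path, "no description available")
        return f"[Image: {desc}]"
    # Matches ![alt](path)-style image links.
    return re.sub(r"!\[([^\]]*)\]\(([^)]+)\)", _sub, markdown)

md = "Intro text.\n\n![](figure_1.png)\n\nMore text."
captions = {"figure_1.png": "bar chart of model accuracy by language"}
merged = inline_image_descriptions(md, captions)
```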

  • @tanmeshnm · 3 months ago

    The main challenge I faced when using nlm-ingestor was parsing SVG images from PDFs. I'm curious whether it will handle this case well.

    • @engineerprompt · 3 months ago

      Not sure; you might want to look into Unstructured.io as well.

  • @baitfishing6374 · 1 month ago

    Does Marker require a GPU installed on the system?

  • @MrSuntask · 3 months ago

    Looks like a great tool

  • @Lowlightu · 3 months ago +10

    Is it better than Unstructured?

    • @engineerprompt · 3 months ago +3

      It really depends on the use case and on the ability to run this completely locally.

    • @AdarshMadrecha · 3 months ago

      Can you please share the GitHub URL of the solution you are talking about?

  • @intellect5124 · 3 months ago

    I would be interested to learn how to parse data from website URLs and also query the parsed data using open-source methods. We could call it a web article/news research tool.

  • @AaronALAI · 3 months ago +1

    Really amazing project, testing today!!

    • @engineerprompt · 3 months ago

      Would love to hear how your experience with it goes.

    • @puneetbajaj786 · 3 months ago +1

      @@engineerprompt Bro, it's not giving good output when there are 3 columns on a page. Can we do something about this?

  • @hnb13686 · 3 months ago +21

    This is not completely open source, so don't report it as such, with the clarification only coming midway through the vid.

    • @sobeck6900 · 3 months ago

      What do you mean, it's not completely open source?

    • @thowes · 3 months ago

      @@sobeck6900 If there are restrictions on who can use the software (e.g., no commercial use), then it is not open source. Check the OSI definition of open source or the FSF definition of free software.

  • @navinlikenoother · 3 months ago +3

    Hi, great video. Can you also explore ways to extract information from PowerPoint and MS Word docs? I'm asking because most corporate information is stored in these formats.

    • @engineerprompt · 3 months ago +3

      Check out this library; it uses LlamaParse, but I think it will do what you are looking for. Will create a video on it if there is interest:
      github.com/QuivrHQ/MegaParse

  • @danielpicassomunoz2752 · 3 months ago

    Anything to convert to EPUB, getting rid of headers and footers?

  • @chjpiu · 3 months ago

    Hi, do you know how much RAM is required for this application? I tried it, but it said it was out of memory. My laptop has 16 GB of RAM without an Nvidia GPU. Thanks a lot.

  • @mohsenghafari7652 · 3 months ago

    thanks

  • @fortran57 · 3 months ago

    Great content

  • @Larsbor · 3 months ago

    I am uncertain about Marker. It is for scientific use, but it says it removes footers, and that is where you normally put your sources and appendix links... so?!

  • @intellect5124 · 3 months ago

    Very informative video. Could you try to build a system that runs on a large number of PDFs, converts them to .md files, and then lets an LLM query them or generate specific prompts, with a UI?

    • @engineerprompt · 3 months ago

      Yeah, I am thinking about it. Will post something.

  • @семенантонов-ч7ф · 3 months ago +1

    Does it support PDF files with AMS-TeX / AMS-LaTeX math notation (amsmath)?

    • @engineerprompt · 3 months ago +1

      I am not sure; if you have a reference PDF, I can try it for you.

    • @семенантонов-ч7ф · 3 months ago

      @@engineerprompt This PDF, for example, contains a huge amount of amsmath notation: www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20III.pdf

  • @maxlgemeinderat9202 · 3 months ago +1

    Do you think this could be better than Unstructured.io?

    • @engineerprompt · 3 months ago

      I think this gives you some of the features that are in the premium version of Unstructured.io.

  • @ritikeshchoube1748 · 3 months ago

    I have a PDF of scanned documents; it is basically images. So, using Tesseract, I am converting it into images and then reading it. But my PDF also has some tables, and when I generate embeddings of this and pass them to an LLM, it is unable to answer the questions I ask about the tables...

    • @engineerprompt · 3 months ago

      My recommendation would be to run them through a multimodal model like Claude Haiku, if cost is not a big concern. You can use that directly to answer questions from scanned docs. Here is a video on how to do that: ruclips.net/video/a5OW5UAyC3E/видео.html

  • @publicsectordirect982 · 3 months ago +2

    A very tidy tool

  • @MalikZurkiyeh · 3 months ago

    When I try to convert an entire folder of PDFs using the command "marker /data/inputs /data/formatted_inputs --workers 3", I get this error: "ImportError: libGL.so.1: cannot open shared object file: No such file or directory". Any ideas on how to fix it?
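Not covered in the video, but `libGL.so.1` is the system OpenGL runtime that OpenCV-style imaging dependencies link against, so the usual fix on a headless Debian/Ubuntu machine (an assumption about the OS) is to install the Mesa GL runtime package:

```shell
# Installs the shared library the ImportError complains about.
sudo apt-get update
sudo apt-get install -y libgl1
# On some distributions the package is named libgl1-mesa-glx instead.
```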

  • @mohsenghafari7652 · 3 months ago

    Hello. Thank you for your efforts and the very good training. Does it work in other languages?

    • @engineerprompt · 3 months ago

      According to the repo creator, it should.

  • @MeinDeutschkurs · 3 months ago

    Yeah! 👏👏👏👏👏👏

  • @drmartinbartos · 3 months ago

    Around 7 minutes in, having installed a conda environment, you select pip, not conda, when installing PyTorch. Any reason why? If there's a working conda option, doesn't it make sense to keep using conda and only use pip when you absolutely have to? Just wondering... (Thanks for the video btw; I had just been wondering about effective ways of making such content reliably available to RAG, and the video is super useful.)

    • @engineerprompt · 3 months ago

      I usually use pip because it has most Python packages available; conda is somewhat limited in the packages it offers. conda would also work in this case, but it's more of a habit of mine at this point :)

  • @poisonza · 2 months ago

    Cool

  • @supercker · 2 months ago

    "All languages" perhaps means the various human languages we speak.

  • @denijane89 · 3 months ago

    The dumb part of Python tools is that, OK, you'll install Marker, but it will want python=3.10, while langchain and crewai work with python=3.11, and as a result you cannot automate the process because each tool resides in its own conda env. So yeah, I like what I saw, it really looks good, but I'll have to create the Markdown separately from all the other stuff I have, and that's annoying.
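One workaround for the version clash described above is to keep Marker in its own environment and shell out to it from the main pipeline rather than importing it. A sketch using `conda run`; the env name, Python versions, and file names are illustrative:

```shell
# Dedicated env pinned to the Python version Marker wants.
conda create -n marker-env python=3.10 -y
conda run -n marker-env pip install marker-pdf

# From the python=3.11 env that hosts langchain/crewai, invoke Marker as a
# subprocess instead of importing it:
conda run -n marker-env marker_single paper.pdf out_dir --langs English
```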

  • @Sri_Harsha_Electronics_Guthik · 2 months ago

    I have been dealing with this PDF garbage for 10 years. This is a good thing, but my only question is: is this better than Adobe Acrobat?

  • @anuraglahon · 3 months ago

    What if we want to do it for many PDFs at once and then build a chatbot?

    • @engineerprompt · 3 months ago +3

      Yes, there is a batch version. I am going to create an end-to-end tutorial on it.

  • @samcavalera9489 · 3 months ago

    Thanks so much, bro 🙏🙏
    My question is: when your PDF has some images inside and you want to embed the PDF for RAG, how can you pass the info in the images to the vector DB? Is there any way to do multi-modal RAG? In the case of scientific papers, those images contain a significant amount of useful information.
    Many thanks in advance 🙏

    • @engineerprompt · 3 months ago +1

      If you have images, you want to run them through a vision model (such as Llava) to generate their text descriptions, then embed each description in the vector store along with the metadata. You can then use it directly with RAG.

    • @samcavalera9489 · 3 months ago

      @@engineerprompt Thanks, bro, for your guidance! For us in academia, using RAG on scientific papers is fruitless without incorporating the figures, as most of the time figures contain far more information than their mere descriptions in the paper. I will give your suggestion a try and see how I can resolve this problem. Thanks again, bro!
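The vision-model approach suggested in this thread can be sketched end to end. Everything here is a stub to show the shape of the data: `describe_image` stands in for a vision model such as Llava, and `embed` stands in for a real embedding model; the names are hypothetical, not from the video.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    text: str              # vision-model description; this is what gets embedded
    embedding: list[float]
    metadata: dict         # source PDF, page, image path for retrieval

def describe_image(path: str) -> str:
    # Stub: a real pipeline would call a vision model (e.g. Llava) here.
    return f"figure extracted from {path}"

def embed(text: str) -> list[float]:
    # Stub: a real pipeline would call an embedding model here.
    return [float(ord(c)) for c in text[:16]]

def index_images(image_paths: list[str], source_pdf: str) -> list[ImageRecord]:
    records = []
    for page, path in enumerate(image_paths, start=1):
        caption = describe_image(path)
        records.append(ImageRecord(
            text=caption,
            embedding=embed(caption),
            metadata={"source": source_pdf, "page": page, "image": path},
        ))
    return records
```

In a real pipeline the records would be written to the vector store and retrieved alongside the ordinary text chunks.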

  • @anandgs · 3 months ago

    Thank you very much!!! I was looking for something like this for a long time. I work for a large bank but with a very small budget for my project. Due to the budget crunch we cannot afford to buy third-party tools, so this sounds like a perfect fit, but since there is a $5M revenue limit we may not qualify to use this for free. Would you suggest going with Nougat, or do you have a better alternative for my use case? Really appreciate your content!

    • @engineerprompt · 3 months ago

      Nougat can be an option, or look into Unstructured.io. I would also recommend looking into Claude or GPT-4o with vision if data privacy is not a big issue; some of these proprietary tools have good data privacy based on their TOS.

    • @anandgs · 3 months ago

      @@engineerprompt Thanks for the prompt response!!

  • @drmetroyt · 3 months ago

    Docker version please

  • @christopherchilton-smith6482 · 3 months ago

    I wonder how far away we are from arbitrarily high accuracy on tasks like this.

    • @engineerprompt · 3 months ago +1

      To be honest, when it comes to voice models, open source models are lagging behind!

  • @anandgs · 3 months ago

    I had another question, are you also on Udemy?

    • @engineerprompt · 3 months ago +1

      I am not on Udemy, but I am just launching my RAG course here: prompt-s-site.thinkific.com/courses/rag

  • @gorripotinikhileswar7087 · 3 months ago

    Hey, can we use this offline?

  • @Beetgrape · 3 months ago

    Dude, I want to deploy this on Hugging Face as an API. Make a tutorial on this.

    • @engineerprompt · 3 months ago

      A deployment series is coming soon; it will give you an idea of how to do this.

  • @prodigroup · 3 months ago

    👑

  • @Larsbor · 3 months ago

    OK, as usual, the lack of a GUI destroys it for me... 😢

    • @trusterzero6399 · 3 months ago

      Grow out of that and a world will open up

  • @only_learn6095 · 3 months ago

    GPL 3.0? No thanks.

  • @Sneakylamah · 3 months ago

    On my M1 Mac I have tried this out, installing
    dependencies = [
        "torch>=2.3.0",
        "torchvision>=0.18.0",
        "torchaudio>=2.3.0",
        "marker-pdf>=0.2.13",
    ]
    Then when I try out just a single PDF, it fails on a simple Python import:
    marker_single 26572517.pdf OUTPUT --max_pages 2 --langs English
    Traceback (most recent call last):
      File "marker/.venv/bin/marker_single", line 5, in <module>
        from convert_single import main
      File "marker/.venv/lib/python3.12/site-packages/convert_single.py", line 5, in <module>
        from marker.convert import convert_single_pdf
    ModuleNotFoundError: No module named 'marker.convert'
    Anyone getting the same?
    Tried with Python 3.10 and 3.12.

    • @engineerprompt · 3 months ago

      Are you using a virtual environment? Use this command:
      python -m pip install marker-pdf
      This will ensure it installs the package into the current virtual env.

    • @Sneakylamah · 3 months ago

      I'm using Rye, and yes, it is there in my virtual env.

    • @Sneakylamah · 3 months ago

      The marker scripts are there to be called.

    • @Sneakylamah · 3 months ago

      @@engineerprompt OK, the problem seems to be with the way Rye handled the imports; sorry about that. Creating the virtual env normally, I can run the commands. Thanks for the video, I have been looking for how to do this for a long time.

  • @themorethemerrier281 · 3 months ago +1

    This sounds very interesting, but I will need to learn some Python environment basics before I can put it to the test. A solution like this could help me a lot!

  • @manjula_1 · 3 months ago

    This is very useful! Now, in the next video, show how to fine-tune a model (with a long context length, like "Phi-3-mini-128k-instruct") with this Markdown data 😍😍

  • @Jayden-qq1ei · 3 months ago

    Markdown from PDFs for LLMs 😁

  • @thunderwh · 3 months ago

    Fantastic, thanks!