Instant Context: PDFs, Audio & More | SystemSculpt DevLog 2/365

  • Published: 5 Feb 2025

Comments • 7

  • @tubaguy0 · 24 days ago +1

    I would prefer a customizable limit for context files. Gemini has a one-to-two-million-token context limit, so it should be able to handle thousands of files.
    You’re doing great work, keep it up!

  • @covertassassin1885 · 27 days ago +1

    As you noted in the video, the notification for the 100-file limit was hard to catch. This might lead someone to think the model now has the context of all 1,000 files instead of just the 100-file limit.
    Maybe make that alert stay up much longer than the others so the user sees it.

  • @MarkHutchinsonProf · 1 month ago +1

    Fantastic. It would be great to establish a light graphRAG in the system to augment the identification of links and themes.

  • @marcelvanmarrewijk9376 · 1 month ago

    Nice, keep us posted. Love to try this out. How does this compare to the Obsidian plugin Copilot, where you index your whole vault with an embedding model (vector-based), after which you can ask specific questions about all of your notes?

  • @jeffk8900 · 1 month ago +1

    Might be nice to have a token limit also, for just the use case of small files that you mentioned.
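A token limit like the one suggested here could be enforced with a greedy selection over candidate context files. This is a hedged sketch, not SystemSculpt's actual implementation: the `chars // 4` token estimate is a common rough heuristic for English text, and the function name is illustrative.

```python
def select_within_budget(files, max_tokens):
    """Greedily pick files until the estimated token budget is spent.

    `files` is a list of (name, text) pairs. Token cost is roughly
    estimated as len(text) // 4 (a common heuristic, not an exact count).
    Returns the chosen file names and the estimated tokens used.
    """
    chosen, used = [], 0
    for name, text in files:
        cost = max(1, len(text) // 4)
        if used + cost > max_tokens:
            continue  # this file would exceed the budget; skip it
        chosen.append(name)
        used += cost
    return chosen, used
```

With a budget of 250 estimated tokens, two small files fit while a large one is skipped, which matches the "many small files" use case in the comment.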

  • @ahmaddaneshamooz3992 · 1 month ago

    👍

  • @davidtorrens8647 · 1 month ago

    Very impressive. Re limits: isn't it hard to have a general way to set limits? After all, there are limits imposed by the model one is using, either its token limit or the cost of tokens, and different people will have a different approach to that. For myself, I remember once trying a one-off task where I wanted to analyse a set of insurance PDF documents to see if they complied with some guidelines for what sort of insurance our type of charity should have. On that occasion I did not care about the cost, as it was a one-off. (The cost was only about 30c anyway.) So I would say it needs to be flexible but easy to use.
    Another thing I am thinking of trying in Obsidian, somehow aided by a local LLM with Ollama, is to produce a master notes index. I have a sort of system working using front matter, tags, and Dataview, but it would be lovely if there were a way to have auto-indexing that could just run in the background based on the contents of notes.
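The background auto-indexing idea could be prototyped against a locally running Ollama instance. A minimal sketch, with assumptions labeled: it uses Ollama's default `http://localhost:11434/api/generate` endpoint with `stream: false` (part of Ollama's documented REST API), assumes a `llama3` model is installed, and the function names and prompt wording are illustrative, not any existing plugin's API.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def index_prompt(note_name, note_text, max_chars=2000):
    """Build a prompt asking the model for index keywords for one note.

    The note body is truncated to max_chars to keep requests small.
    """
    return (
        "List 3-5 topical keywords, comma-separated, for this note "
        f"titled '{note_name}':\n\n{note_text[:max_chars]}"
    )

def keywords_for_note(note_name, note_text, model="llama3"):
    """Ask a local Ollama model for keywords (requires Ollama to be running)."""
    payload = json.dumps({
        "model": model,
        "prompt": index_prompt(note_name, note_text),
        "stream": False,  # return one complete JSON reply instead of a stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["response"]
    # Split the comma-separated reply into clean keyword strings
    return [k.strip() for k in reply.split(",") if k.strip()]
```

A background job could walk the vault, call `keywords_for_note` on changed files, and write the results into a master index note or each note's front matter.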