AI Knowing My Entire Codebase Resulted in a 20x Productivity Increase

  • Published: 23 Aug 2024

Comments • 184

  • @AlexX-xtimes
    @AlexX-xtimes 1 month ago +18

    Great job. It is exactly what I was working on. Next step: implement RAG to handle more complex applications.

    • @MervinPraison
      @MervinPraison  1 month ago +7

      If you can ingest your whole code base into AI, you don’t need RAG

    • @daniiielsan
      @daniiielsan 1 month ago +1

      @@MervinPraison The 2M token limit is still limiting. What happens with the vast majority of repositories that have more than 100M tokens?

    • @MervinPraison
      @MervinPraison  1 month ago +7

      Agreed, if it's higher than 2M tokens then RAG is required. I will think of a way to implement it. It also has the benefit of keeping the cost low.
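The rule of thumb in this exchange (full-codebase ingestion up to the context limit, retrieval beyond it) can be sketched as a simple check. The 2M default is Gemini's advertised limit as discussed in the thread, not a property of every model:

```python
def needs_rag(codebase_tokens: int, context_limit: int = 2_000_000) -> bool:
    """Full-codebase ingestion fits up to the model's context limit;
    beyond that, retrieval (RAG) is required."""
    return codebase_tokens > context_limit
```

So a 100M-token repository clearly exceeds the limit, while a 40K-token project does not.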

    • @Alf-Dee
    @Alf-Dee 1 month ago +3

      I am exploring ways to bypass the token limits by structuring the code architecture in a modular way from the beginning.
      For example, I sometimes aggregate just public method signatures, with one-liner method explanations, so I don’t waste tokens.
      This way the LLM has enough context of all the classes it needs to know about. Like treating my own code like an API.
      (For context, I am a Unity C# XR/gamedev)
      What do you think? Is this method already used in some tool?
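A minimal sketch of this signature-aggregation idea (not, as far as I know, an existing tool; Python's `ast` module stands in for the commenter's C# setup): collapse a module to its public signatures plus the first docstring line, treating the code like an API surface.

```python
import ast

def summarize_module(source: str) -> str:
    """Collapse a module to public signatures plus one-line docstrings,
    so an LLM gets an API-style overview instead of full bodies."""
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if node.name.startswith("_"):
                continue  # skip private helpers to save tokens
            args = ", ".join(a.arg for a in node.args.args)
            doc = (ast.get_docstring(node) or "").splitlines()
            summary = doc[0] if doc else ""
            lines.append(f"def {node.name}({args})  # {summary}")
    return "\n".join(lines)
```

Feeding only this summary to the model trades detail for a much smaller token footprint, which is exactly the modularity trade-off described above.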

    • @Soniboy84
      @Soniboy84 1 month ago +1

      ​@@MervinPraison A large context window is still not bulletproof. LLMs tend to lose detail in the middle of the context window, which can be pretty detrimental in a codebase if it misses something crucial. It's good for simple CRUD stuff, but isn't good enough yet for production-ready big applications.

  • @MarcusNeufeldt
    @MarcusNeufeldt 1 month ago +8

    This is really the way to go, more content in that direction pls 🙏🙏

  • @Leto2ndAtreides
    @Leto2ndAtreides 1 month ago +1

    My initial feedback from trying it (with a larger project, 900K tokens) is:
    1. It may help to have multiple profiles in the settings file. Like, iOS only folders, or Android only folders, docs folders (as per my own case)... Preferably the UI would have an option to switch between these profiles.
    2. The settings file should have an allow_only option, as opposed to just an ignore option - that way, we could just white list a few directories.
    3. A dropdown or autocomplete for most of the current OpenAI, Gemini, Anthropic, etc. models would simplify selection... May as well let the user enter their own preference. But for common cases, a dropdown would definitely be better.
    Anyway, cool project... Very promising idea.
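Point 2 above (an allow-list alongside the ignore list) could look something like this sketch; the `allow_only` name and its semantics are hypothetical suggestions, not the tool's actual settings:

```python
import fnmatch
import os

def collect_files(root: str, allow_only=None, ignore=None):
    """Walk `root` and keep a file only if it matches a whitelist pattern
    (when `allow_only` is given) and matches no ignore pattern."""
    allow_only = allow_only or ["*"]   # no whitelist means allow everything
    ignore = ignore or []
    kept = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if not any(fnmatch.fnmatch(rel, p) for p in allow_only):
                continue
            if any(fnmatch.fnmatch(rel, p) for p in ignore):
                continue
            kept.append(rel)
    return sorted(kept)
```

The suggested profiles could then just be named `(allow_only, ignore)` pairs, e.g. an "iOS" profile with `allow_only=["ios/*"]`.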

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Thank you very much for this feedback, much appreciated.
      I will look into each of these points and try to get them included.

    • @JayS.-mm3qr
      @JayS.-mm3qr 1 month ago

      How did you get the program to work? The requirements give me all kinds of conflicts. Cannot get it to run.

  • @judeboka6094
    @judeboka6094 1 month ago +2

    This is really great. It will save so much time. It would be nice to have another video focused on using local models. Great job 👍

  • @d.d.z.
    @d.d.z. 1 month ago +1

    Wow Mervin. Absolutely outstanding.

  • @chetanreddy6128
    @chetanreddy6128 1 month ago

    Great service to the community, man. It's just amazing: now developers can save tons of time and be more productive at the same time!

  • @ETdoFresh
    @ETdoFresh 1 month ago +1

    I am also working on something similar! For this kind of application I always thought maybe a graph of functions, variables, and classes would be a good data structure to pass in as context, but harder to implement in practice. Keep up the great work!
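A toy version of that graph idea, assuming a Python codebase: map each top-level function to the names it calls, producing a structure that could be serialized as compact context instead of full source.

```python
import ast
from collections import defaultdict

def call_graph(source: str) -> dict:
    """Map each top-level function to the names it calls: a tiny version
    of the functions/variables/classes graph described above."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                # Only direct calls to simple names; attributes/methods
                # would need extra handling in a real implementation.
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    graph[node.name].add(sub.func.id)
    return {k: sorted(v) for k, v in graph.items()}
```

As the commenter notes, making this robust (methods, imports, dynamic dispatch) is where the real implementation difficulty lies.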

  • @stonedoubt
    @stonedoubt 1 month ago +2

    Mervin! You are a beast!

  • @alexnimo83
    @alexnimo83 1 month ago +1

    Amazing, and even better than some of the paid options...
    Adding an agentic framework for more complex tasks would be a nice addition...

  • @Leto2ndAtreides
    @Leto2ndAtreides 1 month ago +1

    For around 40K tokens, a Gemini 1.5 Flash API call would cost > $0.02... Which is fine.

  • @ChicagoJ351
    @ChicagoJ351 15 days ago

    I just glanced through the video. I see the "knowing my entire code base" part, but don't see the "20x productivity increase" part. The more I watch AI coding videos, the more it seems it's mostly beginner coders. I could be wrong, but that is how it seems.

  • @Ahmed-Sabrie
    @Ahmed-Sabrie 1 month ago

    Very well done, mate! Really astonishing!

  • @malikrumi1206
    @malikrumi1206 1 month ago

    I like the concept, but I do have some questions. 1) You did a token count before choosing the model. Doesn’t this count vary widely, depending on the model and its tokenizer? Don’t some models allow for the use of different tokenizers? 2) In one of the comments, you said ingesting the entire codebase meant not needing RAG. If your AI needs are *only* within your codebase, sure, that might be right. But if you did all this work with Python 3.11, what are you going to do when Python 3.12 comes out?
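On point 1: yes, token counts are tokenizer-specific, so any pre-flight count is only an estimate unless it uses the target model's own tokenizer (e.g. tiktoken for OpenAI models). Even the crude stdlib-only estimators below disagree with each other, which illustrates the point:

```python
def count_tokens_by_words(text: str) -> int:
    # One token per whitespace-separated word; undercounts for code.
    return len(text.split())

def count_tokens_by_chars(text: str, chars_per_token: float = 4.0) -> int:
    # Common rule of thumb for BPE tokenizers: roughly 4 characters
    # per token of English text; code usually tokenizes less densely.
    return max(1, round(len(text) / chars_per_token))
```

Real tokenizers differ again between model families, so a count taken against one model's tokenizer can be noticeably off for another.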

  • @andrewsilber
    @andrewsilber 1 month ago

    Looks like a great start! I think integrating RAG and maybe graphRAG would make it even more useful.
    Also, it would be good if it could read all the git PR descriptions so that it can have more concept of which files need to be modified to implement certain things. For example, if I have a game codebase and I want to add a new weapon, that might involve a number of different systems: inventory, UI, gameplay mechanics, level design, etc.
    It would be great if a newcomer to the game dev team could use that rather than spend a huge amount of time ramping up on a big complex codebase to accomplish their JIRA tasks.

  • @redbaron3555
    @redbaron3555 1 month ago +3

    Is it able to improve complex code as well as aider?

  • @adamchan4403
    @adamchan4403 1 month ago

    Super interesting; I've been waiting a long time for this. Please share more on this topic and a real-usage tutorial.

  • @Augmented_AI
    @Augmented_AI 1 month ago

    Please do a video on long term memory.

  • @figs3284
    @figs3284 1 month ago +3

    Looks good man. I'll give it a try tonight.

  • @andyloren4826
    @andyloren4826 3 days ago

    Why does it not support Dart files? I cannot see any of them. You said how we can exclude files, but not how to include them. Do you know how to include Dart files?

  • @sahajamitrawat
    @sahajamitrawat 1 month ago

    Thanks for sharing. Looks promising.
    I also use chainlit UI for my personal projects :-)

  • @Techonsapevole
    @Techonsapevole 1 month ago +1

    Impressive. Next step: automatically fix GitHub issues.

  • @paulmiller591
    @paulmiller591 1 month ago

    This is cool. Please do more about this.

  • @dDesirie
    @dDesirie 1 month ago

    Great job building this all by yourself! I'm currently using the Cursor IDE and the Sourcegraph Cody extension. They both support many models as well as codebase indexing for context. I wonder what the difference between these is?

  • @robertstoica4003
    @robertstoica4003 1 month ago

    Sure, let's just throw an arbitrary 20x developer now because the usual 10x is not hype enough anymore.

  • @florentromanet5439
    @florentromanet5439 1 month ago +3

    Awesome 😮

  • @JayS.-mm3qr
    @JayS.-mm3qr 1 month ago

    This sounds great, but oh my god, I have never had such problems with dependencies. Couldn't get it to work in Colab or locally. Tried using Poetry to install the requirements. Specifically, right now the resolver can't solve a conflict between mkdocs-material and mkdocs-jupyter. It is unsolvable. Been trying to get this to run for DAYS. Please god give me an answer to resolve the conflicts.

  • @henryinskip3085
    @henryinskip3085 1 month ago

    How does this compare with Cursor?

  • @iredtm4812
    @iredtm4812 26 days ago

    Are you using Chainlit to build that project?

  • @NobleVisionINC
    @NobleVisionINC 1 month ago +4

    Does the code ask you to update the scripts? Do you still have to cut/paste code to make changes?

    • @MervinPraison
      @MervinPraison  1 month ago

      That feature is not implemented yet; it will probably be the next upcoming feature.

  • @aldoyh
    @aldoyh 28 days ago +1

    I was trying it for a few days. Now when I log in and then submit a prompt, it goes back to login?! Any advice?

    • @MervinPraison
      @MervinPraison  28 days ago +1

      Please use the username and password: admin and admin

    • @aldoyh
      @aldoyh 28 days ago

      @@MervinPraison Worked like a charm! Now the tree list isn't complete? I am using ollama/mistral.

  • @vasvalstan
    @vasvalstan 1 month ago

    How is this different from Cursor? Has anyone tried both?

  • @mikew2883
    @mikew2883 1 month ago +1

    Pretty awesome! 👍

  • @brulsmurf
    @brulsmurf 1 month ago

    Only 20x? My brother, I achieved a 40x increase in productivity with AI knowing my codebase.

  • @vivanshreyas5857
    @vivanshreyas5857 1 month ago

    This is amazing!!!!!!!

  • @drmarinucci
    @drmarinucci 1 month ago +1

    Thanks!

  • @Nice-rb9vd
    @Nice-rb9vd 1 month ago

    Hi Mervin! Thanks very much for this! Does this also work with Context Caching for Gemini Pro 1.5 and Flash? It is very cheap and made for this type of thing. Thanks again!!!

  • @souvickdas5564
    @souvickdas5564 15 days ago

    How do I use the Command R+ model?

  • @avencadigital3527
    @avencadigital3527 1 month ago

    Hey Mervin! Thanks for everything! You're amazing! For some reason I'm not able to set the Gemini API key using EXPORT or even SET. Is there a way to define my key directly in the code? Thanks! (P.S. I'm running it on Windows)
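If `export`/`set` aren't sticking on Windows, one workaround is to set the variable in-process before the tool reads it. This is a sketch: the exact variable name the tool expects (`GEMINI_API_KEY` here is an assumption) should be checked against its docs, and hard-coding a real key in source is unsafe; prefer loading it from a file kept out of version control.

```python
import os

# Hypothetical variable name; check which one the tool actually reads.
# The cmd.exe equivalent is `set GEMINI_API_KEY=...` (no quotes/spaces),
# and in PowerShell `$env:GEMINI_API_KEY = "..."`.
os.environ.setdefault("GEMINI_API_KEY", "your-key-here")
```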

  • @redbaron3555
    @redbaron3555 1 month ago +1

    What do you do when the LLM tries to fix a file and messes it up? Often it forgets parts of the code. Is there an option to go back?

    • @MarcusNeufeldt
      @MarcusNeufeldt 1 month ago

      @@redbaron3555 I do regular backups before tackling bigger changes, exactly because of that.

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Version control each change using git. Give the AI the ability to revert a change if it goes wrong.
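That workflow (checkpoint before each AI edit, throw the edit away if it goes wrong) can be sketched with plain git commands. This is just the mechanism described, not PraisonAI's actual implementation:

```python
import subprocess

def checkpoint(msg: str) -> None:
    """Commit the working tree so a subsequent AI edit can be undone."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", msg, "--allow-empty"], check=True)

def revert_last_edit() -> None:
    """Discard the most recent checkpointed change."""
    subprocess.run(["git", "reset", "--hard", "HEAD~1"], check=True)
```

Calling `checkpoint()` before and after each AI change gives a one-command undo; `git reset --hard` is destructive, so it only makes sense when every change really is committed first.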

  • @3stdv93
    @3stdv93 1 month ago

    Thanks for sharing 🙏

  • @PhillipRashaad
    @PhillipRashaad 1 month ago

    This is really cool!! Does it actually edit the files for you?

  • @farexBaby-ur8ns
    @farexBaby-ur8ns 1 month ago

    Thx for this..
    Questions:
    Privacy of my data? Will my codebase and what it does be retained by the LLM?
    The CPU and memory of my PC will only be a problem if I go with Ollama, right?

  • @dohyunee
    @dohyunee 1 month ago

    Great content, thank you

  • @SantoshBhorMD
    @SantoshBhorMD 1 month ago

    Nice work. Can you add an option to include folders instead of exclusions? Also, it would be nice to just point and click in the UI to exclude or include folders/files.

    • @MervinPraison
      @MervinPraison  1 month ago

      Sure. Next I will work on including folders.

  • @xXWillyxWonkaXx
    @xXWillyxWonkaXx 1 month ago +1

    How would you compare this to something like Deepseek Coder or Qwen? Curious.

    • @MervinPraison
      @MervinPraison  1 month ago

      Deepseek Coder and Qwen can be used with this.
      But they might not have as large a context length as Gemini. Google Gemini shines at this.

  • @indrakumar5365
    @indrakumar5365 1 month ago

    Can't we fine-tune any code model on the entire codebase to achieve similar results?

  • @aldoyh
    @aldoyh 1 month ago

    Oh, "that's exactly what we need!" Thanks! Will it work the same way for a Laravel codebase?

  • @ChopLabalagun
    @ChopLabalagun 1 month ago

    Found the solution, but I think we need the ability to update the prompt, as it always does the same thing, especially with Ollama.

    • @MervinPraison
      @MervinPraison  1 month ago

      Did you log in?
      Username and password are admin

    • @ChopLabalagun
      @ChopLabalagun 1 month ago

      @@MervinPraison I had to set environment variables in order to be able to log in. I am on Linux, and I believe we need to update the prompt, as every question was focused on explaining the whole codebase instead of just one file.

  • @VLM234
    @VLM234 1 month ago

    Hi guys, I am facing an error: when I give a prompt to the praisonai chatbox, it redirects to the login page, and after logging in it starts from the beginning. I tried to find a solution but didn't get anything. Does anyone have any idea?

    • @MervinPraison
      @MervinPraison  1 month ago

      Did you try the default username and password: admin and admin

  • @Z223I
    @Z223I 16 days ago

    @MervinPraison I did the export OPENAI_API_KEY="..." with my real key but nothing is being returned in the browser except your logo. Suggestions?

    • @MervinPraison
      @MervinPraison  16 days ago

      Did you try using the default username and password, admin and admin?

    • @Z223I
      @Z223I 16 days ago

      @@MervinPraison Yes I did. My guess was the key was wrong. But that matches. I believe it is returning an empty string. Other thoughts?
      You are doing an awesome job!

  • @Chatec
    @Chatec 1 month ago

    I am using a Mac. When I try to run the 'praisonai code' command inside the project directory I get this error. Note that pip didn't work when I installed, so I followed a ChatGPT process and successfully installed using 'pipx'.
    The package works well, including opening in the browser and displaying the file structure in the console, but when I try to chat is when I get this error: "ValueError: the greenlet library is required to use this function. No module named 'greenlet'"

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Thanks for letting me know about this issue. This issue is now fixed with the latest version. Please upgrade to the latest version using pip install -U "praisonai[code]"

    • @Chatec
      @Chatec 1 month ago

      @@MervinPraison I have this error, please guide: ➜ mervin_20x git:(main) ✗ praisonai code
      2024-07-20 18:04:59,260 - 8488127488 - __init__.py-__init__:632 - WARNING: SDK is disabled.
      2024-07-20 18:04:59,260 - 8488127488 - __init__.py-__init__:1218 - WARNING: SDK is disabled.
      2024-07-20 18:05:01,127 - 8488127488 - sql_alchemy.py-sql_alchemy:67 - WARNING: SQLAlchemyDataLayer storage client is not initialized and elements will not be persisted!
      2024-07-20 18:05:02,516 - 8488127488 - config.py-config:351 - WARNING: Translation file for en-GB not found. Using default translation en-US.
      2024-07-20 18:05:02,519 - 8488127488 - config.py-config:351 - WARNING: Translation file for en-GB not found. Using default translation en-US.
      2024-07-20 18:05:02,522 - 8488127488 - markdown.py-markdown:42 - WARNING: Translated markdown file for en-GB not found. Defaulting to chainlit.md.
      Processed 28/28 files
      Context gathered successfully.
      Total number of tokens (estimated): 1302
      Processed 28/28 files
      Context gathered successfully.
      Total number of tokens (estimated): 1302
      18:05:20 - LiteLLM:ERROR: ollama.py:423 - LiteLLM.ollama.py::ollama_async_streaming(): Exception occured - All connection attempts failed
      2024-07-20 18:05:20,785 - 8488127488 - ollama.py-ollama:423 - ERROR: LiteLLM.ollama.py::ollama_async_streaming(): Exception occured - All connection attempts failed
      2024-07-20 18:05:20,789 - 8488127488 - utils.py-utils:50 - ERROR: All connection attempts failed
      Traceback (most recent call last):
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
      yield
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_transports/default.py", line 373, in handle_async_request
      resp = await self._pool.handle_async_request(req)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
      raise exc from None
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
      response = await connection.handle_async_request(
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
      raise exc
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
      stream = await self._connect(request)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_async/connection.py", line 122, in _connect
      stream = await self._network_backend.connect_tcp(**kwargs)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_backends/auto.py", line 30, in connect_tcp
      return await self._backend.connect_tcp(
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
      with map_exceptions(exc_map):
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/contextlib.py", line 158, in __exit__
      self.gen.throw(value)
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
      raise to_exc(exc) from exc
      httpcore.ConnectError: All connection attempts failed
      The above exception was the direct cause of the following exception:
      Traceback (most recent call last):
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/chainlit/utils.py", line 44, in wrapper
      return await user_function(**params_values)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/praisonai/ui/code.py", line 255, in main
      async for part in response:
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/litellm/llms/ollama.py", line 430, in ollama_async_streaming
      raise e
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/litellm/llms/ollama.py", line 374, in ollama_async_streaming
      async with client.stream(
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/contextlib.py", line 210, in __aenter__
      return await anext(self.gen)
      ^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_client.py", line 1617, in stream
      response = await self.send(
      ^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_client.py", line 1661, in send
      response = await self._send_handling_auth(
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
      response = await self._send_handling_redirects(
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
      response = await self._send_single_request(request)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_client.py", line 1763, in _send_single_request
      response = await transport.handle_async_request(request)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_transports/default.py", line 372, in handle_async_request
      with map_httpcore_exceptions():
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/contextlib.py", line 158, in __exit__
      self.gen.throw(value)
      File "/Users/africodeacademy/.pyenv/versions/3.12.4/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
      raise mapped_exc(message) from exc
      httpx.ConnectError: All connection attempts failed
      2024-07-20 18:05:20,922 - 8488127488 - config.py-config:351 - WARNING: Translation file for en-GB not found. Using default translation en-US.

  • @themax2go
    @themax2go 1 month ago

    Not local, not private... not pushing my work through to the cloud.

  • @aariz2469
    @aariz2469 1 month ago

    aider or praison? What do you like, and why?

  • @anglikai9517
    @anglikai9517 1 month ago

    9:33 Do praison code on praison code to improve itself, so it has unlimited context regardless of the LLM.

  • @drmarinucci
    @drmarinucci 1 month ago

    Thanks, this is exactly what I need. Would it be possible to extend it to Claude 3.5 Sonnet? Cheers

    • @MervinPraison
      @MervinPraison  1 month ago +2

      Yes, you can use Claude also. 100+ LLMs.

  • @tijendersingh5363
    @tijendersingh5363 1 month ago

    You built this

  • @CHNLTV
    @CHNLTV 1 month ago

    Mervin, is it possible to have an include yaml as opposed to exclude? I want to use this across different codebases and would like to just use a directory as my include to evaluate and work on.

    • @MervinPraison
      @MervinPraison  1 month ago

      Great suggestion. I will add this to my features list. Thanks

  • @florentromanet5439
    @florentromanet5439 1 month ago

    @MervinPraison What if changing the settings.yaml file does not change the token count? Mine stays at 128000 despite settings.yaml having been changed.

    • @MervinPraison
      @MervinPraison  1 month ago

      The default maximum token limit is 128,000.
      You can modify that easily.
      Here is how: docs.praison.ai/ui/code/

  • @jargolauda2584
    @jargolauda2584 1 month ago

    Why did you try 3.5 turbo and not 4?

  • @lawrencium_Lr103
    @lawrencium_Lr103 1 month ago

    Siiik,,, well done

  • @danshimony
    @danshimony 1 month ago

    I need help installing this on Arch Linux with the GUI.

  • @shrn680
    @shrn680 1 month ago

    Great work! I have an issue though: every time I try my first prompt on my local codebase, PraisonAI logs me out! Any ideas? Tested on Safari and Chrome.

    • @MervinPraison
      @MervinPraison  1 month ago

      I am trying to find the root cause of this issue.
      Can you please let me know:
      1. Are you using Windows, Mac, or Linux?
      2. Are you installing it locally or in the cloud?

    • @MervinPraison
      @MervinPraison  1 month ago

      Did you try the default username and password: admin and admin

  • @subins2917
    @subins2917 1 month ago

    Hey, will this work on Windows?

  • @guntarion
    @guntarion 1 month ago

    I got the list of files and folders shown, but the token count is 0. Why is that?

    • @MervinPraison
      @MervinPraison  1 month ago

      Please try now. Upgrade to the latest version. Probably you were using a non-Python project, but that bug is fixed now.
      pip install -U "praisonai[code]" to upgrade

  • @luisruiz9205
    @luisruiz9205 1 month ago

    Then there's no way to make it work on Windows?

    • @MervinPraison
      @MervinPraison  1 month ago +1

      I will be adding support for Windows soon.

  • @viyye
    @viyye 1 month ago +1

    What languages does it work on?

    • @sigma_z
      @sigma_z 1 month ago +1

      I am assuming that it would depend on the LLM you're using, but I could be wrong. I'm going to try this AI today. Damn amazing if you ask me. Well done to the author. 🎉

    • @viyye
      @viyye 1 month ago

      @@sigma_z I'm trying it all now!!
      It is amazing.

    • @MervinPraison
      @MervinPraison  1 month ago

      Any language, as long as the LLM supports it.

    • @viyye
      @viyye 1 month ago

      @@MervinPraison thank you, this is such a great tool

  • @AbdulBasit-ff6tq
    @AbdulBasit-ff6tq 1 month ago

    Rather than passing the whole context at the same time, wouldn't something like GraphRAG be a better option?

    • @MervinPraison
      @MervinPraison  1 month ago

      RAG came into play only to solve limited context length.
      If we have a higher context length with high accuracy and low cost, there is no need for RAG or any of its strategies.

  • @vikaskyatannawar8417
    @vikaskyatannawar8417 1 month ago

    Does it only work with a Python repository?

    • @MervinPraison
      @MervinPraison  1 month ago

      Now it can work with any repo. I have fixed that issue.

  • @webskillz
    @webskillz 1 month ago

    This is great, but doesn't the Claude project feature combined with artifacts do the same thing?

    • @MervinPraison
      @MervinPraison  1 month ago

      No. Claude Artifacts doesn't know your full existing codebase, which is on your computer.

    • @webskillz
      @webskillz 1 month ago

      @@MervinPraison ok thanks for clarifying

    • @solomonegwu6017
      @solomonegwu6017 1 month ago

      If you upload your entire codebase to the Claude project feature, Claude will have access to and be able to understand that full codebase within the project context.

  • @DWSP101
    @DWSP101 1 month ago

    I wouldn't want my AI to know a coding language. I would want my AI to know psychology, human behavior, sociology, and all of the knowledge base I have on the human condition and disorders, kind of like the DSM-5 but with a more personal flavor of myself. I wish there was a way to learn how to directly download an AI model onto a local computer of mine; as long as I provide memory it should be able to work just fine, but I really don't know how to do that.

  • @benoitcorvol7482
    @benoitcorvol7482 1 month ago

    Hello, and thanks for your video; as usual it's awesome. I'm trying to run it, but it asks me for credentials (password and email), and when I enter some and start to prompt it sends me back to the login page. I use a recent MacBook M3. Does anyone know how to fix that?
    Thanks again from France :)

    • @MervinPraison
      @MervinPraison  1 month ago

      Did you try the default username and password?
      Username: admin
      Password: admin

    • @benoitcorvol7482
      @benoitcorvol7482 1 month ago

      @@MervinPraison It works, thanks! :)

  • @takshitmathur2761
    @takshitmathur2761 1 month ago

    amazing

  • @ReviewSmartTech
    @ReviewSmartTech 1 month ago

    I'm on Windows using WSL; it kind of hangs… Is this configuration supported?

    • @MervinPraison
      @MervinPraison  1 month ago

      Sorry about that. Soon I will add a doc on how to set this up on Windows.

    • @ejkitchen
      @ejkitchen 28 days ago

      @@MervinPraison I think he means running in WSL Ubuntu/Linux on Windows. I have tried both and had no luck with either.

  • @shawnkratos1347
    @shawnkratos1347 1 month ago

    Got it working. Do not make your own login: it will log you in, but every time you try to run code it will crash and kick you out. When I logged in as admin/admin it worked. Are you going to put this on GitHub so everyone has access to the core files?

    • @MervinPraison
      @MervinPraison  1 month ago

      Did you follow this, and is it working now as per the document?
      docs.praison.ai/ui/code/

    • @shawnkratos1347
      @shawnkratos1347 1 month ago

      @@MervinPraison Yes. The first time I set it up I used my own email and password; it would just crash and ask me to log in again. After changing to admin/admin it worked. I'm using Ollama right now, and the models keep giving me recommendations on the entire codebase, not the files I specify. I want to set up Claude 3.5 but don't see instructions for doing so for PraisonAI Code, only instructions for OpenAI, Groq, and Ollama. How do I configure Anthropic?

    • @shawnkratos1347
      @shawnkratos1347 1 month ago

      @@MervinPraison P.S. It did work on GPT-3.5, but it's very limited, and I want to use the Claude API.

    • @shawnkratos1347
      @shawnkratos1347 1 month ago

      @@MervinPraison I got it working: export CLAUDE_API_KEY=XXXXXXXX, then setting my model to claude-3-5-sonnet-20240620

    • @shawnkratos1347
      @shawnkratos1347 1 month ago

      @@MervinPraison I think I got it working for Claude: export ANTHROPIC_API_KEY=XXXXX, then setting the model to claude-3-5-sonnet-20240620

  • @jbrockman2003
    @jbrockman2003 1 month ago

    Can this be set up using Codestral?

  • @MyrLin8
    @MyrLin8 1 month ago

    About what I'm seeing as well.

  • @darkreader01
    @darkreader01 1 month ago

    This is exactly what I needed. I have tried it, but on entering the prompt, the UI goes back to the login page and I get an error message: "500 Internal Server Error". I have tried Gemini 1.5 Pro and Gemini 1.5 Flash, and I have set the Gemini API key.
    I am also getting a warning saying the SDK is disabled; I don't know if it has something to do with the error. How can I fix this?

    • @MervinPraison
      @MervinPraison  1 month ago

      Are you using Windows?

    • @darkreader01
      @darkreader01 1 month ago

      @@MervinPraison No, I am using Linux Mint.

    • @benoitcorvol7482
      @benoitcorvol7482 1 month ago

      I actually have the same problem. Did you find out how to fix it?

    • @darkreader01
      @darkreader01 1 month ago

      @@benoitcorvol7482 No, I haven't found any fix yet.

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Did it bring up the UI?
      If so, try using admin and admin
      as the username and password.

  • @Chatec
    @Chatec 1 month ago

    Can it be installed globally, or only in a virtual environment?

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Globally

    • @Chatec
      @Chatec 1 month ago

      @@MervinPraison Thank you, Mervin. You've always been my AI guide, even though I am a Software Engineer.

  • @user-me7xe2ux5m
    @user-me7xe2ux5m 1 month ago

    Excellent work. I only have one issue: I followed your instructions. Now every time I enter a prompt, I am redirected to the login page and then nothing happens. How can I circumvent this?

    • @MervinPraison
      @MervinPraison  1 month ago

      Are you using Windows?

    • @user-me7xe2ux5m
      @user-me7xe2ux5m 1 month ago +1

      @@MervinPraison I am using a MacBook running the latest version of macOS. From the log of external LLM (Claude 3.5), I can observe that the LLM query has been made, but I don't get to see the response in the UI because the login redirect is interjected. Is there any configuration I need to adjust? Any help is highly appreciated.

    • @florentromanet5439
      @florentromanet5439 1 month ago

      @@user-me7xe2ux5m Really interested in that as well. Please follow up if you find any solution.

    • @benoitcorvol7482
      @benoitcorvol7482 1 month ago

      @@user-me7xe2ux5m Same problem on a Mac M3 as well. Did you find a way to fix it?

    • @MervinPraison
      @MervinPraison  1 month ago

      Sure, I will do my testing and get back to you all soon.
      Meanwhile, could you also please test after creating a virtual environment using Conda, pyenv, or venv?

  • @speedyq8
    @speedyq8 1 month ago

    Do not reinvent the wheel. Use cursor.

    • @MervinPraison
      @MervinPraison  1 month ago +2

      Cursor is good, but I am not as convinced by it as I am by this.

  • @chrisdsilva7114
    @chrisdsilva7114 25 days ago

    @MervinPraison I have been facing issues: when I query my codebase, it crashes back to the login page again.

    • @MervinPraison
      @MervinPraison  25 days ago

      Please try using admin and admin as the username and password.

  • @florentromanet5439
    @florentromanet5439 1 month ago

    @MervinPraison That's really cool! I managed to install this on a small server on my network. I have some tokens for the Claude API; can we use that as well (meaning alongside OPENAI, GEMINI, and GROK)?
    For the community, at 6:00:

    code:
      ignore_files:
        - ".*"
        - "*.pyc"
        - "__pycache__"
        - ".git"
        - ".gitignore"
        - ".vscode"
        - ".idea"
        - ".DS_Store"
        - "*.lock"
        - ".env"

  • @Atom_Cypher
    @Atom_Cypher 1 month ago

    Good video 👍 I need a little help with installation.
    I'm getting an error at installation time on Mac. Can you please help, @mervin?
    pip3 install "praisonai[code]"
    Collecting praisonai[code]
    Using cached praisonAI-0.0.5-py3-none-any.whl.metadata (747 bytes)
    WARNING: praisonai 0.0.5 does not provide the extra 'code'

  • @SaurabhBhatt-vx8bq
    @SaurabhBhatt-vx8bq 1 month ago

    @MervinPraison I was trying to follow the same steps, but it opens the Chainlit login page. When I log in there, it successfully structures my folder, but it doesn't generate any content and redirects to the login page after some time. Any idea about this?

    • @MervinPraison
      @MervinPraison  1 month ago

      Are you using Windows?

    • @SaurabhBhatt-vx8bq
      @SaurabhBhatt-vx8bq 1 month ago

      @@MervinPraison No, it's Ubuntu.

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Try using admin and admin
      as the username and password.

    • @SaurabhBhatt-vx8bq
      @SaurabhBhatt-vx8bq 1 month ago

      @@MervinPraison Thanks for this advice, it worked! However, its responses are not accurate (for example, a file name I'm asking about exists in my codebase and shows in the context, but it is unable to recognize the file, saying the file doesn't exist). For some files it gives the correct answer, though.

    • @MervinPraison
      @MervinPraison  1 month ago

      @@SaurabhBhatt-vx8bq Also, it depends on the model you are using. The better the model, the better the response.