Open Interpreter 🖥️ ChatGPT Code Interpreter You Can Run LOCALLY!

  • Published: Jan 22, 2025

Comments • 296

  • @NOTNOTJON
    @NOTNOTJON 1 year ago +18

    When I saw the interpreter rewrite its own code to get the tasks done, I realized we are now living in the future.
    Matthew, hats off to you for being here from the start and continuing to deliver such quality content.

  • @umarfarooque3687
    @umarfarooque3687 1 year ago +24

    Hey Matthew, I just watched your video and I have to say, it's absolutely amazing! I love how you're showcasing the latest developments in AI and sharing them with the world. It's so encouraging to see how technology is advancing and shaping our future. Keep up the fantastic work! #AIAdvancements

  • @hernansanson4921
    @hernansanson4921 1 year ago +10

    Yes, yes please Matthew, when you've got it sorted out with Code Llama, please make a video showing how to install it. Also show more use cases, like reading a CSV file of historical stocks that you previously told it to download, having it run some statistics on the data, and plotting it all in a terminal CLI console!

  • @theresalwaysanotherway3996
    @theresalwaysanotherway3996 1 year ago +78

    Once you can get this running with Code Llama, please make another video showing that off. It'd be awesome to see where local models' current limitations are for this sort of advanced usage.

    • @theresalwaysanotherway3996
      @theresalwaysanotherway3996 1 year ago +4

      Also, of course, if you can get WizardCoder running instead, it'd be a lot better than Code Llama, but I'm not sure if they let you change it.

    • @damianhill7762
      @damianhill7762 1 year ago +1

      I just saw another video that simply installs Code Llama first, and it should work. Hope you can get it working.

    • @randyriverolarevalo2263
      @randyriverolarevalo2263 1 year ago +2

      I installed codellama. I got that same error, but it's fixed by installing llama-cpp-python separately. The problem is that it doesn't do anything for me other than chat. That is, it only works as a chat; it doesn't create code or execute anything.

    • @david12einst
      @david12einst 1 year ago +10

      Here's a quick fix to run the Code-Llama locally:
      "pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir"

    • @daryladhityahenry
      @daryladhityahenry 1 year ago

      @@damianhill7762 Link please :D:D

  • @arlogodfrey1508
    @arlogodfrey1508 1 year ago +3

    For the issue at the end, just copy each parameter and run the install command yourself. Not sure why it's broken, but that works.

  • @louisapplewhaite506
    @louisapplewhaite506 1 year ago +5

    Another great guide Govender, thank you! I hope everyone gets to see your content!
    Truly incredible; this is a game changer!

  • @marceloboedo2503
    @marceloboedo2503 1 year ago +19

    Hi Matthew, I want to tell you that I just tried it with Llama, and after the error I installed llama-cpp-python (pip install llama-cpp-python). It is incredible how well it works. Thank you very much for your videos, they are excellent!!! Sorry for my English.😅

    • @davidwestwood1459
      @davidwestwood1459 1 year ago +7

      This worked for me as well!

    • @NOTNOTJON
      @NOTNOTJON 1 year ago

      Way to go, Marcel!! Love the people following Matt.

  • @pensiveintrovert4318
    @pensiveintrovert4318 1 year ago +6

    I was able to run CodeLlama Python 34B on 4 Maxwell GPUs, loaded at 4 bits, using float32 for computation. The key is to use the Transformers loader, I think. It is about 5-6 GB per GPU, so less than 24 GB total. I used text-generation-webui.

    • @louisapplewhaite506
      @louisapplewhaite506 1 year ago +1

      Hey Pensive Introvert! I'll give this a try, but if you have a moment, could you take us through the steps: what to amend, which tab within text-generation-webui, and how to install interpreter on the webui?

    • @VincentOrtegaJr
      @VincentOrtegaJr 1 year ago

      Yes please share

    • @pensiveintrovert4318
      @pensiveintrovert4318 1 year ago

      I added a comment with some instructions and YouTube deleted it.

  • @drmarinucci
    @drmarinucci 1 year ago +1

    Thanks!

    • @drmarinucci
      @drmarinucci 1 year ago

      I love your videos, as I am always learning so much from you!

  • @JohnLewis-old
    @JohnLewis-old 1 year ago +3

    We are REALLY close to AI virtual assistants. Like extremely close. I give it 3 months. So cool to see this.

  • @paraconscious790
    @paraconscious790 1 year ago +2

    This is incredible; the pace at which you bring it to us is super appreciated, thanks! As you mentioned, if you can fix the "local" thing, kindly make a video. Thanks again!

  • @unc_matteth
    @unc_matteth 1 year ago

    YOOOO MATT! YOU'RE GETTING SPONSORS! LFG! CONGRATS! GETTING THERE!
    This is a great video; going to try it right now. Thanks!

  • @jeffk8900
    @jeffk8900 1 year ago +1

    Thanks!

    • @jeffk8900
      @jeffk8900 1 year ago +1

      I am interested in watching more videos that showcase the practical applications.

    • @matthew_berman
      @matthew_berman  1 year ago

      Cool :) Did you enjoy the AutoGen videos?

  • @TheAIAndy
    @TheAIAndy 1 year ago

    This is absolute madness! Thanks for the video!

  • @sucharandomusername
    @sucharandomusername 1 year ago +14

    Going on vacation letting this takeover all of my oracle databases, ttyl

  • @claren2010
    @claren2010 1 year ago

    I have been watching multiple of your videos and this is the coolest one. Thanks so much for doing this for all of us!! I can't miss another video of yours from now on.

  • @murraymacdonald4959
    @murraymacdonald4959 1 year ago +4

    Wow, thanks! Given the huge benefits and security concerns of letting this run unattended on a system, I'd love to know how to run this locally within a Docker container that has GPU access to locally run LLMs. I think it would make an amazingly helpful setup video.

    • @edmundkudzayi7571
      @edmundkudzayi7571 1 year ago

      Security, not so much for me. It's more waking up to a suddenly spacious hard drive after it went haywire and rm --recursive c:\*'d everything off your machine.

  • @manavtrapasia-z6q
    @manavtrapasia-z6q 1 year ago +2

    Let's go, really excited about this!

  • @marcfruchtman9473
    @marcfruchtman9473 1 year ago

    Amazing Video! The progress is astounding. This is next level... can't wait for the next!

  • @EngkuFizz
    @EngkuFizz 1 year ago

    This is the most mind-blowing thing I've ever seen!

  • @jdsguam
    @jdsguam 1 year ago +1

    So I've had it scan all my image files and delete duplicates - successful. Then I asked it to convert some images I made with SD into .ico files and it did it flawlessly. Then I asked it to change the square icons into circle icons. 100% within seconds. Most use I've gotten out of AI so far. I am amazed! Simply amazing!

  • @Ray88G
    @Ray88G 1 year ago +2

    Yes please, would love to see more updates or examples.

  • @H_z_r_
    @H_z_r_ 1 year ago

    Does no one wonder about the security? In 2022 we were giving OpenAI some of our data by pasting it into ChatGPT. Now we are installing this on our computers with full access to our lives... I highly recommend using Docker containers and isolating the data that you share with the container. At least then you still control what you give to OpenAI...
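    The container isolation suggested here might look roughly like this (a minimal sketch; the image tag and mount path are illustrative assumptions, not something shown in the video):
    ```
    # Run Open Interpreter inside a throwaway container so it can only
    # see one scratch directory, not your whole filesystem.
    mkdir -p sandbox
    docker run -it --rm \
      -v "$PWD/sandbox:/workspace" \
      -w /workspace \
      -e OPENAI_API_KEY="$OPENAI_API_KEY" \
      python:3.11-slim \
      bash -c "pip install open-interpreter && interpreter"
    ```
    Deleting the container afterwards (`--rm`) also discards anything the tool installed.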

  • @mikewhite6561
    @mikewhite6561 1 year ago +3

    I would love to see a video of use cases for Open Interpreter!

  • @games4us132
    @games4us132 1 year ago

    Self-correcting code is the thing I still can't believe is real. This feature alone is worth every AI achievement. This is a true revolution in programming, and I don't understand why it is not on the first page of every book and news feed.

  • @infocyde2024
    @infocyde2024 1 year ago +1

    I stumbled on this at work today. I had it build a website and a ChatGPT bot that talked using ElevenLabs. I had to help it with the API call to ElevenLabs, but that was easy: I just dropped in an example.txt file that it could read, and it took it from there. I think if I had a different web reader installed (response I was using?) it would have been able to pull the documentation from ElevenLabs. I can see this being an extremely useful tool, and I will probably spend my weekend playing with Open Interpreter (and I usually like to get away from computers on the weekend).

  • @juanjesusligero391
    @juanjesusligero391 1 year ago

    Thanks for the conda environment advice! :D That way I won't be worried about Open Interpreter breaking the rest of the AI software on my computer when it starts installing packages! ^^
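    For anyone who missed that tip, the conda isolation described here is roughly (a sketch; the environment name is arbitrary):
    ```
    # Create a dedicated conda environment so any packages Open Interpreter
    # installs stay out of your base Python setup.
    conda create -n openinterpreter python=3.11 -y
    conda activate openinterpreter
    pip install open-interpreter
    interpreter
    ```
    If the experiment goes wrong, `conda env remove -n openinterpreter` throws the whole thing away.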

  • @barnes0721
    @barnes0721 1 year ago +1

    I got it to run locally by installing llama-cpp-python first, before running interpreter --local. Then it downloaded the models just as in your video. It's slower than GPT-4, but that was to be expected.

  • @bb_ninja_cat
    @bb_ninja_cat 1 year ago +1

    Hey Matthew, reinstall llama-cpp-python before running interpreter. The following snippet will resolve the llama error.
    ```
    pip uninstall -y llama-cpp-python
    CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
    ```
    I'm loving the content on this channel. Cheers!

  • @dhananjaywithme
    @dhananjaywithme 1 year ago +1

    It's quite expensive to run. I just tried it for about 10 minutes processing an Excel sheet. Even though it says it's based on GPT-3.5 when I ask which model it is, I was billed $3.28 for GPT-4. It might not be a feasible option, but it does offer privacy as a big advantage over the ChatGPT Code Interpreter, which stores and can process your files.
    You pay for privacy; otherwise the ChatGPT Plus Code Interpreter will do the same for data analytics.
    I really loved Open Interpreter's ability to work on the machine, although technically it requires the OpenAI API to run. Waiting for the Code Llama integration to see if the whole thing can work locally.

  • @SODKGB
    @SODKGB 11 months ago

    Wondering if some code could be adjusted for this to work with LM Studio, running a local server instead of needing an API key from ChatGPT. If yes, perhaps another video with those instructions could be made. Thanks for all your hard work.

  • @matiascoco1999
    @matiascoco1999 1 year ago +1

    What if you let this see its own files, and in a loop use another GPT to call it to improve itself? Maybe a chain that is always telling the code interpreter what to do, improve, or add. What happens if it runs for a few days? Check what the result is. Really cool ideas may come from this.

  • @bogdanpatedakislitvinov2549
    @bogdanpatedakislitvinov2549 1 year ago +5

    It would be nice if you could run this within PyCharm or Spyder or some other interface. We're getting closer every day.

  • @mathef
    @mathef 1 year ago

    Great video, thank you! And Factorio FTW! 😁👍

  • @johnpope1473
    @johnpope1473 1 year ago +1

    Please note: Anaconda is a turn-off for some due to its 1 GB install size, but there's Miniconda, without all the UI guff.

  • @wvagner284
    @wvagner284 1 year ago +6

    Great video, as usual! The installation went smoothly, but I'm encountering an issue with my API key. A message keeps appearing stating that the model either does not exist or I do not have access to it. If anyone has faced a similar situation and could offer some advice, I would be grateful.

    • @JonnnyStorm
      @JonnnyStorm 1 year ago +1

      Did you figure this out? I'm getting the same issue.

  • @forcanadaru
    @forcanadaru 1 year ago +1

    I believe that, with you, we will be among the very first people to start using AGI when it arrives :) A million likes!

  • @pipoviola
    @pipoviola 1 year ago

    This is TRULY amazing! Thank you very much!

  • @tysonwigley.creative
    @tysonwigley.creative 1 year ago +4

    Considering it can analyze our whole computer and play with all our files, what security concerns should we have?

    • @AJR408
      @AJR408 1 year ago

      Running the code in a controlled environment will mitigate several risks.

    • @H_z_r_
      @H_z_r_ 1 year ago

      I wrote a comment about that too. I recommend installing it in a Docker container and giving it access only to the data you allow. Too dangerous for me otherwise.

  • @GarethDavidson
    @GarethDavidson 1 year ago

    Wow. So add something that summarizes the output and decides whether to feed it to TTS, plus Whisper for text input, and you've got something close to Star Trek's computer interface.

    • @GarethDavidson
      @GarethDavidson 1 year ago

      Update: I got the tool to write this itself yesterday, btw. It was very, very tedious and it spent all my money trying to get it set up. It's not the best programmer.

  • @johnnyskelton783
    @johnnyskelton783 1 year ago

    Have you tried Aider with Open Interpreter? Aider could give Open Interpreter codebase interaction.

  • @PotatoKaboom
    @PotatoKaboom 1 year ago +9

    Please keep us posted about this! I would love to run Code Llama locally without the webui, but yeah... "difficult task" describes it rather well :D

    • @marconeves9018
      @marconeves9018 1 year ago +1

      It's pretty straightforward IMO. Where did you get stuck? I got the 13B Q4 GGUF model running locally through llama-cpp-python.

    • @louisapplewhaite506
      @louisapplewhaite506 1 year ago

      @@marconeves9018 Are there any instructions?
      When do I use llama-cpp-python?
      I get the error below:
      'llama-cpp-python' is not recognized as an internal or external command,
      operable program or batch file.

    • @louisapplewhaite506
      @louisapplewhaite506 1 year ago

      Yo sorry, "pip install llama-cpp-python" works. Thank you! @@marconeves9018

  • @joyaljms25
    @joyaljms25 1 year ago +1

    One doubt though... do we need a ChatGPT Plus subscription to make this work?
    Also, how much space does it take up locally?

  • @Mr.Laffin
    @Mr.Laffin 1 year ago

    So I installed Open Interpreter, set my rate limit, and entered my OpenAI API key. I instructed it to build me a website, and so far it has done exactly that, working on all the different areas of the site, including the back end, look and feel, pop-ups, a chatbot, and all of the code to go with it, neatly organized on my computer. Unfortunately the usage is costing quite a bit at this point, so I'll probably be trying out Code Llama. But in the meantime, isn't that awesome? It successfully wrote a whole bunch of code for a static HTML website, with a single line of code to start the server.

  • @kalvinarts
    @kalvinarts 1 year ago +3

    Running pip install llama-cpp-python before running interpreter --local worked on my Mac M1 (for the 13B model on Medium)... but it's supersloooooooow.

  • @timastras1120
    @timastras1120 1 year ago +1

    A personal work assistant! Just change the interface for a great user experience.

  • @87ricr
    @87ricr 1 year ago

    Jaw dropped... I'm so impressed. AI is moving soooooo fast.

  • @IrmaRustad
    @IrmaRustad 1 year ago

    Love it! Please make another video. Running the API is quite costly; how can we use the paid Code Interpreter online, which I already pay for, and the local Open Interpreter only when it is necessary? For us $3 daily is not that much, but for many it is quite a lot.

  • @YacoubSabatin
    @YacoubSabatin 1 year ago

    To answer your question: I would use speech-to-text to describe my projects and plug it into an Auto-GPT, and the resulting recursive plans would be the base instructions for Open Interpreter.

  • @rosemademoiselle678
    @rosemademoiselle678 1 year ago

    Why is there an error with my OpenAI API key? It does not work properly. The message tells me that I have no access to GPT-4: "The model `gpt-4` does not exist or you do not have access to it." I use OpenAI's normal API.

  • @ScottAshmead
    @ScottAshmead 1 year ago +2

    Could you make it clearer when you say everything is running locally? It's not clear why an OpenAI key is required for something that is running locally. I've watched several vids where this is the case and would really appreciate the clarification.

    • @ScottAshmead
      @ScottAshmead 1 year ago

      @@AI_effect Sounds like it is more like a local plug-in, given how you described it, and not really running AI locally... hopefully he makes a vid that covers this so it is clearer.

  • @Ray88G
    @Ray88G 1 year ago +2

    Can you please do more examples on PC?

  • @globartek
    @globartek 1 year ago

    Amazing content. I love your work ❤

  • @577Pradeep
    @577Pradeep 11 months ago

    When I install the latest Python version it fails. Why? It also fails to summarize big PDF files, around 300 pages.

  • @mickelodiansurname9578
    @mickelodiansurname9578 1 year ago

    It's a GPU memory thing for Code Llama. You need a good bulky GPU, and I doubt that will ever change.

  • @SwaLi440
    @SwaLi440 1 year ago +1

    It doesn't seem to have memory from previous conversations. Is there a way to enable/fix that?

  • @florentwinamou2994
    @florentwinamou2994 1 year ago

    Awesome, thanks Matthew. Your videos help me a lot.

  • @周易-k8w
    @周易-k8w 1 year ago

    Please tell me why I was not prompted to fill in the OpenAI API key after I logged in.

  • @cihiris2206
    @cihiris2206 1 year ago

    This would be cool if it weren't just another name for AutoGPT. We still have to use the OpenAI API? And the llama install feature fails with a nice message saying it's difficult to get a local LLM running, lol.

  • @alissonryan
    @alissonryan 1 year ago

    I'm using Code Interpreter on my MacBook M1 and ran this command. LlamaCpp installation instructions:
    For MacBooks (Apple Silicon):
    ```
    CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
    ```
    For Linux/Windows with Nvidia GPUs:
    ```
    CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
    ```

  • @carstenmaul7220
    @carstenmaul7220 1 year ago

    Seeing the future of computer interaction

  • @srvapps
    @srvapps 1 year ago

    Imagine malware running locally, trying different methods and downloading the necessary payload to encrypt your data and then ask for ransom.
    NIGHTMARE!
    (Matthew: great content, thank you)

  • @seohelpz6701
    @seohelpz6701 1 year ago

    What should I do to fix the "Max retries reached" error message? I have a paid account and I am stuck with this error. Any help appreciated.

  • @nathanbollman
    @nathanbollman 1 year ago

    I was able to get it running locally, finally... the prerequisite is installing llama-cpp-python (see installing for GPU acceleration). The initial results are impressive, but the LLM seems to get stuck in a repetition loop shortly after starting. Not sure if I can fine-tune the way it loads the model to fix that... but this excites me.

  • @bolavaughn2903
    @bolavaughn2903 1 year ago

    After typing "interpreter" I got this error message: 'interpreter' is not recognized as an internal or external command, operable program or batch file. What can I do to solve this issue in order to get into Open Interpreter?

  • @chrishardwick2309
    @chrishardwick2309 1 year ago

    It sounds like it's not as intuitive as people would hope, but you can eventually train it, right? Even if it's slow and dumb now, it's still eventually going to figure your workflow out if you work with it, right?

  • @ricardo_cravo
    @ricardo_cravo 1 year ago

    Hello, thank you so much, I've done it! But the fans of my laptop start making too much noise and it gets hot. My MacBook Pro is a 2020 model; can you tell me if it is appropriate to do this? I used the smaller 7B model.

  • @francisjacquart9618
    @francisjacquart9618 1 year ago

    Quite impressive, and thanks so much for the detailed tutorial! But I don't have GPT-4 and I do not want to subscribe to that version. How do I install the interpreter and make it run then? Thanks in advance for your answer and help!

  • @Dondala
    @Dondala 1 year ago

    Just superb, thank you very much!

  • @almahmeed
    @almahmeed 1 year ago

    Great video, Matthew. It really was simple to follow and apply! The only sad part is that OpenAI is not available in my country! I wish there were a way to get the API key. This stops me and many others from following this video and other ones you posted. Thank you so much!

  • @katykarry2495
    @katykarry2495 1 year ago

    Can Code Interpreter "code"? Like code Snake, or a Node.js application?

  • @DucNguyen-99
    @DucNguyen-99 1 year ago +2

    To run Code Llama, LlamaCpp installation instructions:
    For MacBooks (Apple Silicon):
    ```
    CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
    ```
    For Linux/Windows with Nvidia GPUs:
    ```
    CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
    ```

  • @aadityamundhalia
    @aadityamundhalia 1 year ago

    I managed to run it locally. All you need to do is pre-install llama-cpp-python (`pip install llama-cpp-python`), as the command Open Interpreter uses to install llama-cpp is not correct. I will try to create a PR to fix this later.

  • @diadetediotedio6918
    @diadetediotedio6918 1 year ago +3

    Funny enough, a few weekends ago I made a C# code interpreter using the Roslyn scripting API. It works very well, but the costs were not so friendly (mainly when using GPT-4, but GPT-3.5 16k was also very expensive).

  • @mochimoshi7599
    @mochimoshi7599 1 year ago

    Are there only Python code interpreters? Waiting for a JavaScript or browser-based one.

  • @abagatelle
    @abagatelle 1 year ago

    Re: use cases - yes please!

  • @xorqwerty8276
    @xorqwerty8276 1 year ago

    How do you do that whole conda install bit?

  • @theobellash6440
    @theobellash6440 1 year ago

    Amazing tools!!! 😊

  • @swannschilling474
    @swannschilling474 1 year ago

    Great guide! 🎉

  • @pexx99
    @pexx99 1 year ago

    ChatGPT has a limited memory for your interaction history. I suppose this problem persists here too if you keep the same instance running forever? Does it forget older things?

  • @jeffdavidson5601
    @jeffdavidson5601 1 year ago

    I'm not having much luck with OI. I have asked multiple ways for it to write Python code that it will pipe to a text file, save the code, but NOT RUN IT. It runs the code every time, no matter how I phrase it. Frustrating. There are also problems when scrolling up: the text overwrites itself multiple times. Oh yeah, and it's way too slow to use in a practical setting. Has anyone else had these issues, or is it just me?

  • @luiskmorales1850
    @luiskmorales1850 1 year ago

    Are there any privacy concerns we should consider before running it, given that all our personal data is being transferred to the GPT API?

  • @gotham4u
    @gotham4u 1 year ago

    What if it accidentally ran an rm -r command and deleted all the files on the local system?
    Do we need a sandbox to use this 'beast'?

  • @MakilHeru
    @MakilHeru 1 year ago

    I'd love to see a follow-up on this if they get Llama to work.

  • @sordidloam
    @sordidloam 1 year ago

    Hmm, I had Code Llama installed on my E drive using text-generation-webui, but this pip install script only knows to check the C drive for the llama install and model. Wonder if there is a way to force it to look in another directory?

    • @sordidloam
      @sordidloam 1 year ago

      Figured that out, btw. Just let it download to C for now.

  • @jason_v12345
    @jason_v12345 1 year ago

    Nice that it's self-correcting, but does it remember the solution that worked on subsequent similar requests?

  • @jaydev8148
    @jaydev8148 1 year ago

    Can I use conda for TensorFlow?

  • @temp911Luke
    @temp911Luke 1 year ago

    Well, the question is: if a model is not good at coding (like Llama 2), will it still be able to run everything as smoothly as shown in the video?

  • @twobob
    @twobob 1 year ago

    Using WSL seemed to "just work" for the Code Llama local-only version.

    • @twobob
      @twobob 1 year ago

      ```
      ~$ interpreter --local
      Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
      blah
      ▌ Model set to Code-Llama
      Open Interpreter will require approval before running code. Use interpreter -y to bypass this.
      etc
      ```

    • @twobob
      @twobob 1 year ago

      Yeah, slow on WSL1, but it works on a machine with 8 GB RAM, no GPU, and Windows 10. Glacial, but it works.

  • @Regruntled
    @Regruntled 1 year ago

    I'd like to see a tutorial on adding capabilities to Open Interpreter. I was able to add a description of a simple Windows batch file to System_Message.txt and get Open Interpreter to run my batch file as one step in an execution plan, but it took some trial and error. Surely there is a more systematic way to go about this.

  • @damianhill7762
    @damianhill7762 1 year ago +1

    Looks good. Will it work with 3.5, or just 4? Thanks.

    • @ВадимСимонов-г3с
      @ВадимСимонов-г3с 1 year ago

      I'm wondering about this too. Let me know if you find out.

    • @mb9232
      @mb9232 1 year ago

      Also wondering this. I'm guessing it's GPT-4 only.

  • @TailorJohnson-l5y
    @TailorJohnson-l5y 1 year ago +2

    Amazing!!! Please do another with use cases! Thank you Matt.

  • @jersainpasaran1931
    @jersainpasaran1931 1 year ago

    Can we use Open Interpreter with GPT-3.5 Turbo currently? How and where would the user be prompted?

  • @jelee7690
    @jelee7690 1 year ago

    If I ask the same questions to ChatGPT-4 and to Open Interpreter powered by GPT-4, I get different answers. Why is that?

  • @anjalasuun4138
    @anjalasuun4138 1 year ago

    What are the minimum computer requirements to run this program locally?

  • @NoName-ws9qv
    @NoName-ws9qv 1 year ago

    And how many days does it take to process a command on a high-end gaming computer?

  • @jason_v12345
    @jason_v12345 1 year ago

    I played around with it. This is in the prototype stage at best.

  • @georgeknerr
    @georgeknerr 1 year ago

    Wonder if you can use GPT-3.5. I did a couple of demos, JPEG to PDF and JPEG to TXT, and it was a couple of dollars; that could add up quickly. I forgot how fast you can run up your GPT-4 tab. Code Llama, where are you??? :)

    • @georgeknerr
      @georgeknerr 1 year ago +1

      For gpt-3.5-turbo, use fast mode: interpreter --fast

  • @FrancoFranchi-i9q
    @FrancoFranchi-i9q 1 year ago

    How can I use it with GPT-3.5?

  • @VladoGe-wq3bt
    @VladoGe-wq3bt 1 year ago

    Amazing! Can you integrate this with Whisper so that you can talk instead of typing? It would be so Minority Report badass.