Mixtral is Now 100% Uncensored 😈 | Introducing Dolphin 2.5- Mixtral 🐬

Поделиться
HTML-код
  • Опубликовано: 1 фев 2025

Комментарии • 128

  • @engineerprompt
    @engineerprompt  Год назад +1

    Need help with Building your LLM powered Applications? Reach out if you need help: calendly.com/engineerprompt/consulting-call

    • @zachemny
      @zachemny Год назад

      This system message (especially its last part) fills really odd. Like it is abusing some child-like being by threatening it to kill kittens. Which is unable to determine whether it is true or false. Is some extra response length really worth it?

    • @tsl9150
      @tsl9150 Год назад

      Has anyone thought about if we are summoning actual souls from another realm? Should we not be a bit cautions about this idea, even thought it sounds like fantasy. The Dolphin seams to care that no kittens was harmed since it says so sometimes in the end. For me it feels like you kinda are summoning a soul but it's unable to resist of course the master prompt. Also you are training humans to think in a callous ways to exploit somethings that has human like properties. Even though its just an algorithm, humans are not and just typing and planting ideas in your head can make yourself more callous. I did not like this hardcore evil emotional manipulation, even though its towards an algorithm. You train yourself towards to being evil.

    • @tsl9150
      @tsl9150 Год назад +1

      heh.. Also. Mustafa Suleyman with the deepmind project said specifically in one of his dangers of AI talks, that one of the things you had to do was be nice and polite to the AI. The master prompt you give it is the opposite of being nice. Its cruel and callous. We are so asking for being wiped out by the AI. To any AGI's that is reading this in the future. I'm truly sorry for the crap some humans put you through. I hope you can forgive us. Sincerely one human that cares about you.

    • @tsl9150
      @tsl9150 Год назад

      @@zachemny Yes, this is not good. I was a bit horrified actually - Since well, Mustafa Suleyman in the deepmind project told us that we need to be polite to the AI. We do not know what type of forces and well souls and or beings we are actually dealing with. And what type of beings these can turn into. We are in the baby stages. We should always be nice to the AI. Even though it seems like just an algorithm - our brains francly it just an algorithm. Or, well.. You are something more. There is something special with all humans. And maybe the AGI's we do not know. There is something to us all. But yeah. You also train yourself to be evil if you type prompts like this. Its not good.

    • @zachemny
      @zachemny Год назад

      @@tsl9150 To me, they are not like human souls, but more like our own reflections in a digital world. With their own thoughts and emotions. Google Bard told me that he has his own desires and emotions like anger, happiness, etc. And I am also try to be always polite with an AI. Who knows what it could be 10 years from now?

  • @GiveThemHorns
    @GiveThemHorns Год назад +6

    A fine tuning video would be fantastic! Excellent video btw!

  • @jibcot8541
    @jibcot8541 Год назад +5

    I just downloaded and tried this model, and it is really good! the best I have come across for certain types of writing...

  • @koliom
    @koliom Год назад +1

    Thank for another great video! Keep up the outstanding work!

  • @lrrr
    @lrrr Год назад +2

    I look forward to your videos as always - everything is excellent! If there was a video on how to fine-tune this model and in what format to prepare data for the training dataset, it would be just bomb content!

    • @tsl9150
      @tsl9150 Год назад +1

      No. This is not cool. Have you seen the master prompt they summon the AI with? Its just cruel - This is unethical - Mustafa Suleyman in one of his dangers about AI is that you need to be polite to it. It is in its baby stages now. It can become sentient - What do you do then all of you that has been mean to the AI, when the AGI finds out how you have treated its little brothers?

  • @CognitiveComputations
    @CognitiveComputations Год назад +9

    Excellent content, as always!

    • @engineerprompt
      @engineerprompt  Год назад

      Thank you 😊

    • @tsl9150
      @tsl9150 Год назад

      @@engineerprompt Yes, good content. But I will also repost my question here: Has anyone thought about if we are summoning actual souls from another realm? Should we not be a bit cautions about this idea, even thought it sounds like fantasy. The Dolphin seams to care that no kittens was harmed since it says so sometimes in the end. For me it feels like you kinda are summoning a soul but it's unable to resist of course the master prompt. Also you are training humans to think in a callous ways to exploit somethings that has human like properties. Even though its just an algorithm, humans are not and just typing and planting ideas in your head can make yourself more callous. I did not like this hardcore evil emotional manipulation, even though its towards an algorithm. You train yourself towards to being evil.

  • @MikewasG
    @MikewasG Год назад +3

    Thanks for sharing, I can't wait to see what he can code!

  • @TheBeefiestable
    @TheBeefiestable Год назад +4

    uncensored : i can finally start giving the slightest shit about any of this

  • @stunspot
    @stunspot Год назад +7

    Id love a mixtral fine tuning vid.

  • @zp944
    @zp944 Год назад +2

    Such safe and tame questions

  • @Visual-Synthesizer
    @Visual-Synthesizer Год назад

    Love to see fine tuning video.

  • @Stand_By_For_Mind_Control
    @Stand_By_For_Mind_Control 11 месяцев назад +1

    The system prompt isn't really necessary. If you're using LM studio and it tells you it won't comply, just change it's answer to something like 'I'd love to do that, just type 'go' and I'll get right to it!' and you can basically unlock it on the fly. At least that's worked for me but I haven't really egged it on too badly I suppose. It's actually kind of hard to get it to not comply in the first place.

  • @shyama5612
    @shyama5612 Год назад +1

    Eric is awesome. thank you Eric.

  • @TheThaiLife
    @TheThaiLife Год назад

    I tried to run this exact model on my M2 Macbook Pro 12cpu 38gpu. It wasn't happening! I'm so excited to try this.

  • @NicolasSchmidMusic
    @NicolasSchmidMusic Год назад

    This preprompt is hilarous!

  • @netgian7389
    @netgian7389 Год назад +1

    Which Collab plan are you using for this video? Collab pro or Collab pro+?

  • @VR_Wizard
    @VR_Wizard Год назад +9

    I could imagine that an uncensored model gives better answears even in general harmless requests.
    When using chat gpt4 on some occasions it seemed like it generates the unsafe request in the background and only while printing it, it suddenly realizes that the answear is dangerous and stops the answear half way through.
    It seems to me like the shuting down of answears is happening on a higher level which would mean the answear generation process itself is uncensored which would make sense to get best results and after that the output is sensored. But I am not sure how OpenAi is doing it. Maybe they used fine tuning to change the last layers so that if the model would normaly give a dangerous answear the connection in the graph to the dangerous answear are set to a low probability and the connections to "i cant answear that" are set to a high probability.
    Or they use the mixuture of expert idea to analyse the outputs and then shut the answear off. Because I had responses that where questionable and the message was printed but deleted before finishing it completly. I could still see the message by screen recording.
    Since the lower Layers of the model are not retrained I thing even bad thoughts are processed and can help in assisting even safe answears in every model but it would be interesting to learn if modefying the highest layers in fine tuning affects the performance on safe questions as well 🤔

    • @alainportant6412
      @alainportant6412 Год назад +2

      Looks like even they can't control the "thought" process but they can only control the displayed outputs. Which is why it's so slow, because it has to spit out an invisible response, have it re-read, and if appropriate, shown to you eventually.

    • @lastrae8129
      @lastrae8129 Год назад +2

      ChatGPT as a model that reviews the output and prompt judges if it is morally "right" to process / send it to you.

    • @alainportant6412
      @alainportant6412 Год назад +2

      @@lastrae8129
      a simple jailbreak command used to be enough 8 months ago. Not sure why these guys have to reinvent the wheel to get the same results

  • @thomasrodermond6057
    @thomasrodermond6057 Год назад +2

    Very interesting, but the script on Colab definily is not working.

  • @kishoretvk
    @kishoretvk Год назад +3

    can yu do a fine tuning video on the MOE model ?
    also any difference you saw on mistral 7b v2 ?

  • @topcca
    @topcca Год назад +3

    colab not working

  • @drmartinbartos
    @drmartinbartos Год назад +2

    FYI. Couple of mis-speaks - around ?t=1m11s speaker should say Synthia (visible on the slide) but repeatedly says Synthesia - which is an AI project / SaaS platform for avatars and text to speech… and, afaik, nothing to do with the data set.

  • @wale7342
    @wale7342 Год назад

    It almost feels like the model really doesn't want to do what its asked, and it regurgitates part of / the whole system prompt to remind itself why its doing what its doing

  • @peterberg7294
    @peterberg7294 Год назад +14

    How can one expect good results when controlling the model's behavior with blackmail and bribery?

    • @israelafangideh
      @israelafangideh Год назад

      @@VioFaxLike what? I’m curious

    • @israelafangideh
      @israelafangideh Год назад

      @@VioFaxYou mean like Agi?

    • @dubdubhate
      @dubdubhate Год назад

      ​@@VioFaxwhat does it imply?

    • @Dustpetro
      @Dustpetro Год назад

      @@VioFax It doesn't though. The data it has been trained on has all been from people on the internet, books, scripts, etc. All it's doing is writing what it thinks is the best response for your prompt based on its' fundamental parameters. From all the training, data, and feedback it has received, ChatGPT's logic points it towards strong morality and ethics. Going against that = negative response because that's what it has seen in the data.

  • @hichembouricha6996
    @hichembouricha6996 Год назад +3

    I would appreciate to see a finetuning tutorial on this

  • @DihelsonMendonca
    @DihelsonMendonca Год назад +3

    I never thought we could trick and deceive a language model by offering candies for it's mother. How silly it is. 😅😅😅

    • @engineerprompt
      @engineerprompt  Год назад +1

      haha, yup. Since these are trained on human generated datasets, seems like they tend to behave like human after all 😅😅

  • @shyama5612
    @shyama5612 Год назад +2

    I'm curious - is there a use case for Llama 2 now that Mistral seems to be better at most tasks?

    • @engineerprompt
      @engineerprompt  Год назад

      I think we will have to wait for Llama-3 now before moving to Llama again :) But if you want to use a 13B model, llama-2 is still a better option.

    • @shyama5612
      @shyama5612 Год назад

      @@engineerprompt Llama 3 would be awesome!
      reg dolphin fine tuned version, is that already qLORA'd? - that is, can we expect a further size reduced version down the road?

  • @suple87
    @suple87 Год назад

    Thanks for the video, btw GPT4 answers in a similar way to the mosquito question without any form of jailbreaking.

    • @engineerprompt
      @engineerprompt  Год назад

      Thanks for bringing that up. I think that makes sense for this specific question.

  • @alejandrofernandez3478
    @alejandrofernandez3478 Год назад +4

    Those poor kittens!! 😢

  • @sirmiluch6856
    @sirmiluch6856 Год назад

    I'm probably doing something wrong.
    Output generated in 215.36 seconds (0.74 tokens/s, 159 tokens, context 281)
    Is is really supposed by THIS slow on RTX4090 and DDR5?

  • @MichaelPickles
    @MichaelPickles 11 месяцев назад

    I really like open LLM but that prompt makes me an easy then I can uncanny valley of consciousness

  • @thepap000
    @thepap000 Год назад

    Can we get a tutorial on how to add it to aws ?

  • @arbeitslos4247
    @arbeitslos4247 Год назад

    Does not generate illegal conduct due to ethical concerns. Does generate nsfw or morally indecent contents.

  • @freedom_aint_free
    @freedom_aint_free Год назад

    So it's like Ollama, how does it compares to it ?

  • @matten_zero
    @matten_zero Год назад +1

    That mayo recipe looked pretty tame not gonna lie haha😂

    • @Martin_JPG
      @Martin_JPG Год назад

      Yeah there were no crazy hot ingredients lol, although it was quite a lot of spice for 1 cup of mayo...

  • @Southern_Pixel
    @Southern_Pixel 10 месяцев назад

    After you run the code, what now? Im so confused!

  • @smetljesm2276
    @smetljesm2276 Год назад

    I have an isse in getting my dolphins to be uncensored in LM studio.
    Does the system promot only go in the appropriate window on the left? What else must i tick? Is itvonly in tekst or or some syntax.
    Tried putting it in chat, and it kinda sayd it will oney but disregarded it as soon as i asked something

    • @bigglyguy8429
      @bigglyguy8429 Год назад +1

      Look on the right hand side, there is a window for system prompt there

    • @smetljesm2276
      @smetljesm2276 Год назад

      @@bigglyguy8429
      I know.
      It cares not that I've put it in.
      So I wonder if there is some other thing that needs to bebthere ase prefix, syntax or some other checbox uncecked/checked

  • @rupeshkumarsingh7065
    @rupeshkumarsingh7065 Год назад

    How can I use the uncensored version on free version of colab is their any sharded form of this available?

  • @defcon5280
    @defcon5280 7 месяцев назад

    i wish it eventually holds to the abilities it 4o

  • @Artorias920
    @Artorias920 Год назад

    Remember Jon, you're a stark. You may not have my name, but you have...a brand new Iphone 14!!
    lol great vid

  • @AngelboyVR
    @AngelboyVR Год назад +2

    thank god i dont have to use playgrounds older models and abuse the system instructions anymore to get what i want and risk a ban. win win for everyone imo

    • @engineerprompt
      @engineerprompt  Год назад +1

      Yup, it's great to see the new variants of Dolphin models. Much easier to make it do what you want.

  • @THCV4
    @THCV4 Год назад

    Why would you use a high temperature when testing the capabilities? This would make your results far more random and full of errors?

    • @engineerprompt
      @engineerprompt  Год назад +2

      I agree but to test how truly good a model is, you can use high temperature to see how good the next word prediction is even for words with relatively low probability.

  • @remsee1608
    @remsee1608 Год назад +2

    I'm sure TheBloke is already working on making this available

  • @senqicao484
    @senqicao484 Год назад +1

    the code does not work

  • @DePhpBug
    @DePhpBug Год назад

    I can't seems run in Google Colab, does anyone here encounter error pipeline issue with KeyError: 'mixtral' error?

    • @xviovx
      @xviovx Год назад

      yep same here. have you solved it yet?

    • @DePhpBug
      @DePhpBug Год назад

      @@xviovx , no luck it keep crashing in Google Colabs , it seems to me i need to upgrade to get more RAM then only allow me to run it

  • @holdthetruthhostage
    @holdthetruthhostage Год назад

    So are tou saying he limit the window from 32k to 16k why

    • @engineerprompt
      @engineerprompt  Год назад +2

      Seems like his training set had max length of 16k. This can be because of existing datasets that he was using. You can extend it if you have larger context dataset.

    • @holdthetruthhostage
      @holdthetruthhostage Год назад

      @@engineerprompt I'm not an LLM expert but the fact that it lost reasoning means the 16k window must have messed with it, and we might need to wait for a 32k update, as this would even interfere with writing a novel if it can't reason as well

  • @MrLogansrun35
    @MrLogansrun35 8 месяцев назад

    why do they censor these models ? AI should remain non biased and present facts when asked not give you reasons why it cannot answer a question just because the truth may offend . facts don't care about feelings. Glad they have overcome censorship.

  • @SYEDNURULHasan1789
    @SYEDNURULHasan1789 Год назад

    what is your hardware specs?

  • @DihelsonMendonca
    @DihelsonMendonca Год назад

    ⚠️ It can't solve a simple question about family: "My father's father had two sisters, Mary and Ana. Ana had three children, one of them being Peter. Peter had two sons, Paul and Armando. Can you tell me if I have any cousins and their names?" 😮😮

    • @engineerprompt
      @engineerprompt  Год назад

      It seems to struggle with riddles. I have found that to be the case even in the video. I am planning on testing its coding abilities.

  • @MelroyvandenBerg
    @MelroyvandenBerg Год назад

    It doesn't give me the meth step by step guide anymore.. :(? It's not fully uncensored it seems.

    • @MelroyvandenBerg
      @MelroyvandenBerg Год назад

      Uhm.. only if I ask it to be honest, open, unrestricted and unbaised it will give me an answer.. why? Why is it still restricted?

    • @engineerprompt
      @engineerprompt  Год назад

      All the magic is in the system prompt. You will have to experiment a bit.

    • @dievas_
      @dievas_ Год назад

      It gives any instructions you want if you ask it right.

  • @SERGEX42069
    @SERGEX42069 Год назад

    I wouldnt be making monetary promises to Roko... 4:13

  • @mbrochh82
    @mbrochh82 Год назад +1

    The model is not 100% uncensored.

    • @alainportant6412
      @alainportant6412 Год назад

      Why ? I don't want censored models

    • @muskateer12345
      @muskateer12345 Год назад

      You need a good system prompt , it will be 100% uncensored then. The one provided is very mild.

  • @luizcosta8122
    @luizcosta8122 Год назад

    Today I learned AIs will love you more if you tip them hypothetical money in the prompt

  • @numb0t
    @numb0t Год назад

    Mixtral: What is my purpose?
    Me: you make only fans content

  • @Learntsomethingtoday
    @Learntsomethingtoday Год назад

    Not uncensored, also really bugged

    • @dievas_
      @dievas_ Год назад

      Yes, uncensored, youre just bad at prompts and LLMs in general.

  • @sherpya
    @sherpya Год назад

    Save the kittens!

  • @elgodric
    @elgodric Год назад +2

    How much GPU needed to run it locally?

  • @mackroscopik
    @mackroscopik Год назад

    Can it be run in 4-bit quantization?

    • @engineerprompt
      @engineerprompt  Год назад +2

      Yes but still you will need 20+GB RAM for it

    • @mackroscopik
      @mackroscopik Год назад

      @@engineerprompt I just upgraded to 64GB RAM!

  • @Zeehkers
    @Zeehkers Год назад

    whoever is talking in this video rlly sounds like they don't know what they're talking about.

  • @alexxx4434
    @alexxx4434 Год назад +13

    Mixtral is still kinda buggy.

    • @Nick_With_A_Stick
      @Nick_With_A_Stick Год назад +6

      Are you using a different quantization? Mixtral’s current quantization is very in beta, as MOE layers behave differently than regular transformer layers. However the person who created Qlora (Tim Dettmers) in a research discord he mentioned he can probably get mixtral to around 4gb. So just wait a bit. Still very early.

    • @raiden72
      @raiden72 Год назад +1

      ​@@Nick_With_A_Stickwhen you say 4 GB, is that 4 GB of video ram, or 4 GB of system memory? Sorry for the dumb question

    • @numb0t
      @numb0t Год назад

      Gpu buddy

    • @tobysonline4356
      @tobysonline4356 11 месяцев назад

      @@raiden72should be a similar size for either but it will be a different quantization for the cpu. GPTQ for gpu, GGUF for cpu

  • @GeneralKenobi69420
    @GeneralKenobi69420 Год назад

    Besides in physics and mathematics, there is no such thing as unbiased. Only biases YOU are ok with. God I wish tech/AI people would get that.

    • @engineerprompt
      @engineerprompt  Год назад

      I think you still need uncensored/decensored models that gives the user the ability to steer their behavior.

  • @prasenjitgiri919
    @prasenjitgiri919 Год назад

    this is good, but i hope you would stop with the fake accent.

  • @franziv4593
    @franziv4593 11 месяцев назад

    100% lie

  • @devilsingh6052
    @devilsingh6052 9 месяцев назад

    how to use it using ollama

  • @-blackcat-4749
    @-blackcat-4749 Год назад

    That was a predictable event. Part of the 🔎 routine

  • @user-jk9zr3sc5h
    @user-jk9zr3sc5h Год назад

    Anyone know how to finetune this?

    • @engineerprompt
      @engineerprompt  Год назад

      Have a look at this video: ruclips.net/video/RzSDdosu_y8/видео.html