Excellent! White paper from LLM monitoring, simply excellent :) ... off to the reading library. thanks muchly.
Thanks! Concentrate on your PhD writing it is important! We can wait for the next video.
🤗
Wow. I Lived. I have a long way to go to understand it all, but I held on!
Wow. . . I'm glad I've found your channel very informative, I'm still trying to wrap my head around those articles, love your accent and your smile, looking forward to more videos, best of luck in your thesis, cheers from Sydney.
Welcome aboard!
😃🙏
Good luck defending your thesis, your accent is one of the reasons i watch.
Can't say I really understand most of what they have done but it seems totally wild and real creative how they managed to work it out. Good luck with the thesis :D
Really cool video. Good luck with your thesis! ^^
Thank you!
So much linear algebra is involved in LLMs
They are a pile of linear algebra.
Good luck with the paper, remember to take some time for yourself afterwards! I look forward to your next contribution.
Perfect way to start a week with one of your videos. Great work!
Hey amazing content, quick note 2nd paper is from University of southern California (My uni) so just pointing that out!
Thanks! I've butchered it at 3:15, I did not notice that it was not "California" that came out of my mouth, sorry. 🙈
For what it’s worth, I completely understand your accent (it’s not very heavy in English), and wow I was completely overthinking it with my guesses on the paper on Patreon. 😅
I’m glad I can now steal all the logits if I wanted! 😜
Another fantastic paper breakdown as always!
Thank you! 😃 Your guesses on Patreon were highly informative. I told you I was humbled by how many good papers I did not know. 🤭
Awesome. Good video editing and pacing too.
Good luck with your thesis! If you want try to do a summary video of your thesis after you submit it and before your defense, I'd watch it. Could be good practice for the defense too. I'm sure you know more about the topic than anyone else in the world...new research is cool.
Thanks, I think I will do something like that. :)
As "d" is the hidden embedding dimension, is it guaranteed somehow that the logits and embeddings themselves lie in d-dimensional space? Or they probably lie in lower dimensional sub-space?
Great observation. It could happen that they lie in a smaller-than-d dimensional space. They mention this in a footnote of the Finlayson et al. paper. :)
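To make the subspace point concrete, here's a toy numpy sketch (my own illustration with made-up sizes, not the paper's actual procedure): if the model's final layer is linear, then logit vectors collected over many queries all lie in an at-most-d-dimensional subspace of the n_vocab-dimensional logit space, and the singular values of the stacked logits reveal d.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_hidden = 1000, 64  # toy sizes; real models are much larger

# Final linear layer: hidden state (d) -> logits (n_vocab)
W = rng.normal(size=(n_vocab, d_hidden))

# Collect logit vectors for many different prompts (simulated hidden states)
n_queries = 200
H = rng.normal(size=(n_queries, d_hidden))  # hidden states
L = H @ W.T                                 # observed logit vectors

# Singular values drop to ~0 after index d_hidden: the logits live in a
# d_hidden-dimensional subspace of the n_vocab-dimensional logit space
s = np.linalg.svd(L, compute_uv=False)
est_d = int(np.sum(s > 1e-6 * s[0]))
print(est_d)  # 64
```

If the embeddings happened to occupy an even smaller subspace than d (the footnote's caveat), this estimate would recover that smaller rank instead of d itself.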
LOL yup, still here ;)
What software and video editing tools do you use for creating this great content ?
I make all visuals (including the drawings) in PowerPoint. 😅 I use Adobe Premiere Pro for editing (this is also the stage where Ms. Coffee Bean comes into the picture).
@@AICoffeeBreak this is so impressive. I couldn't even imagine that you could do that much visual work using PowerPoint. You must be at guru level in PowerPoint. Maybe that could be the topic of another video :)
@AbdallahAbdallah
Here we go!!!
The idea that GPT-3.5-Turbo is just a 7B-parameter model seems a little unlikely to me; its performance is leagues beyond the open-source 7B models that at least I'm familiar with. If they hadn't explicitly addressed it in the paper, I would have assumed that it was an 8x7B or similar model. At least the performance of GPT-3.5-Turbo and Mixtral is roughly the same.
OpenChat 3.5 is 7B (a Mistral variant) and spans what GPT-3.5 does. So I believe it's possible.
However, I believe you could be right in that there could be MoE as well - or special teacher / student training that openAI did to help 3.5
Good luck with the Thesis!
Thanks, Dana!
WOW! 😅 I watched the whole video! I think my brain was somewhat cramping, but I grasped the concept. Good job and good luck with your thesis!!!! I subscribed but I’ll wait for my brain to heal a bit before I jump into another great video 🙃🙏👍
Just here to say, your accent is wonderful. Romanian, I guess from your name. All the best.
Exactly, the ending "escu" is a perfect giveaway.
I am not a fan of the Title of the paper. First of all, analyzing data to determine a hidden aspect of the data where such data is publicly available is not stealing. And it does a huge disservice to professionals in the industry to call it that. It would serve well as a Joke title... but no professional in the information world should be using those kinds of titles.
Other than that, my hat goes off to the researchers for analyzing the data and discovering ways to reveal this information.
Thank you for making this video... Some of the best analysis on the net!
As researchers who take an adversarial approach to security, we sometimes must use prescriptive language when discussing methodologies of morally reprehensible acts. It's called red teaming.
How else do you propose to succinctly describe such an action?
@@Acceleratedpayloads "Speculative API Analysis: Statistically Inferring Unpublished LLM Parameters and Updates". You could also add in "Penetration Testing" as part of the title to provide the information that this was done with security in mind: *"API Penetration Testing: Statistically Inferring Unpublished LLM Parameters and Updates"*
I realize that flashy titles are the new "in" thing... but I really really don't want to see: "STUNNING new technique steals LLM secrets in SHOCKING new API HACK, GPT loses Everything!" hehehe
Our entire world is based on theft. Theft of human energy, focused on the aims of those who do the stealing. It will only get worse as capitalism becomes more late-stage and the best capitalists start to morph into pharaohs. Happy pyramid building.
@@AcceleratedpayloadsYou don't sound like someone that's very good at it tbh
@@Acceleratedpayloads I have a suggestion. how about honestly? or perhaps "in context"? just a thought.
great content, glad I found the channel
I really like your content. Thank you so much for making it!
Glad you like the videos!
So cool. Thanks mate! I wonder how other architectures would fare against this attack.
Also, super good luck on your thesis mate!!! I would love to see what you've been researching!
I think quite badly as long as they have that hidden-dim to logits mapping. And most classification models have that. 😅
The intellect of a woman has always melted my heart. I think I just fell in love with Letitia. That is the most beautiful lecture I have ever enjoyed. ❤
You pronounce it very well. It's very easy to hack the AI by probabilistically exploiting the biases introduced to limit its power.
Aside from the embarrassment and possible competitive disadvantage, do these papers also imply security risks? Rewriting the model in some way, like an old fashioned code injection? Are these methods available against any/all APIs?
Good luck with your PhD! While I wasn't missing your videos (as in I prefer higher quality videos instead of regular ones, which you are nailing), I'll definitely be looking forward to them.
LLM-hacker for the rise xD
Yep, still here. I assume an MoE would make one's life a bit difficult when hacking via the token-bias feature. Correct?
Great point.
My idea is that OpenAI is trying to do something similar to Apple: whenever Apple drops a new phone, they make all the old ones worse. I think that's a big reason they're so desperate to keep everything hidden.
@AICoffeeBreak
The paper Reads "University of Southern California", but ...
You said "University of Southern Carolina" first time, and "University of South Carolina" 2nd time.
-- I'm sorry, my INTJ personality picks up on the little details. 😄
Thanks for pointing that out, and again: sorry, I had a brain fart there. 😅
@@AICoffeeBreak Must have been Coffee Beans on the Brain...🤣
Thank you❤
Thank you for your visit! Hope to see you again!
* easiest way to prevent the attack is to limit biases. Nobody needs to bias lots of tokens
* maybe fuse the last MLP with embed_out? Right now transformers do dim_embed -> dim_ffn -> dim_embed -> n_vocab. If the last MLP outputs n_vocab directly instead of dim_embed, then it'll be harder to figure out the true dim_embed, as the MLP already uses a non-linearity
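A toy numpy sketch of the second idea (my own construction, with hypothetical layer sizes, not a tested defense): with the standard head, the stacked logits are a linear image of a dim_embed-dimensional space, so their rank leaks dim_embed; with a fused head, a ReLU sits between the attacker and the dim_embed bottleneck, so the observable rank no longer equals dim_embed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vocab, d_embed, d_ffn, n_queries = 500, 32, 128, 300

H = rng.normal(size=(n_queries, d_embed))  # final hidden states for many queries

# Standard head: MLP projects back to d_embed, then embed_out maps to logits.
# The logits are a linear image of a d_embed-dimensional space -> rank = d_embed.
W_up_std = rng.normal(size=(d_embed, d_ffn))
W_down = rng.normal(size=(d_ffn, d_embed))
W_out = rng.normal(size=(n_vocab, d_embed))
logits_std = np.maximum(H @ W_up_std, 0) @ W_down @ W_out.T

# Fused head: the last MLP outputs n_vocab directly, so the ReLU sits before
# the only linear map the attacker sees; no d_embed-sized bottleneck remains.
W_up_fused = rng.normal(size=(d_embed, d_ffn))
W_fused = rng.normal(size=(d_ffn, n_vocab))
logits_fused = np.maximum(H @ W_up_fused, 0) @ W_fused

rank_std = np.linalg.matrix_rank(logits_std)
rank_fused = np.linalg.matrix_rank(logits_fused)
print(rank_std, rank_fused)  # rank_std equals d_embed; rank_fused exceeds it
```

Note this only hides dim_embed from the rank probe; whether it survives the paper's other attack variants is a separate question.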
thank you
3:15 *California :D
Oh, yes! Sorry! 🙈
I see someone else caught that now that I looked ... Is your personality INTJ also?
💋🦒
🤭
not a big deal and it's mentioned in the paper as well
what happens when you apply this to a human?
Next step... Get the model using distillation... And obviously it will not be published..😂😂
If this approach works for LLMs in general, I wouldn't be surprised if some variation of it worked on human brains. That would be way crazier.
clickbait! fake news! i call BS on the title. *LeSigh...* This is like saying a smart kid figuring out a magic trick is stealing the magician's magic. quit misrepresenting to get views, it cheapens and invalidates your good work!!!
This is literally the title of the paper that is being explained here... There is no misrepresentation to get views here.
@@DerPylz i'm sorry, i missed that. could you point that out for me? (timestamp?)
@@fatherfoxstrongpaw8968 The first time the papers are shown is at 0:35, then again at 1:45 and a few more times throughout the video. The papers are also linked in the description, if you want to have a closer look yourself.
@@DerPylz yeeaaa... still calling BS. if you can guess or "infer" what it's doing, it's not stealing. sorry.
@@fatherfoxstrongpaw8968 Wait, is your criticism with the paper or with the video? Because the video accurately explains what the authors of "Stealing Part of a Production Language Model" did in their paper. Of course the video will be titled according to the paper it explains, so that people looking for an explanation of that paper can find the video. So I don't see any false advertising or misrepresentation there. If you don't agree with the paper's title, then that's a different discussion, and you might want to take up your criticism with the authors. But your comment saying "it cheapens and invalidates your good work" makes it sound like you're blaming Letitia.