But can China's new AI write a Good Tune?

  • Published: 8 Feb 2025
  • Pitting DeepSeek against ChatGPT on a series of increasingly complicated musical challenges.
    Patreon (for the code!): / marcevanstein
    Upcoming Python Music workshop: workshop.marcev...
    Kadenze Course: www.kadenze.co...

Comments • 153

  • @imveryangryitsnotbutter
    @imveryangryitsnotbutter 6 days ago +216

    It should be said that the main innovation of DeepSeek is not improved output, but rather a drastic reduction in the hardware required to produce something in the same ballpark as current AI models. It sacrifices a little quality in exchange for being more affordable, more accessible to non-profit organizations, universities, and research institutes, and less damaging to the environment. That's a trade-off I'll gladly accept any day.

    • @marcevanstein
      @marcevanstein  6 days ago +36

      To be sure! I mean, I've been so ambivalent about AI, because I find it really fascinating, and oftentimes super helpful, but I really hate the resource-hungry, closed, and amoral way in which it's being developed.

    • @everythingisterrible8862
      @everythingisterrible8862 6 days ago +2

      It's not really very practical, generating only a few tokens a second. Consumer-grade hardware isn't there.
      And it won't be there until NPUs, hardware that is the actual network in physical form, become the standard. It's a real post-AGI thing: you need a network to put on these things before they're able to do anything. (And knowing how gods-expensive it would currently be to print a human-scale 'brain' in a small form factor, it only makes sense to have a guarantee before dumping a trillion buckeroos on completely new computing-paradigm foundries.) Honestly, IBM feels like the 'biggest loser' of the AI wars; they used to advertise this kind of thing back in the day, but there was little interest since you couldn't do anything that practical with them.
      DeepSeek is a fun model for enthusiasts. It's a candle in the wind compared to what will be made with the upcoming round of scaling in datacenters.

    • @valhatan3907
      @valhatan3907 6 days ago +4

      Finally someone made a point about the environment

    • @Yarxk-j7w
      @Yarxk-j7w 6 days ago +7

      ChatGPT-4o has around 1.8 trillion parameters, which is 3x larger than DeepSeek R1, and ChatGPT o1 is probably the same size or larger. If the performance difference is less than 10-20%, that is considered extremely efficient.

    • @GageEakins
      @GageEakins 5 days ago +2

      @@valhatan3907 Anyone talking about the environment and AI doesn't understand scale. Computer systems are by far and away the most efficient systems we have. They do not actually use that much electricity compared to literally everything else we do in life. All datacenters worldwide, that is, all computer operations, not just AI, use approximately 460 TWh of electricity. That sounds like a huge amount until you consider that worldwide energy consumption is 186,000 TWh. So all datacenters account for about 0.25% of global energy usage. It's hard to get estimates for how much of that is just AI, but it isn't a significant share. AI's largest energy usage is in training, but that is a one-time fixed cost. The actual prompt generation is negligible.

  • @MrDanMaster
    @MrDanMaster 5 days ago +49

    It was DeepThink R1 that was worthy of attention, because that is the reasoning model that is able to compete with o1 while being open-source. That is why at 2:40 you can't read what the model is actually "thinking" the way you could with R1, just a summary of the thinking. Asking o1 what its internal monologue was is one of the few ways a paying customer can get permanently banned from the service.

    • @lcs.1094
      @lcs.1094 5 days ago +13

      I know ChatGPT intentionally hides its chain of thought as a commercial secret, but banning users just for asking about it is quite crazy

    • @appa609
      @appa609 3 days ago

      ​@@lcs.1094 fr just code the bot to keep its secrets

  • @franklee663
    @franklee663 5 days ago +43

    I feel that DeepSeek has more "character" compared to ChatGPT. What I mean by character is that DeepSeek is more stubborn: it will not try to correct errors unless they are obvious. So if you want something good, you should just start a new chat, ask DeepSeek the same question again, and have it evaluate what it produced previously by giving it its previous creation.
    I have tried DeepSeek many times, and after a certain number of tries it will give up and refuse to give you any more new answers. That's why I think DeepSeek is better, hahaha, it is more like a human.

    • @Faizan29353
      @Faizan29353 4 days ago +6

      Agreed. I don't know about that, but it does have character.
      I was asking it to recommend games for my PC hardware, and it gave reasonable answers, including a "reality check" portion where it stated why I should not dream big: my hardware is just not good enough to emulate PS3/PS4 games, and I could not run an old emulator because my iGPU does not support that OpenGL version.
      Asking the same of GPT: it tried to please me and sugar-coated things. It recommended PS2 emulation just because my PC's own hardware is better (but due to the emulator's inefficiency, it will not run, which DeepSeek pointed out), and it suggested a lot of games that do not run, as I had already tried them.
      All the suggestions from DeepSeek run.

    • @deeleon
      @deeleon 3 days ago

      that's a smart solution...

    • @谈天说剑独善何益
      @谈天说剑独善何益 2 days ago

      Indeed, when you look closely at its thinking process, you'll find that DeepSeek R1 reasons about what the user actually needs rather than just taking the question literally (the most typical case being when you repeat exactly the same question). It can even tell when you're joking with it (even the most abstract teasing) and then play along without letting on.
      For the past two weeks, young people in China have been having endless fun with this 😂😂; recording videos of themselves teasing DeepSeek in creative ways and uploading them has become a trend.

  • @questtech2698
    @questtech2698 6 days ago +54

    "can a robot write a symphony?"
    "yes I can, can you?"
    from the director and producer of I, robot
    We, robots on theatres soon

    • @qianxin5472
      @qianxin5472 3 days ago +2

      ruclips.net/video/QBHf_Gn9bhg/видео.htmlsi=Q71YJf7vN61MsyOj
      " Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?"
      "Can YOU?"

  • @johnchessant3012
    @johnchessant3012 6 days ago +78

    8:09 "cher ami, you grasp the storm but forget the poetry. quartal harmonies need not be barbaric- make them sing. where is the rubato? the bass crawls where it should dance. and this ending- mon dieu, it stops rather than dissolves!"
    LOL

    • @coltzhao
      @coltzhao 6 days ago +5

      Typical Chinese online talks… translated in English.
      One of the thing I noticed with fluent in both languages is that, DS can be wildly spit out weirdly and some time cynical/humorous even brutally insulting word, like the people in tieba arguing online . That also why I am not really buying the instillation thing.

    • @ThisAMJ
      @ThisAMJ 6 days ago +4

      ​@@coltzhao translated *into* English. One of the *things* you noticed *being* fluent in both languages. The rest of your comment is somewhat difficult to follow. DS can randomly spit out weird and sometimes cynical/humorous phrases, even brutal insults. *That's* also why I don't really buy the claims about its training.

    • @coltzhao
      @coltzhao 6 days ago

      @ If you ever argue and talk on various Chinese online platforms, you know: the style is remarkably similar to DeepSeek's sometimes, and I never saw that mood/style in ChatGPT.

    • @MrDanMaster
      @MrDanMaster 5 days ago +2

      @@ThisAMJ no, they wanted: "One of the things you notice with fluency in both languages"

  • @TheStigma
    @TheStigma 2 days ago +3

    "Stumbling into something interesting, rather than actually being competent"
    I think you just summed up honestly the creative process of every artist :P
    Maybe that's an insight into why these models (in their current forms at least) can work so well as idea-generators and starting points.

    • @marcevanstein
      @marcevanstein  2 days ago

      Well, yeah; if you look back at some of my other similar videos, I point out that the missing key, as far as artistic creation is concerned, is the model's ability to spot when it has stumbled onto something interesting. Between "stumbling on interesting material" and "recognizing the interesting properties and synthesizing the material", it's the latter that truly makes an artist. And obviously that's where these models are falling flat.

    • @TheStigma
      @TheStigma 2 days ago

      @@marcevanstein Very true. It's fundamentally hard to judge quality whenever it's not something you can nail down mathematically, which applies to most things in art. AFAIK the only really good method is to manually curate huge datasets to tell the model what is "good". That is orders of magnitude more work, and I can't imagine it has been done nearly as much for music as for text and images.
      Even then, you obviously run into human biases and differences in preference.
      But this is why I think crowdsourced "feedback-as-you-go" will be invaluable no matter how good the models become.

  • @nahlene1973
    @nahlene1973 6 days ago +21

    You should've turned on the DeepThink button from the first prompt, as the two settings call different models (it's like starting with GPT-4o and then letting o1 pick up the previous job)

  • @Akuma.73
    @Akuma.73 5 days ago +29

    7:30 Keep the prompt concise. Avoid words such as: try, oh, please, can you, would you, could you. Instead, use direct language and be dominant with the model; it tends to produce better results. Here is a better version:
    Write a fast, dramatic miniature Chopin-style prelude, featuring quartal harmony instead of triadic harmony. Incorporate aspects of Chopin's writing, reduce superficial elements. Decide the key (except C) and the time signature. And give me a list of beats on which to change pedal.
    From the DeepSeek-R1 paper: "Prompt Engineering: When evaluating DeepSeek-R1, we observe that it is sensitive to prompts. Few-shot prompting consistently degrades its performance. Therefore, we recommend users directly describe the problem and specify the output format using a zero-shot setting for optimal results."
    It's better to start new chats rather than continue existing ones, and to keep the prompt as direct as possible (see the sketch below). Just FYI, I hope it was helpful! Happy seeking :)
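    As a rough illustration of this zero-shot, direct-prompt advice: a minimal Python sketch, assuming DeepSeek's OpenAI-compatible API (the base URL and model names follow DeepSeek's public docs and may have changed; treat them as assumptions).

    ```python
    # Zero-shot, direct prompt in a brand-new conversation, per the advice above.
    # base_url and model names are assumptions taken from DeepSeek's docs.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

    prompt = (
        "Write a fast, dramatic miniature Chopin-style prelude, featuring "
        "quartal harmony instead of triadic harmony. Decide the key (except C) "
        "and the time signature. Give a list of beats on which to change pedal."
    )

    response = client.chat.completions.create(
        model="deepseek-reasoner",  # R1; "deepseek-chat" would call V3 instead
        messages=[{"role": "user", "content": prompt}],  # no few-shot examples
    )
    print(response.choices[0].message.content)
    ```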

    • @Aeduo
      @Aeduo 4 days ago

      I'm guessing the language formalities just add "distraction": meaningless tokens which still contribute to the end result and take up token memory. On the other hand, seeding a bit of randomness with meaningless tokens to get unique results might be useful too.

    • @orderofchaos8680
      @orderofchaos8680 4 days ago

      Conciseness is a rule of thumb, but you never know the magic of redundant words. Sometimes redundant words make the chat perform much better.

  • @shadmium
    @shadmium 6 days ago +124

    I always found the deepthink button hilarious

    • @baldeagle6531
      @baldeagle6531 6 days ago +19

      You are saying 'always' like you had access to it more than 3 weeks ago. Also, what's the problem with the deepthink button? If it were useless, then ChatGPT wouldn't have added the same feature. In fact, deepseek produces far more structurally and logically correct responses in its thoughts, and then it draws conclusions from them, which are sometimes wrong even when the thoughts were correct.

    • @abhishekak9619
      @abhishekak9619 5 days ago +4

      ​@@baldeagle6531the name sounds funny.

    • @Akuma.73
      @Akuma.73 5 days ago +6

      @@baldeagle6531 Actually, it's been a few months already; it's been available since the end of December. It was a V3 model though.

    • @381delirius
      @381delirius 5 days ago +4

      Ask it how many Rs are in strawberry 😂

    • @Trey06383
      @Trey06383 5 days ago

      @@baldeagle6531the deep think button already existed

  • @TheGuyWhoGamesAlot1
    @TheGuyWhoGamesAlot1 6 days ago +14

    You should try the Gemini models. They are technically multi-modal (they understand multiple modalities such as text, images/video, and audio). I wonder if that audio understanding would help.

    • @pallharaldsson9015
      @pallharaldsson9015 3 days ago

      Note: I can't confirm Gemini supports audio, even though it's multimodal; I even believe the claim is incorrect: "Gemini is a multimodal model from the team at Google DeepMind that can be prompted with not only images, but also text, code, and video." You might think video implies audio, but Google famously demoed it with a black-and-white silent film, I believe because it couldn't handle the audio, and because the film had text on screen.
      Yes, there is *other* AI that generates audio (trained on songs, but taking audio in only at training time, not at inference, unless I've missed some recent developments), even with singing from lyrics, but everything I've seen so far takes in a text prompt only. I find it rather amazing that, unlike those systems, LLMs can understand music well enough to generate it from just "reading" text (as a deaf person might); also that they can sort of play chess, since I suppose only a tiny fraction of the text online contains musical notes or chess notation.
      OpenAI has a voice mode now; it was clear that the older version did only speech-to-text, then fed in text, and then used text-to-speech for output, but that's no longer a restriction. It is now "end-to-end", so it hears your tone of voice and inflections. I still think it's mostly for speech, not music, but I might be wrong.

    • @TheGuyWhoGamesAlot1
      @TheGuyWhoGamesAlot1 3 days ago

      @@pallharaldsson9015 ai.google.dev/gemini-api/docs/audio?lang=python
      No, it does. I don't know if there is anything in particular for understanding music in the training data, but it can label music decently from what I have tried. Maybe it isn't available in the Gemini interface, but for the API it is.

    • @TheGuyWhoGamesAlot1
      @TheGuyWhoGamesAlot1 3 days ago +1

      @pallharaldsson9015 I don't know if my other message was posted, because it had a link, but if you read Gemini's documentation, it can process audio and sounds.
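      For reference, a minimal sketch of that documented audio flow, assuming the google-generativeai Python package (the model name and file path are illustrative placeholders):

      ```python
      # Prompting Gemini with an audio file, per the docs the commenter links.
      # Model name and file path are assumptions for illustration.
      import google.generativeai as genai

      genai.configure(api_key="YOUR_KEY")
      audio = genai.upload_file("clip.mp3")  # upload the audio clip first
      model = genai.GenerativeModel("gemini-1.5-flash")
      resp = model.generate_content([audio, "Describe this piece of music."])
      print(resp.text)
      ```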

  • @vladthemagnificent9052
    @vladthemagnificent9052 6 days ago +18

    I'm afraid the real Chopin would just have a stroke.
    I agree, it's "kinda interesting"

  • @nonetrix3066
    @nonetrix3066 2 days ago +1

    The beginnings of the first few pieces by DeepSeek are actually really nice, but then it completely drops the ball. I could see using it as an interesting starting point, in my head at least, although I know nothing about music really

  • @TheStigma
    @TheStigma 2 days ago +2

    The internal thinking is really fascinating to watch fully play out, IMO. Yes, it does reveal quite a bit of the "stupidity" of AI models that can otherwise seem a bit like magic, but it also lets you identify core misunderstandings that prevent the model from producing a better answer, or at least from focusing its efforts on the right things. These can often be very basic things, like not being quite sure about the context of your question. If you then clarify or correct some of the facts it uses to arrive at the answer and ask it to reevaluate with the new info, that usually improves the output quality massively. A system that can integrate this feedback and correction into further learning refinement becomes self-improving through crowdsourcing. It is much harder to spot reasoning issues when this process is opaque.
    This is already a big issue for open projects like DeepSeek R1 vs. proprietary ones like GPT. Giving away the internal thought process means others can distill the model and train on it very efficiently, so proprietary systems won't want to give you that insight. It's obvious why there would be a need for this from a capitalist angle, but it also means missing out on a great amount of natural learning and better understanding between humans and the model, which seems pretty essential for improving the technology as a whole. Yes, you can skim the whole internet for info and learn a lot, but that's not nearly as useful for self-reflection as direct human feedback and commentary (I think... as someone who admittedly doesn't make LLMs :P )

  • @TheStigma
    @TheStigma 2 days ago

    Good point about open-ended instructions producing "better" results. The fewer constraints you put into the request, the better the model can usually extrapolate from its training something that "fits together" quite well, and that is often perceived as "higher quality" or "more natural". The downside is obviously that the output will be more random in scope and may include things that aren't relevant. Always consider which constraints you actually need initially, and add further constraints as you narrow down and refine the idea. Over-instruction is a very real thing.
    The funny thing is, this applies just as well to image creation or co-writing a fictional story. It applies quite broadly to AI models in general.

  • @michaelvarney.
    @michaelvarney. 6 days ago +12

    I broke R1 with a question about modes, using Nashville numbers and Roman-numeral analysis… it recursively puked for 5 minutes and invented/hallucinated five additional letter classes for the Nashville number system… 😅

    • @Mnnvint
      @Mnnvint 4 days ago +2

      Yeah, I used one of the smaller DeepThink-tuned models, and the model's internal monologue reminded me of the bike scene in "A Scanner Darkly". It has two gears here, three gears there; how can it have six gears?!

  • @hundvd_7
    @hundvd_7 6 days ago +32

    This is dumb.
    I know we are already abusing chatbots for something they were not even designed to do, but the biggest issue here is having a _single_ conversation with each.
    Sometimes both Chet Jippity and DeepSeek just start off on the wrong foot.
    Like, you'd ask "what's 1 + 1" and _most_ of the time they answer 2, but sometimes it's wrong. That is what I feel happened to _both_ of them here, but especially to DeepSeek.
    And the important thing is, you _need_ to start a new conversation if the very first answer was totally, catastrophically bad. I believe the logic behind it is that if the AI looks at the history of the conversation and sees that it was dumb and had to be corrected or reprimanded, then it will _keep_ playing that role, since that is statistically far more likely than the "person" suddenly becoming a genius.
    *My suggestion,* if you're gonna make another such video in the future (a rough sketch follows below):
    - try at least 3 (preferably 5) conversations each, sending them the exact same message
    - play them all and quickly compare them, choosing only the best one from each AI
    - continue the rest of the video as you would
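    A rough sketch of that protocol, assuming any chat-completions-style client (the model name here is a stand-in for whichever model is under test): each attempt is an independent, history-free conversation.

    ```python
    # Each attempt is a fresh conversation: a single user message with no
    # prior history, so one bad start can't drag down the later answers.
    from openai import OpenAI

    client = OpenAI()  # or a DeepSeek-compatible client, as elsewhere
    PROMPT = "Write a 16-bar melody as a list of (pitch, duration) pairs."

    candidates = [
        client.chat.completions.create(
            model="o1",  # stand-in; swap in the model under test
            messages=[{"role": "user", "content": PROMPT}],
        ).choices[0].message.content
        for _ in range(5)
    ]
    # Render and play all five, then keep only the best one from each AI.
    ```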

  • @N-JKoordt
    @N-JKoordt 4 days ago +2

    Pretty random stuff. You know what they say: if you let a thousand monkeys type on typewriters... So, if you're patient, it may produce something worth hearing eventually.

  • @Leto2ndAtreides
    @Leto2ndAtreides 3 days ago +1

    Time to test o3-mini (high) instead of o1.
    Although, realistically... I think they just haven't been fed enough musical data to have a decent sense of it.
    Most likely, no one is testing them on this to make sure they can do a decent job.
    It may be a good idea to actually give them a few reference samples before asking them to write something, since they likely don't remember the pieces in detail.

  • @RengokuGS
    @RengokuGS 6 days ago +8

    That image-to-music video was genius, and I'd love a copy of that repo. Soooo good!
    Edit: Is it on the Patreon?

  • @firiasu
    @firiasu 4 days ago

    So that's where the music in the corridors of the music school comes from!

  • @HillHand
    @HillHand 6 days ago +2

    If you use the Spellbook module for VCV Rack, it's got a CSV-like, text-based sequencing format called 'RhythML' which is easy to explain to these models and to copy & paste like code snippets, so you don't have to worry about converting into MIDI or anything.

  • @glumpfi
    @glumpfi 1 day ago

    GPT-4 could be a great composer with different training data: MuseNet was based on the much smaller GPT-2 model, but with MIDI as training data instead of general text

  • @Marta1Buck
    @Marta1Buck 3 days ago

    I want an AI that can accompany me jamming while I'm bored and all my friends are busy.

  • @legendlenos1175
    @legendlenos1175 2 days ago +1

    Using DeepSeek without DeepThink = DeepSeek-V3.
    With DeepThink = DeepSeek-R1, the new model.
    You should have used it from the beginning

    • @soheiladam7510
      @soheiladam7510 2 days ago

      Exactly; people need to know how to use things before making dumb videos.

  • @sarakzite6946
    @sarakzite6946 6 days ago +1

    I've got two ideas that I can't pursue myself because I'm a very big noob at AI/RL/DL, etc.: 1) an AI coach for piano that would help a student measure by measure, maybe an LLM that can read sheet music; 2) a fine-tuned model that can create sheet music, kind of like you did.
    I think it's only a matter of months/years until someone does it; the second has probably already been done.
    Thank you for your vid as usual; I'd love to see more AI music content ❤

  • @AlanDampog
    @AlanDampog 3 days ago

    subbed, great topic/idea

  • @telotawa
    @telotawa 3 days ago +1

    You probably should have started it on DeepThink (R1), rather than switching mid-interaction, to make it fair

  • @lettersandnumbers993
    @lettersandnumbers993 4 days ago +3

    Nobody says *which* DeepSeek model they actually used or are talking about. I am sorry, but size does indeed matter.

  • @TommyGreenTeas
    @TommyGreenTeas 6 days ago +3

    Ahh, now I hear moonlight in the storm! 👆🙂‍↔️☝️
    C'est presque bien ("it's almost good")

  • @napuzu
    @napuzu 1 day ago

    I thought ChatGPT was smarter, but take the command: if a subfolder has a symlink, back up all the files inside the symlink target to the subfolder, keeping only 5 backups (a rough sketch of the task is below).
    DeepSeek always does this right. ChatGPT never backs up the files inside the symlink, just the symlink itself, no matter how much I correct it.
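    For concreteness, a rough Python sketch of the task as described; the directory layout, names, and retention policy are assumptions.

    ```python
    # Back up the files *inside* each symlink's target (not the link itself),
    # keeping only the 5 newest backups per subfolder. Paths are made up.
    import os
    import shutil
    import time

    ROOT = "/data"  # hypothetical root directory
    KEEP = 5        # number of backups to retain

    for sub in os.scandir(ROOT):
        if not sub.is_dir():
            continue
        for entry in list(os.scandir(sub.path)):
            if entry.is_symlink() and entry.is_dir():
                # Follow the symlink and copy the target's contents,
                # avoiding the failure mode described above.
                stamp = time.strftime("%Y%m%d-%H%M%S")
                dest = os.path.join(sub.path, f"backup-{entry.name}-{stamp}")
                shutil.copytree(os.path.realpath(entry.path), dest)
        # Prune: keep only the KEEP most recent backups in this subfolder.
        backups = sorted(
            d.path for d in os.scandir(sub.path)
            if d.is_dir() and d.name.startswith("backup-")
        )
        for old in backups[:-KEEP]:
            shutil.rmtree(old)
    ```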

  • @rmt3589
    @rmt3589 4 days ago +1

    I was planning on teaching the AI MML (Music Macro Language). But now I wanna see how it can work with Python. Maybe a mix of both?

  • @grindx1292
    @grindx1292 6 days ago +5

    Hey Marc, I find the video slightly biased toward ChatGPT, given that you consistently used their o1 model while fluctuating between DeepThink (DeepSeek R1) and non-DeepThink (DeepSeek V3, vastly inferior to ChatGPT o1).
    Redoing this experiment with consistent models would net better results on DeepSeek's end, no doubt.

    • @swagataraha7396
      @swagataraha7396 3 days ago

      Marc is paid by ChatGPT
      He is promoting ChatGPT

  • @bananechoc-1p5
    @bananechoc-1p5 4 days ago

    I didn't know A.I. could write music. Time to take ol' GPT and Mr. Seeks out for a test drive

  • @d.d.jacksonpoetryproject
    @d.d.jacksonpoetryproject 2 days ago

    Fun, but I'm not sure what the point is when you have actual AI music-creation engines tuned for this purpose, like Udio and Suno, that can produce meaningful output

  • @poptropical3170
    @poptropical3170 6 days ago +11

    9:30 giving me Legend of Zelda vibes

  • @JestroGameDev
    @JestroGameDev 6 days ago

    The Chopin remixes (especially both revisions) sound like the active phase of Gohdan from Wind Waker

  • @m4rt_
    @m4rt_ 5 days ago

    Well... generative AI just uses statistics to try to generate output that seems plausible; it can't think, it can't feel, etc.
    The only things that really matter are the kind of output it gives, what biases it may have (based on the input data), and how plausible the output is.

  • @LD-12345
    @LD-12345 4 days ago +1

    0:11 Can't believe I fell for that...

    • @marcevanstein
      @marcevanstein  4 days ago +2

      Secret purpose of the video revealed.

  • @alex_316
    @alex_316 4 days ago

    Wow, I'm impressed it could be used to write music. Would you be able to try the new OpenAI Deep Research model? Wondering if it could do better

  • @u8qu1tis
    @u8qu1tis 5 days ago

    I don't see the mixed meters as mistakes unless you specify that there shouldn't be measures with different time signatures. I've been playing a lot of modern movie music recently and that has mixed meters all over the place.

  • @etitis8178
    @etitis8178 5 days ago

    The DeepSeek one sounds like a "confusing assignment"

  • @wli2718
    @wli2718 3 days ago

    These AIs are called LLMs, or Large Language Models, also called chatbots. Word-based questions with word-based answers are what they are meant for; this includes mathematical questions too. But once you depart from those parameters, they are not very good.

  • @HelamanGile
    @HelamanGile 5 days ago +1

    I like the bass line; I don't like what comes before the bass line hahaha 😂

    • @HelamanGile
      @HelamanGile 5 days ago

      I really don't like ChatGPT's bass line

  • @duddex
    @duddex 6 days ago +3

    Sia++ 😂
    That’s great. How do you come up with these names?

    • @marcevanstein
      @marcevanstein  6 days ago +2

      Oh, that's some honest-to-goodness human dad-joke cringe. I do it myself :-)

  • @TheBrighamhall
    @TheBrighamhall 4 days ago

    Nice. How do you take the output and get it onto staff paper?

  • @thbb1
    @thbb1 4 days ago

    The outcome is not much different from using a symbolic system (generating tunes according to logical rules) initialized with a random seed for the initial notes; see the toy sketch below. Lots of technology and energy put to use for something that could have been generated by a Prolog program on an Intel 8086 in the late '70s.
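    As a toy illustration of that point, a rule-based ("symbolic") melody generator needs only a scale, a couple of voice-leading rules, and a random seed; everything below is made up for illustration.

    ```python
    # Tiny symbolic tune generator: random seed + stepwise-motion rule.
    import random

    random.seed(42)
    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

    melody = [random.choice(SCALE)]
    for _ in range(15):
        # Rule: move by a step or small leap within the scale.
        i = SCALE.index(melody[-1])
        i = max(0, min(len(SCALE) - 1, i + random.choice([-2, -1, 1, 2])))
        melody.append(SCALE[i])

    print(melody)  # 16 notes, deterministic for a given seed
    ```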

  • @TheJunky228
    @TheJunky228 5 days ago

    I've tried getting LLMs to write melodies and play chords since I first ran them locally

  • @burdenedbyhope
    @burdenedbyhope 4 days ago

    Do you start a new conversation before turning on R1 (DeepThink)? If you use the same conversation, the message history will be in the context and affect the generation.

    • @soheiladam7510
      @soheiladam7510 2 days ago

      he clearly needs to learn how to use it correctly.

  • @Draconic404
    @Draconic404 6 days ago +3

    This is like using a wrench as a hammer. It makes for a funny video, but if someone is seriously attempting to make a melody with AI, please use an AI made for that and not a chatbot

    • @carljohanson3895
      @carljohanson3895 6 days ago +1

      Yeah idk who is going "oh no the reasoning model that was never stated to be made for making melodies can't create good sounding music" like what?

    • @marcevanstein
      @marcevanstein  6 days ago +3

      Obviously I know that there are more dedicated music AI systems, but I actually still think that this is a really interesting kind of test. Like, it's almost because this isn't what it was built to do that it gives me more insight into what these models are capable of. The last time I made one of these videos, the problems of getting the wrong number of notes in a measure, or lacking a larger sense of structure, were really acute. But something about the metacognition, the "deep think" process, has allowed the chat Bots to

    • @RamanujSarkar-i3l
      @RamanujSarkar-i3l 5 days ago

      I agree with that. The music from before wasn't as good as the music now. Even if the AIs still aren't good at making music in this new format, it's still pretty interesting to see how they've improved. It's almost as if the "deep think" process has allowed the chat Bots to

  • @warrennelson5190
    @warrennelson5190 2 days ago

    Music and art should be the realm of humans. We know that big tech is aiming to produce royalty-free music to make pure profit. AI should be doing the dishes and sweeping streets, the things we don't enjoy doing

  • @eldrago19
    @eldrago19 6 days ago

    It's a Chopin polonaise, I tell you!
    On a more serious note, this was very interesting; perhaps a reflection of DeepSeek's less structured training. Obviously, if you wanted AI-written music, repurposing a large language model is a roundabout way of achieving it.
    I think the intro music was by OpenAI and the outro was by DeepSeek, as the intro felt more structured.

  • @christianherzig1575
    @christianherzig1575 1 day ago

    "Interesting" in the sense of "garbage" :-)

  • @mkDaniel
    @mkDaniel 1 day ago

    8:37 sounds like a black midi.

  • @kevinguo7097
    @kevinguo7097 2 days ago

    Interesting 🎉

  • @thatguyalex2835
    @thatguyalex2835 6 days ago

    Welcome to the future, where open source AI can critique itself. :)
    Also, what brand is your shirt? The blue/gray looks good on you, bro.

  • @BBarNavi
    @BBarNavi 6 days ago

    its melody: MEIYOU GONGCHANDANG JIU MEIYOU XIN ZHONGGUO ("Without the Communist Party, There Would Be No New China")

  • @eti313
    @eti313 6 days ago +5

    Amazing how it wrote better music than Chopin without any human interaction, just the press of a button.

    • @JestroGameDev
      @JestroGameDev 6 days ago +7

      Eh, I wouldn’t say better. Chopin had some pretty amazing stuff

    • @thatguyalex2835
      @thatguyalex2835 6 days ago +1

      I guess we can say OpenAI is now on the Chopin block after DeepSeek wrote a great melody. :)

  • @OnigoroshiZero
    @OnigoroshiZero 3 days ago

    Using LLMs, which can't even hear, to write music in this way is dumb.
    They can't even read notes, only descriptions and general knowledge of them. And even then, they need to break everything down into tokens, and they barely have any understanding of what individual tokens are.
    I've used them to write lyrics and then give descriptions of how the music should go for use with Suno, and they create better songs than 90-95% of anything humans have made in the last 10-15 years.

  • @mrCheeeseGuy
    @mrCheeeseGuy 6 days ago

    I honestly thought it was going to write Red Sun in the Sky 🤣

  • @zemlidrakona2915
    @zemlidrakona2915 6 days ago +7

    So ChatGPT writes better shitty music than Deepseek.

    • @soheiladam7510
      @soheiladam7510 2 days ago

      No, he just didn't use the AI correctly.

  • @luminousherbs
    @luminousherbs 5 days ago

    you should’ve written your music-making program in C#

  • @nakamaazashiro
    @nakamaazashiro 4 days ago

    You're right; AI at the moment can't 'hear' music, they're too dumb for it.

  • @IlstrawberrySeed
    @IlstrawberrySeed 6 days ago

    Technically speaking, the o models aren't ChatGPT models, but a different line of "products."

  • @flaryx32
    @flaryx32 3 days ago +1

    Pls do gpt o3

  • @DamiDoria
    @DamiDoria 5 days ago

    👏👏👏

  • @sstoi
    @sstoi 15 hours ago

    try Claude Sonnet

  • @darkbrumoment
    @darkbrumoment 6 days ago +1

    deepthink is incredible lol

  • @fisophia1734
    @fisophia1734 3 days ago

    It was trained by ChatGPT 😂😂😂

  • @dariolapoma
    @dariolapoma 6 days ago

    8:44...Smorz.

  • @yasscat5484
    @yasscat5484 4 days ago

    wait no 1+1=2

  • @maayansagman5771
    @maayansagman5771 5 days ago

    Cool vid, but why did DeepSeek think Chopin was Fr*nch?

    • @timidlove
      @timidlove 5 days ago +2

      He spoke French while teaching in Paris.

  • @TJCyan
    @TJCyan 6 days ago

    hi

  • @di380
    @di380 4 days ago

    Sounds pretty awful! But this is not a scientific test, since the results depend on "taste". And a specialized model trained on making music will yield better results

  • @supernerdinc5214
    @supernerdinc5214 5 days ago +2

    I'll answer the question: no. All the AIs are shite at music. Ask for a common, simple chord progression... I have yet to get a correct answer. Don't trust these for any kind of music education.

  • @hiranpeiris877
    @hiranpeiris877 6 days ago

    RT

  • @gabrielsandstedt
    @gabrielsandstedt 6 days ago +1

    OpenAI released the o3 model yesterday; it should be a lot better

  • @hanfucolorful9656
    @hanfucolorful9656 4 days ago

    The world will become boring and there will be no more great composers from 2025 onwards.😭😢😥😰😱

    • @Marta1Buck
      @Marta1Buck 3 days ago

      What are you talking about? People play and make music because it's fun. No one's gonna stop making it. More AI music, sure. But stopping? Naaah

    • @hanfucolorful9656
      @hanfucolorful9656 3 days ago

      @@Marta1Buck You need to go back to school to improve your reading ability. Can you understand what I am talking about?

  • @phen-themoogle7651
    @phen-themoogle7651 5 days ago

    Haha hilarious 😂

  • @johnserpo9267
    @johnserpo9267 4 days ago

    That's not Chopin; it's more like the atonal music of the early 20th century. This proves that composers like Arnold Schoenberg and the other serialists were all hacks, easily replaceable with AI.

  • @andybaldman
    @andybaldman 6 days ago

    These are all little more than random notes, which YOU are adding meaning to.

  • @peanut0brain
    @peanut0brain 4 days ago

    Why was pee-loosi in Tiananmen Sq in Oct 1988 passing out anti-govt fliers?!

  • @Mintymondos
    @Mintymondos 6 days ago +2

    I was asking it about the CCP and then about the anti-government protests, and it said that those aren't happening and that a majority of China supports the CCP

    • @JeffreyHow
      @JeffreyHow 6 days ago +8

      The level of popular support for their government is much higher than here; that is a fact.

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 6 days ago +7

      Ask it about yesterday's US invasion of the Congo

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 6 days ago +1

      Unfortunately the online version of the model self-censors. The engineers basically had to do this in order to avoid bringing the wrath of the CCP down on them. Luckily, if you download the source code, the model answers questions about the Chinese government honestly.

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 6 days ago +5

      @@imveryangryitsnotbutter Ask ChatGPT why the US military invaded the Congo this week. You won't, but others reading this: look at the desperation in the westerner these days. They are so scared. Of the revenge coming.

    • @19447427
      @19447427 5 days ago

      Yes, we Chinese support the government; I confirm that. The protest was basically a color revolution backed by the US. Look at Ukraine. We know color revolutions are shxt