Composite Theory of Dissonance

  • Published: 21 Nov 2024

Comments • 77

  • @googlekopfkind
    @googlekopfkind A year ago +15

    Your videos may not get many views, but I have to say that, as a music theory nerd, I find them groundbreaking and highly valuable. The relationship between the spectrum of sounds and the tonal system is a completely new aspect for me. Please continue making videos on these topics. We are eager to see what more you can share with us.

  • @kdakan
    @kdakan A month ago +2

    In section 7, example 3 was my favorite; it's bell-like and sounds beautiful. And it fits your explanation of the Java/Bali tuning system being derived from the overtones of the pitched percussion instruments used in that music.

  • @music-zv6je
    @music-zv6je 10 months ago +1

    I love this channel so much, easily one of the top 3 most valuable sources of information and fresh musical perspective on RUclips. Thank you so much for the hard work.

  • @johnwellbelove148
    @johnwellbelove148 A year ago +8

    Way back when computers first became available in the home I tried getting mine to create octaves based on various arithmetic and geometric sequences and with varying numbers of notes in the octave.
    I'd get the sequences to play in a repeating loop and, although they would often feel dissonant at first, my brain gradually got used to them until they sounded almost normal.
    I've always been a great fan of the sound of Gamelan, which can often sound dissonant to the western ear.

    • @new_tonality
      @new_tonality  A year ago +3

      Yes, I have a similar experience. Though for me it quite quickly resets to the usual way I perceive things if I take a break.

    • @johnwellbelove148
      @johnwellbelove148 A year ago +1

      @@new_tonality I once had a go at playing Gamelan in London many years ago. We were taught to play a tune called "Long time no herbal medicine". I quite enjoyed playing an instrument with large wooden mallets!

  • @drrodopszin
    @drrodopszin A year ago +6

    Your channel is one of the most memorable in music YouTube, I love it, it gives a lot of food for thought. I have two video ideas for you: 1) I was watching Benn Jordan's video on physical modeling synthesizers and how many of them produce very metallic, bell-like sounds; anyone making non-minimalistic music knows how hard they are to fit into standard-tuning songs. Usually the tech nerd of the band brings them in, they sound amazing in a muffled demo, and then the mixing engineer quietly deletes them after burning 5 hours on fitting them. Why does that happen, is there a way to fit standard and non-standard tunings together, and how do you know when a sound will never ever fit? 2) The second idea is about hearing and reproducing pitches. Almost all vocal lessons I have seen involve the piano; in one of your videos you explained how complex pianos are and how pro piano tuners "cheat" in the lower registers. That, and the fact that one dude used an out-of-tune piano to teach vocals, made me think: what is the best waveform for pitch training for singers? I.e. one that you can both hear clearly while you sing and whose pitch is easy to determine.

  • @ekaterinaderiushkina3052
    @ekaterinaderiushkina3052 A year ago +1

    Great video! I like that there is a parallel with the interference of optical waves in physics; it is very beautiful.

  • @macronencer
    @macronencer A year ago +1

    This is a very solid breakdown of the arguments. Thank you for your great work!

  • @tango_doggy
    @tango_doggy A year ago +1

    Your synth is amazing!!

  • @lucassiccardi8764
    @lucassiccardi8764 A year ago

    Very interesting content, keep up the good work!

  • @lilac_hem
    @lilac_hem A year ago

    seriously amazing work; thank you so much for sharing, and great job on your website and synthesizer. it's so kind of you to share it, and all of this, with us.

  • @pocketcrocodile763
    @pocketcrocodile763 A year ago

    That 17:15 comment was really interesting! Thank you for the continued research on this. This is way too cool.

  • @j.b.cristian
    @j.b.cristian A year ago +2

    You should upload the full Clair de Lune alien version! Haha, great video 👍

  • @armandogiordano1226
    @armandogiordano1226 A year ago +2

    Superb channel!

  • @DJ_Cthulhu
    @DJ_Cthulhu 10 months ago

    Just followed on bandcamp. This is fascinating stuff 🖖

  • @Gnurklesquimp2
    @Gnurklesquimp2 9 months ago

    It will always be amazing that this much complexity emerges from sound waves that can basically be represented fully in just two dimensions, many people don't realize just what kind of marvel they're looking at when they see a sound wave represented on their screen. I also just love blowing people's mind with an EQ, some sines, saws and noise, a bell, speed up some polyrhythms/slow down some consonant chords with perfect ratios etc., such an incredibly simple building block that leads to so many phenomena when we perceive it. I will definitely share these videos with whoever ends up being appropriately blown away by how it all works.
    Just think about how sensitive we are to this too... It differs so much from how we perceive colors, for example, even though you'd think it would be a very similar deal. We do perceive a far wider range of sound freqs than color wavelengths, we don't even see the equivalent of an octave, but ratios like 2/3 are present, so there's also just less of an opportunity for such consonances to stand out to us, I suppose. (Purple is fake btw., google that if you think I'm crazy)
    I was completely with you that example 3 sounded the 2nd "best" btw.; the others felt more like they mixed in a bunch of messy noise that didn't have anything to do with the rest, and 3 somehow sounded more organic to me as well. 16:22 Some chords, like the supposedly consonant one it's about to hit over here, do still have some of that chaotic quality to them for me, though. Regarding tracks that pursue some freqs rubbing together very aggressively, Joker - Tron is a classic in the UK dubstep community and a great example, using detune on synths, where it creates color in the absence of more complex harmony. Still a relatively conventionally musical example as far as dubstep goes, a seriously fascinating genre you should check out even if you dislike what you've heard so far (try Kryptic Minds, for example).
    Would you consider doing a video about phase, if you haven't yet?

    • @new_tonality
      @new_tonality  9 months ago

      Not sure I am going to touch on phase any time soon. But you made me think more deeply about it. It indeed may impact a tuning, especially if partials coincide perfectly, as the interference will be more pronounced. Like if some partials cancel each other out, it can be perceived as less consonant. But I need to experiment with that first)) Thank you for the comments, very cool stuff!

    • @Gnurklesquimp2
      @Gnurklesquimp2 9 months ago

      @@new_tonality Awesome to have inspired you, video or not! It's pretty hard to intuit a lot of the stuff that phase can do, one of those things where I usually throw stuff against the wall until it sticks.

  • @iolairmuinnmalachybromham3103
    @iolairmuinnmalachybromham3103 A year ago +5

    Great video! I love a good introduction to where a body of research is at now.
    Do you know if there are any good hypotheses about how these cultural differences in dissonance perception and preference come about? It’s of course an easier step to observe how the music that people are exposed to might affect the way they hear everything else at first, than it is to figure out why those kinds of music have those differences in the first place. I’d have thought that the Western preference for harmonicity in music might have come from both building music mainly with voices and stringed instruments and then using a lot of different notes together - Western Classical music and jazz certainly seem to me to be more polyphonic than any other music traditions around the world (this is just an observation, not a value-judgment). By the same logic, you might understand Gamelan music standards as originating from the combination of many inharmonic instruments leading to a preference for tuning that reduces roughness there. And then by extension we might assume that in music traditions that aren’t very polyphonic there is less of a concern for avoiding roughness because you are never going to have that many notes stacked together, and so it doesn’t play as big a role. But then you have music traditions that also seem to be very rooted in harmonicity, like Hindustani music, which have little to no polyphonic elements, so this hypothesis doesn’t work there. Anyway, just wondered if in any of your reading you had come across anyone trying to bring together ethnomusicology and dissonance perception to understand this better.

    • @new_tonality
      @new_tonality  A year ago +3

      Honestly I spent less time looking into the cultural component than the other two, so I am not much help here. If you are interested I would suggest reading the article about the composite theory, as it has a brilliant literature review in my opinion.
      However, one thing I want to add is that music does not have to be polyphonic to make musicians gravitate to pitches that minimize roughness. The relation between pitches is still in our memory as a context for the current note. It may require more experience to notice, but a trained singer should be able to find those pitches intuitively. I had an article about this somewhere, I'll post it later when I find it))

    • @iolairmuinnmalachybromham3103
      @iolairmuinnmalachybromham3103 A year ago

      @@new_tonality Thank you! I will read it then. The whole phenomenon that you mentioned of beats being perceived sometimes when two sounds aren't actually interacting at all is really interesting, and I guess with that information it also makes sense that people might perceive some roughness between two successive pitches even when they don't interact. I'd have thought it would be less pronounced than when they're played at the same time, though.

  • @pauwel9380
    @pauwel9380 6 months ago +1

    In the stretched spectrum experiment where the partials were shifted further apart, there really is a physical beating effect: you can see that the amplitude modulates and visually reflects the beating that can be perceived by ear. I think this is most noticeable since it is only slightly off from the standard octave (partials being out of phase due to inharmonicity, creating beating in amplitude). In experimenting with tuning I've found that correspondingly stretched just-intonation ratios for the stretched octave sound quite stable and consonant musically, and that certain irrational numbers used as pseudo-octaves, such as phi squared or the square root of 6 or 7, can sound consonant, as though the roughness is masked; the waveforms seem to lack any periodicity while also lacking glaring beating or roughness issues. In some ways these sound to me somewhat reminiscent of Gamelan tunings.
    It is interesting to conceive of beating frequencies in tuning as a reflection of low-frequency brainwaves; tying into the concept of entrainment in music, this can also be related to rhythmic or melodic qualities, such as a vibrato between two close pitches being perceived as pleasant while the two played simultaneously would be dissonant, or the same with a melodic accent ("neutral intervals" being an example).
    An important consideration for a theory of dissonance is the objective physical nature of sound and vibration as opposed to subjective interpretation: dissonance can be measured and displayed with accuracy, and people can be incorrect in judging what is more or less consonant (in this sense a person with a preference for 12TET over pure harmony could be said to have an incorrect harmonic calibration of the ear, having been entrained to the tempered harmonic deviations).

    • @new_tonality
      @new_tonality  5 months ago

      Yeah, I think there should be a more correct mathematical description of beating as an amplitude variation. In the literature the simplest formula, for equal amplitudes, is used, and I agree that it seems to miss a lot of what is actually happening. So there is definitely plenty of room for improvement.

    • @gustavertboellecomposer
      @gustavertboellecomposer 18 days ago

      I was thinking the same thing, and a really cool observation with the irrational ratios! Very intelligently put
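
A minimal numerical sketch of the beating formula discussed in the author's reply above: the textbook identity holds for equal amplitudes, while for unequal amplitudes the envelope never reaches zero, which is part of what that simple formula leaves out. Python with numpy is assumed; the frequencies and amplitudes are arbitrary illustration values.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                     # one second of samples
f1, f2 = 440.0, 444.0                      # a 4 Hz beat

# Equal amplitudes: sin(2*pi*f1*t) + sin(2*pi*f2*t)
#                 = 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
two_sines = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
identity  = 2 * np.cos(np.pi * (f1 - f2) * t) * np.sin(np.pi * (f1 + f2) * t)
print(np.allclose(two_sines, identity))    # True: full cancellation once per beat

# Unequal amplitudes: the envelope sqrt(a1^2 + a2^2 + 2*a1*a2*cos(2*pi*(f1-f2)*t))
# swings between a1 + a2 and |a1 - a2|, so the beat never dips all the way to
# silence - one of the things the equal-amplitude formula glosses over.
a1, a2 = 1.0, 0.4
envelope = np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(2 * np.pi * (f1 - f2) * t))
print(round(envelope.max(), 2), round(envelope.min(), 2))   # 1.4 and 0.6
```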

  • @lilac_hem
    @lilac_hem A year ago

    so happy to see some more from you!! welcome back. ((':
    i've been waiting to watch this ever since i got the notification, lol; i have just been so busy with school. 😭

  • @gustavertboellecomposer
    @gustavertboellecomposer 18 days ago

    Wow, this was absolutely fascinating! I had always suspected (like many others, I assume) that perception of consonance had to do with a mix of culture, beating and periodicity, but this overview was SO rigorous and well thought-out, and the 3rd Clair de Lune example honestly opened up a whole new world of harmonic possibilities for me! I would have never thought that you could approach the combination of tuning and timbre in this way (although upon reflection it is similar in nature, but much more extensive, to the end of Messiaen's first prelude). It does raise the question for me, however, of how we would go about accessing these timbres. One way is of course digital synthesizers, but acoustically, pinning down these kinds of partials exactly seems like a daunting task for physicists. Certainly it should be possible, similarly to how gamelan instruments are tuned (like you mentioned in a video of yours) or how church bells are tuned in Europe. How feasible would the construction of such instruments be? Is that a topic you have expertise in? To maybe rephrase the question, do you think you could actually physically construct a piano-equivalent instrument that could play the 3rd Clair de Lune example?

    • @new_tonality
      @new_tonality  16 days ago

      It is a difficult question and I am not a specialist in this area. It seems to me that a simpler way to experiment with this is non-uniform strings. They also produce an inharmonic sound but are probably much easier to make than bells and metal plates. You can check out the article "inharmonic strings and hyperpiano". I didn't dig deep into it so I don't know the limitations, but I think it was stretching partials apart like in the 3rd example.

    • @gustavertboellecomposer
      @gustavertboellecomposer 16 days ago

      @new_tonality oh that is so interesting! Thank you so much! I thought the idea that this would only be achievable with very expertly crafted metallophones was a little troubling. But with inharmonic strings, suddenly it seems very doable to experiment with!

  • @OrchidDev-b5o
    @OrchidDev-b5o 8 days ago

    this video is goated thank you so much

  • @timepaintertunebird8160
    @timepaintertunebird8160 A year ago +2

    Can I get the whole piece of Clair de Lune in the stretched version though?

  • @ucanihl
    @ucanihl A year ago

    Great video, thank you :)

  • @stephenspackman5573
    @stephenspackman5573 A year ago +1

    8:35 wow. That does not sound … normal to me. Which raises the question-is the phantom beating, beating, or could it be hallucinated anti-beating, where your brain (perhaps having decided on the basis of the stereo data that there are two stimuli) is trying to supply missing signal and compensate for anticipated beating-which then actually isn't there, making the compensation itself audible (somewhat analogously to the phantom corners visual illusions)? I'm not sure quite how to test this, but I notice that _without_ headphones the effect is much weaker, which I think supports the idea (or something like it).
    At any rate, I do think we should expect to find auditory illusions when the normal inputs for spatial localisation are disrupted-localisation, including localisation of multiple similar sources, must have high (and enduring) evolutionary value.
    And I have to say I love the stretch-on-stretched CdL. I could listen to an album of that.

  • @DanielMarioPlos
    @DanielMarioPlos A year ago +1

    Love these videos!
    The first video I saw from you on the subject really blew my mind.
    Does the x axis of the roughness curve show frequency offset or ratio? And if it's frequency offset, does the roughest frequency offset stay the same for different pitches? For example, if I were to play a 100 Hz sine wave and found that playing a 110 Hz sine wave at the same time sounded the most rough, would I also find that playing a 1000 Hz sine wave together with a 1010 Hz sine wave would be the most rough? Or would the frequency be something other than 1010 Hz?

    • @new_tonality
      @new_tonality  A year ago +1

      It is the ratio on the x axis, and the roughest ratio is bigger for lower pitches and smaller for higher pitches. You can see it changing in the synth if you change the pitch of the base note; it is C4 by default. This is one reason why a major third in a low register sounds kinda dissonant, while in mid to high registers it sounds smooth.
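
For readers who want to see the register dependence the author describes, here is a small sketch using one published parameterization of pure-tone roughness (after Sethares, "Tuning, Timbre, Spectrum, Scale"); it is an assumption standing in for, not a copy of, the exact model used in the video or the synth.

```python
import numpy as np

# Sethares-style pure-tone roughness curve; the constants are his published fit.
B1, B2, XSTAR, S1, S2 = 3.5, 5.75, 0.24, 0.021, 19.0

def roughness(f_low, f_high, a_low=1.0, a_high=1.0):
    s = XSTAR / (S1 * f_low + S2)
    x = abs(f_high - f_low)
    return a_low * a_high * (np.exp(-B1 * s * x) - np.exp(-B2 * s * x))

# At which frequency *ratio* does roughness peak, for different registers?
for base in (110.0, 440.0, 1760.0):
    ratios = np.linspace(1.0, 1.5, 5000)
    r = roughness(base, base * ratios)
    print(base, "Hz -> roughest ratio ~", round(ratios[np.argmax(r)], 3))
# Low registers peak at a wider ratio, high registers at a narrower one,
# which is the register dependence described in the reply.
```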

  • @OscarCunningham
    @OscarCunningham A year ago +1

    In the experiments at 8:35 and 10:10, could part of the dissonance be coming from the fact that you are moving the slider gradually? Perhaps we hear dissonance because there are close partials compared to the sound we previously heard. I'm not musically trained, but I believe I hear some dissonance when I slide the Note Frequency control, even though the sound remains harmonic and without close partials.

    • @new_tonality
      @new_tonality  A year ago

      Interesting idea! I haven't thought about that. Will keep that in mind from now on.

  • @FelipeTellez
    @FelipeTellez 3 months ago

    For point no. 7 of your video, did you generate the stretched tuned scale AND your stretched spectrum sample within your app? Would love to know how you generated the scale to match your stretched tuning and mapped it to your DAW. Very, very interesting work you are doing, kudos

    • @new_tonality
      @new_tonality  3 months ago +1

      No, the app cannot export tuning. I used Scale Workshop to create a .tun file, and then loaded it and the sample into Serum.

  • @jjacat6506
    @jjacat6506 A year ago +1

    For me, the dissonance completely disappears when you move into two ears. Does this happen for anyone else?
    And also, that third Clair de Lune sounded nearly as consonant as the first version to me, and especially beautiful, as if it were being played by a chorus of bells.

  • @j_lsw
    @j_lsw A year ago +2

    For me culture is different from the other factors because to an extent we can change it. Composers could move slowly or quickly in some direction. But the other factors may help predict what musical cultures are likely to be stable. It's exciting to think that there may be other such factors which are "objective" in that sense but which our culture might currently not use for musical aesthetics. (Especially when you consider works of music like the soundtracks of 100+ hour games, which are so long that they may be able to create their own mini version of the cultural factor...)

  • @LatchezarDimitrov
    @LatchezarDimitrov 7 months ago

    Hello, first, excuse my improvised English! Leaving aside the partials of each sound, I have the following idea. In standard musical equal temperament the relation between two sounds a half tone apart is K = 1.059463..., i.e. a just octave divided into 12. In the equal temperament of Serge Cordier, based on the just fifth, K = 1.059643... Between these two values of K we can construct approximately 178 different equal temperaments and listen to how they sound. Can you make a video on this subject? It would be a stretched equal temperament without any just interval. I have tried to experiment this way, but I am a 75-year-old retired violinist and I don't have the software or hardware to do that... Please try my idea!

    • @new_tonality
      @new_tonality  5 months ago

      Hi! I see what you mean: an equal temperament that treats all intervals equally and does not use a particular just interval as an anchor. I am finding myself more and more in the Just Intonation camp as time goes on, but I will think about your idea. Thanks for commenting!
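
A small sketch of the idea in this thread, under the assumption that the two endpoints are the standard semitone 2^(1/12) and Cordier's fifth-based semitone (3/2)^(1/7), computed here from those definitions rather than taken from the comment's rounded figures.

```python
import numpy as np

k_standard = 2 ** (1 / 12)        # ~1.0594631: a pure octave split into 12
k_cordier = 1.5 ** (1 / 7)        # ~1.0596...: a pure fifth split into 7 (Cordier)

# Sampling K across this range shows how the 12-step "octave" stretches from a
# pure 1200 cents toward roughly 1203 cents; values strictly between the
# endpoints give an equal temperament in which neither octave nor fifth is pure.
for k in np.linspace(k_standard, k_cordier, 7):
    octave_cents = 1200 * 12 * np.log2(k)
    print(f"K = {k:.7f}  ->  12 steps = {octave_cents:.2f} cents")
```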

  • @sillybobby5189
    @sillybobby5189 A year ago +1

    Important to note that at 19:13 it would be erroneous to say that one variable contributes more than another. All of the corresponding confidence intervals overlap, meaning one would interpret the graph as saying that each variable has the same coefficient. So there's a good chance the differences between beta values are just due to measurement error, and in reality each of the variables has a roughly equal contribution to the perception of dissonance.

    • @new_tonality
      @new_tonality  A year ago +1

      Yes, I think it is definitely a safe bet to treat the contributions equally. However, I think the coefficients on the graph with different genres do prioritize interference over the other theories. Still, it is just one article, and I think the authors themselves state that more research on non-Western listeners is needed. Perhaps I should've included that in the video.

  • @gexahedrop8923
    @gexahedrop8923 A year ago +2

    dissonance also depends on note range

  • @Elizabeth-vh6il
    @Elizabeth-vh6il A year ago +1

    Interestingly, with the 2nd example out of 4, to begin with it sounded horrible, but because of the repetitiveness of the motif, by the time I reached the end of the extract it already sounded a lot smoother to me, sufficiently so that I'd even call it musical.

  • @RememberGodHolyBible
    @RememberGodHolyBible 7 months ago

    At the most fundamental level, it seemeth pretty clear that harmonicity trumpeth the other theories. The 12-tet normal version is best, but the second example, 12-tet with inharmonic overtones, is definitely second best in terms of the intonation being right. Examples 3 and 4 one could only like if one were a very perverted person; they sound like one is very deeply tripping on drugs and is devil-possessed. Example 2 soundeth basically in tune but with a very messed-up timbre, yet in tune nevertheless.
    If one's brain hath been trained to assess intonation by hyperfocusing on the present moment, then the place in the brain which processeth and compareth things to the "harmonic template" within getteth ignored. Instead a hyperfixation on beating, or the lack thereof, taketh place. It is in the place of the brain which is attentive but deeply relaxed, and not squeezed into the present moment looking to see if something in a given moment is wrong, where the soul of man can compare against the "harmonic template", which hath more to do with the coherency of the tuning system as a whole, and not with a given instance of a particular close harmonic note being matched or not. For example, I would say Pythagorean tuning is more consonant than 5-limit tuning because the system of tuning is coherent while the 5-limit system is not.
    In all these demonstrations save the last one, real musical context was excluded from the equation. It would be worth playing examples like this but in Pythagorean tuning with correct spelling, with 5-limit tuning, and maybe 1/4-comma meantone, and seeing, with the harmonic spectrum normal or stretched, how dissonance is perceived then.
    Another thing which was not mentioned by thee nor by the studies that thou didst present is to compare different types of beating. There is harmonic beating, and inharmonic beating. And there is likely a difference in perception, as long as it is compared within the context of music.

    • @new_tonality
      @new_tonality  5 months ago

      Thanks for the comment! I am not sure why Pythagorean tuning would be more coherent than 5-limit, though.
      I think the problem is that tuning is naturally dynamic and we cannot squeeze it into a static box. When choirs or orchestras play by ear they never stay in any one tuning exactly but move around just ratios, as those are the only ones that can be sung or played by ear exactly. Whether that is because of harmonicity or interference I don't know; those contributions seem to be intertwined. I have lately started to think that one cannot say that an inharmonic timbre is a single pitch; a single pitch can only be harmonic. Thus there are several pitches playing at the same time in an inharmonic sound, and therefore the coloring of the unison is perceived.

  • @Elizabeth-vh6il
    @Elizabeth-vh6il A year ago

    I'm really confused by this stuff. I don't have much experience of composing and I'm trying to learn. But I'm not the kind of person to follow rules just because somebody said, "These are the rules." I want to know why certain things work and others don't.
    According to the interference metric you use, intervals larger than an octave are less consonant than the octave itself but are otherwise rather consonant. I found this curious so I set up a little experiment in MuseScore. I tried it with the pipe organ, piano and viola without much difference experienced when it came to the result.
    I created a sequence of dyads, all rooted on C, each lasting a whole note (semibreve).
    1) Major Third
    2) Major Second
    3) Major Third
    4) Major Ninth
    5) Major Second
    6) Major Ninth
    My findings were:
    1) The ninth felt much more dissonant than the second when the root was positioned on middle C. This goes against what the graph in your synth says it should feel like. However, after I lowered the whole piece by an octave, the difference in quality was much smaller and my brain latched onto the idea of octave invariance instead.
    2) The major second feels surprisingly usable in this context. In this case the strong voice leading taking us from E to D horizontally appears to have more of an impact than the vertical relationship between the notes contained within a second. This changed, however, when I changed the major seconds and ninths into minor seconds and ninths.
    3) Making each chord into a triad by adding in the perfect fifth didn't change the results.
    4) Alternating between C and F as root notes (just in case my brain had filtered out the repetitive C) didn't make much difference. The second sounds a bit more colourful but still doesn't stand out massively. The ninth becomes highly noticeable and prominent again but it seems like this might simply be because the upper voice is getting pushed into the higher register again.
    My goal is to write myself a little program to assist me with writing music by reminding me of when I'm writing something that should theoretically sound smooth and when I've written something that will stand out (perhaps mistakenly, or perhaps deliberately because I consciously wanted to add "colour" to a particular note or to build up some tension). Such a tool would give each step of the composition a rating according to various metrics (dissonance, voice leading, accidentals, etc.), offer suggestions, and facilitate searching for compositional ideas by treating these metrics as dimensions to be moved through. It would also give me a graphical representation and visual explanation which would confirm that such and such a section was smooth because all of my chosen metrics lay within certain zones, but then it would also show when I added some excitement somewhere else and tell me that was because such and such a metric wasn't in its neutral position over there, and so on.
    Thanks for the link to the Harrison and Pearce paper by the way. I'm finding it helpful even though the authors' preferred solution feels less than satisfactory: taking 3 different algorithms and applying weightings to the answers output by each one until the weighted results match what the humans subjectively say the answers should be! Even more so when there's so much variation in the data between different musical genres. Your focus on interference is a lot more scientific and I like that.

    • @new_tonality
      @new_tonality  A year ago +2

      Thank you for sharing your observations! I totally agree with your position about following rules. I think that we can derive some objective factors in music to help us better deliver our intent, but the subjective factor will always be there and that is ok. I think life is not mechanical and music should not be mechanical either.
      Even though the roughness model that I use in the synthesizer says that the minor second should be the most dissonant interval, it is probably my favorite. It has a certain color that strikes close to my heart. Though I would not enjoy a piece consisting of just minor seconds))
      The way I look at the roughness graph is that it is just a model showing that some pitches stand out in terms of consonance and dissonance and that we should consider them when tuning an instrument. So the main application for this model is in tuning theory rather than composition.
      The theory behind Just Intonation is probably better at explaining Western music. Where this theory shines is in incorporating inharmonic sounds into tuning theory and thus finding the common ground between Western practice and something very different like Gamelan.
      The software you have in mind sounds like a very cool and ambitious project!
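
As an illustration of the kind of rating tool discussed in this thread, here is a sketch that scores dyads of harmonic tones with a Sethares-style roughness sum over partial pairs (the same pure-tone curve as in the earlier sketch). The 8-partial timbre, the 1/n amplitude rolloff, and the model itself are assumptions, not the metric used in the video or the synth.

```python
import numpy as np

B1, B2, XSTAR, S1, S2 = 3.5, 5.75, 0.24, 0.021, 19.0

def pair_roughness(f1, f2, a1, a2):
    f_low, f_high = min(f1, f2), max(f1, f2)
    s = XSTAR / (S1 * f_low + S2)
    x = f_high - f_low
    return a1 * a2 * (np.exp(-B1 * s * x) - np.exp(-B2 * s * x))

def dyad_roughness(root_hz, ratio, n_partials=8):
    # Two harmonic tones with a 1/n amplitude rolloff (roughly sawtooth-like).
    tones = [(root_hz * k, 1.0 / k) for k in range(1, n_partials + 1)] + \
            [(root_hz * ratio * k, 1.0 / k) for k in range(1, n_partials + 1)]
    return sum(pair_roughness(fa, fb, aa, ab)
               for i, (fa, aa) in enumerate(tones)
               for (fb, ab) in tones[i + 1:])

c4, c3 = 261.63, 130.81
for name, ratio in (("major 2nd", 2 ** (2 / 12)), ("major 9th", 2 ** (14 / 12))):
    print(name, "on C4:", round(dyad_roughness(c4, ratio), 3),
          " on C3:", round(dyad_roughness(c3, ratio), 3))
```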

  • @dliessmgg
    @dliessmgg 7 months ago

    I'd be curious whether the presence of beating is affected by the cleanness/harshness of a sound. If you have two sine waves with very close frequencies, it's definitely there. But is that also the case with, for example, distorted guitars or harsh metal vocals? If not, what about all the sounds in between on the clean-to-harsh spectrum?

    • @new_tonality
      @new_tonality  7 months ago

      Yes, I think it is connected. In the literature they call it the roughness of the sound, and generally speaking it is fast amplitude modulation. I am not sure how exactly that works, but in the case of overdriven amps there is definitely a difference between playing the notes of a fifth separately into two amps and playing them into one amp. The amp seems to magnify roughness for all intervals apart from the octave, probably because it creates more in-between harmonics or something like that))
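
A sketch of the intermodulation effect this reply gestures at, using a plain tanh waveshaper as a stand-in for an overdriven amp (an assumption, not a model of any real amp): the nonlinearity adds components that were not in the input.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                       # exactly 1 s, so FFT bins are 1 Hz apart
f1, f2 = 220.0, 330.0                        # a just fifth
driven = np.tanh(4.0 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)))

spectrum = np.abs(np.fft.rfft(driven)) / len(driven)
for f in (110, 220, 330, 440, 660, 770):
    print(f, "Hz:", round(float(spectrum[f]), 4))
# Besides harmonics of the inputs, odd-order intermodulation products appear,
# e.g. 2*220-330 = 110 Hz and 2*220+330 = 770 Hz. For a just fifth these still
# line up on a 110 Hz series, but for tempered or inharmonic intervals such
# products land slightly off the existing partials and beat against them, which
# is one plausible way an overdriven amp magnifies roughness for most intervals.
```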

  • @ShadowZero27
    @ShadowZero27 4 months ago

    yippee 22/32 intonation!

  • @NoWayHaze
    @NoWayHaze 8 months ago

    This is a very interesting channel and I'm glad I found it. However, I wonder if your claim about our brains containing a harmonic template is jumping the gun a tad. First, the mechanisms of sound production in our speakers and headphones create harmonics just from natural imperfections. Are you sure there aren't any non-linear responses from our sound equipment that allow interference between harmonics coming from non-linear distortion and the inharmonic partials you inserted? I'm genuinely curious about the answer to my question. I unfortunately don't have the best audio equipment to test this out myself yet.
    If the effect I describe is insignificant, then harmonic template generation in our brain raises many other questions about how it can biomechanically arise from something within our brains and how the brain does some sort of multiplication of frequencies to sample in the Fourier/wavelet-transformed signal, or how it measures peaks of autocorrelations.

    • @new_tonality
      @new_tonality  8 months ago

      Well, I cannot be sure, as I haven't done measurements to check that)) but I think the non-linearities in audio gear are too small to explain the perceived effect. I am using monitor speakers that would've been useless for mixing if they had such huge nonlinearities.
      The way the brain determines harmonicity in a sound is actively studied, and as far as I can tell there is no consensus. Two theories I came across are autocorrelation and comparison against a harmonic template. Perhaps both mechanisms are at play. In my view it is probably something akin to how AI recognises patterns. I remember seeing some images in the early days of AI when a neural network was trained on pugs and, when given different images, it kept finding pugs everywhere (in clouds, in smoke, etc.). We also have this ability with recognising faces everywhere. But that is a hypothesis of course))
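
A toy illustration of the autocorrelation account mentioned in the reply, comparing a harmonic complex tone with a stretched-partial one; the stretch exponent, partial count, and 1/n weighting are arbitrary choices for the sketch.

```python
import numpy as np

sr, f0 = 48000, 220.0
t = np.arange(sr // 2) / sr                          # half a second of signal

def tone(freqs):
    # partials with a 1/n amplitude rolloff
    return sum(np.sin(2 * np.pi * f * t) / (k + 1) for k, f in enumerate(freqs))

harmonic  = tone([f0 * k for k in range(1, 7)])
stretched = tone([f0 * k ** 1.05 for k in range(1, 7)])

for name, x in (("harmonic", harmonic), ("stretched", stretched)):
    # normalized autocorrelation over a plausible range of pitch periods
    ac = [np.dot(x[:-lag], x[lag:]) / (len(x) - lag) for lag in range(30, 400)]
    print(f"{name:9s} best lag = {30 + int(np.argmax(ac))} samples, "
          f"strength = {max(ac):.3f}")
print("1/f0 =", round(sr / f0, 1), "samples")
# The harmonic tone peaks essentially at 1/f0 with near-maximal strength; the
# stretched tone's peak is lower and displaced, the kind of difference a
# periodicity- or template-based account of harmonicity picks up on.
```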

  • @alexandrosyuk9473
    @alexandrosyuk9473 A year ago

    I've noticed that synthesized sounds feel like they hurt my ears/brain, but natural/acoustic sounds are all fine/pleasant. What is it about acoustic sounds (vs synthetic ones) that could be the reason for this? In a way it's a bit similar to how if you ingest most natural/organic things you'll be fine as long as they're not poisonous (or unless it's alive and wants to eat you, like some bad types of bacteria), but most manmade compounds would be toxic. Actually these two may not be so far apart: music/sounds are made of frequency/vibration, and matter itself is supposedly just frequency/vibration, where the sensation of it being "physical" is just the "interface" through which we interact with the world (and how it "really" is something no one knows for sure). So some vibrations would have a positive effect on oneself and others a negative one. Also, the way something "feels" is often no less important than "hard" evidence, mainly because the science in most areas still has a very long way to go to catch up to the intuition of the mind and body.
    On that note: it may be useful to add an option to change the frequency of each partial in addition to its amplitude, to try to imitate the sounds of acoustic instruments. And after, it might be useful to have an option to save the preset, possibly as a text file to load the next time, so that the work is not lost. However, it doesn't seem like the only thing that makes acoustic instruments different is this slight variation. After all, the body of the instrument is supposed to do something interesting as well. Itself an entire craft to be perfected.

    • @iolairmuinnmalachybromham3103
      @iolairmuinnmalachybromham3103 A year ago +1

      Regarding the Newtonality synthesiser: it does have the option to selectively change both the amplitude and the frequency of individual partials. There are controls for shaping the whole sound at once and also controls for changing each individual sine wave. As for saving files, I have used screenshots sometimes so I can recreate a sound later. When you mention the body of the instrument: certain things will change the sound, but the point of synthesising is that you're creating a final output; just like with any acoustic instrument there will be a final sound which is heard, and I don't think that any filter/resonator is likely to change the frequencies in the sound, only the amplitude (but then I'm not a physicist). Speaking only as someone with an interest in voice, the vocal folds produce a harmonic sound, and the vocal tract then determines which parts of the harmonic spectrum are going to be louder/quieter and by how much.
      As a question back at you about acoustic versus synthesised sounds: are you sure you can reliably tell just by listening whether or not a sound has been synthesised? Sounds we think of as electronic, the kind that maybe come from early electronic music, are just simpler sounds than the very complicated sounds created by complicated physical objects. They're just sounds that are easier to make. Computers are capable of reconstructing a recorded sound to sound "natural", in spite of it being emitted by a speaker instead of whatever it's supposed to sound like. And presumably not all natural sounds are fine/pleasant to anyone's ears; it's just that most widespread musical instruments are the products of centuries of adjusting parameters to make them sound more "musical", regardless of the culture or region that they come from. Likewise with ingesting a substance: as you said, if you eat a non-poisonous, naturally occurring substance, it will not be poisonous. The same is true for "man-made" compounds. Some are meant to be eaten, some are poisonous and shouldn't be eaten.

    • @alexandrosyuk9473
      @alexandrosyuk9473 A year ago

      @@iolairmuinnmalachybromham3103 My bad, it does have the option to adjust frequency. I don't know how I missed it.
      If it's a program that attempts to recreate the sound of an acoustic instrument, and it does its job well, I would not be able to tell the difference from a sample library (if anything, it has the advantage of being more responsive). PianoTeq, for example, has pretty convincing pianos (although their harpsichords sound rather thin and lifeless, they must be a lot harder to imitate). I'm more interested in the difference between acoustic sounds and intentionally synthetic sounds. To try to put it one way, it feels like "organic" sounds have all their jagged sharp edges rounded off.
      The point of my analogy is that synthetic matter does not interact as well with organic life, so I suspect that “synthetic” sounds may not interact as well with us organic beings, compared to “organic” sounds. Unfortunately, we must rely on experience and following "how it feels" as there is no way to measure many such things yet.

    • @user-kp9ud2xl4f
      @user-kp9ud2xl4f A year ago

      I think you nailed it with the phrase "slight variation". In acoustic space no sound is going to maintain a constant frequency due to constant interference from its environment. Factor in acoustic instruments and you also hear the imperfections of the instrument's physical construction, the imprecision of playing technique, etc. Synthesisers, especially digital ones, can take on a really alien, uncanny quality, and that's why musicians love to degrade the sound and introduce variation.
      So maybe what the brain and ear like is not complete consonance but a sort of near-consonance. It's nearly perfect but just rough enough to reassure you that you're not in the matrix.

  • @Fire_Axus
    @Fire_Axus A year ago

    12edo with normal spectrum sounds like the reality where you have never noclipped into the backrooms.
    12edo with stretched spectrum sounds like a dangerous level of the backrooms.
    Stretched 12edo with stretched spectrum sounds like living in a safe level of the backrooms, knowing that you would never return to reality again.
    Stretched 12edo with normal spectrum sounds like you have hit a dead end and are unable to escape or die, bored until the end of time.

    • @crimsonplanks623
      @crimsonplanks623 2 months ago

      Apparently, your feelings are rational, but my feelings are irrational.

    • @Fire_Axus
      @Fire_Axus 2 months ago

      @@crimsonplanks623 no

  • @nilton61
    @nilton61 A year ago +1

    I can see a problem here that is seldom addressed. Harmonics are multiplicative in their nature whereas beating is subtractive. These are very different in nature, and playing intervals or chords will produce a multitude of different beat frequencies.
    There is also another aspect of culture that is not mentioned. I have engaged thoroughly in ear training for the past years (at least 1 hour daily, and I am getting good at it: I can identify all 12 12-tet intervals across several octaves in both major and minor contexts and do error-free melodic dictations at 120 bpm), and I can certainly say that I hear music very differently now. So what if the amount of challenge in the sounds that a proportion of a population is exposed to is implicit ear training, and the larger and more diverse that proportion is, the more the outcomes of such studies will differ?

    • @new_tonality
      @new_tonality  A year ago

      About beating: it is indeed interesting what could be found by controlling for the beat frequencies so they are not random but actually have some structure. Seems like a good research rabbit hole))
      And about musical training: one article I mentioned, about native Amazonians, also had a group of trained Western musicians, and they had a stronger preference for harmonicity than non-trained Westerners, if my memory is correct.
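
To make the comment's point about chords concrete, here is a small enumeration of the beat rates between close partials of an ordinary 12-TET C major triad; the 6-partial harmonic timbre, the rounded note frequencies, and the 20 Hz "beating range" cutoff are all assumptions of the sketch.

```python
from itertools import combinations

notes = {"C4": 261.63, "E4": 329.63, "G4": 392.00}     # usual A440-based values
partials = sorted(f * k for f in notes.values() for k in range(1, 7))

beat_rates = sorted({round(abs(a - b), 1)
                     for a, b in combinations(partials, 2)
                     if 0 < abs(a - b) < 20})          # pairs close enough to beat
print(beat_rates)
# Several distinct beat rates appear at once, from under 1 Hz up to almost
# 18 Hz, which is the comment's point about chords producing a whole
# collection of different beat frequencies rather than a single one.
```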

  • @ГригорийБородинов-з8ъ
    @ГригорийБородинов-з8ъ 8 months ago

    Hi! I have a question. What theory can explain why two sine waves with a 3/2 ratio sound better than two sine waves with a 7/5 ratio?

    • @new_tonality
      @new_tonality  8 months ago

      3/2 will have higher harmonicity; it is basically the second and third harmonics of a single tone.
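
A tiny numerical restatement of this reply, assuming a 440 Hz lower tone chosen only for illustration.

```python
from fractions import Fraction

# Two sines at ratio p/q (in lowest terms) sit at harmonics q and p of an
# implied fundamental at f_low / q.
f_low = 440.0
for ratio in (Fraction(3, 2), Fraction(7, 5)):
    implied_f0 = f_low / ratio.denominator
    print(f"{ratio}: harmonics {ratio.denominator} and {ratio.numerator} "
          f"of an implied {implied_f0:.0f} Hz fundamental")
# 3/2 over 440 Hz implies a 220 Hz fundamental (the tones are its 2nd and 3rd
# harmonics); 7/5 implies a distant 88 Hz fundamental (its 5th and 7th
# harmonics), so the 7/5 pair matches a harmonic template far less directly.
```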

  • @vilvd3934
    @vilvd3934 A year ago +1

    I thought this channel had 1.5 million subs 😂

  • @rubberduck2078
    @rubberduck2078 A year ago +2

    To me, the chords where the partials don't "match" the tuning, so that the partials don't overlap, sound bad not because they are rough, but because they are busy. To me, the beating is not the problem - the problem is that there are a lot of different partials all present simultaneously, and they sound muddy and overloaded.

    • @Fire_Axus
      @Fire_Axus A year ago

      I haven't tested headphones yet, but the third one sounds consonant to me.

  • @billhowe5343
    @billhowe5343 A year ago

    Climate change seems like it could be profitable for some?

  • @guitarrainfinita
    @guitarrainfinita 9 months ago

    Excellent! All those studies you cite are biased, because if we consider dissonance a universal phenomenon, Schoenberg would be wrong, and since he is so respected academically, the studies on the subject end up biased. But the example you give with the 4 audio clips is very clear, and there can be no cultural conditioning there. The same happens when you play a major chord with a lowered third: it sounds more pleasant, and it should sound more dissonant if it were only about culture, yet it doesn't. Such a lie can no longer be sustained; there are also studies on babies that confirm that consonance/dissonance is a universal phenomenon.

    • @new_tonality
      @new_tonality  9 months ago

      Sorry if I misunderstood your point, but I think that saying that consonance is 100% cultural conditioning or 100% NOT cultural conditioning is equally wrong. It plays a role, though that role is not absolute. One example that comes to mind: if you ask a regular musician which is more consonant, a major chord on piano in 12-tet or the same chord in Just Intonation, I am pretty sure most musicians will pick 12-tet as more consonant, even though it objectively has more beating. Just Intonation simply sounds weird on a piano to the modern listener. I cannot cite a study, just referencing personal experience))
      As a side note, I personally disagree with Schoenberg and I don't care how respected he is)) He, as well as most of academia, shares a modernist mindset where the aim is to destroy tradition rather than get closer to truth. That is their bias.

    • @guitarrainfinita
      @guitarrainfinita 9 months ago

      @@new_tonality I think we agree; it's just that speaking different languages leads us into misunderstandings. I agree that there is cultural conditioning in the phenomenon of consonance, but there is also a natural component. I think you are mistaken about the major chord example; at least in my experience, most people I have asked choose the just intonation version or are indifferent. The same goes for the dominant chord when it is built with the natural seventh harmonic. Thanks for your reply! And for your videos, they are very good! Finally, I am convinced that there is a strong relationship between blue notes and the natural seventh harmonic. I'd like to know what you think. Regards

    • @new_tonality
      @new_tonality  8 months ago

      Yes, I also think that the 7th harmonic is at play in blues music. I mean, the 7:5 tritone sounds much nicer than the 600-cent tritone))