Blind Listening DISASTER! And what we learned from it...

  • Published: 20 Dec 2024
  • Science

Comments • 348

  • @blorg8206
    @blorg8206 1 year ago +108

    That 90% heard a difference *when there was no difference* is a good result, and not actually that surprising. It just indicates how much imagination primes us to hear supposed differences. It's not surprising that people hear such radical differences with stuff like DACs or cables if 90% of people hear a difference when there is none. This isn't a "problem" in the testing; this is a valid result.

    • @IHearEverythingDude
      @IHearEverythingDude 1 year ago +16

      Exactly that

    • @SalseroAt
      @SalseroAt 1 year ago +4

      But if 90% hear a difference when there is none, how can any result of another test by the same people have any validity? This may be a valid result, but in my opinion it shows that blind tests don't help either.

    • @NeilBlanchard
      @NeilBlanchard 1 year ago +1

      @@SalseroAt I agree - the data from this show that the test is deeply flawed, as was discussed in the video.

    • @Terra101
      @Terra101 1 year ago +15

      This tells us that our ears CANNOT be trusted, a mantra a lot of people are repeating. Especially when you point out that the cable (for example) doesn't have any measurable differences. "Well, just trust your ears!" @@SalseroAt

    • @razisn
      @razisn 1 year ago +13

      @@NeilBlanchard NO. The test is fine with regard to what you are talking about. It shows that listening cannot be trusted. It just wasn't a particularly well conducted test for many reasons, two of which I state in my response to SalseroAt above.

  • @ronsoffer7995
    @ronsoffer7995 1 year ago +60

    This reminds me of when the flat earthers ran the $20,000 light beam experiment and proved themselves wrong, but ultimately threw away the result as unusable.

    • @joeltunnah
      @joeltunnah 9 months ago +2

      Link?

    • @madmac3981
      @madmac3981 7 months ago +4

      Exactly... The data is unusable, aka: the data is not what we expected, so it's no good...
      I've seen a blind test for cables (think Amazon Basics vs $$$$$ cables), with switching done almost instantly and people keeping their positions. The results? Many people preferred the Amazon Basics (I think most)... The audiophile world is riddled with snake oil. The baseline where the $ paid actually has an impact on quality is a lot lower than 99.9% of audiophiles think it is.

    • @BuzzardSalve
      @BuzzardSalve 5 months ago

      @@madmac3981 John Dunlavy allowed people to bring their own cables to his showroom to do a comparison with his RadioShack cables. The trick was he never used their cables at all and just pretended to swap them. They always said their cables sounded much better when they were never used at all. Of course, when he told them at the end, they used to get pissed off with him :D LOL

  • @MaLaMayLay
    @MaLaMayLay 1 year ago +50

    Nothing went wrong! You have to take the data for what it is. Don't reject data because you don't get the expected results. Hearing changes from so many things other than the gear, too. Minute changes to the sound from gear can be masked by any other psychological, physiological or environmental change, especially if there is time between listening sessions. In the end, music is emotion. Sometimes we are so deep in this audiophile thing that we forget to listen to the music and not the gear, with our hearts and not our heads.

    • @PassionforSound
      @PassionforSound  1 year ago

      I didn't reject the data - quite the contrary - the data is exactly what led to the learnings and potential improvements I discussed. Unfortunately, the event didn't deliver the data I had hoped for, which would have been some clarity around which devices and tweaks made a discernible difference

    • @TheVeganVicar
      @TheVeganVicar 1 year ago +2

      You sound like someone who will appreciate the latest videos on Iain McGilchrist's RUclips channel.

    • @Douglas_Blake_579
      @Douglas_Blake_579 1 year ago +7

      @@PassionforSound
      One very important aspect of attempting statistical science is that you must not be trying to prove something... you must accept the results as they actually come in. If your listening tests brought results that were "all over the place", it is likely the truth is that no real traceable differences existed and that is what you need to take from your results.
      Years ago we did a test at a gathering of our local audiophile group, with about 30 so-called expert listeners in attendance. We placed the system behind a bedsheet so people could not see what we were doing and played about half a dozen sound clips for our friends. All "changes" were made behind the curtain as quickly as we could...
      The result was seriously interesting as most people preferred #3 and the results for the others were pretty much all over the place.
      *But* there was a catch ... my assistant and I behind the curtains didn't actually change anything. We stood there, moving the curtain and saying, "That goes here" or "this is for the next one" but simply replayed the same clip 6 times on the exact same system.
      We fully expected to be told "I didn't hear any differences" and what we got ranged from "better soundstage in #x" to "there was something wrong with #y" and everyone basically picked one or two as favourites... despite having heard exactly the same thing every time.
      You cannot reject the truth ... and in our case, the truth was that people were fabricating differences where none existed.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Every scientific test begins with a hypothesis that you are aiming to prove or disprove.
      The ONLY thing I took as "proved" by this set of tests was that the data they provided wasn't reliable in any way.

    • @Douglas_Blake_579
      @Douglas_Blake_579 1 year ago +7

      @@PassionforSound
      Just because it didn't prove what you wanted to prove doesn't mean the data is wrong...

  • @SuperReview
    @SuperReview 1 year ago +58

    Many of these problems with blind listening are also problems with sighted listening. Expecting differences increases the likelihood that differences are heard. The value of the blind test is you find out to what degree that explains perceived differences.

    • @thatchinaboi1
      @thatchinaboi1 1 year ago

      So it isn't really a double-blind test. They did the test wrong, and then blamed blind testing.
      Audiophile morons.

    • @Filk320
      @Filk320 1 year ago +10

      And that is why if you want to answer the question "is there a difference" you go for ABX and not AB.

    • @thatchinaboi1
      @thatchinaboi1 1 year ago +1

      @@Filk320 That is better, but not even asking the question is even more accurate.

    • @PartyMusic775
      @PartyMusic775 1 year ago +2

      @@Filk320 No, ABX can only detect around 1% of experiential differences. It does succeed in filtering out some of the issues in AB though. But see my other reply here explaining the other challenges. Consciousness is not experience.

    • @Filk320
      @Filk320 1 year ago +3

      @@thatchinaboi1 By ABX I meant simply asking people to do ABX without telling them what you are testing.
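
For context on why ABX can answer the "is there a difference" question: each trial is a forced choice with a 50% guess rate, so a run can be scored against chance with a one-sided binomial test. A minimal sketch in Python (the function name and the 12-of-16 example are illustrative, not figures from the video):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of getting `correct` or more
    answers right out of `trials` ABX trials by pure guessing (50%)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials clears the usual 5% significance
# threshold, so that listener is probably hearing a real difference.
print(round(abx_p_value(12, 16), 4))  # 0.0384
```

A score near 8/16 stays consistent with guessing, which is why ABX results are reported as "no detectable difference" rather than "no difference".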

  • @WatchPripara
    @WatchPripara 1 year ago +31

    If people can't hear a difference between 2 DACs blind, then that just means the 2 DACs sound practically the same and choosing one or the other based on sound alone is pointless. The same people suddenly hearing a difference and having a preference between them during a sighted test shows how much bias and placebo color their judgement. The differences they think they heard in the sighted test are all made up in their heads. If they can't hear them blind, they can't hear them at all.

    • @gioponti6359
      @gioponti6359 1 year ago +1

      the word is "learning", i.e. improving perception.
      In my opinion it takes a while to notice the many differences between A and B in all the categories by which we judge reproduction of music, which is why I wouldn't rely on short-term listening at all in judgement. That's good for demonstrating well-understood differences, but not good for finding out about them. IMO - perhaps I am just slow... but with that comes more time to get into the details

    • @PassionforSound
      @PassionforSound  1 year ago

      I agree, Gio. To suggest the blind test as we did it was conclusive is just confirmation bias at play. The control proves to us that the whole approach was flawed so anyone drawing conclusions from the other test results is failing to understand the science and acting instead on their own biases and emotions. It would be like handicapping taste testers by having their noses blocked and then using the results of an apple taste test to suggest that all apples taste the same when the handicap is evident and obvious.

    • @WatchPripara
      @WatchPripara 1 year ago +4

      @@gioponti6359 You can say that if they only did blind tests. But they didn't. They did both blind and sighted tests.
      They did it blind, and they were completely clueless about whether they were hearing differences or not. Then they did the same test sighted, and all of a sudden they had instantly "learned" to perceive tiny differences between DACs. Suddenly short-term listening is good enough to tell which is which.
      This is simply bias at play. People can't choose between two DACs based on sound alone. They need their preconceived biases about the two DACs to make up fake sonic differences in their heads that they can only hear in sighted tests, not blind.

    • @WatchPripara
      @WatchPripara 1 year ago +4

      @@PassionforSound Analogy fail. Taste and smell are closely linked and the sensation of flavor is the combination of both. Sight and sound are completely separate sensations. Staring at a DAC is not going to physiologically affect the sound you're hearing the same way smelling your food does.

    • @PassionforSound
      @PassionforSound  1 year ago

      The point is that there is a handicap in both cases, so you're not creating a valid test in either case. The control round showed that this test doesn't produce valid results. Extrapolating that to mean anything other than that this test didn't work for the intended purpose is just applying your own biases to the invalid data. To be very clear, my conclusion from this is that the data doesn't tell us anything about people's ability to hear differences. In other words, it doesn't support my belief that there are audible differences, but neither does it support the belief that people can't hear differences. The ONLY valid piece of data from this test is that the testing setup does not produce valid data about hearing differences between the devices.

  • @pdcragin33
    @pdcragin33 1 year ago +16

    As a market researcher who has helped design blind TASTE tests on beverages and food - I commend you for how seriously you approached this. Smarter pro testers than I learned that tests had to be "double blind," meaning that the interviewer asking the questions ALSO had to NOT know the "right" answer. When differences are hard to detect, tasters/listeners will latch onto any cue they can, even non-verbal, to ascertain the "right" answer. Just human nature. When you refine your protocol so that tests are back-to-back in time, consider "cleansing the palate" with some white noise between music listens, using your control setup. Kudos that your emotional reaction to the results of your hard work was not "aw, screw it" but rather "how to improve." And yes, with some blind taste testing, usually when samples of people must be small, tasters are trained first on how to detect differences. Then later, with big samples (which you'll not likely do), untrained testing validates whether the general population can also confirm what the trained people detected. Carry on!

    • @Maver1ck911
      @Maver1ck911 1 year ago +3

      It's the difference between beer and wine taste tests.
      Beer has no expectations.
      Wine has a confident, self-assured snobbery amongst "serious drinkers" and even amongst casuals there's an expectation in wine that there ARE better wines not merely variation within a style.
      Wine snobs and audiophiles align much closer on this front than "average lager enjoyer" picking a preference between "American light beer" variations within the same style; with beer there is only preference, with wine and high end audio there's pressure to choose "correctly".

    • @PassionforSound
      @PassionforSound  1 year ago +3

      That's some fascinating insight about the world of food tasting and it makes a lot of sense - thanks so much for sharing! It's definitely true that listening for differences in audio is a learned skill. Some are naturally better than others but anyone can learn to do it.
      For your benefit, Maver1ck, we were focussed only on whether there was a difference, not what was better. The preferential questions were separated from the simple yes/no of difference and were only there for the sake of interest. Further to that, I think you're overplaying the snobbery side of things. Sure, there'll be snobby people in any hobby, but it's not uncommon in these comment threads or in my videos for the more expensive product to not get the recommendation over a cheaper one. My Chord DAVE review is a prime example.

    • @nimblegoat
      @nimblegoat 6 days ago

      Yeah, it's not easy to do such research, especially making both samples appear the same.
      What I liked about this one is the control; most people doing A/B testing never have controls.
      Also, with listening tests like lossy vs FLAC, people who claim they can always tell generally use the hardest samples they can find and just listen to a 0.3-second hi-hat or whatever, i.e. nothing like real-life music playback.
      Plus this presumes the assumption that lossy sounds more displeasing to our ears, when in fact it may just sound different. Lots of songs were mastered around playback limitations, e.g. 60s pop songs for AM radio, yet we liked the sound, even if it wasn't the true full recording.
      My take: if a DAC is well made and does the fundamentals, it will sound very good. Better ones may be better, with more smoothing to analogue, e.g. a ladder DAC, but they can sound different and add flavour.
      A good amp is a good amp - plenty of distortion/noise-free power to handle every spike. But amp types seem to add flavour.

  • @Rockapotamus91
    @Rockapotamus91 1 year ago +18

    On Tyll's Big Sound 2015, people struggled to hear any difference between any amps and DACs; all results ended up being roughly a 50/50 guess.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      I didn't see that, but it makes sense for so many reasons and makes this a tricky pursuit (i.e. the testing and collection of data)

    • @Rockapotamus91
      @Rockapotamus91 1 year ago +1

      @@PassionforSound it's available to watch on RUclips if you have the time

    • @PassionforSound
      @PassionforSound  1 year ago

      I'll track it down. Thanks

    • @Ancient88Wisdom
      @Ancient88Wisdom 1 year ago

      @@Rockapotamus91 is this it?
      ruclips.net/video/uLzXiqvfNUU/видео.htmlsi=xjxapF6e8CO8JINR

  • @shipsahoy1793
    @shipsahoy1793 1 year ago +31

    The way that some people talk about DAC’s, you would think that a Topping E50 would certainly sound inferior on such an expensive great sounding system, and that the reduction in sonic “quality” would be fairly obvious. The fact that that didn’t happen has basically validated a lot of my past comments about the splitting hairs concept of sonic performance from a DAC.

    • @PassionforSound
      @PassionforSound  1 year ago

      There will always be a role for preferences in audio, and system synergy too. In this situation, though, what became clear when we did more sighted listening at the end and were able to discuss what we were hearing was that many participants needed a different approach to the audition process to start properly hearing the range of differences. That doesn't mean that everyone walked away preferring the TT2 (I didn't do a survey), but the initial results are more indicative of the blind listening limiting people's ability to properly discern the differences rather than showing anything too concrete about those differences/preferences.

    • @Gamez4eveR
      @Gamez4eveR 1 year ago +6

      @@PassionforSound lmfao are you even aware of how ridiculous this is? What you're essentially saying is "participants had to fall under suggestion and bias to start hearing differences".
      It's crazy how unwilling you are to accept that the difference is indeed purely imaginary and emotional. The differences not being there during the blind test is expected.
      Nobody needs an expensive dac to get the most out of their music. Absolutely nobody on this planet.

    • @drivethrou
      @drivethrou 1 year ago

      My DAC I bought in 1997

    • @shipsahoy1793
      @shipsahoy1793 1 year ago

      @@drivethrou and I’ll bet you prefer yours sonically to the ones that they’re making today that cost twice as much lol

    • @shipsahoy1793
      @shipsahoy1793 1 year ago +1

      @@Gamez4eveR I really don't think that's what he was trying to say. I think you "pushed the ball out of bounds" there, so to speak, as I know he's open to the idea, at least in some listeners. Besides, not all "differences" are imaginary and emotional as you describe. At the very least, they could be very real to the person perceiving them, and not imaginary or emotional. Especially if you're not actually always switching back and forth, and they pick up on it a large percentage of the time without visual or sound clues to the test.
      That could clinch it.

  • @robertt7238
    @robertt7238 1 year ago +15

    The only disaster is spending 17 minutes trying to explain away the fact there was no difference and that most audio reviews are pure BS.

  • @matthewweflen
    @matthewweflen 1 year ago +21

    Having people seated in a room with each other, in addition to potentially changing the sound (which you rightly mention), also introduces the possibility of social influence on the results. If you see someone make a shocked look or start headbanging to music, you might be influenced with respect to whether you think there is a change from control.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's true, but there's no perfect solution to this and I'm not a funded university faculty so it's never going to be a perfectly controlled study. What I can say is that most people were very still and focussed on their own listening, many with their eyes closed, and that there was strictly no talking during the testing to prevent direct influence.

    • @matthewweflen
      @matthewweflen 1 year ago +1

      @@PassionforSound Time to start asking your local universities for funding! ;-)

    • @PassionforSound
      @PassionforSound  1 year ago

      Wouldn't that be nice!

  • @johnmclean627
    @johnmclean627 1 year ago +43

    Not surprising results. It's unfortunate the data doesn't align with your personal biases, being an audio reviewer who firmly believes in huge differences. Truth hurts sometimes.
    Thanks for your time.

    • @Elegiac7
      @Elegiac7 1 year ago +1

      To be fair, there are many times he'll say that there's nothing to choose between two products. Or he'll emphasise that the differences are slight. So I take into consideration the times when he says there is a pronounced difference.
      That being said, he has ideas about price, soundstage depth, and other things which inhabit his mind, which are likely to not gel with the coldest reality, most of the time. Ultimately coming out on the side of the audiophile-snob-status-quo.
      But as far as the mystical faction goes, he's far- FAR- from being the least lucid. Even if videos like this and his interview with Rob Watts reek of a greasy, carefully presented apologism.

    • @felixcarrier943
      @felixcarrier943 1 year ago +2

      ​@@Elegiac7 But that's his job as a reviewer - to talk about all aspects of a product's performance that are likely to factor into consumers' decision-making, including price, sonic characteristics, and those "other things." I don't buy that there's a distinction between giving an honest appraisal of a product along those dimensions and "the coldest reality," when the reality that people are interested in *is* how the product performs in those aspects. You might say that someone could just end up being a greasy shill then. Sure! Of course. And they could be a greasy shill using numbers as well - most consumers don't have the necessary background to contextualize measurements and are entirely reliant on the reviewer to be competent enough to interpret the measurements accurately *and* honest enough to not deliberately mystify the audience with numbers that do not mean what they claim they mean.

    • @Elegiac7
      @Elegiac7 1 year ago

      @@felixcarrier943 He does a fair job as a reviewer. I watch his videos if I'm interested in something. I'm not a measurements-and-nothing-else person. Things can have sonic characteristics beyond what can be measured. But when dealing with such ephemeral qualities... it gets vague. Open to exaggeration or the subject of delusions. Then exploitation. Business loves a grey area, or a compromised mind, like a bear loves honey.
      Diminishing returns are real. And I've seen people caught up in a sort of 'audio-mania'... it's not healthy. Some companies charge too much, and some release too many products. Some charge too much for products that don't do anything. The mind loses... reality. That's where I'm coming from. It's a sad way to live.
      My problem is more with consumerism in general, than any one poor schmuck caught up in it. I mean. That's all of us.
      But I save a critical eye for anyone inhabiting the role of salesman. And these reviewers are also salesmen.

    • @nitraM321
      @nitraM321 1 year ago

      that's not it man

    • @PassionforSound
      @PassionforSound  1 year ago +4

      I think some of you need to watch a few more of my videos and leave your own pre-existing beliefs at the door. I'll often say that two products are close enough to be indistinguishable or that the more expensive product isn't necessarily better. I've also been criticised for over-using the term "slight" because most differences are just that beyond a certain point.
      You've also missed the point of this test. As I explained in the video, the main data I was looking to collect was whether we could hear a difference (yes/no). There was no emphasis on which is better/worse because that's all personal taste. I hoped to collect some of that data if the yes/no section succeeded, but that was to understand what people liked and why, not to prove anything about what was better.

  • @tradehut2782
    @tradehut2782 1 year ago +8

    One reason the audiophile world is filled with placebo is that the typical audiophile is not self-trained to hear technical details in the sound.
    As a former mix engineer, it took me a year just to hear dynamic compression, attack and release.
    Another thing that astounds me is how audiophiles are focused on changing cables and amps, but spend no money on a good EQ plugin for desktop, nor do they feel the 'need' to dive deep into EQs or saturation. They are mostly interested in how the default frequency response of an IEM looks.

    • @chungang7037
      @chungang7037 1 year ago +1

      can you suggest a good EQ plugin for desktop?

    • @tradehut2782
      @tradehut2782 1 year ago

      @@chungang7037 SlickEQ by TDR

    • @PassionforSound
      @PassionforSound  1 year ago +2

      I think you're over-generalising the behaviour of audiophiles there, but your point about the need to learn how to listen critically is very valid.

    • @AzraelAlpha
      @AzraelAlpha 9 days ago

      When it comes to IEMs, for example, simply using different eartips will make a big impact in the sounds you hear. This and good EQ settings are what make the biggest difference; the rest is just imagination.

  • @Rockapotamus91
    @Rockapotamus91 1 year ago +16

    It wasn't a disaster; it just proves people really struggle to hear any difference, or just imagine things.

    • @alphaniner3770
      @alphaniner3770 1 year ago +1

      It only proves the thing tested if the test was performed properly - that is what a scientific experiment is about, and that is why science can be such a pain in the ass. The smaller the differences, the better the test needs to be to get conclusive results.
      I guess we can conclude (carefully) from this testing that the differences (if any) at least aren't large, compared to an unknown number of factors (not tested, but probably including at least a few probable ones like the two you mention) that seemed to cause larger differences.

    • @PassionforSound
      @PassionforSound  1 year ago +2

      The only thing we can draw from this data is that the testing setup produced no reliable data. We actually can't even go so far as saying that people struggle to hear differences because the control showed us that the setup didn't allow for accuracy in responses. It's very possible that no setup will, but we can't say anything for certain beyond the fact that this setup did not.

    • @alphaniner3770
      @alphaniner3770 1 year ago +1

      @@PassionforSound It could also be tricky if people feel (any) pressure to make a decision in a group (peer pressure) - especially if they think it can have consequences ('can't you tell the difference, what kind of audiophile are you').

    • @PassionforSound
      @PassionforSound  1 year ago +1

      True. I did my best to minimise that, but it's still natural for that to occur
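
alphaniner's point above, that the smaller the differences the better the test needs to be, can be made concrete with a quick statistical power calculation: how often would a listener whose true hit rate is only modestly above chance actually pass a short blind run? A rough sketch, with all numbers illustrative:

```python
from math import comb

def guess_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or better by coin-flipping."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(trials: int, true_acc: float, alpha: float = 0.05) -> float:
    """Chance that a listener with true hit rate `true_acc` produces
    a result significant at level `alpha` in a run of `trials` trials."""
    # Smallest score that would count as significant under pure guessing.
    threshold = next(c for c in range(trials + 1)
                     if guess_p_value(c, trials) <= alpha)
    # Probability the listener actually reaches that score.
    return sum(comb(trials, k) * true_acc ** k * (1 - true_acc) ** (trials - k)
               for k in range(threshold, trials + 1))

# A listener who genuinely hears the difference 70% of the time
# still fails a 10-trial run about 85% of the time.
for n in (10, 16, 25, 40):
    print(n, round(power(n, 0.70), 2))
```

So a handful of trials per configuration, as in most informal group tests, has little chance of detecting a subtle but real difference; a null result under those conditions is weak evidence either way.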

  • @andycrook6508
    @andycrook6508 1 year ago +5

    The issue is audio memory. Most non-musicians without perfect pitch couldn't notice the difference between two notes played on the piano if there is too long a gap between hearing them. The difference between two notes is massive compared to the subtle audio differences you are looking for. My advice: play a music piece no longer than 30 seconds, swap equipment immediately and play it again. Repeat and repeat. Ask the audience to focus on different aspects of the music each time.

  • @geoff37s38
    @geoff37s38 1 year ago +6

    No surprises here. Given competent electronics, as used here, there will be insignificant or nil audible differences, and any actual differences are subject to personal preferences. Room acoustics and loudspeaker choice are way, way more important than the choice of DAC, cables and assorted magic boxes. Even moving the listening position a few centimetres can have a profound effect on the audio experience.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's very true about the speaker positioning and room treatment, but you're drawing conclusions about the rest that this test does not demonstrate. Other than the control showing us that the testing setup is not producing reliable data, nothing else can be drawn from the data collected. To do so is just conjecture and the application of confirmation bias.

  • @Jakeysp
    @Jakeysp 1 year ago +7

    I disagree with your interpretation of your results. Your data has proven that the test subjects could not reliably hear differences between the configurations when blind. They could not even reliably identify the control. Yet, when given additional non audio information (Sighted test) they suddenly could hear differences that previously were not there - even when the actual audio has not changed. The conclusion I think has to be drawn from that is that humans are heavily susceptible to placebo and biases, i.e. if they know something has changed, they trick themselves into believing they can hear it even when they objectively could not without that visual information.
    This is no surprise among those who believe in the validity of objective measurements and blind testing to weed those flaws out. It's quite common for subjectivity-based reviewers (broadly, not singling you out here Lachlan) to claim to hear things that are literally impossible to hear. A textbook example of this is audiophile network switches: they cannot work by definition because to alter a TCP packet is to invalidate the data. Checksummed data cannot be "cleaned" without altering and thus destroying it. But some audiophiles are adamant they hear higher quality when data has travelled through one of those devices.
    I am not saying that subjective review is meritless. For instance I found your channel through your review of Focal Clears I was purchasing, and I fully believe you can hear and assess subjective differences in headphones where the differences can be and often are massive. I also think you and the subjects in this test, had you tested a pair of Focal Clears vs say Focal Elegias, would have been able to tell the difference and your data would have shown that. But here, your data showed the opposite. They couldn't hear a difference between the equipment you tested.

    • @PassionforSound
      @PassionforSound  1 year ago

      You're drawing conclusions that aren't supported by the data. The only thing this data clearly shows is that the data provided was not reliable. To draw any other conclusions beyond that is attributing your own biases to inconclusive and unreliable data. In other words, this data proves neither that people CAN hear differences nor that they CANNOT (capitals for emphasis, not shouting).

    • @Jakeysp
      @Jakeysp 1 year ago

      ​@@PassionforSound I would concede that yes, the blind testing component could be inconclusive. In the sense that, yes, maybe the participants could not detect any differences because of the testing methodology and not because of the actual sonic differences. Especially with the seating arrangement. But regardless, you did conclude that whether due to the methodology or the equipment, participants could not identify sonic differences in this test scenario. For one reason or another. That's very important for the sighted component.
      Without actually changing anything about the audio component of the scenario, you then re-tested the participants sighted. Again, the audio did not change. Only their sight changed. Yet, they started being able to *hear* clear differences. But by your own admission, the sound output didn't change. It was the same equipment, the same room, the same participants - all that changed was that they could *see* the gear being tested.
      I cannot see any way to get out of the conclusion that the participants displayed obvious sighted bias. Yes, maybe with better blind methodology, they may have been able to detect sonic changes in the blind test. But given your scenario, they couldn't, and their sighted results within the same scenario differing is due to factors not related to the sound, ie. bias and placebo. I'm sorry but I just can't see any way around that - you didn't prove or disprove their ability to hear differences blind but you did prove that sighted testing changes their perception of the sound (bias/placebo), even though the sound objectively did not change. Which is why blind testing, when done correctly, is so important.

    • @Jakeysp
      @Jakeysp 1 year ago

      @@PassionforSound I'd love to see this reattempted with headphones. Blindfold the listener, keep the same headphones, but change out DACs and amps and cables. Then, repeat the same conditions but remove the blindfold. My theory is you will see the same outcomes - listeners will be much worse at detecting differences without visual cues.

    • @PassionforSound
      @PassionforSound  1 year ago

      No, the testing changed completely because we were discussing things as we went. I was suggesting the elements of the music that would reveal differences and we used a different track as well. There was nothing scientific about the sighted part of this test and I don't claim there to be anything evidential about it. It was just an interesting experience in contrast to the flawed blind setup.

  • @DaveJ6515
    @DaveJ6515 9 months ago +2

    After nearly 50 years chasing the right system, I finally reached a conclusion, with the following rules:
    1) Never trust measurements.
    2) Don't even look at measurements.
    3) Don't listen to anyone telling you what you are going to hear.
    4) Put on your favorite playlist. In your setup. At your place. Not a test playlist: just the music you like to listen to: the whole thing is about MUSIC, not technology.
    5) Start doing something else: read a book. Does the music call for your attention in a way that you have to close the book and give your undivided attention to it? That's good. Go back to your book. Does it happen again? And again? Great: now we are getting somewhere.
    6) The main point: are you thrilled by something particular in HOW it sounds? Some wow moment? Are you going back to THAT particular moment to listen to the sound quality in terms of dynamics, details, anything else in particular? Not good. When you have the right system, you should be attracted by the MUSIC in its complexity ALL the time, not just in a few specific moments and for some specific reason. Your wow moments should be exclusively connected to the MUSIC FLOW, nothing else. You know what I mean.
    After many different systems, I finally listened to a DCS Rossini + clock, D'Agostino Progression S350 power amp, Borresen 03 speakers, cables from Hijiri except Ansuz C (amp power cable) and Ansuz D (speaker cables). Only music, nothing else.

    • @PassionforSound
      @PassionforSound  9 months ago

      I love this approach! Music is such a personal thing and (IMO) should be about enjoyment and engagement above all else. Happy listening! 🙂🙂

  • @taidee
    @taidee 1 year ago +15

    I must disagree with you calling this event a disaster; it was anything but a disaster 😉. Getting data that was inconclusive was, in itself, conclusive, because it showed you in practice what doesn't work. You saw in practice the issues of speaker layout in a room with many individuals, saw in practice how people listen, and now have ideas about helping people listen better. It's good to hear you were not discouraged. This is research: you find an answer, but it is not always what you expected it to be.

    • @PassionforSound
      @PassionforSound  1 year ago

      That's very true. I was being a bit cheeky with the title to draw interest/attention. The data we collected was not useful at a micro level, but the uselessness of it is very helpful as you say.

  • @alessandrosuppini943
    @alessandrosuppini943 1 year ago +5

    This data is far from being “unusable”; it actually confirms that audio listening is a very subjective experience… and that's it!
    Do you want to splash significant cash on a power conditioner? Go for it. Do you want to buy exotic interconnect cables? Go for it. As long as you believe it is going to improve your listening experience, it will 😉

    • @PassionforSound
      @PassionforSound  1 year ago +1

      The only usable piece of data is the control test which showed us that the rest of the data is unreliable. To extrapolate any further meaning is conjecture and is the confirmation of your own existing biases.

  • @prithvib8662
    @prithvib8662 1 year ago +6

    This is the exact wrong takeaway from the data. So close yet so far. Even when you have a good result showcasing how full of shit this hobby is, the people carrying out the experiment refuse to acknowledge the results.
    Anyhow, glad you're actually trying to blind test and be more rational.

    • @Ray-dl5mp
      @Ray-dl5mp 1 year ago

      Except now he seems to want to do sighted tests next time. So I agree with your assessment if that is what he is planning, but I don’t think so. Instead of doubling down on blind tests he is going the other way. Maybe it will be interesting but it seems like an attempt to talk about why certain gear is better instead of the random results he was getting with this last test which seem to show it’s all a crapshoot.

    • @PassionforSound
      @PassionforSound  1 year ago

      I think you might both be missing the key takeaway from this data. The moment the control round showed us that people can't reliably tell when there is no difference, it told us that the testing setup is not appropriate to draw any conclusions from.
      As for the future, I have plans for further blind tests and also a different experience for those who want to learn more about critical listening skills (not capturing any data)

    • @Ray-dl5mp
      @Ray-dl5mp 1 year ago +1

      @@PassionforSound I see what you're saying, but it depends on what conclusions you want to draw. I agree that your findings were not great for discussing different products, because there seemed to be no clear preferences either way when it was blind. However, that is exactly what blind-testing proponents would say you should have found. So I think the problem is that the goal was different from what a lot of people imagined when you said you were doing a blind, controlled setup. We thought you were trying to show whether people can really tell differences blind - what is actually real or not in subjective audio. Whereas you probably wanted a place for people to talk about why product A is better than B. But if you can only get to what you wanted by taking away the blind portion, you can see how science fans would say that's the whole point. So again, you looked like you were going for true science, but now you look like you want a good discussion, and you assumed it would go a different way. That isn't what control testing is about; it's not about getting the results or discussion you want. It's about seeing what actually happens and being OK with it (Science!). But I totally get where you're coming from... I do think people are totally right to have my takeaway though. By telling everyone you wanted to do a good blind test, you set up different expectations in your audience.

    • @prithvib8662
      @prithvib8662 1 year ago

      @@PassionforSound Well yes, it shows you that people's hearing can't really be trusted, because they'll perceive differences that aren't even there. And that's okay. This isn't a flaw of blind testing.

    • @PassionforSound
      @PassionforSound  1 year ago

      You're reading too much into it, Ray. This wasn't about preferences - that was a bonus bit of data. This was purely to see if people could discern differences between setups - yes/no - with no concern for preferences. The preferences part was a second round after the simple difference test was performed. The control killed it though by showing that the testing conditions weren't reliable.
      As for the ability of people to perceive differences blind, the data shows that they couldn't when the test is set up like this. We can't assume that there is no setup in which people can until we isolate more variables (as discussed in the latter part of this video). The important thing with interpreting data like this is to focus only on what it directly proves, while not inferring or extrapolating additional meaning.

  • @ronlysons6750
    @ronlysons6750 1 year ago +5

    The problem with listening tests is that you're listening, or trying to listen, to the equipment and not the music.
    This is a completely different experience to relaxing and listening to the music; the best way is to listen to the music and then see what hits you.

    • @MagicMaus29
      @MagicMaus29 1 year ago +3

      From the point of view of an end user, I can only wholeheartedly agree. Listen (on the systems/settings/etc. to be compared) to a song that you know very well and that you love. Then, at the end, ask yourself just one question - on which of these systems would you like to hear the song again?
      Unfortunately, from a tester's point of view, things look a little different, since he has to find ways to put what he hears into words and, above all, to describe the differences.

    • @hartyewh1
      @hartyewh1 1 year ago

      In a meaningful way I agree, but your mood can vary for a million reasons, and a reviewer should never trust their impressions from a limited number of tests. If you compare A and B on, say, 10-20 separate occasions, then you'll likely see a pattern form. I've systematically tested 80+ headphones, and about 5 of them I had to retest because my initial impressions were highly contrary to what most people think and what measurements showed; I found that I'd had a "sensitive ear" day and was completely mistaken.

    • @ronlysons6750
      @ronlysons6750 1 year ago

      Mood plays a big part; it does for me anyway.
      Reviewers' heads must get mashed trying to listen to so many variables. And at the end of the day, it's a job @@hartyewh1

    • @PassionforSound
      @PassionforSound  1 year ago +3

      I agree with all of this. My most effective testing approach over the years and years of doing this is to spend time with each device before doing any critical listening or comparisons. To just listen with it as I do with all the products I own and use daily (I actually insert review products into my daily listening chain) so that I can just see what impressions I form over time and through different moods, levels of tiredness, times of day, levels of concentration, etc. I then use quick A/B comparisons to isolate why I feel a certain way or to better describe the more subtle differences.

    • @ronlysons6750
      @ronlysons6750 1 year ago

      And then your head gets mashed. Lol.@@PassionforSound

  • @ozpauls4052
    @ozpauls4052 1 year ago +4

    The problem with all listening tests is that they all involve listening.
    Your blind listening test was not a disaster - be kind to yourself, it was a success. You got an outcome, not what you were hoping for; nevertheless, you now have some valuable data which can be used when designing your next listening test.

    • @PassionforSound
      @PassionforSound  1 year ago

      I agree. The disaster reference was a little bit of interest generation so that people would click on the thumbnail 🙂

  • @Will-xk4nm
    @Will-xk4nm 1 year ago +8

    A superb demonstration of the uselessness of fancy DACs, as well as of using your own untrained ears to validate minute differences between competing products. As a pro audio professional, I am not impressed in the slightest by audiophiles and audiophile product reviewers with their 100% untrained critical listening skills.

    • @PassionforSound
      @PassionforSound  1 year ago

      You're making a lot of assumptions here and drawing conclusions that are not supported by the data.

  • @BoredSilly666
    @BoredSilly666 1 year ago +6

    Having been in many studios, production suites, etc., instant A/B-ing has always been used the most in my experience, and seems to be the most common approach among producers/artists.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Yes. That's definitely my experience as a reviewer too

  • @PartyMusic775
    @PartyMusic775 1 year ago +1

    Abstract on the challenges of the Blind Testing of Musical Experience.
    1. Consciousness and Experience are not the same.
    2. Blind testing only musters differences that subjects are conscious of.
    3. Psychological elements of blind testing that interfere with subjective experience.
    4. Other factors.
    1. Consciousness and Experience are not the same.
    People confuse consciousness with experience. 99% of experience does not take place at a level of conscious awareness. Indeed, consciousness is analytical and creates a focus on individual elements, to the exclusion of the non-focused elements, thereby losing the "composite" / "synthetic", or "holistic" whole experience. Other elements become repressed from experience during conscious focus. Most blind testing seems to come from philosophical and psychological ignorance of this fact.
    Imagine a complex "Where's Waldo?" screenshot, where you can't find Waldo because there are 1400 other figures in there. You're not conscious of Waldo, as you can't even find him. YET, if we suddenly make Waldo disappear from the image, we suddenly notice his disappearance instantly, as a white space suddenly appears on the image and calls attention to itself.
    Waldo always was part of our experience, but he was not part of our *conscious awareness*. Therefore, experience and consciousness are not identical. Therefore, there can be things in your experience that you're not able to discriminate during a blind test, which are nonetheless part of your experience.
    While that may be an oversimplified "proof", it should help us. This example is merely the tip of an iceberg, one with much deeper ramifications for the phenomenology of subjective experience and the role of blind testing in sussing out where experiences differ.
    2. Blind testing only musters differences that subjects are conscious of.
    There's not much more to say here. If you are not conscious of a distinction in your current experience, you cannot pass a blind test nor make a real-time discrimination between A and B. This is in spite of the fact that A and B may contain very different elements which make your experience of the two differ on nonconscious levels, including the subconscious experience of subjective enjoyment.
    3. Psychological elements of blind testing that interfere with subjective experience.
    DISTRACTION: A focus, fear or worry over being right or wrong, as well as the distracting conditions of the test environment, the change in personal status from achieving a good or bad result or score, the awareness of being tested, awareness of test methodologies and executing the methodic rules, and so on... all of these create an experiential canvas in the mind of the subject. It can be described as a plethora of distracting factors that are NOT present in an ordinary musical experience. Indeed, the best musical enjoyment takes place when one becomes "forgetful of all else". "Transcendent experiences", where one notices things in music one had not noticed before, often take place in states of equanimity, relaxed and open receptivity, or what some call "zen-mindedness". On the other hand, the A/B test environment clutters consciousness with all the above-mentioned factors. In itself, that would be bad enough to distract us from differences we could consciously detect. But that neglects the fact that even in zen-mind, we're not analytically conscious of most elements which cause a subjective enjoyment of music. (As explained in #1 and #2.)
    4. Other factors.
    a. You can never step in the same river twice. Hearing "A" a second time can never be the same as the first time. Therefore "B" cannot be experienced as identical to "A" even if it is identical to "A". INDEED: repetition is a device used in music where the exact same notes, played twice, create a different perspectival effect from the first time those notes are played. Artists exploit this effect on purpose as one of the most fundamental devices of musical composition. A2, played right after A, interacts with a sonic afterimage of how A harmonizes or contrasts with itself. This simple effect is a fundamental device in the vocabulary of all musical genres.
    b. Fatigue begins to set in for each repetition in a test, changing the ability to make a pure comparison between an A and a B.
    c. Experiential enjoyment of elements of which one is not conscious can often only be exposed via the technique of creating sudden contrast. For instance, in the "Waldo" example one becomes conscious of Waldo only through his sudden absence. In a similar way, if one is accustomed to "A" over a period of many months, one is much more likely to notice the sudden disappearance of some "Waldo element" within the experience of "B" if it is suddenly imposed in real time. Sussing out the consciousness of it proves it was always present within the experience. But there is extreme difficulty in devising an AB testing architecture to do so. Inability to devise an AB architecture capable of doing so does not prove the experience was not there. It simply proves the AB testing architecture did not adroitly navigate all the challenges listed in (1), (2), (3), and (4) above.
    SUMMARY and CONCLUSION:
    We have found that all too often, a grossly oversimplified model of consciousness and experience is responsible for devising AB and ABX tests whose architecture simply has no hope of proving nor disproving anything. Most ABX tests are ironically blind to the fact they are attempting to measure distinctions in the experience of subjective enjoyment that take place at non-conscious levels, and that the very nature of the test methodology creates conscious interference patterns that obfuscate and block the ability to discern those differences. A crowd of other factors impedes and neutralizes the exact experiences one is attempting to identify, compare, and differentiate.

  • @TimpTim
    @TimpTim 1 year ago +6

    I always appreciate your honesty and integrity. SOOO many variables, especially from one individual to another: listening skills... audiograms! etc. Some good ideas for a future test, but for me this highlights the need for each of us to take what we learn from sources such as you (thank you, btw) and then do our own searching and listening with different gear until we find what is most pleasing to each of us and helps us enjoy the music more fully. It's the nature of this "hobby."

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Absolutely! Well said. That's why I'm hoping to setup some future events that are less about gathering data/creating content and more about providing opportunities for people to explore and discuss some of the tweaks or upgrades they might otherwise be unsure of.
      Glad you liked the transparent approach 🙂

  • @jaybrodnax
    @jaybrodnax 1 year ago +3

    Surprised you did this with speakers vs headphones...seems like it would be easier to control for room issues, position, etc

    • @PassionforSound
      @PassionforSound  1 year ago +2

      Speakers were more time-efficient. We wouldn't have had time for more than 1-2 devices with headphones, so we'd hoped to get away with it...

    • @Douglas_Blake_579
      @Douglas_Blake_579 1 year ago

      @@PassionforSound
      6 people ... 6 identical pairs of headphones ... it would take the same time and less space than using speakers.

    • @PassionforSound
      @PassionforSound  1 year ago

      If you'd like to contribute the headphones or the funds to purchase them, I'd be glad to run it again. Also, with only six people, the statistical significance of any results would be exceedingly questionable.
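
      To make the small-sample point concrete, here is a quick back-of-envelope binomial sketch (the panel size of six comes from the comment above; the rest is illustrative, not data from the event):

```python
from math import comb

N = 6  # hypothetical panel size, taken from the discussion above

# If each listener guesses "same"/"different" at random (p = 0.5),
# the chance that all six happen to agree is small but real:
p_unanimous = 2 * 0.5 ** N  # 2/64, about 3.1%

# ...and a lopsided 5-1 split (or stronger), in either direction,
# happens by pure chance more than a fifth of the time:
p_five_or_more = sum(comb(N, k) for k in (0, 1, 5, 6)) / 2 ** N  # 14/64, about 21.9%
```

      So even a strong-looking agreement among six listeners is weak evidence on its own, which is why larger panels or repeated trials are needed before drawing conclusions.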

  • @nespodzany
    @nespodzany 1 year ago +5

    I would add that the participant's familiarity with the music being played also plays a factor. Is this a song they have heard hundreds of times across many different systems throughout their life, creating a rich set of reference points, or is this the first time they are hearing the material, i.e., no reference point.

    • @PassionforSound
      @PassionforSound  1 year ago

      It's a really interesting question. I can see an argument in both directions. Sometimes I find it easier to test with something new because I hear details that I gloss over with familiar material. On the other hand, your point about familiarity with exactly how something sounds in a familiar track is a very valid angle too

  • @frederf69
    @frederf69 1 year ago +4

    Thanks for this honest approach.
    Some people listen to the very ends of familiar tunes, very loud, as a critical listening method, but the switching needs to be as instant as possible; auditory memory is poor.
    It is also very difficult for us all to describe what we have heard, let alone the differences.

    • @PassionforSound
      @PassionforSound  1 year ago +2

      Absolutely! There was actually a great article on Twittering Machines recently about how good (and long term) auditory memory is - just not for these types of tasks. ☹️🙂

    • @frederf69
      @frederf69 1 year ago +2

      Thanks, I'll look it up.
      @@PassionforSound

  • @trevr10
    @trevr10 1 year ago +1

    I did a test with my adult children and 3 sets of my headphones. I played the same 3 tracks through each and asked which they preferred for each track. I then repeated the test. They all chose a different headphone for each track each time. Not once did any of them replicate their first-round choices. My setup is quite old: a Roksan K2 amp, a Graham Slee Solo headphone amp, with SACDs played on a Marantz SA8005. The headphones were Grado 1000i, Hifiman Sundaras and Philips Fidelio X2HRs.

    • @PassionforSound
      @PassionforSound  1 year ago

      Yep. Delayed switching and repetition is tough. All it takes is to focus on a different element of the music and you get a different outcome. Nice work on your testing though 🙂🙂

  • @Douglas_Blake_579
    @Douglas_Blake_579 1 year ago

    For blind tests to work there are several things that need to happen ...
    1) Changes between devices need to be instantaneous.
    2) All devices need to be precisely level matched.
    3) An erratic sequence needs to be followed... A B A A B A BB A etc.
    4) The listener cannot know what he is listening to.
    5) Each listener needs to be isolated from the others.
    If the listeners can "peg" a specific device sequence, it is fair to trust there is a difference. Otherwise we get into the psychology of expectation in which we tend to imagine differences because we expect or are told there will be differences.
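
    The checklist above can be sketched in code. This is a generic illustration only (the function names and the example scores are my own, not from any specific test):

```python
import random
from math import comb

def make_sequence(n_trials, seed=None):
    """Point 3 above: an erratic A/B presentation order so
    listeners can't predict or count their way to an answer."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def p_value(correct, trials):
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Scoring 14/16 is very unlikely to be guesswork (p ~ 0.002),
# while 9/16 is entirely consistent with chance (p ~ 0.40).
```

    If a listener can reliably "peg" the sequence (a low p-value across repeated sessions), that supports a real audible difference; results near chance support the expectation-driven explanation in the last paragraph.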

  • @Phil_f8andbethere
    @Phil_f8andbethere 1 year ago +2

    It's a good first attempt. That's what science is about. Do an experiment - look at results, think about ways to improve the methodology, redo experiment using tweaked methodology, look at results and so on and so on, until you reach the point where you can't think of any way to improve the methodology and the results are repeatable with different groups of people. Well done for trying - FAIL means First Attempt In Learning.

    • @PassionforSound
      @PassionforSound  1 year ago +2

      Thanks for the encouragement. I agree, definitely not an end point. We had some great lessons from this experience

  • @dangerzone007
    @dangerzone007 1 year ago +1

    Is there any reason why you couldn't put a curtain in front of the electronics and put the speakers up high so they're not blocked by anything?

    • @PassionforSound
      @PassionforSound  1 year ago

      There was a curtain in front of all the electronics. Putting the speakers up high would have solved the issue of indirect line of sight, but raised other problems instead (angle of the drivers firing treble into the floor or over people's heads).

    • @dangerzone007
      @dangerzone007 1 year ago

      @@PassionforSound Just put the boxes upside down and point them downwards.

  • @rm-mastering
    @rm-mastering 1 year ago +4

    Great video, but may I suggest trying blind listening tests using 2 streamers or DACs playing the same song, connected to a digital preamp (or a preamp with remote control) to switch between the sources, hidden behind a screen so no one knows which is playing. There you can switch quickly and instantly recognise the differences - no-brainer. The way you tested is open to so many issues and inconsistencies that it was a waste of time and effort. Please, guys, do the right thing and pull your heads out of your behinds.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Believe it or not, the moment I do that, there'll be a bunch of people saying that the streamers might have product variance or different output levels, etc. There's really no winning this one (even when I'm not trying to win anything, but just explore...)

    • @rm-mastering
      @rm-mastering 1 year ago +2

      @@PassionforSound wow, I see your point, hifi enthusiasts are a very hard group to please and understand 🤯. Thank you for your response. Great work by the way.

  • @SteveWille
    @SteveWille 1 year ago +3

    I’d like to see future content on developing differential listening skills. I think the realization that this is an important and worthy pursuit has made this blind listening exercise far from a disaster. Differential listening is a crucial audiophile skill for many reasons. Just the other day I was wishing I was better at it when trying to decide which tubes to replace in my amp.

    • @PassionforSound
      @PassionforSound  1 year ago

      I agree. That's where I'm going to head next I think. I don't know how yet and it might be a live-only experience (not a video), but we'll see

  • @ElCowboyDF
    @ElCowboyDF 1 year ago +4

    Good video.
    You have a healthy and rational approach. Trying to assess the differences with a method makes much more sense than subjective individual opinions based on trust. Your experiment is not a failure in my opinion. It simply has a detection threshold that is too high compared to the acoustic differences you generated.
    This is a fundamental principle in the field of metrology (the science of measurement): not detecting a difference does not mean that it does not exist, but that it is below the detection threshold of your experimental setup. And that is already information in itself. Using a high-performance DAC only makes a significant difference under specific and demanding conditions. This confirms the good performance of entry-level DACs. And if we consider the principle of diminishing returns, we can question the value of spending 30 times more money for a subtle gain, when that money could be invested in acoustic treatment and DSP to correct the defects of a listening room (it is generally accepted that the room accounts for at least 40% of audio quality). I think that once we get to such a subtle level of difference, the number of variables influencing our perception masks that difference. Only long, attentive listening, with experience, can perhaps detect them - but at what financial cost? If I organized a comparative listening session between my old cassette Walkman from the 90s and my Qobuz/Chord Mojo 2 combo, I am convinced that everyone would be able to tell the difference, because the detection threshold would be far below that difference. Your experiment simply highlights the great times we are living in, where we can have great-quality audio gear at affordable prices. I think that high-end hardware charges for very expensive engineering whose audio gains sit at the limits of our perceptual abilities. This is what your experiment shows. Please note, I am not denying the technical qualities of some of this equipment, nor the differences that some seasoned audiophiles like you may perceive, but this seems to me to be over-engineering dedicated to a demanding and wealthy micro-niche of listeners. In the end, everyone finds their pleasure where they want.
    Sorry if my English is not good, it's the fault of Google Translate :)

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's a great post that captures the essence of this so well - thank you!

  • @Angel-AbC9
    @Angel-AbC9 1 year ago +2

    Would've liked to see that blind test

  • @Douglas_Blake_579
    @Douglas_Blake_579 1 year ago

    Something to think about ...
    My own system has been used in situ for about 3 years now. Aside from occasionally hooking something up to test it after a repair, I should be hearing the same thing each time I use it... Right?
    But some days it thrills me, other days it's a toe tapper, some days it's just "blaaa" and occasionally it irritates me to no end.
    Now it should be obvious my system is not changing... so what is?
    It's me, I'm changing... my hearing and perception of sound varies from day to day, even hour to hour. Things like mood, air pressure, humidity, temperature, diet, earwax, and more, all affect my perceptions. Hell even my last dump changes how it sounds to me...
    So now comes the difficult question....
    If these blind tests revealed pure guesswork .... what are "subjective reviewers" actually reporting?
    Is it differences in the gear they're "testing"... or day to day differences in their own hearing?

  • @xyanide1986
    @xyanide1986 1 year ago +4

    Lmao at that "first" result. I probably would have gotten it wrong too, though. Great video. Also, don't forget how strongly people influence one another's opinions when it comes to a group discussion.
    When I'm comparing things, I have to be intimately familiar with the music and able to switch things somewhat quickly. Headphones, amps, DACs, whatever it is.
    Blind testing is probably more like trying an item in isolation: they're both going to be good, but the nuance and context won't come out.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      There was strictly no discussion about the tests until the end so we handled that, but yeah, the rest of the results after the control are pretty meaningless 🙂

    • @xyanide1986
      @xyanide1986 1 year ago +1

      @@PassionforSound ah ok it just wasn't immediately clear in the video whether discussion was happening or not

    • @PassionforSound
      @PassionforSound  1 year ago

      You're right. I didn't specify that

  • @peterallen973
    @peterallen973 11 months ago +1

    “Swapping or not they knew from our background noises”
    Yes. So in a no-change trial, still make "changing" noises.

    • @PassionforSound
      @PassionforSound  11 months ago

      Absolutely. Every time, I did something (even if it did nothing) 🙂

  • @rhalfik
    @rhalfik 1 year ago +4

    Unusable data? It's very much usable. It shows that people hear a difference where there isn't one. That's useful information. If you want to measure audibility, then simply run an ABX test. If you get a 50-50 distribution, you know the difference is only in their minds.

    • @PassionforSound
      @PassionforSound  1 year ago

      Yes, it's usable in its lack of overall use (i.e. not delivering on the intended purpose of identifying which items had an audible impact). Unfortunately, it also shows that the ABX process is still not enough to account for all variables such as individual listening ability, the influence of the music used, length of samples, time between samples, etc.

    • @rhalfik
      @rhalfik 1 year ago +4

      ​@@PassionforSound "variables such as individual listening ability"
      In other words, audibility. That's what audibility is: if a person can't hear a difference, it's inaudible. ABX is all you need to determine audibility; you're just in denial. If the breaks are too long, I could see that as a problem, except that... those people then go and write long and detailed descriptions of the gear they listened to a week before. Let's face it, you only think breaks are a problem when your abilities are being verified.
      However, all of that is beside the point. You said that ABX didn't work, except... you did not run an ABX test. The procedure you described in the video is a single-blind AB comparison. ABX is a double-blind sequence where a subject listens to three samples and has to match two of the three. It's the most deterministic procedure for audibility. It takes a little more gear and software to set up, though, so maybe next time.
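
      For clarity, the ABX structure described above can be sketched like this (a generic illustration of the protocol, not software from the event):

```python
import random

def abx_session(n_trials, seed=None):
    """For each trial, the subject hears known samples A and B,
    then X, where X is secretly a repeat of either A or B.
    The only question asked is: which one did X match?"""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def score(truths, answers):
    """Count correct matches; a pure guesser averages trials / 2."""
    return sum(t == a for t, a in zip(truths, answers))
```

      Because the subject never has to name or describe the devices, only say which sample X matched, the test isolates audibility from vocabulary, preference and expectation.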

  • @razisn
    @razisn 1 year ago +7

    Hahahahah!! This has only been a disaster for 'reviewers' and others who pretend or think they are hearing differences when the average Joe doesn't. This has been a valid and successful test. It is only "not the best approach" for people whose interests and reputation are damaged by such tests.

    • @PassionforSound
      @PassionforSound  1 year ago

      You've missed the point. The only data with any meaning in this test was the control, showing that the way this blind test was set up would produce no reliable data. Any further meaning you infer (in any direction) about what this says about listening to audio gear is conjecture and your own confirmation bias at play.

  • @Ray-dl5mp
    @Ray-dl5mp 1 year ago +5

    I’m guessing a fan of blind testing would say there’s not much wrong with your testing at all. The fact that people couldn’t tell something was the control is exactly why science fans say so much of this is in our heads. It’s also a bad sign for subjectivists that blind testing didn’t favor the TT2 strongly. You would think there should be a clear difference towards the more expensive “better product”. That’s why doing non-blind at the end doesn’t really make any sense. You’re just proving the blind testing fans’ point. By everyone knowing the story, all of a sudden you got the results you wanted. Again, you basically proved the point that knowing the story is maybe all this is at the end of subjective audio.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      No, that's not really accurate, Ray. The moment we saw the results of the control round, all other data collected became irrelevant because the control shows that it can't be trusted.
      As for preferences between products (e.g. TT2), this wasn't about proving better or worse, just asking whether we could even hear a difference. Until there's any reliable data answering that question, everything else is moot. (Also, something being more expensive/preferred by a reviewer doesn't mean that everyone should automatically also prefer it)

  • @sauhamm3821
    @sauhamm3821 1 year ago +1

    i love this hobby - i hate the absolute ocean full of snake oil people fall for... to each their own tho. here's a fact - if a person can wax poetically about the differences between a $5000 amp and a $500 amp after they know the price, then they should be able to tell me which is which, blind, before they know the price.
    fwiw i blind test my equipment solo all the time. wanna blind test 2 dacs? plug in 2 sets of RCAs and snake them around so you can't see where they are coming from, literally plug in 2 dacs and put them under the desk and run the cables up above. white is left and red is right. you then have 2 dacs and you have no idea which is which. name them both "dac" in roon and click play... guess what, you have no clue which dac is playing... listen and choose. #blind
    same for amps, same for pretty much everything except headphones cause once they're on your head you kinda know which is which.
    but anyone who says they can't blind test dacs and amps just doesn't want to. humans are exceedingly intuitive. if we can get to the moon - we can blind test audio equipment
    and this isn't a shot at lachlan, love that dude and watch his content - i am subbed... this is about anyone who refuses to blind test because deep down they know it's better to just enjoy the hobby and the talk about timbre and tone and transients... i love all that too... but what i'm not going to do is waste my money and then say openly that amp A is "better" than amp B because the price influenced me.

    • @PassionforSound
      @PassionforSound  1 year ago

      I think there's a little more nuance to it than that. There are some devices where the differences really are subtle and it's impossible to pick them blind. In other cases it's more obvious. It's also not always a case that the more expensive devices are better (or perhaps just not preferable). The key thing for me is that blind testing isn't particularly helpful in a short, quick switching type of setup. Your suggestion of setting the devices up without knowing what's what and then just listening is (IMO) the best way to really find what we prefer.

  • @ProjectOverseer
    @ProjectOverseer 1 year ago +2

    The human ear is a remarkable sensory organ that allows us to detect sound waves and convert them into electrical signals that our brains can understand. However, our perception of these sounds differs at an individual level.

  • @hartyewh1
    @hartyewh1 1 year ago +2

    It's fantastic that these kinds of tests are done even if there are limitations and non-ideal setups. Like you pointed out one huge issue is sound memory in that every second between A and B lowers the accuracy a lot.
    I'd still say the 90% was perfectly successful data and shows, as we already know from existing research, that if people are expecting or trying to hear a difference they will. This explains a lot of beliefs and impressions that people have that don't make much sense or show in any measurement.
    A well designed ABX test would require switches that are flipped randomly and without delay which would be recorded and checked afterwards.
    Also the question "is there a difference between A and B" is problematic since it doesn't imply a direction. The question "is A bigger than B" is a more suitable one for this kind of testing. Since better or worse is also problematic here (though personal preference could be a valid measure), you could give people 5-20 minutes to compare the two knowing which is which in order to find differences they can point out, and then have them try to do it blind.

    • @PassionforSound
      @PassionforSound  1 year ago

      Difference is all we needed to identify so the direction is irrelevant and adds further problems of building expectations into what someone is listening for.
      Extrapolating this data to tell us anything solid about people's ability to discern differences between products is just playing into your own confirmation biases. The only thing this test demonstrated (specifically the control round) was that the data is not usable beyond proving that the testing conditions were flawed.
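The randomized, logged switching schedule described in this thread (random flips, recorded and checked afterwards, with "no change" controls mixed in) could be sketched like this. This is a hypothetical run-sheet generator, not the procedure used in the video:

```python
import random

def make_run_sheet(n_trials, control_fraction=0.25, seed=None):
    """Pre-generate a blind A/B run sheet. Each trial is 'A', 'B',
    or 'SAME' (a control where nothing is switched). Only the
    operator sees the sheet; listeners answer 'different'/'same',
    and answers are scored against the sheet afterwards."""
    rng = random.Random(seed)
    sheet = []
    for _ in range(n_trials):
        if rng.random() < control_fraction:
            sheet.append("SAME")  # control trial: change nothing
        else:
            sheet.append(rng.choice(["A", "B"]))
    return sheet

print(make_run_sheet(8, seed=42))
```

Scoring the SAME trials separately gives exactly the false-positive check discussed in the video: if listeners report "different" on most control trials, the rest of the answers can't be trusted.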

  • @martinhiggins9700
    @martinhiggins9700 1 year ago +2

    I think the greatest variable in any blind testing is the person listening. (My best upgrade if only temporary is a good glass of malt whisky🙂)

  • @net_news
    @net_news 1 year ago +2

    the Power Conditioner is IsoTek, not IsoAcoustics. 👍🏻

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Oops! I must have been looking at my speaker stands when I said/wrote that!

  • @pfunk34
    @pfunk34 8 months ago

    Perception is a tricky thing...
    In Freakonomics, a wine tasting experiment was performed at the Society of Fellows at Harvard University (brilliant scholars and researchers). They were to taste 2 expensive wines and one cheap wine, placed in 4 different decanters (1 expensive wine was placed in 2 decanters). Their findings were confirmed by other independent researchers and with 10+ years of experiments and over 5,000 blind tastings.
    "Their conclusion: fancy people with lots of training can tell cheap wine from expensive wine, but regular people cannot."
    I believe that we, and many reviewers, as enthusiasts may be better than average but still fall under the category of average.
    Just like the professors at Harvard, who were experts in their field- but JUST enthusiasts with wine, our egos get in the way of our perception... and/but argue as experts.
    If we truly have better ears, wouldn't we work in this industry where our talents can be highly appreciated?
    All of that being said, buy what sounds good to you, not the pricetag. You enjoy your listening and your own journey!
    (primarily spoken in a general manner, there are always those special people who are outliers)

  • @SastusBulbas1
    @SastusBulbas1 4 months ago

    For decades we have had blind testing; many simply use a transparent curtain set up so no one can see behind it, and a team behind that curtain goes through the procedures.
    Speakers are far better than headphones for groups and soundstage, but again that requires a non-hotseat setup. Somewhat possible by aiming tweeters to opposite rear seats and staggering the seats for minimal interference, or simply smaller audiences. Or accepting that 10 people in 10 seats will get consistent experiences in those same seats, but possibly shortening the gaps and the ability to discuss with one another.
    Blind testing is certainly doable, it has been done; it's simply preparation beforehand and not testing multiple variables at multiple parts of a system. Being straightforward and systematic with your approach will often lead to exactly what you found, because what blind testing has continually done, regardless of good or bad setup, is highlight that the problems are actually in the listener's head: often the differences are imagined and it is possible to manipulate them. Some dealers are very good at demonstrating differences, just like you yourself can easily steer an opinion on a piece of equipment by opening up discussion during listening.
    Thankfully, 30 years since the 90's debacle, we now have equipment and education showing us the differences and what we can't hear, showing that placebo and imagination are still a massive influence on opinion, easily manipulated by confirmation bias on someone else's opinion.
    Headphones require EQ and plugins like Goodhertz CanOpener and various others to actually be viable and not just detail enhancers. Plugins and such are what professionals use even on the £2000 "audiophile" cans. Sure, a nice headphone amp and audiophile cans are nice, but it's simply expensive, detailed hifi without the stage and ability to take you to a venue.

  • @hugoanderkivi
    @hugoanderkivi 1 year ago +1

    Interesting video. I very much encourage forming a clear hypothesis (doesn't have to be right, just something from which you can pull out a prediction and test it) as that is the basis of the scientific method; it makes truth-seeking more methodical and repeatable. Example of what I mean: "Differences will be audible when switching between the audio gear as their audio reproduction and electronics is unlike." Even if the hypothesis matches the results of the experiment, it only makes the hypothesis likelier to be true, but it could very well be wrong. Errors and biases in any of the steps of the method can invalidate, at least partially, the study.
    I applaud your attempt to substantiate hearing differences in audio, and a failed study shows room for improvement and should not be taken to heart; it shows you can do better. Keep on working at it, and you'll eventually bring about something useful.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      I didn't declare a hypothesis in this video, but I was working towards proving that people would/would not hear a difference when switching between the different setups. The main data for the whole test was the initial yes/no question in each round. That was directly related to the hypothesis. The rest was for curiosity's sake only.

  • @ScottoGrotto
    @ScottoGrotto 1 year ago +1

    I wouldn’t call this a disaster, more like a dry run?
    In my experience, critical listening relies on a well known “A”, something one is very familiar with, and it helps to have a variety of aspects you are familiar and listening for to help make a comparison.
    Aspects may include, frequency response, clarity, note weight, image definition, image separation, dynamics, soundstage, spatial cues in the recording space, image depth, how realistic is the sound, etc.
    The sound of known aspects may allow one to compare subtle changes more easily.
    Comparing a new A to a new B can be challenging.
    It might help to start with an A and B that have more obvious differences at first, and gradually work your way up to more subtle changes at the end.
    Also really helps to have a track or two or three that one knows very well.
    In this case, maybe give the participants these tracks well in advance to become very familiar with on their own.
    Solo piano, male and female vocals, a live orchestra recording, double bass, brass, saxophone, drums.
    Then spend some time hearing these tracks with the base system at your event.
    A few times to get a baseline of that new version of the tracks they will experience compared to listening at home.
    Maybe ask people to write how this sounds different to the way they’ve heard it in their own systems, to get them going.
    If I’m going to CanJam, I’m bringing my tracks, and my Chord Mojo, so I have one new variable, the headphones to compare; otherwise it’s a bit of guesswork about what the differences are due to.
    Last time I sat at the ZMF table, I used their source and amp, but brought my known Auteur OG Blackwoods as my reference for comparisons.
    The less time between A and B comparisons should help too.
    Great video Lachlan!

    • @PassionforSound
      @PassionforSound  1 year ago +1

      Glad you liked the video. Calling it a disaster was more about encouraging people to click on the video rather than my actual feelings about it because it had value, just not how I expected.
      As for the different testing approaches, it's very hard to collect meaningful data from multiple people AND keep it blind, valid and reliable. 🙂

    • @ScottoGrotto
      @ScottoGrotto 1 year ago +1

      @@PassionforSound glad you’re in for the journey on this Lachlan :)
      NPR has a DIY listening test for people to try and discern whether 5 recordings (if I recall correctly) were either mp3 or CD quality.
      On the surface one would think a seasoned listener could tell easily.
      I used my old shure se530’s from an iPhone 11. This was adequate resolution for this test.
      My first guess was wrong.
      That had me carefully re-listen to the first track a few times.
      It finally dawned on me that the mp3 image was flat (it literally is compressed!) like a 2D sketch of a 3D thing in real life, while the CD quality tracks had images with more depth and detail - a more 3D-ish representation of the sound image.
      Having that sound distinction in mind I got the rest of the test 100% correct.
      The mp3 engineers were pretty clever to come up with a compression scheme that superficially sounds like what we expect to hear.
      There was one track that was really irritating in mp3 and the cd version was ok.
      Now in my own system, with great familiarity and good resolve, it is obvious when changes or tweaks either help, harm, or are just subtly different.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's a great example of how we sometimes need to learn what to listen for / how to hear the differences. Thanks for sharing!

  • @jeffhampton6972
    @jeffhampton6972 1 year ago +3

    I really appreciate you sharing your experiences with this! It helps the community out for sure.

  • @false-set
    @false-set 1 year ago

    You can hear a difference with the same setup and the same song depending on a ton of factors; if you focus on a different instrument it'll sound different... how can you measure that?

    • @PassionforSound
      @PassionforSound  1 year ago +2

      Absolutely. It's probably impossible to properly isolate all of the variables beyond the simple performance of two devices. And that's not mentioning the fact that the performance of two devices might differ in only one area so they could be tonally identical, spatially identical, the same on detail/resolution and then just have a slight difference in the weight through the midrange and bass due to a fractionally longer decay on these frequencies (i.e. not an altered frequency response). If the listener is focussed on any of the other factors, they'll miss the difference that is present.

  • @marklobban5354
    @marklobban5354 1 year ago +1

    Thanks for the glowing review of the Kudos Titan 606's - I have owned a pair for 2 years and they are wonderful 👍

  • @syanhc
    @syanhc 1 year ago +1

    I think everyone has different listening abilities and preferences too. So what sounds “right” to one person….

    • @PassionforSound
      @PassionforSound  1 year ago +1

      We weren't actually asking what was right, just whether they heard a difference. However, you're right that everyone has different levels of listening ability

  • @thatchinaboi1
    @thatchinaboi1 1 year ago

    How was the blind test conducted? Were you blindfolded? Was there any indication besides the sound itself that anything was changed?

  • @NeilBlanchard
    @NeilBlanchard 1 year ago

    Thanks for doing this - all types of testing have their issues, that is for certain. The different seating positions have different bass as well - the distances from each listener's ears to ALL six of the room boundaries and to the woofers and ports(?) of the speakers - mean that each position will have different reinforcement/reduction of various bass frequencies.
    And listening to short segments of music means you lose the overall "context" of the rest of the recording. The ebb and flow of the dynamics and harmonic changes through a recording are possibly where a particular component will have an effect - but if you never hear a recording all the way through, you will more likely miss hearing them.
    Were you all familiar with the music? I have found that using music that I am *very* familiar with, on my reference system - then sometimes changes from a new component is fairly obvious.
    Same goes for the system - if you have a straight listening session of familiar music before making any (possible) components swaps, lets people get a baseline. If you just dive into an ABX test in a system that no one has heard before - you are going to pick up on things as you get farther into testing, that you might have missed earlier on simply because you are getting used to the system, and learning what it is capable of.

  • @botwally2954
    @botwally2954 1 year ago +1

    Proper blind testing is indeed very hard to do, but most of the time the difference is just too small. Yes, your blind test was flawed, but if the difference was big enough, the time gap between switching gear shouldn't matter. For example, I can clearly differentiate a HD600 from a HD800 in a blind test, 20/20 trials. I can also clearly differentiate the 2 headphones even when there are many, many minutes of time gap. So your blind test cannot reach a conclusion on whether cables make a difference because the test was flawed. However, it does tell us that even if there is a difference, it will be tiny.

    • @PassionforSound
      @PassionforSound  1 year ago

      Great point. In other words, the measurement sensitivity was insufficient to account for the differences between the samples.

    • @botwally2954
      @botwally2954 1 year ago

      @@PassionforSound That's why I feel a bit cringe when people say things like quieter background, deeper bass, wider sound stage after switching cable. I'm not saying cable makes no sonic difference, but the difference is just too small if there is any. These fancy words just don't make sense when the difference is barely detectable, especially when these cables cost hundreds and thousands of dollars.

    • @PassionforSound
      @PassionforSound  1 year ago

      Depending on the cables (what you're coming from and to), the difference can be significant enough to make it worthwhile, but it's rarely (if ever) as big a shift as a component change so the hyperbole from some people is a bit misleading I think

  • @poochymama2878
    @poochymama2878 7 months ago

    You should do the next blind test with speakers or headphones. I've done enough of these to know that the differences between electronics (DACs, cables, and even amps) are usually far too small for the human ear to hear under blind conditions. You have to do those tests sighted in order for people to hear differences that small. I think people often forget that we hear with our brains, and not our ears; ears are merely one component of hearing, as are our memories, emotions, and expectations. In the case of DACs, almost all of them are within 0.001 dB of each other at all frequencies, whereas our ears are only really capable of hearing differences of 0.5 dB or so. Your ears are simply not capable of picking up differences that small, so in order to hear differences, you need to let the other inputs to sound experience (memories, emotions, expectations, etc.) contribute to the overall sound that you hear. This requires sighted listening.

    • @DrTune
      @DrTune 14 days ago

      Ok great, sounds like you're happy to buy a $20 USB dac put inside a fancy metal box for $1k. Which of course, is.. what we see.

    • @PassionforSound
      @PassionforSound  14 days ago

      Actually there's a lot more to it than either of these comments are picking up on. Stay tuned...

  • @sergiymarchenko1002
    @sergiymarchenko1002 1 year ago +5

    All I can say is this hobby is full of BS. So...nobody can be trusted except your own ears...

    • @IHearEverythingDude
      @IHearEverythingDude 1 year ago

      The thing is you cannot even trust your own ears. As what you hear is strongly dependent on your brain biases etc...
      That's why blind testing is vital as you only judge SQ not billions of other factors.

  • @mmlr312
    @mmlr312 1 year ago +1

    Excellent video and thought-provoking results- thank you for doing this Lachlan!

  • @thierrywitzig4886
    @thierrywitzig4886 1 year ago +1

    Did you tell the participants there would be a control, i.e. that there could be no difference between two runs? I didn't hear you say it in the video, so sorry if I missed it, but it is important because it changes how people will react.
    If you did not tell them there would be a control, you had a bias, because people will always expect there to be a difference. Whereas if you told them, no such bias would have been added.
    Just to illustrate how powerful this effect is, I will give an example I know well: painkillers. If someone expects to receive a powerful painkiller, yet only receives water (an injection of NaCl 0.9%), he will feel 50-80% of the effect of an actual painkiller like morphine!
    Otherwise your experiment is very interesting, though as you said it requires much work before finding anything significant. More runs and more participants will help, as will a simpler setup (e.g., only two setups, A and B, and doing something like AABABABB etc.)
    Keep up the good work, doing science is hard!

    • @PassionforSound
      @PassionforSound  1 year ago

      They were not told if there was a control. If I had told them there was a control, it would bring its own bias issues. There's no way to do this without causing challenges one way or the other. ☹️

    • @thierrywitzig4886
      @thierrywitzig4886 1 year ago

      @@PassionforSound Well yes, working with humans induces bias no matter what we tell them. I'm just saying not telling them is worse. What worse bias would come from telling them?

    • @PassionforSound
      @PassionforSound  1 year ago

      I'm not saying it would be worse, just that either direction introduces bias so I kept it all completely blind.

    • @thierrywitzig4886
      @thierrywitzig4886 1 year ago

      @@PassionforSound Well I am saying it would be worse. Think about it, it's obviously OK if you disagree, but just think about it.

    • @PassionforSound
      @PassionforSound  1 year ago

      I have thought about it and I disagree, but I also have no proof either way. 🙂

  • @danisold
    @danisold 1 year ago

    Two WiiM Minis and two Schiit Modi DACs... unless you're testing DACs... but you can stream the same song to both WiiMs

    • @PassionforSound
      @PassionforSound  1 year ago +1

      The issue wasn't getting the music to the devices, it was switching between the products tested. That setup doesn't help with swapping in/out power conditioners, upsampling, cables, etc.

  • @blejzerosamigos6115
    @blejzerosamigos6115 1 year ago +1

    True to that, thanks for the hard work everybody.

  • @mikegoddard7354
    @mikegoddard7354 1 year ago +1

    Having them leave the room is an issue in this entire test, realistically. At AXPONA, Kimber Kable had a cable test room where switching was done without telling participants what was going on. I literally spent 30 minutes in that room and noticed no difference between a cable which was 1800 vs 300. *After watching, you did mention my first point. I am just saying what they did at the Harman labs seems very much legitimate and scientific.
    There's too many issues. All that matters is that you are happy.
    My only other point is, we as audio enthusiasts are seeking high fidelity audio reproduction. You cannot get that with products that are incompetently designed and implemented, so objectivity does have merit. The fact is people will easily find mediocre products as satisfactory, which is fine. The issue comes with how much that product costs.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's interesting about the test at AXPONA. I think there are so many variables when it comes to the perception of audio that attempts to create perfect tests based on perception are maybe impossible.

    • @mikegoddard7354
      @mikegoddard7354 1 year ago +1

      @@PassionforSound Indeed, and even they said the speakers which were provided to them were all tested by measurement to have the least difference in frequency. Clearly all 4 speakers were the same, but you know there are sometimes fluctuations in frequency, and these were tight tolerances of less than 0.5 dB, which is not distinguishable.
      I really, really tried to hear a difference. I even closed my eyes for 10 mins to heighten my senses, and I returned to the room for another visit. They would just tell you it was switched, but not which one it was.
      All the people in the room were "easily" picking up which one sounded better, with no definitive answer being provided as to which cable was being played - which is funny because they possibly could have enjoyed the cheaper cable more, frankly.
      But the point, from Kimber's standpoint, was that cables matter and that we don't even have to tell you which cable it is: once we switch to the 1800 dollar cable you will be able to tell right away - simply that cables do matter. And I was not able to pick any difference. Absolutely nothing.
      Lastly, you would think I have bias on my part because I have great respect and admiration for Kimber, as they do have very detailed information, including objective performance, on all their products - so they are using both an objective and subjective approach to design.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      There are so many variables at play here that it makes it very difficult to know what's what. I've recently picked up some Kimber XLRs (not for review) and they're brilliant so I do believe in the quality and performance of their cables. I just think it's very hard to try to pick differences in things like cables in that type of setup

  • @Rockapotamus91
    @Rockapotamus91 1 year ago

    Some people really preferred topping e50 over chord TT2 eh, that's good value for money.

  • @brothatwasepic
    @brothatwasepic 1 year ago +3

    Lachlan you are the best bro ❤. Have a great one G and K and family from Vancouver Canada

  • @rolfathan
    @rolfathan 1 year ago +1

    I have an extension for seeing thumbs down still. GEEZ. What is wrong here? This is a video explaining a process that is going to be improved. Why's there so much dislike?

    • @PassionforSound
      @PassionforSound  1 year ago

      Sadly, there is a faction in our community (where there need not be any factions) who think anything related to hearing differences between audio gear is somehow fraudulent and misleading. That will be the reason.

  • @Coneman3
    @Coneman3 8 months ago +1

    Great in depth analysis.

    • @PassionforSound
      @PassionforSound  8 months ago

      I'm glad you liked it! 🙂

    • @Coneman3
      @Coneman3 8 months ago +1

      I think the truth often lies between 2 extremes in cases like this. So cables do make a difference, but it’s small. In some setups and with some cables, the difference may be negligible. In others, quite noticeable with careful listening, but still relatively subtle. Audiophile judgements are often subtle, which is why audio shows are poor for making quality assessments. They are more a social event with opportunities to see different gear and maybe buy an accessory or 2.

    • @PassionforSound
      @PassionforSound  8 months ago +1

      I definitely agree. Any statements of cables being transformative are generally exaggerated, or the starting cable is very poor.

  • @nicktan4530
    @nicktan4530 1 year ago +1

    That's why we call it the placebo effect. Too many factors are taken into consideration, and hence the data from each respondent is somewhat useless. Methodology and testing parameters are important. I think headphones would allow a more accurate analysis. It's difficult to do an A/B test.

    • @PassionforSound
      @PassionforSound  1 year ago

      The influence of expectation bias on the control doesn't automatically mean that people hearing differences between products are making it up. We need to be careful over-interpreting or extrapolating the data too far.

    • @nicktan4530
      @nicktan4530 1 year ago +1

      @@PassionforSound Hence placebo. Too many factors to take into consideration

    • @nicktan4530
      @nicktan4530 1 year ago

      @@PassionforSound I have a question. How do I A/B amplifiers? For headphones, that would mean either plugging into Amp A and Amp B simultaneously, which results in time loss, or switching the same headphone between Amp A and Amp B. An A/B switch is only applicable to DACs. My testing in the past was definitely flawed.

    • @PassionforSound
      @PassionforSound  1 year ago

      @nicktan4530 that's not placebo. Placebo is experiencing the specific effect of something when it hasn't been administered. Hearing or not hearing differences that are/are not there is more about expectation bias, confirmation bias, inability to discern certain cues, or just simply the complexity of audio containing so many different variables when trying to analyse it.

    • @nicktan4530
      @nicktan4530 1 year ago +2

      @@PassionforSound That is pretty much the placebo effect as it relates to audio. Hearing or not hearing differences that are/are not there - expectation bias, confirmation bias, inability to discern certain cues - are all part of the placebo effect. You think you may have heard something or a difference, but there is no difference in reality. The same could be said the other way around as well.

  • @stoicar
    @stoicar 1 year ago +1

    Wonderful setup

  • @wavetheorysound
    @wavetheorysound 1 year ago +2

    Very interesting, Lachlan. Thanks for putting in this work and sharing it with us. I think it's also wise for our audiophile community (and all of humanity, really) to remember that there is no such thing as a perfect test. Every experiment is going to have limitations. And different questions require different kinds of tests/experiments. What we're really after is the preponderance of evidence. To answer questions like "does audio thing A sound different than audio thing B?" requires lots of tests both in quantity and in type. If the majority of tests all start pointing in the same, or at least similar, direction then we have something that's getting closer to the truth of the matter. Questions like "what are the sonic differences b/w audio things A & B?" are different in nature and thus require at least some different kinds of tests (although there can and will be some procedural overlap). But, the need for a preponderance of evidence still exists. We also need reproducible results for all question types, which goes back to the building of a preponderance of evidence. I think Lachlan did great work here by putting forth certain kinds of testing methods and doing a pretty thorough job explaining the strengths and weaknesses of those methods. Looking forward to more!

    • @PassionforSound
      @PassionforSound  1 year ago

      Thanks Brian. I completely agree with all that you've said. Hopefully I can find some different approaches in the future...

  • @chuckmaddison2924
    @chuckmaddison2924 6 months ago

    Too much time between tests. Also, you need at least 3 samples; with 2, it's only 50/50. You can build a switch box for the cables with make-before-break switching. That way the swap is instant and it's easier to detect any change.

    • @PassionforSound
      @PassionforSound  6 months ago

      Agree. It just requires extra gear that I don't currently have and can't currently afford.
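The 50/50 point in the thread above can be made concrete with a little binomial arithmetic: with two choices per trial, a guesser is right half the time, so a blind comparison only becomes meaningful over many repeated trials. Here is a minimal sketch; the 12-of-16 criterion shown is a commonly cited roughly-5% significance threshold for ABX sessions, not something from this particular test:

```python
from math import comb

def p_by_chance(correct: int, trials: int, p: float = 0.5) -> float:
    """Probability of scoring at least `correct` hits out of `trials`
    by pure guessing (one-sided binomial tail)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# One A/B trial: a guesser is right half the time, so a single trial proves nothing.
print(p_by_chance(1, 1))              # 0.5

# Even 4 of 5 correct happens by chance almost 1 time in 5.
print(round(p_by_chance(4, 5), 3))    # 0.188

# 12 of 16 correct is a commonly used ~5% threshold for an ABX session.
print(round(p_by_chance(12, 16), 3))  # 0.038
```

In other words, a handful of correct identifications per listener is still well within guessing range; it takes a long run of trials before a "heard a difference" result rises above the 50/50 floor.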

  • @djhmax09
    @djhmax09 1 year ago +4

    I don't understand why most people default to blind testing. That's not how you listen to music in reality. I'm sure it can be useful in certain areas but these things take time to grasp. It's not something that suddenly comes but gradually develops day by day.

    • @PassionforSound
      @PassionforSound  1 year ago +1

      The intent is to remove the potential biases that skeptics claim are the only reasons people prefer certain products (e.g. knowing that item B costs more). This test proved that no one approach solves all issues - you just trade one set of issues for another. Next time around, we'll do something else and see if it can work, but I'm also thinking of ways to test gear in a group and in a sighted setup while also challenging expectations and assumptions in other ways...

  • @dahaizang
    @dahaizang 5 months ago

    Have you considered blindfolding people so they don't need to leave the room or get influenced by sight? Also it would be interesting to get real audiophiles to participate.

    • @PassionforSound
      @PassionforSound  5 months ago

      Why do you say these aren't "real audiophiles"?

    • @dahaizang
      @dahaizang 5 months ago

      @@PassionforSound Sorry I typed the questions in haste. What I should have said was, why don't we use those "real" audiophiles who claim to hear differences in power cables in the blind tests? The results would be very "revealing".

  • @AzraelAlpha
    @AzraelAlpha 9 days ago

    The amount of mental gymnastics that most people in this "community" go through to justify their ridiculously overpriced purchases is astounding.

  • @BillyKueekSG
    @BillyKueekSG 1 year ago +2

    Pepsi vs Coca Cola?

  • @Elegiac7
    @Elegiac7 1 year ago

    Most gear does sound so similar as to be interchangeable. Some doesn't. I have two chains that I can tell apart blind. Building those differences is something you've gotta work at. I'm pretty sure that nobody could mistake the smsl sh-6 for the topping a30 pro, and vice versa. And they provide different power, and perform different duties. Tubes are also a solid way to build difference.
    Anyway, that test does sound messy. Results seem about right though :) Still, a calm and uncluttered mind is best. I wouldn't bother seriously trying to blind test in a room full of people. All you'd find out is what you have: how little having different components can matter.
    I have other stuff that I can't tell apart blind. So much is in the mind.

    • @PassionforSound
      @PassionforSound  1 year ago

      I think this is true when you have devices built on similar foundations (e.g. similar op-amps, DAC chips, etc.) Where it can get interesting is when there are other factors at play like different decoding approaches (Schiit Multibit vs Delta Sigma), different interconnects, etc.

  • @carminedesanto6746
    @carminedesanto6746 1 year ago +2

    Fantastic video, thank you for doing this!
    Get 10 audiophiles in a room and you’ll get 100 different answers to your questions 🤣
    Easiest question to ask is what A/B or C system did you enjoy more…
    The hardest thing will be to define the particular aspects of what made it more enjoyable. Regardless of how experienced a listener you are, traits of lesser or better playback quality should be evident. If they're too close to call, then the next question is: at what level of price/performance do night and day differences become obvious? That is the mystery that both drives and ruins this hobby of ours.
    Take care ☕️🔥🍕🍩🥓👍

    • @PassionforSound
      @PassionforSound  1 year ago

      I'm glad you liked the video. Trying to balance the collection of evidence and the collection of opinions is a tricky endeavour!

  • @tavomcdouglas
    @tavomcdouglas 1 year ago +1

    Don't even get me started on what I've witnessed at blind wine, beer and spirit tastings!

  • @peytonsnead3114
    @peytonsnead3114 1 year ago +3

    Findings are findings. You've helped define the complexity of blind testing.

    • @PassionforSound
      @PassionforSound  1 year ago

      Thanks Peyton. That's how I feel about it too. The data is useful in its uselessness 🙂

    • @peytonsnead3114
      @peytonsnead3114 1 year ago +1

      @@PassionforSound well now you have a new approach. Wondering how effective blind testing can be when comparing "last mile" changes such as different cables.
      Reminds me of going to the optometrist and trying to discern the final differences. Going back and forth, not sure if there's any change at all.

    • @PassionforSound
      @PassionforSound  1 year ago

      Absolutely!

  • @stoicar
    @stoicar 1 year ago

    Have one room just for listening with the speakers, and put the system in the other room, with two holes in the wall for the speaker cables.

    • @PassionforSound
      @PassionforSound  1 year ago

      That would require significant budget and space that I don't have

  • @AudioThings
    @AudioThings 9 months ago +1

    Blind tests only give out corrupted data, even more so than A/B tests. I don't think you can find the perfect standard test, but for sure blind testing will never be completely relevant.
    You need to know what is playing, because most of the time the "sound map" is built over time, and blind tests interrupt that basic human process, which isn't optional.
    And of course, personal preference comes into play. No one can truly deny the performance levels out there, but you can still prefer something over something else.
    So maybe try a combination: never lie about a change, and describe what you are testing in a generic way (product 1, 2, 3, etc.). Also try to keep the same sound signature and have only the level of performance differ. But this can also bite you in the back, because many times people change their sound signature preference as the performance level goes up, because of the higher and higher resolution that most people are sensitive to.
    I'm sure you tried a blind test because you were sick of people saying it's all fake; I'm sick of them too. Most of the time, these are people with little money to spare (or just misers) who have opinions on products they've never listened to, at levels they can't afford. The problem is that they are everywhere and very loud. Most of them think SNR is enough to judge the sound, and while that may be true for noise, it says nothing about the actual sound. Of course, there is great stuff under $2000 or even $1000 these days (which they seem to love very much because most can afford it), but what they don't seem to figure out is that that stuff is just yesterday's "10K snake oil". Today's $500 DAC is the ~$5000 DAC of 20 years ago, when they yelled the same old story, "you can't tell the difference, it's all snake oil" - okay, then why did you upgrade?

    • @PassionforSound
      @PassionforSound  8 months ago

      I completely agree with your points about the issues with blind tests and you're correct that I did this to try and provide a response to those always calling for blind testing. The reality is that both sighted and blind tests have problems and neither set of problems can be completely overcome.
      You make a great point about preferences changing as device quality improves. This hobby ultimately has way too many variables to simplify into any single, objectively quantifiable test IMO.

    • @AudioThings
      @AudioThings 8 months ago +1

      ​@@PassionforSound I truly believe also what I've said about the budget part.
      It's easy to say "do blind tests" and "you can measure all of it", then spend a few hundred bucks, call it a day, and feel good about yourself.
      It's harder to properly match components and also be able to spend the required amount.
      Really, it's just a budget thing. Don't worry about blind tests; do your part as a reviewer, which is very useful, especially when comparing products (subjective), and whoever does measurements (good for proving/disproving low distortion) plays their part. But the only true test remains personally listening to that piece of tech, in a good enough system to show what the tech can do.

  • @dasninjastix
    @dasninjastix 1 year ago +1

    Even if your setup and tests were perfect, I still don't know how applicable the data would be. If it's to show that there are or can be differences between products, okay; I definitely see how having a majority of folks agree they can hear a difference might make people a little less dismissive of the value of certain gear. But it's still going to come down to preference at the end of the day. And there's unfortunately not much you can do to appease folks who can't reconcile the fact that other people hear and enjoy audio differently. Even if your headphones test goes exactly as you intend, you'll have a contingent of folks who won't accept your results solely because of the headphones selected for the test. Not that it would be a reason not to test; I just don't know what this community is going to do with the information, you know? Cool video, interesting situation, hopefully the next round goes smoother.

    • @shipsahoy1793
      @shipsahoy1793 1 year ago

      It's worse than that. I did a lot of headphone dabbling, and it turns out that the performance of even the best headphones varies drastically depending on the headphone amplifier used to power them. It's a very dynamic marriage between the two, and irrespective of the listener, the verdict on a certain pair of headphones is going to be predicated solely on the one headphone amplifier used to evaluate them. Unless care is taken to use several headphone amplifiers with every pair of headphones that gets checked, it becomes a very daunting task, to be honest. Even if one uses an amp that drives any pair of headphones at any impedance well, the prospective buyer of those headphones may have an amp preference that doesn't achieve the same results with them.
      It's a much more obvious phenomenon than with speakers, because room effects are inconsequential with headphones (except for ambient noise) and because almost all hi-fi speakers and amps are 4-8 ohms.

    • @PassionforSound
      @PassionforSound  1 year ago

      Yes, this was really just intended to isolate whether the various variables COULD influence the sound. From a pure data point of view, I wasn't really looking to define preferences because it's all personal and preferential. Geoff and I were actually chatting afterwards about the fact that the setup we were using was a bit brighter than either of us prefer (likely due to the silver interconnects and speaker cables in combination with the speakers used). That means that those who preferred the Topping E50 (just for example - it's an excellent DAC) might change that preference on another setup.

  • @clg_pro2009
    @clg_pro2009 3 months ago

    I think the takeaway is it's clearly not a big difference LOL. If you spend more money, it sounds better, right? LOL. That's how we've been conditioned to think. Whether people think it's cool, or whether it costs a lot of money, is how we tell if it's good. If we can't see it, we can't accurately judge it, because we can't observe how cool or expensive it is, which is how we tell if it's good or not. LOL

  • @scslite5206
    @scslite5206 1 year ago

    The tests failed to do the most important step. Provide a box of QTips to the participants prior.

  • @r423fplip
    @r423fplip 9 months ago

    How about getting someone to do it who actually knows what they are doing.

  • @BuzzardSalve
    @BuzzardSalve 5 months ago

    I don't see what the problem is. If people can't objectively tell the difference between two DACs in a blind test, then save $$$$ and buy the cheaper DAC. End of story!! Why does the more expensive item always have to be better??

    • @PassionforSound
      @PassionforSound  5 months ago +1

      It doesn't have to be that the more expensive is better. In fact, I'd choose the Geshelli J2S or Schiit Bifrost 2 DACs over many higher priced alternatives.

  • @Martijn1234
    @Martijn1234 1 year ago +2

    Pretty funny actually but as others have said it's that expectation bias playing its role. Not a disaster I don't think by any means 😮

    • @PassionforSound
      @PassionforSound  1 year ago +1

      The control in itself was very helpful data, but it unfortunately makes all the other data meaningless.

  • @aceofspades6667
    @aceofspades6667 1 year ago +2

    I love listening tests but I need to use my eyes and I need to have extended listening opportunities with my own music and nobody watching me. If somebody is hovering over me I feel like I’m being watched

    • @aceofspades6667
      @aceofspades6667 1 year ago +2

      For example, I can demo 2 similarly priced sets, 1 from Bowers and Wilkins and 1 from Dynaudio. In short intervals the B&W will have a more "live" or "showroom" type sound and the Dynaudios will be far less impressive. But if you listen for 15 minutes, the B&Ws become grating due to their treble presentation and the Dynaudios become far more enjoyable and natural due to the quality of their tweeter. These outcomes change based upon my listening length and not due to a change in preference or any other psychological effect.

    • @PassionforSound
      @PassionforSound  1 year ago

      I think that's really the ultimate way to test audio. It makes collecting data VERY time consuming, but far more reliable and valid, I think.

  • @Elbenito84
    @Elbenito84 1 year ago +2

    So we’ve learned just enjoy your music, no?

    • @PassionforSound
      @PassionforSound  1 year ago

      I think that's probably the best conclusion we should draw. Thank you! 🙂🙂

  • @StephenWorth
    @StephenWorth 1 year ago

    I'm glad medicine goes with blind testing and doesn't throw up their hands and cry foul whenever the results aren't what they expect. If that was the case, we'd all be dead by now!

    • @PassionforSound
      @PassionforSound  1 year ago

      I do hope that they also continue to identify tests that are producing unreliable data and design new tests though...

    • @StephenWorth
      @StephenWorth 1 year ago

      @@PassionforSound The test is rarely the problem. It isn’t hard to apply controls to eliminate bias and perceptual error. The problem is when they allow bias to color their interpretation of the results. That’s what happened here.

    • @PassionforSound
      @PassionforSound  1 year ago

      Medical studies and audiophile tests are very different, Stephen. In a medical study, there is a clear, measurable variable that can often be determined using a blood test or similar objective measurement, thus creating reliable data. With the perception of music, there are SO many variables all happening at once, including but probably not limited to: choice of music, change in timbre, change in soundstage width/depth, change in image focus, change in sense of transient attack, position in the room (if using speakers), mood and mental state during testing, etc. All of this makes perception-based listening tests very difficult to construct for a reliable data output. That's the conclusion drawn here.

    • @StephenWorth
      @StephenWorth 1 year ago

      @@PassionforSound DBX is a simple, effective tool for reducing the possibility of expectation bias. It isn’t intended to determine sound quality, it is a test to determine if two samples sound different or identical. There’s no emotions involved. All it tests for is the thresholds of perception. We can measure a lot more than we can hear. What you can’t hear doesn’t matter. Audiophiles waste untold hours and amounts of money chasing down sound that their ears can’t hear. A DBX is easy to do and will definitively tell you if you need to worry about a particular bit of tech. More audiophiles should get a simple switch box and the software to do null tests. It would save them thousands of dollars.

    • @StephenWorth
      @StephenWorth 1 year ago

      I do a basic blind listening test with every piece of equipment I buy to guarantee that it is audibly transparent. I’ve learned that even humble electronics produce sound better than my ears can hear. Transducers and signal processing are what make great sound, not expensive wires and DACs.
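The null test mentioned in this thread can be sketched in a few lines: subtract one capture from the other and measure how far down the residual sits. This is an illustrative sketch only (the tone and noise values are made up for the example), and it assumes the two signals are already sample-aligned and level-matched, which is the hard part in practice:

```python
import math
import random

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def null_test_db(a, b):
    """Level of the difference signal (a - b) relative to `a`, in dB.
    Assumes both captures are already time- and gain-aligned.
    A deep null (strongly negative) means the signals are near-identical."""
    residual = [x - y for x, y in zip(a, b)]
    return 20 * math.log10(rms(residual) / rms(a))

# Hypothetical example: a 1 kHz tone vs. the same tone with low-level noise added.
rng = random.Random(0)
sr = 48000
tone = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(sr)]
noisy = [s + 0.001 * rng.gauss(0, 1) for s in tone]

print(round(null_test_db(tone, noisy)))  # roughly -57: a deep null
```

The appeal of this approach is that it sidesteps perception entirely: if the null is far below the noise floor of the playback chain, no listening test is needed to conclude the two signals are effectively the same.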

  • @DrTune
    @DrTune 14 days ago

    Thanks for your honesty but ... the results are the results, embarrassing as they may be. What would have helped nail it would have been to include a couple of _objectively terrible_ speakers and audio sources; e.g. some dollar-store computer speakers, a 64kbps MP3, or a cassette tape; almost _everyone_ would have correctly identified these as being poor quality. This would have been really telling.

    • @PassionforSound
      @PassionforSound  14 days ago

      There's nothing embarrassing about them, they're just unreliable results due to a poorly designed test.
      There's also some excellent research I'll share soon that better explains why blind tests so often fail.

  • @mavfan1
    @mavfan1 1 year ago

    Double blind is the only way to go especially for fraudulent products like cables.

  • @paulosilva8200
    @paulosilva8200 1 year ago +1

    Well spoken ...

  • @x-techgaming
    @x-techgaming 1 year ago

    I don't think an RCA/XLR A/B toggle switch costs millions of dollars..? I can afford them, with my 287 sub, $0 revenue channel.

    • @PassionforSound
      @PassionforSound  1 year ago

      Finding an acoustically transparent one is the challenge.

  • @rodm1949
    @rodm1949 1 year ago +1

    One of my favorite sayings I saw in a research lab "If we knew what we are doing it wouldn't be called research".

    • @PassionforSound
      @PassionforSound  1 year ago +1

      That's so true! It's all about learning through failures until you succeed, isn't it?

  • @nonchalantd
    @nonchalantd 1 year ago +1

    interesting

  • @shipsahoy1793
    @shipsahoy1793 1 year ago

    Headphones won’t help.

    • @PassionforSound
      @PassionforSound  1 year ago

      It will remove one more variable. It's also not the only thing I'd change (as discussed)

    • @shipsahoy1793
      @shipsahoy1793 1 year ago +1

      @@PassionforSound Well, you'll never get there, but if you do convince at least some of these people that blind testing is also fallible, then I'm all for you proving them wrong. Give it a go, ol' chap 👍🥂😉
      I stand by the conviction that a theoretically perfect blind test is not possible. You’re invariably dealing with the fallible human mind in all cases.😵‍💫

    • @PassionforSound
      @PassionforSound  1 year ago

      I totally agree. We can never isolate enough variables when we're testing based on people's perception. In the end, a focus on enjoying music is the best output I think.

    • @shipsahoy1793
      @shipsahoy1793 1 year ago +1

      @@PassionforSound 👍 Yes, that last sentence nailed it! I totally agree with your assessment without any reservation! And I used to be extremely picky, so I've come a long way over the years.. 😉🥂👨🏻