Can You Hear 1/10,000th of a Second? Updated
- Published: 11 Jul 2024
- Let's listen and test how accurately our hearing resolves time differentials between our left and right ears.
Can you hear a 1 ms delay? A .5 ms delay? A .1 ms delay? Or even better?
Grab your headphones and let's give it a listen!
If you like this and my other videos, please join this channel to get access to more videos, early access to videos, and my weekly Zoom chats:
/ @daverat
Also check out:
www.soundymcsoundface.com
www.ratsoundsales.com/
ratsound.com/daveswordpress/
www.ratsound.com/
www.soundtools.com
00:00 Introduction
00:15 Test Setup
01:09 1ms Test (1000 microseconds)
01:46 Verify Left vs Right
03:24 .5ms test (500 microseconds)
03:37 1/10,000th of a second test (.1 ms, 100 microseconds)
04:10 1/23,000th of a second (43 microseconds)
04:43 Sweep time test
05:34 Method to hear gear latency
06:45 Outro
Thank you, Dave! It tells us something about hearing in the broad sense, as well as the personal capabilities.
From 171 microseconds upwards, I can hear the apparent source shift. That is a result I am actually happy with.
I suffered hearing damage in this exact feature of hearing, for which we have nerves that are literally between our ears, running from each ear to the other. These nerves are delay lines. When sound comes in, it runs along these delays in our "wetware", and where the sounds line up tells the brain what the timing difference was.
This nerve took a beating from a confetti pipe popped at close range to my ears. I immediately felt disconnected from the people around me, because they lost their location in my perception. For about 4.5 years afterwards, my hearing was noisy as hell. High-frequency sounds, though I could hear them as well as ever, were mislocated and later, for some years, de-located. Think of a noise recording on a TDK D90 cassette, recorded with Dolby C but played back without it.
One day, the noise suddenly stopped, and slowly my location hearing started to come back.
I do notice, however, that when I accidentally drop something small, I can no longer point out exactly where it went; there is a 10 cm uncertainty (5 cm to either side) around the sound it makes. This corresponds nicely with the 171 microseconds I found here.
Oddly, I did not lose any responsiveness of my hearing. On the contrary: my hearing is normal from 100 Hz to 5000 Hz, and above that I'm more sensitive than the average person: +10 dB at 8 kHz! I'm 53 years old, and my upper limit has come down to about 16,000 Hz.
171 µs corresponds to a distance equal to the wavelength of 5.8 kHz, and I can hear tones about an octave above that. Oddly, the extra sensitivity I happen to have in this top octave does not help me locate any finer.
You mentioned you can hear 50 µs of delay, which corresponds to a distance equal to the wavelength of 20 kHz, and I'd guess, given your age, you won't be able to hear that as a tone anymore.
This proves, based on research on a population of n=2, consisting entirely of sound engineers named Dave, that these two senses, pitch and timing, are two distinct senses. One is about tones, the other is purely concerned with the timing with which sounds arrive at each ear.
With kind regards,
Dave, Amsterdam
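The "delay lines in our wetware" idea described above is essentially cross-correlation: slide one ear's signal against the other and see which lag lines them up. A minimal sketch, with the 171 µs figure from the comment and an invented Gaussian "click" as the test signal (the sample rate and signal shape are illustrative assumptions, not anything from the video):

```python
import math

def estimate_itd_us(true_itd_us, sr=192000, n=4096):
    """Estimate an interaural time difference by brute-force cross-correlation.

    A Gaussian 'click' arrives at the left ear and true_itd_us later at the
    right ear; we pick the candidate lag where the two signals line up best."""
    def click(t):
        return math.exp(-((t - 0.002) ** 2) / (2 * 0.0002 ** 2))
    left = [click(i / sr) for i in range(n)]
    right = [click(i / sr - true_itd_us * 1e-6) for i in range(n)]

    def corr(lag):
        return sum(left[i - lag] * right[i] for i in range(lag, n))

    best_lag = max(range(200), key=corr)
    return best_lag * 1e6 / sr  # lag converted to microseconds

# One sample at 192 kHz is ~5.2 µs, so the estimate lands on that grid
print(round(estimate_itd_us(171.0), 1))  # → 171.9
```

The brain does something like this continuously; the estimate's resolution here is one sample period, which is why the recovered value is 171.9 µs rather than exactly 171.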
Awesome deductions
Really happy your hearing is slowly improving! I damaged my right ear's hearing from DJing when younger. There's not much worse if you're a music lover with a passion for creating music as well!
One of the problems with this test is YouTube's joint stereo coding. When the delay is extremely small, it's going to assume the signal is mono, not stereo. Dave, if you're interested, my friend who invented parametric stereo coding for AAC (CT's HE-AACv2 / FhG's PS-AAC) would chat with you if you want to do a deep dive. He's the guy behind ToneBoosters plugins, good guy. Anyways... if you wanted to provide the test audio as lossless .WAV or .FLAC downloads, I'm sure a few of us would appreciate it. Thanks again for your efforts to educate about oddities and to debunk myths.
Very cool. I was concerned about that, so I actually uploaded this video, then downloaded it and did the comparison, as you see in the video. What I found was that the downloaded video maintains the time shifts well enough to sound the same as the non-uploaded version.
Could it be that even through the MP3 or AAC CODEC, you'd still be able to hear the relative differences in delay between channels, even if downstream rendering was absolute in the time domain?
You know, I don't know that much about the nuances of various codecs.
I just use the brute-force method of making sure it was audible and correct when I uploaded it, then downloading it and making sure the downloaded version maintained the most critical aspects and stayed time-correct enough to demonstrate the impact.
First I was like, nah, I'll never be able to hear that, and when you turned the knob I was like, oh my god turn it back, my brain is melting!
👍👍👍👍
Respect! Always insightful. Thank you Dave for taking the time to spread the knowledge.
🤙👍🤙
Dave, thank you very much! Very visual experiment. Great work 👍
🤙👍🤙
Great!! Thanks Dave! ❤❤
Super cool video, Dave! Great idea, it's important to know one's ears.
👍🤙👍
Always interesting Dave, thanks.
🤙👍🤙
Absolutely great demonstration! I've wondered so much about those audio scientists (not mentioning a certain forum) who claim phase is inaudible, when clearly and intuitively cancellations of certain frequencies should occur.
There are two different things.
One is whether we can hear phase by itself; the other is whether we can hear the phase interaction of two different signals, two similar signals, or two identical signals.
The answer to the first is that it is doubtful, or quite difficult, to hear phase in itself.
Phase in reference to another signal, and their interaction, is very easy to hear.
*Never* disappoints!
...and, sorry if I'm way late, but I've GOT to compliment the haircut!👌💯💯
Reminds me of micing guitar amps with several mics: the phase issues introduced by a little space between them, due to microscopic delays (speed of sound). Even after a couple of samples, a tonal change becomes audible. Came for the Sonicare post, stayed for the audio tests.
Super cool and welcome!
There are several things going on. If you're listening in true stereo with headphones, there should be no phasing; you should only hear the sound move to one side as delay is added.
If the tone changes and it sounds phasey, then somewhere in your system the two signals are most likely being combined, either electronically or, if you're not using headphones, acoustically.
Hearing both of those things is valid, though it's different. Keeping things completely separate in stereo gives the true effect of hearing it move to one side.
@@DaveRat ruclips.net/video/c52AaUmEz5c/видео.html
DAVE YOU ARE A MAD MAN AND THIS KNOWLEDGE AND THOUGHT PROVOCATION IS GREATLY APPRECIATED
Awesome!
Very revealing.
Thanks
👍🤙👍
Fuck yes, Dave rat. Truly incredible stuff.
Brilliant idea for testing.
👍🤙👍
For those curious what's happening here technically: the delay and subsequent summation of both channels cause comb filtering, which can easily be measured. For 0.1 ms, the comb filtering is strongest at the frequency that fits half of its wavelength in 0.1 ms, which is 5 kHz. If you want to assess shorter delays acoustically, white noise would be preferred over pink noise, since the filtering issues shift to higher frequencies, and white noise's HF emphasis relative to pink makes those issues easier to hear.
Cool demonstration, as always!
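The notch arithmetic in the comment above can be checked with a tiny model of the two summed channels; the 0.1 ms offset is the value from the video, everything else is standard comb-filter math:

```python
import math

def comb_gain(f_hz, delay_s):
    """Magnitude of s(t) + s(t - T) for a sine at f: |1 + e^(-j2πfT)| = 2|cos(πfT)|."""
    return abs(2 * math.cos(math.pi * f_hz * delay_s))

T = 0.1e-3  # 0.1 ms offset between the channels
print(comb_gain(5000, T) < 1e-12)  # deepest notch at 1/(2T) = 5 kHz → True
print(round(comb_gain(10000, T)))  # full reinforcement at 1/T = 10 kHz → 2
```

As the comment says, halving the delay doubles the first notch frequency, which is why shorter offsets need test signals with more HF energy to be audible.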
Ideally the listener would use headphones and not let the signals recombine. And avoid hearing the comb filtering and purely hear the difference in arrival times to the ears
Exactly. Phase issues are surprisingly easy to hear. If the L/R were both delayed by a uniform amount, it would still be coherent and everything would be cool.
Fantastic deterministic way to test processor delay !! Wow call me impressed !!
🤙👍🤙
Amazing result. This helps me understand why I vastly prefer analog desks to digital desks at FOH. My Yamaha mixer has a latency of 13 ms out of the box. I always thought it was the 48 kHz sample rate that I disliked. The cancellation in A vs. D must be monstrous with a live act performing. Thank you, Dave.
Keep in mind that latency only matters when there are multiple versions.
A recording has infinite latency until played back. When you speak, you hear your voice through your body and also a latent version of your voice travelling from your mouth to your ears, about 6 inches, so roughly .5 ms of latency on your own voice.
That said, latency of digital vs the direct sound can be an issue.
This might be a good illustration, at least in concept, of how jitter and/or phase distortion can affect stereo imaging, and perhaps why, through stereo imaging, you can hear much lower levels of distortion than you otherwise might when listening to just a mono signal...
Oooo! I love this aspect of psychoacoustics! I got really interested in this when I started getting into Haas delay use in mixes and the resolution of stereo image perception in a listener. Like, how subtle could I get with it? And how micro adjustments of delay along with volume could engage both interaural timing, and interaural intensity differences to create a richer panning experience.
Again, when watching your videos I always love this kind of parallelism or something... Like, seeing how a single concept has multiple uses. Practical, objective application (finding latency times), and, subjective experiential application (making mixes sound weird and stuff lol).
So cool and thank you
@@DaveRat Heck yeah! it's the coolest! Thanks so much for your videos man, they're the absolute best!
We could hear the difference, but an important question is how fast each module in your audio chain is SAMPLING the audio: 44.1k? 96k? Etc. That would then be the shortest delay you could use to reproduce any difference between the two channels, right? So a delay of, say, 1 µs could never even be heard using a standard SR of 44.1k (about 22.7 µs per sample).
A further issue is the audio signal itself. Higher-frequency components would be more affected, and thus heard more, than lower-frequency audio. Right?
An interesting adjunct experiment would be to play a 1k sine + 10k sine (or thereabouts) and see what audible artifacts are presented.
Interesting experiment nonetheless.
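The per-sample figures in this comment check out; a quick sketch (the rates are the common ones mentioned above — worth noting as a caveat that DSPs can also realize sub-sample delays via interpolation, so the sample period is not a hard floor):

```python
def sample_period_us(rate_hz):
    """Duration of one sample in microseconds."""
    return 1e6 / rate_hz

print(round(sample_period_us(44100), 1))  # → 22.7
print(round(sample_period_us(96000), 1))  # → 10.4
```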
Sounds about right, and this also aligns with what I was able to hear. Once I got down to 44 microseconds, I was unable to hear any more differences.
Also, summing the delayed and non-delayed signals maximizes audibility as a tonal change. And again, below 44 microseconds I was unable to hear it.
But this is not scientific testing; it is more to spark interest and make people aware that slight offsets in distance may be more relevant than we realize.
@@DaveRat totally agreed. I’ve done a lot of DSP work over the years, and my experience is a single sample delay is audible in most cases. So that means a delay of about 20 to 30 µs is audible. Very cool experiment though!
Amazing
I fuckin love you Dave Rat
The honor is significant, thank you
Analogue human measurement way down under 1 ms -- strong! That was new for me and, I guess, for many other pros too :)
🤙👍🤙
When you swept the delay with it switched on, it was a perfect demonstration of a stereo flanging effect. I’d guess it would sound really cool on cymbals like on that Led Zeppelin song.
In stereo with headphones, it should not sound flanged if your system is properly set up.
If it sounds flanged, then you are listening in mono or your system is somehow combining left and right.
In a properly set-up and functioning stereo system, the time shift should move the sound image to one side.
@@DaveRat Good point - I was listening on an iPad, so each ear was getting a mix of left + right. Just goes to show how different listening on headphones is compared to speakers. If you were mixing and wanted a stereo flange effect that headphone listeners could hear, you would need to mix the direct and delayed sounds into both channels.
Awesome as always. I used my laptop and could definitely hear the left side being louder and a shift in tone as you said. If the signals are in phase and I turn my head sideways so that my left ear gets the sound before my right ear... how many milliseconds does that correspond to ;-)
A millisecond is a bit more than one foot.
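That rule of thumb converts directly, using the ~1125 ft/s speed-of-sound figure quoted elsewhere in this thread:

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate, at room temperature

def delay_to_distance_ft(delay_s):
    """Distance sound travels in the given time."""
    return SPEED_OF_SOUND_FT_PER_S * delay_s

print(round(delay_to_distance_ft(1e-3), 3))       # 1 ms   → 1.125 ft
print(round(delay_to_distance_ft(1e-4) * 12, 2))  # 0.1 ms → 1.35 inches
```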
That is so cool
🤙👍🤙
This is pretty neat. I'm betting something similar is happening in the fancy 7.1 headphones I have that only have 4 drivers. Using delay to "move" audio around me, when in reality it might just be onboard delay processing.
Agreed
Very cool! Slightly unrelated question - what is the internal A-to-D and D-to-A sample rate of the Klark Teknik DN7103 that you used in the demonstration? If not sure, what is the highest frequency it plays before roll-off? (I couldn't find anything online.) Thanks much!
Don't know the sample rate, but the units are rated 20 Hz to 20 kHz, .1% distortion.
This is pretty much how I do playback azimuth on tape decks. You can hear the phase cancellation more clearly with the mono button in, obvs, but even in stereo the image shifts long before there's noticeable attenuation from the azimuth error.
Wonderfulness
🤙👍👍
I love the bass check sweep and step at the end of your videos. Unfortunately my phones only dip audibly to about 18Hz. So if you are sweeping below 20Hz I probably won't hear it anyway.
Great test setup. Great video. Thanks.
🤙👍👍
I would love if you could ever add a digital readout of the frequency during that bass sweep to help determine where the roll-off happens. I enjoyed this video as it has relevance to my work.
I could do that. Wait. Oh, actually, try this vid: ruclips.net/video/vfOWfKX-cdk/видео.html
You look a bit like Tony Hawk with the new haircut 😎. Keep up the good videos😎
👍🤙👍
Curiously, the delay causes my surround AVR to generate a quite different tone in the surrounds. I suppose that is part of how the upmixing works.
Yeah, all kinds of weird things happen when computers try and create surround .
I think this is exactly what Dolby Pro Logic is doing, when routing to surround channel.
Dolby surround (sounds OK in stereo, but encodes surround information) uses phase differences to generate the rear signals.
That was in the days when L/R level difference was thought to be enough for stereo.
Our friend Dave here has demonstrated several times that level is less than half the story.
@1:42 - I was closing my eyes and really concentrating on the change, and then BAM!!!! The change was so obvious that I had to stop and make this comment. I use a simple Tripath Kinter amp on a pair of Bose 2.2 speakers (via a TASCAM audio interface to PC through USB), and the results were startling.
I easily could hear it.
👍👍👍
Awesome test. I have one question: how reliable is this delay device? Meaning, how do we know it really is only 1/10,000th indeed?
It can easily be seen on an oscilloscope, as .1 ms is actually a fairly long period for a 100 MHz scope, and those are not overly pricey.
It would not be that tough for anyone with a scope to confirm the info by running the video's audio, left and right, into two scope channels, inverting one channel and delay-offsetting the other for a null, then noting the offset time.
In some of my other videos on latency I show time measurements using a scope.
@@DaveRat Fair enough. I did not mean to doubt you; I just wanted to know what kind of deviations there can be between devices.
Interesting as always! It always puzzles me that for #panning the widely accepted standard is to just use level differences, while human perception is mostly based on time delays and differences in the frequency realm. How come even in top-tier consoles panning was never really engineered beyond the level-difference aspect?
Things get messy fast, and sound bad easily, when a sound gets combined with delayed versions of itself.
@@DaveRat Yes, of course. But I still think that using level differences only is kinda insulting to our aural perception abilities 🙂 and should be more sophisticated.
Yes
🤙👍🤙
It's not an "illusion" that the tone is changing. The tone is actually changing. That's what happens when you have two non-phase coherent sources combining. It's called "comb filtering", and you cannot hear the comb filtering if you solo only one source, nor can you hear a difference between the two sources if you solo them back-to-back, because their individual harmonic content isn't different, only the time they arrive at your ear differs. This is basic Physics, and not new information.
I'm building out an X32 system and I recently watched your video about AES50 latency. You said that it was approximately 0.1 ms and my head immediately calculated that as 10 kHz, and I thought "I definitely cannot hear the space between two cycles of a 10 kHz signal, so I do not believe this is going to be a perceptible latency!" Was your nose itching? Is that why you made this video? 😂
Obviously it's easy to hear the phase shift/stereo widening when you're delaying one side of a stereo signal, but I definitely don't think I'd be able to hear or notice this if it was not being played over the non-delayed signal, and I surely would not notice it in a performance setting!
Latency only matters when referencing against something with no latency or a different latency.
Recordings have up to infinite latency depending on when they are replayed, and that has nothing to do with the way they sound.
But when dealing with a direct sound and a latent sound, plus the natural latency caused by sound travelling distances through air, it is important to be aware of how the various latent signals combine.
Interesting
Initially, the full character of the delay is discernible... and then it progresses to less a lateral shift, more a simple coloration.
For me with headphones, I hear the mono image shift. At first a lot, then reducing.
Interesting, I feel smarter.
....I was expecting this to be the shortestest "Short" ever ; )
🤙👍🤙
I was unable to hear the difference below .15 ms with ATH-M50x headphones. At .15 ms and longer (>.15 ms) I could hear the difference.
Makes me curious of the implications for beat matching while DJing.
I don't remember if I commented this yesterday or not; I probably closed the tab by accident and didn't post.
For beat matching anything under 30 or 40 milliseconds is probably not really of significance.
It's quite possible that the reason you can't hear the shorter times is because of the gear or other factors.
I could tell a difference when switching immediately from A to B at 0.1ms, but I tried muting my headphones while you flipped the switch and - in isolation - I couldn't discern the difference. Out of context, the 0.1ms delayed version sounded like aligned dual-channel pink noise. I'd be curious how this changes with an impulse where you'd hear the "slapback"
Maybe I will dive into that in another vid
Multipath regeneration.
You can't hear the interval, but you can hear its effects.
But only if you've got stereo discrimination - all the signal information is in the difference between the two endpoints; take one away and the effect vanishes.
This is an interesting case of things that exist and affect our environment but on their own are indistinguishable from the background.
Almost unbelievable, but I could easily hear the difference even on my iPhone. As always, very cool and interesting demonstration. Thanks!
🤙👍🤙
Interesting. I should try a test like this to "time align" my coaxial drivers to see if my software got it right! I was prepared to say that I can't hear this small delay, but I was assuming the test would be something like the timing of a cymbal and a snare. I'm sure I wouldn't hear such a small delay in that scenario.
For time-aligning the coax by ear, use a tone at the crossover frequency, polarity-reversed, and match levels driving the cone, then the horn.
Then, with both on, time-align for a null (minimal output), then reverse the polarity back to normal. That will give max summation at crossover. Make sure the time is the lowest time you can null, or else you will be a wavelength off.
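Dave's null procedure can be simulated to see why it works, and why his "lowest time you can null" caveat matters. The tone frequency and sample rate below are arbitrary illustrative choices; the polarity-reversed horn is the setup he describes:

```python
import math

def summed_rms(horn_delay_s, f=1200.0, sr=192000, n=1920):
    """RMS of cone + polarity-reversed horn driven by the crossover tone.

    With the horn flipped, the sum nulls when the two arrivals are aligned -
    that's the point you hunt for by ear before flipping polarity back."""
    total = 0.0
    for i in range(n):
        t = i / sr
        cone = math.sin(2 * math.pi * f * t)
        horn = -math.sin(2 * math.pi * f * (t - horn_delay_s))
        total += (cone + horn) ** 2
    return math.sqrt(total / n)

print(summed_rms(0.0) < 1e-9)       # aligned → null → True
# The trap Dave warns about: a full wavelength late (1/1200 s here) also nulls
print(summed_rms(1 / 1200) < 1e-9)  # → True, the false null a cycle off
```

The false null at one full wavelength is indistinguishable by ear at a single frequency, which is exactly why you take the lowest delay that nulls.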
If it's a blip in, say, a 4/4 beat pattern, I can notice a delay down to about 5 ms, but I have to know there's something up with the timing, as I really have to focus to spot it. I'd have no chance any lower than that, though, and the average is about 20-30 ms for most people. It's cool how you can tell down to a 100x smaller delay when both sources are playing back at once from different speakers, though!
@@DaveRat Thanks! I forgot about the null test. I don't know why I didn't do that already!
@5:18 - These sweeps sound a bit like gentle panning to me on my end.
Yes, Haas panning.
ruclips.net/video/OS6CKnkEUlA/видео.html
Everything down to .5 ms, I could tell very easily what was going on. When it got down to .1 ms I could hear it, but I might not have noticed it was delayed had I not been paying attention to specifically that and sitting correctly in my listening position; I'd be able to tell the channels were not identical, though, and may have chalked it up to just a volume difference at a glance. When it got down to 43 µs, I couldn't really tell on my monitoring system.
I do have a 0.01 ms delay on one side to fix some weirdness with Windows WASAPI drivers, so maybe that's making it even harder to hear, although it shouldn't, and should just make it play back at exactly 43 µs apart instead of 34 µs, as it's the right that's delayed on my end too. I can tell the difference between my time correction being on and off on complex material, though, ever so slightly in the high end of the stereo field, just balancing it out a tiny bit when it's on vs off. But I have to really be in the perfect position for it to matter.
[EDIT] Now that I think about it, what I use adjusts both the volume AND the delay, so maybe it's just the 0.3 dB volume difference that I can hear.
I haven't tried on headphones yet without it tho, will try that next! :D
[EDIT] I can now hear the transition at 43 µs on headphones, but can't really tell there's anything up with the stereo field when you A/B them.
Cool stuff, and definitely experiment. Fun!
In one very literal sense, I'd have to say no. But if you sent me two channels with identical program and delayed one by .1 ms, yes I can. In fact, 25 µs is enough to clearly hear the notches and peaks resulting from that mono mix. Yes, one sample.
I had a very early Sony converter that aligned the channels at the analog output, but not at the digital output, where the R channel was 1 sample late. When I dubbed this ancient converter to Pro Tools using the Sony's digital out, the result was clearly audible. It was easy to fix, but pretty surprising.
I think most audio humans should be able to hear a .1 ms offset in one ear vs the other.
It equates to the sonic image moving about 1" off to the side.
@@DaveRat “Audio humans.” A close relative.
😎😎😎
Nice demo. Summer haircut there, Dave, to start. 75 µs is about 2.1k; 318 µs is about 500 Hz. So there you go: the shift at 400-plus is not that hard for me to notice.
Hmmm, would it not be dividing the speed of sound by the wavelength to get the frequency?
So with an 1125 ft/s speed of sound, a 75 microsecond period works out to about 13,300 Hz.
If our ability to hear time offsets is similar to our ability to hear frequencies, and if the offsets we hear are converted to wavelengths, then hearing a 50 microsecond offset would be the equivalent of hearing 20,000 Hz.
@@DaveRat What I was referring to are the time constants used by the RIAA curve filters, thus the difference in the frequencies in question. I do see your point about the time-delay difference. I was looking at it as the time constant in the circuit delay acting like the filtering effect of the RIAA curve, with the brain processing the delay as a shift in the point source of the sound. Thus the effect is much easier to detect than a 15k tone would lead one to expect for older people. Hope that makes it clearer how easy the difference you showed is for people to hear.
You know what I think is very fascinating? They say we can't hear phase. But if you wear headphones and flip the polarity of one side, the difference is very clear. So we do hear phase/polarity. It's weird how, even at very high frequencies, our brain realizes that the eardrums are moving in vs. out relative to one another.
Hmmm, we can't hear true polarity but we can hear the interaction of signals with differing phase and polarity.
@@DaveRat This is of course the difference in polarity being heard, and not the actual individual polarity "per ear" as it were. Flipping polarity on both sides sounds the same, as we know. The fact that we CAN hear the difference, without the two signals ever physically meeting in the air, is what's fascinating.
It's like Daniel Faraday on "Lost" saying, "The light is strange out here, isn't it? It's kinda, like, it doesn't scatter quite right." His eyes and brain could perceive it within minutes of landing on the Island, without needing any other instruments to measure or confirm what he already noticed. It's that way with sound, too. Our brain notices things that we aren't always aware of until someone like Dave points them out!
@@poldidak I love listening to how weird audio around us in everyday life really is. Forest reverbs are ridiculous, and the way sounds change as you walk away from them can be so strange.
I love the lost reference!
Let's hear it for the human auditory system!
,👍🤙👍
I could hear the change at ~90 microseconds, ±10. After skipping around in the video after watching it, I think I was able to hear the 43 µs, but I'm not sure whether that was real or only imagination (with eyes closed, of course).
👍🤙👍
Different tweeter hz to khz
So we are hearing the audio phasing?
Ideally you shouldn't be hearing phasing; you should only be hearing the time offset between left and right if you're wearing headphones.
If you are listening through stereo speakers, then you'll hear the phase interaction between the two sources, and if you're listening to a mono source, then yes, you would be hearing the phasing.
This is why "stereo" is only applicable to one person's relationship to the sound source (dual mono, or multipoint mono, should be a new format... then we can remix music ourselves... that will be... erm... interesting!)
Instead of pink or white noise, why not use a click?
Can do but with pink you can actually hear the center move in a precise way.
1:55, Dude, are you intentionally doing an impersonation of Garth from Wayne's World?
I can barely hear 100us, I definitely wouldn't notice if I didn't know to expect it.
Perhaps, 100 microseconds would be the sound of motion of a fly buzzing 1/2" side to side near your nose.
Not hearing the difference would mean the fly sounds stationary
@@DaveRat Perhaps it'd be more audible with different types of sounds? IIRC 1.5 to 2 kHz is the upper limit for discerning interaural time difference.
All I'm hearing is comb filtering of sorts. Since you have the same signal left and right, you have dual mono; when you offset one side, the "center" is smeared, since you're essentially inverting whatever frequency has half its wavelength equal to the offset (500 Hz at 1 ms, or 5 kHz at .1 ms) and offsetting the rest of the frequencies. So yes, the signal isn't "changing", but the output IS changing, since you're perfectly comb filtering some frequencies. It's also a terrible Haas effect at that quick an offset, lol.
You can watch this happen on a scope. If you take a 1000 Hz sine wave (one cycle per millisecond) and do the same test but pan them up the middle, you should get cancellation when you offset one by half a millisecond (half a cycle; a full millisecond lines the cycles back up). Same with 10k at .05 ms.
One way we found our insert latency offsets in the studio was by using a 1000 Hz tone and adjusting the offset until the two signals cancelled out, then backing off by half a cycle, and testing at other frequencies to make sure the offset was right (it should be a 6 dB increase if it's perfectly aligned at the center output). Another way was to record a single 1 kHz cycle and measure (in time) the distance between peaks in the DAW. Obviously you can't do that too easily in a live sound situation, but it works great in the studio, where you can record the round trip of your converter I/O.
The reason we found out our insert compensation was off: a very low frequency (like an 808-type kick drum) can be slightly pitch-shifted if there is a parallel signal coming in that isn't perfectly aligned (it tunes it down slightly). Fun stuff! Great video!
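The null-hunting latency measurement described in this comment can be sketched numerically. The 250 µs "true" insert latency below is an invented example value, and the model simply inverts the dry probe and sums it with the returned signal:

```python
import math

def residual_rms(candidate_us, true_latency_us=250.0, f=1000.0, sr=96000, n=960):
    """RMS of (returned insert signal) + (polarity-inverted dry probe delayed
    by candidate_us). It nulls when the candidate matches the true latency,
    modulo one period of the probe tone - hence 'test at other frequencies'."""
    total = 0.0
    for i in range(n):
        t = i / sr
        returned = math.sin(2 * math.pi * f * (t - true_latency_us * 1e-6))
        probe = -math.sin(2 * math.pi * f * (t - candidate_us * 1e-6))
        total += (returned + probe) ** 2
    return math.sqrt(total / n)

# Sweep candidate offsets over one probe period and keep the deepest null
best = min(range(0, 1000), key=lambda us: residual_rms(float(us)))
print(best)  # → 250
```

Because the null repeats every period of the probe tone (1 ms at 1 kHz here), a single frequency can't distinguish 250 µs from 1250 µs, which is why the comment's step of re-testing at other frequencies is needed.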
If your gear is properly set up, left and right are kept completely separate, and you use headphones as I recommend in the video, you should hear the sound shift to one side and there should not be any comb filtering.
The only place the two signals should combine is inside your head.
@@DaveRat Only listening on headphones. Try it with a single frequency; you'll see what I'm talking about.
Hi Dave. Thanks for the information. I have a digital signal processor with resolution adjustable in .01 millisecond steps.
I can hear 10 microsecond adjustments. In fact, 5 microsecond adjustments might be better for getting timing perfect. Sometimes even a 10 microsecond jump was too big a step for my hearing.
If you use Sonarworks and have it applying a volume correction to one speaker as well as delay compensation, it might be the volume change you hear. I got really confused wondering why I could hear the 10 microsecond delay on my system but not the 40 microsecond delay here. It was the 0.3 dB of gain reduction it applies to the delayed side that I could hear, not the delay! ;)
Very cool. Also, hearing a time offset differential between the same signal sent to one ear vs the other is different than hearing the impact of the same signal recombined with a time offset version of itself.
Diff signals to diff ears is our brains localizing. Diff time signals combined alters tonality due to comb filtering and is way easier to hear.
Hearing a time offset differential of .01 ms between a signal sent to one ear vs the other ear means you are hearing the equivalent of something that has moved about 1/8".
Try tapping two toothpicks together a few inches in front of your nose and moving them 5/8" side to side. That is hearing .1 ms.
Now move the clicking toothpicks 1/16" side to side; that is hearing .01 ms.
@@DaveRat *Gets in car to go buy toothpicks*
xcD
This is what I described above. It's not new that humans can hear 10 microseconds, which is .01 milliseconds; researchers found this last century. 0.1 milliseconds is no problem at all. And I'm very happy you proved it again :)
@@andrewlodarski2452 It actually really depends on the context of that delay. For example, it's much easier to hear a delay between two identical sources from two points with one delayed (like in this test), but much more difficult when listening to discrete sounds delayed from one another with slightly different intervals; in the latter scenario we can only discern a difference down to a handful of milliseconds at best, and most people only down to 20 ms!
I'm not really getting the point of this. In psychoacoustic research it's very common and well known that you can hear even 0.01 to 0.02 ms, which corresponds to a 1 degree resolution in the horizontal direction. Or am I missing something?
Hmmm, are you absolutely sure that everyone can hear .1 ms offsets, and that this is common knowledge?
Or perhaps is there variation from human to human, depending on a wide array of biological and other factors?
Regardless, please post a link to other tests and demos of this widely known info for myself and subscribers to experience!
Good stuff, and thank you in advance.
@@DaveRat Pretty sure. Read about stereophonic recordings. 0.6ms is already the maximum difference that appears in natural listening, given the distance between your ears. This means 0.6ms results in hearing the sound from 90 degrees left or right. And considering that you are able to hear a 1 degree difference, it follows that your ears can detect differences down to 0.01ms.
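The 0.6 ms and 0.01 ms figures in this comment can be reproduced with the simple sine model of interaural time difference (ITD). This is a sketch, assuming a straight-line effective ear spacing of about 0.21 m and c of about 343 m/s (both are rough textbook values, not measurements from the video):

```python
# Rough check of the ~0.6 ms maximum interaural time difference (ITD)
# using the simple sine model: ITD = (d / c) * sin(azimuth).
# d = effective acoustic path between the ears (assumed ~0.21 m),
# c = speed of sound (~343 m/s at room temperature).
import math

C = 343.0           # speed of sound, m/s (approximate)
EAR_SPACING = 0.21  # effective path between ears, m (assumed)

def itd_ms(azimuth_deg: float) -> float:
    """ITD in ms for a source at azimuth_deg (0 = straight ahead, 90 = full side)."""
    return EAR_SPACING / C * math.sin(math.radians(azimuth_deg)) * 1000.0

print(f"90 deg: {itd_ms(90):.3f} ms")  # maximum ITD, source at full side
print(f" 1 deg: {itd_ms(1):.4f} ms")   # ITD for a 1 degree shift off center
```

This gives about 0.61 ms at 90 degrees and about 0.011 ms at 1 degree, matching the numbers above.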
Well, my experience is that when you generalize and assume things apply to humanity as a whole, the assumption is inevitably flawed.
More specifically, what I'm finding very interesting is that each human's uniqueness, the exposures they've had through life and the traits they were born with, has a significant effect on what they can and cannot hear.
And since you present this information as readily available, I would love to see a link to testing or some way people can learn more; it would be great if you could post that.
@@DaveRat It's a little bit hard for me to describe all of this in English. I'll try to tell you where I got my knowledge from. Most of what we learn about fundamental psychoacoustics at German universities and high schools is from the authors Eberhard Zwicker and Jens Blauert. Most of this is research from the 1960s and '70s, so it's difficult to find this basic knowledge about hearing on the web. Zwicker wrote a book in English that might be interesting for you: books.google.de/books?id=0zg9hI586kcC . I believe Fletcher and Munson did some research too; they may be better known in the US. As a starting point, this article could help: en.wikipedia.org/wiki/Sound_localization
Or, in a nutshell, a quote from the Wikipedia article: "Humans can discern interaural time differences of 10 microseconds or less"
I'm pretty sure I can hear (or maybe sense) radio waves
That may be possible. Did you watch the vid I did on hearing above 20k?
@@DaveRat I just watched it. Curious about that bony point on the back of my head now. I feel like I can hear cell phones when my neighbor gets home. I'm sure my phone does it too, but I'm used to it or already around it. So I'll be sitting on my couch watching TV and suddenly I feel this energy or noise in my head, like a super high pitched ringing, and then I hear my neighbor pulling into his driveway. I could also hear the ultrasound bouncing around in my head when I had my neck checked out.
Is digital
Audible down to 1/10,000 of a second, eh? Visible: the human eye can detect variations down to 1/10,000 of an inch.
Hmmm, a switch of energy sources to light and switch receptors to eyes, and a switch of time to distance in the form of seconds to inches.
Not much left that is the same besides the number 1/10,000th.
But yes, I suppose a percentage of humans can see 1/10 the thickness of a human hair
One sounds Pink. The other sounds Mauve.
Sounds like a jet taking off lol
You need to listen with headphones so the signals don't combine back together.
If you listen while letting the signals recombine, all you're hearing is comb filtering.
To hear the difference in delay times you need to use headphones and keep the signals completely separate
@@DaveRat thanks dave
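The comb filtering Dave warns about here has a simple closed form: a signal summed with a delayed copy of itself has nulls at odd multiples of 1/(2t). A minimal sketch of that gain curve, assuming a 1 ms offset as an example (my choice, not a value from the video):

```python
# Why recombining a signal with a delayed copy of itself comb-filters:
# at some frequencies the copies add, at others they cancel.
# |1 + e^{-j 2 pi f t}| = 2 * |cos(pi * f * t)|
# With t = 1 ms, nulls fall at odd multiples of 500 Hz.
import math

def combined_gain(freq_hz: float, delay_s: float) -> float:
    """Magnitude of (signal + same signal delayed by delay_s), per unit input."""
    return 2.0 * abs(math.cos(math.pi * freq_hz * delay_s))

delay = 0.001  # 1 ms offset (example value)
for f in (250, 500, 1000, 1500):
    print(f"{f:5d} Hz -> gain {combined_gain(f, delay):.2f}")
```

500 Hz and 1500 Hz cancel completely while 1000 Hz doubles, which is exactly the tonal coloration that makes recombined signals so much easier to hear than separated ones.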
Well, thinking about this logically, you spend a lot of time talking about phase relationships when comparing two identical signals on two different channels. Just guessing here: when you change delay times between two identical signals, you're changing the phase relationship. With this example it's not a huge difference; however, I definitely hear it.
For me, this is an example and demo of how accurate our hearing localization is. An arrival time difference of .1ms, or about an inch, is enough for us to know the source has moved and which direction it is moving.
Ignoring the subject for a moment : you might be the first man to cut his long hair and still look like exactly the same type of guy
Ha! This is an older vid that I finally got around to finishing up.
You know computer sound cards add latency of up to 10ms, maybe more depending on the computer. So I hope you factored that into your experiment.
Largely irrelevant, as the delay will be (or at least should be) exactly the same on all channels. Even in the unlikely case it is not, it will still remain a consistent latency, so any change you hear will be the additional delay he adds.
And you listening to it on RUclips adds 8 hours or 2 weeks or two years of latency, depending on when you listen.
But what you're listening for is the difference between the left channel and the right channel.
When you listen on headphones, even with pure analog equipment, there is a latency of some fraction of a millisecond for the sound to travel from the speaker down your ear canal to your eardrum, maybe 0.1 or 0.2 milliseconds.
In this video I'm offsetting the difference between the left and right ear such that you may be able to detect that differential in arrival time
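Anyone wanting to recreate this left-vs-right offset test at home can generate a stereo file where one channel is a sample-shifted copy of the other. A sketch using only the Python standard library; the filename, tone frequency, and 5-sample offset (about 0.1 ms at 48 kHz) are my example choices, not the exact settings used in the video:

```python
# Write a stereo WAV where the right channel is a copy of the left,
# delayed by a chosen number of samples. Listen on headphones so the
# channels stay separate (otherwise you just hear comb filtering).
import math
import struct
import wave

RATE = 48000
OFFSET_SAMPLES = 5   # 5 / 48000 s ~= 0.104 ms right-channel delay
DURATION_S = 2.0
FREQ = 440.0         # test tone (example choice)

n = int(RATE * DURATION_S)

def sample(i: int) -> float:
    """Sine test tone; silence before time zero so the shifted copy starts clean."""
    return math.sin(2 * math.pi * FREQ * i / RATE) if i >= 0 else 0.0

with wave.open("itd_test.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)   # 16-bit PCM
    w.setframerate(RATE)
    frames = bytearray()
    for i in range(n):
        left = int(32767 * 0.5 * sample(i))
        right = int(32767 * 0.5 * sample(i - OFFSET_SAMPLES))
        frames += struct.pack("<hh", left, right)
    w.writeframes(bytes(frames))

print(f"Wrote itd_test.wav with {OFFSET_SAMPLES / RATE * 1000:.3f} ms offset")
```

Changing OFFSET_SAMPLES sweeps the interaural offset in steps of about 21 microseconds at 48 kHz; a steady tone is the simplest starting point, though clicks or noise bursts tend to make the image shift easier to localize.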