Great tutorial! Any hints on what to modify in the reading / overlap-add operation for the time-stretching algorithm? I've got the phase calculation down; however, I think the way samples are read and deleted from the outputBuffer needs to change when the analysisHopSize is different from the synthesisHopSize... Any ideas??
Excellent as always. I think it's worth clarifying that the additional latency we add to gOutputBufferWritePointer doesn't need to be an entire hop, as was stated in the video. We only need enough latency for the FFT thread to finish its task, and that isn't related to the hop size. With the current code, 16 samples seems to be plenty, giving 'int gOutputBufferWritePointer = gHopSize + 16;'. This knocks 5.5ms off the latency.
Good idea! Adding one hop size is the minimum needed to guarantee you can use all the available CPU, but you're right that if your code reliably and consistently uses less than that, you could trim down the latency. Probably never go lower than one audio block size, however, since the background thread won't even start running until render() has finished.
I don't see why the latency is the fftsize. Shouldn't the latency be fftsize - hopsize? For instance, in our example, we have 256 samples available and our fftsize is 1024. That means we will have 768 samples of zero, followed by 256 samples of whatever gplayer.process() returns.
I've been looking for a good explanation of phase vocoder for months, thank you so much!
Please make part 2! I need your help!
Part 2 is now up: ruclips.net/video/2Esfl8uw-2U/видео.html
where part 2 at : )
Coming soon -- most likely next week!
@apm414 This tutorial is so good! Will part 2 come out this month?
@yf668 Thanks! Part 2 is coming tomorrow. There will be a part 3 too. :)
Here's part 2: ruclips.net/video/2Esfl8uw-2U/видео.html
Does anyone have a simple vocoder written in JUCE?