These are the hands down the best tutorials anyone has ever done on any topic. Informative, well paced, comprehensive.
And the great audio / general production quality of course. Such a relief from the usual "5 javascript tricks you MUST know" clickbait. Can the whole internet be made by Eli please?
Eli, I'll repeat it: you are an exceptional, top-class instructor. What a gift. All your tutorials have been invaluable for me so far. Looking forward to your book!
thanks so much!
Great tutorial, I’ve tried going over to Max msp for granular but I keep coming back to my first love, Supercollider!! It’s so great!!!
💖
35:45 omg, that psychedelic effect is awesome!
Thank you for taking the time to make this SuperCollider series.
Genius... Many thx for sharing your knowledge, it's just amazing ... 🔥
Thank you so much!! this was incredibly helpful
Heck yes. I look forward to part II. Might I humbly ask for something on modal synthesis next? Klank and the lot? Or maybe machine listening?
+1
Thanks Eli, great coverage including the gotcha's
Hi Eli, thank you for your tutorial! I'm having trouble following along with the code at 49:54. I noticed that the output audio only lasts for a very short time (about 5s) and doesn't produce chords, just a single note. I was wondering if you could help me understand what might be causing this issue, and if you have published the corresponding code and audio files on GitHub to make it easier to reproduce. Thank you in advance for your time and assistance.
This is such an excellent tutorial. Thank you!!
Grains in reverse, and voices from the future. This is one of the scarier tutorials!
Thanks, Eli!
Quick question. When looping the sound file with LFSaw I was trying to put an XLine function in there to increase the playback speed from 0.25 to say 100. It’s working but I’m getting some low artifacts, a low pitch. Any idea why? Thanks
Well, here's my best guess. This is tricky to accurately diagnose without knowing all the details, but I suspect this is a type of aliasing. If you scrub through the sound file at 100x normal speed, the grain start positions will be so spread out that they'll essentially be discontinuous, backwards, barely changing, etc. - you'll probably get strange results, similar to why car wheels look like they're standing still or moving backwards in car commercials (the wheels are moving much faster than can be accurately captured by the camera's frame rate). At a scrub speed this high, I think you simply need to accept/embrace some sonic artifacts. This is also probably dependent on the contents of the file you're granulating, whether grains are generated synchronously, the degree to which the grains overlap, etc.
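For reference, here's roughly the kind of setup I'm picturing (a sketch, not your exact code, assuming a mono buffer has already been loaded into b). At the top of the XLine sweep, the pointer moves far too quickly for the grains to track it smoothly, which is where I'd expect those artifacts:
(
{
	var speed, ptr, sig;
	speed = XLine.kr(0.25, 100, 20); // playback-speed sweep
	ptr = LFSaw.ar(speed / BufDur.kr(b)).unipolar(1); // looping pointer, 0 to 1
	sig = GrainBuf.ar(2, Impulse.ar(40), 0.1, b, 1, ptr);
	sig * 0.5;
}.play;
)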
Amazing, thanks a lot.
Great tutorials.
Great tutorial Eli
For granular synthesis, I’ve been making regular PlayBuf SynthDefs with simple envelopes, then tweaking the dur and release time in the Pdef to something like 1/64. My friend showed me this technique, and it works pretty well for live stuff. I also have arguments for the position and rate, which can get some interesting sounds. I gotta give this a shot though.
Yes, patterns are fantastic for granular synthesis, using the technique you’re describing. I’ve used a pattern-based approach many times. I might detail it in a future video.
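Here's a quick sketch of that pattern-based approach (my own variant, not necessarily your exact setup): a short enveloped PlayBuf SynthDef, with the grain density set by \dur in the Pbind. It assumes a mono buffer has already been loaded into b, and the names \grain, \pos, \rate, etc. are just placeholders.
(
SynthDef(\grain, {
	arg buf=0, pos=0, rate=1, atk=0.01, rel=0.05, amp=0.3, out=0;
	var sig, env;
	env = EnvGen.ar(Env.perc(atk, rel), doneAction: 2);
	sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), startPos: pos * BufFrames.kr(buf));
	sig = sig * env * amp;
	Out.ar(out, sig ! 2);
}).add;
)

(
Pdef(\grains,
	Pbind(
		\instrument, \grain,
		\buf, b,
		\dur, 1/64, // very short inter-onset time = high grain density
		\rel, 0.05, // short release acts like a grain envelope
		\pos, Pwhite(0.0, 0.9), // random normalized start positions
		\rate, Pwhite(0.99, 1.01) // slight pitch jitter
	)
).play;
)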
Nice! Where did you get the formula for the reconstruction rate? It doesn't work as well at lower trigger frequencies as it does at higher ones, e.g. trigger: Impulse.ar(2) and dur: 1 doesn't quite sound like the original file.
Hm, interesting. I just tested this, and it sounds exactly like the original file to me.
I don't have a specific formula I can point to, but a Hann window (the default) is one cycle of an inverted unipolar cosine function (travels from 0 to 1 to 0, sinusoidally). When copies of this window function are overlapped by half and summed, the result is a constant value of 1.
From www.recordingblogs.com/wiki/overlap-correlation: "With the Hann window, in fact, the 50% overlap case has the nice property that the sum of the applied weight to each sample of the signal is exactly 1."
So, if you follow my approach regarding the mathematical relationship between grain duration and trigger frequency, the result should sound exactly like the original, assuming nothing else has been changed inside GrainBuf. Note, though, that there will be an audible fade-in at the very beginning equal to one half of the grain duration, because at this point in time, there is not yet any grain overlap.
Are you sure your other GrainBuf parameters are suitable? If you alter the grain playback rate, grain envelope shape, the speed of the grain pointer position, etc., the results will vary.
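For reference, here's a minimal sketch of the 50% overlap case I'm describing, assuming a mono buffer has already been loaded into b (the Phasor pointer is just one way of scrubbing through the file at normal speed). With the default Hann window and dur = 2 / trigger frequency, this should sound like the original, apart from the initial fade-in:
(
{
	var trigFreq = 2; // the 2 Hz case you mention
	var trig = Impulse.ar(trigFreq);
	var ptr = Phasor.ar(0, BufRateScale.kr(b), 0, BufFrames.kr(b)) / BufFrames.kr(b);
	GrainBuf.ar(2, trig, 2 / trigFreq, b, 1, ptr);
}.play;
)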
Just great! This deepened my theoretical knowledge of granular synthesis and gave me clear practical guidance on how to implement it in SC. I've always been excited by granular but found it tricky to get musical results. This has opened up a rich seam for me. Thanks so much Eli. Do you have a Patreon or similar?
Edit: Yes! www.patreon.com/elifieldsteel
Thank you for these amazing tutorials! Here is a question, hoping somebody has an answer: is it possible to externally trigger the individual grains without making use of the Impulse or Dust UGens, for example by sending in an OSC message? Or the other way around: is there some kind of creation event for the grains or the trigger objects? I'm trying to make the granulator talk to visualization software, any hints appreciated :)
Ultimately, a trigger signal must be used to generate grains, but yes, you can manually trigger grains by creating a trigger-rate argument and using it as the grain trigger. A basic 'set' message can then be used to trigger a grain, and this message can be encapsulated within an OSCdef, MIDIdef, etc.
(
s.waitForBoot({
	b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01-44_1.aiff");
	s.sync;
	SynthDef.new(\graintrig, {
		arg t_trig=0, out=0;
		var sig;
		sig = GrainBuf.ar(2, t_trig, 0.1, b, 1, LFNoise1.kr(20).unipolar(1));
		Out.ar(out, sig);
	}).add;
	s.sync;
	x = Synth(\graintrig);
});
)

x.set(\t_trig, 1); // run this repeatedly
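If you want the trigger to come from an external program, something like this (a sketch; the '/grain' address is just a placeholder, match it to whatever your visualization software sends) wraps the same 'set' message in an OSCdef:
(
OSCdef(\graintrig, {
	arg msg;
	x.set(\t_trig, 1); // fire one grain per incoming /grain message
}, '/grain');
)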
@@elifieldsteel Thank you very much! Will try and report!
@@elifieldsteel Had the time now to test it and it works out of the box, thank you so much. Has also been a great opportunity to learn about the sync method which solved another problem I had a few days ago :)
Great!
Top notch!
super useful as usual
Thank you very much for this tutorial! I seem to have a problem where I get the following error:
ERROR: a PlayBuf: wrong number of channels (nil)
Why is that happening? My file is a 44.1 kHz mono WAV, so I guess that shouldn't be it?
EDIT: Well then, it seems like I was evaluating code in the wrong place... Now it works!
Ah, glad you figured it out. For future reference, it's nearly impossible to answer a question like this if you don't also share the code that produces this error, along with a description of what you're trying to do. Without this information, I can only speculate. My guess would have been that you are loading and playing the buffer with the same keystroke. An attempt to play a buffer that hasn't finished loading may produce this error, as described here: github.com/supercollider/supercollider/issues/2005
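For example (a sketch with a placeholder path), you can pass an action function to Buffer.read, and/or evaluate the load and the playback as two separate steps:

b = Buffer.read(s, "path/to/file.wav", action: { "buffer ready".postln }); // placeholder path

// evaluate separately, once the buffer has loaded
{ PlayBuf.ar(1, b, BufRateScale.kr(b), doneAction: 2) }.play;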
thank you !
Every time I try to play the GrainBuf, my localhost server disconnects. Any help would be appreciated, as I'm pretty new to this. Love the tutorials, thanks.
Sorted it out. Love your work Eli, thank you for these.
Glad to hear. What was the issue?
@@elifieldsteel I was adding the LFNoise1 instead of replacing it xD
I do have one question though, Eli: in your composing-a-piece tutorial, do all samples have to be mono to work there? I attempted the events and no sound was playing. Thanks.
If you’re using granulation UGens then yes, the input buffer usually has to be mono. But for other UGens like PlayBuf etc., the channel count just has to be consistent with the number of channels in the audio file.
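If your source file is stereo, one simple option (shown here with a placeholder path) is to read just one channel, so the resulting buffer is mono and safe to granulate:

b = Buffer.readChannel(s, "path/to/stereo_file.wav", channels: [0]); // left channel only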
Thanks man 😊
love it!!
nice