It's remarkable how much the BBCSO piccolo sounds like a violin. 🙂 Templates - all the pros use them, but very few actually talk about how to *build* them. I built this template based on the TSE short course, "Sampled Orchestration", and liked it, and then went a bit down the rabbit hole when I decided to faff about with transitioning instruments from Discover to Pro, then adding articulation maps, and then, to save memory, editing instrument articulations and trying to balance the entire template and ... it mostly works. I had forgotten how the stems work (thanks for the refresher), and in using the template, discovered I need a "MISC" section (for choirs, organ, etc.) like you have in this video. I have begun to appreciate the limitations of my SSD (fast, but not fast enough), my lack of memory (16GB isn't enough) and the BBCSO library, although the BBCSO library's very limitations help me to focus on just putting thoughts down on paper, so to speak. So I am down to just three templates: a default piano template, a BBCSO Discover template (fast and agile) and a BBCSO Pro template. In other words: thanks Guy. It's a fundamental bit of learning for me.
Yes, because it's a piccolo violin, not a piccolo flute :) Piccolo just means "small".
I spent too much time reworking something I started using the Discover version, because I went into too much detail with it, and when I changed to Core (I don't have the Pro version) it sounded just awful and I had to tweak everything again; that's not something I will do again. So if you start with Discover, don't tinker too much to make it sound good, and keep it simple until you commit to changing it to Core or Pro.
@@Jonas_Fridh OK, but that's apparently what he means when he talks about the "piccolo". I also thought at first it was just a bad naming of the track, but later he talks about the "piccolo" being loud while listening to a violin sound playing... so not sure.
And I did not pay too much attention to it, because I know the Pro version has extra stuff aside from the different mics, and thought maybe "piccolo violin" is a thing in the Pro version - which I don't have - I only have the Core version.
@@Jonas_Fridh Haha, "yes and no".
Anyway, it's not very important in the context of demonstrating how to build a template.
Piccolo could just as well be a green fighter from planet Namek, who knows. Besides, he can be very loud, so that works too :)
16gb is SO not enough
Also, how do I mix the different ways an instrument can be played, staccato and legato?
You should try using MIDI channels Guy, it will change your life. In Reaper (and I presume Cubase) you can set individual notes to separate MIDI channels, so I have a Kontakt instance with all my articulations each loaded on a unique MIDI channel; I then play in the entire piece and simply change any individual note to a new articulation via its MIDI channel. You can colour code the notes so at a glance it's easy to see shorts and longs etc. This means all your dynamics, vibrato, volumes, built-in room and verb affect the instrument as a whole, and it sounds noticeably better and more human than split out over many tracks. Also, the entire orchestra fits in about 12 tracks. Happy to post a video of this in action.
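For anyone curious, here's a minimal sketch of that per-note-channel idea (assuming Python and the mido library; the channel numbers, articulation names and file names are just placeholders, not anything from the video):

```python
import mido

# Hypothetical mapping: one MIDI channel per articulation slot in the sampler.
# mido numbers channels 0-15 (most DAWs display them as 1-16).
ARTICULATIONS = {"long": 0, "staccato": 1, "pizzicato": 2}

def set_articulation(track, note_index, articulation):
    """Re-channel the nth note (and its note-off) to the chosen articulation.

    Assumes a monophonic, non-overlapping phrase, so any note-off arriving
    before the next note-on belongs to the current note.
    """
    channel = ARTICULATIONS[articulation]
    current = -1
    for msg in track:
        if msg.type == "note_on" and msg.velocity > 0:
            current += 1
        if current == note_index and msg.type in ("note_on", "note_off"):
            msg.channel = channel

mid = mido.MidiFile("phrase.mid")               # hypothetical file name
set_articulation(mid.tracks[0], 3, "staccato")  # make the 4th note a short
mid.save("phrase_rechannelled.mid")
```

In Reaper itself you'd just select the notes in the MIDI editor and change their channel, but the principle is the same: the phrase stays on one track and the channel carries the articulation.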
Yes, please post a video of this in action. I'd love to see what you are describing...
Absolutely brilliant. I've been setting up templates in Cubase and this couldn't have come at a better time. Very helpful and clearly explained.
Besides the great content, it's a real pleasure watching your garden in the background 😊
Very good template Guy. Quick and easy to adapt as required. Particularly as it bypasses the need to use the Cubase Expression Maps function, which, coming from Studio One, seems like a top priority for Cubase to rethink; the implementation should be a lot better seeing as it's v13. Fantastic work, thank you.
Reaper. Me too. I purchased a top of the line RokBox this month to get into virtual libraries. Downloaded BBC Discover and downloading Kontakt Complete. Yes, buying the machine broke the bank for now so setting sail on the RMS Kindness of Strangers. So far smooth sailing except for the usual rigging problems and crew training (me). I’ve been doodling orchestration using my Roland 1080 and collecting or scoring midi for song tracks for decades so I’ve a library to experiment on. Mightily impressed with Discover. Many thanks to that team. This comment is to offer my humble gratitude to Guy for his videos that have saved me many hours at the dock and setting the course for this voyage. You are a true gentleman, Michelmore. We the poor and underprivileged composers say Salute! Now back to watching the download control crawl. The struggle is real but the adventure is glorious
Actually working on a template right now, so this couldn't have come at a better time, thanks Guy!
This is brilliant!! Thanks for sharing! all the bass-t from kayo
Thank you Guy!
In Cubase, you can streamline your workflow by using a single reverb for all orchestra stems. When exporting, simply select Master/Group/Send (CSPM), and the reverb will be applied separately to each stem. This method significantly reduces the load on your computer. I initially followed Guy’s approach, but I switched to this method for my recent films, and it has worked exceptionally well.
Yes - I sometimes use this approach when I have a mastering set of plugins on the master bus; then it gives you stem mastering. But for everyday stuff I bounce to stems as I go, and with a fast machine I never have an issue. That way definitely works as well, though.
@@ThinkSpaceEducation Thank you for your response and support, dear Guy. Wishing you all the best!
Yes, but the stem export will need much more time.
Thank you so very much! You are a great inspiration and an excellent teacher! I'm a songwriter and have played guitar and bass for 40 years. Started watching your videos and bought a keyboard, and I am having the most joyful time composing music. Your passion is contagious and I am forever grateful.
I want to try this method out in Studio One now.
I'm currently building a be-all-do-all template with VE-Pro and Cubase. This is a week's worth of time, but very resource-friendly. I also have only one track per instrument, because I never had any issues with volume or delay differences over different articulations.
I stopped using VE Pro because it took so long to build and, once built, was a faff to add or change anything, but it is resource-friendly.
@@ThinkSpaceEducation Good point! I already planned some empty slots in for future additions to my collection, to keep the faffing at a minimum. I already used Cubase-only templates, but they ended up sluggish and saving took minutes, if it didn't crash the program.
I've gone from a big VE template to a Logic Pro template to breaking it down to a small modular template where EVERYTHING is saved as either stacks of libraries, plus the separate instruments, and also combinations of instrument groups.
Loading times are so fast these days for libraries I just load whatever I need for a specific project.
The main thing is to prepare the routing so things don't get messy. The main breakthrough in Logic Pro was when you could put stacks within stacks.
The main reason for me to go modular is to avoid sounding the same all the time….
Hi Guy! I really enjoy your videos - they're both funny and useful.
It might be interesting if you could dedicate a video to the use of reverbs. In this one, you suggest using separate reverbs for each group of instruments. Could you explain why in more detail? For example, do you use the exact same reverb (such as a convolution reverb that simulates a concert hall) for each group?
Also, could you recommend a budget convolution reverb suitable for orchestral music? I currently use the old Waves IR-1, which sounds decent but has some drawbacks.
Thanks,
Giorgio
Love your content Guy...you do an outstanding job...learning a TON here...mucho appreciando!
Whenever I see (or hear) a list of currently available DAWs, no one ever mentions Cakewalk (neither did Guy). I find this puzzling, as it's been around for about 30 years in different forms, is as usable as anything costing $400 and it's still free. I have Cubase and Cakewalk, and find Cakewalk far more intuitive to use, and always start with it.
Love CbB for music production - much smoother workflow in mixing, MIDI & audio editing and music production in general. However, the one reason I started drifting away from it is its limited film scoring & media production features. It's still doable, but it lacks full-fledged features for seamless film scoring.
I just have to say: Thank you Guy. Great content. With this template you save me a lot of time so I can write some music. Thank you again.
Thanks Guy. It would have been very helpful to have been able to see what you had in the folders in your own template. :)
Hello Guy, I was wondering if you could please explain the basics of how an orchestra is used from a composer's point of view. So let's say that I have my theme established: what are the essentials of understanding how to bring these ideas alive with a full DAW orchestra? Do I double the theme with strings and brass, or do I use the strings for the harmony and have 4 brass instruments playing the lead all at once?
Is it just a case of approaching the orchestra with MELODY, HARMONY, RHYTHM AND TEXTURE as the building blocks? Can you please shoot a video or create a short course on the essentials of how an orchestra can be used? I am a big fan of yours and have purchased 3 of your courses. Thanks
Another video from the most inspiring music channel for me❤
thank you
Good info - I got your template. I saw the mention of PreGain (and, in a comment posted, that it might have another name). I have been using the channel slider to get the meter into the -18 dB range; that seems like one of my biggest takeaways from this. I added a couple of tracks for piano sketching. I had learned before your video about putting an instance of the plugin on a track and disabling it, so you don't eat up your RAM and time waiting for the template to load. One thing I have been doing is removing, from the channel you have for longs, any articulation that is not a long, thus reducing the memory used when it is activated.
One thing that you did in the video was setting up the empty midi channel - and setting controllers in that before the recording. That part might have some nuance to it - as it did not make it into your template.
I am also used to seeing the FX and Group tracks at the bottom of a template, not in the top section, so that feels odd at this point for me, as it's not what I am used to.
Seeing these types of videos is the closest I may get to being able to look over someone's shoulder and learn how they do things. (That and the ThinkSpace courses I have purchased.) Thanks
I greatly appreciate the simplification this setup brings to a template. I mostly write in Dorico with expression maps; it is then transferred to Cubase, and there the articulations are separated out to individual tracks automagically with a couple of StreamDeck buttons. This means I don't have to deal with expression maps in Cubase. I can now see how to simplify my Cubase setup, so thanks to Guy's ideas, I'm starting to do just that.
Thank you Guy, yes this was very useful. I'm halfway through organising the bloody template, trying to get everything balanced plus the negative track delays... so time-consuming. Cannot wait to finish it so that all my time can be spent on actually creating/developing ideas.
Templates are absolutely the only way to go..I have one for every possible project..it takes time but soooo worth it!
This was informative. Thank you. Still learning new tricks in Cubase.
Wonderful explanation of templates, as well as answering a question lurking in my mind. That being, that one should have multiple tracks for one instrument to accommodate their different articulations. If I understand you correctly, bussing (a fuzzy concept for me) will eliminate tedious work at applying expression and dynamics, as well as other automation applied later in the mixing stage, in an orchestral work. Thank you for a great demonstration!
In the Hokey Cokey 2000 do you just put your left legato in?
Ha! very good
No. It does it for you!
I mostly use Orchestral Tools libraries, which makes it easy for articulation IDs to work. I can adjust the volume in the SINE Player mixer for each articulation.
I am someone who wants to play with the sound and arrangement of the instruments to create new sounds every time. Using articulation IDs makes that way easier because I can change the sound of a whole instrument or even section in one single interface.
Great video, Guy. It's a good approach for many if not most. Going to copy this and experiment. Cheers! 😃
Great organization! Thank you for the help...
Timely info, many thanks! Interesting shirt, too, I might add.
Great walkthrough, the simplicity is appreciated! How beefy of a mac do you need/have for your full blown template?
Great video - really interesting and helpful!
thank you so much needed help on this!
Another gem - thanx Guy!
That truly is super concise Guy! I am very curious - with the use of this template - how the piccolo flutists (and probably the violinists too) are going to react and what they are going to play ;) ;)
10:53 I understand why you would want to normalise the CCs at bar 0, but my only challenge with putting a MIDI CC on every track is that you then can't filter by tracks with data if you've got a massive template with CC on every track.
Thank you for the video Guy! This has been very helpful! I have used these approaches in my own template as well. Especially setting the negative track delay on each articulation saves time with a way better result. What I haven't done, though, is adjust the volume / pre-gain. Doesn't this interfere with the natural orchestral balance if I adjust every instrument to the same volume?
My workflow thanks you
Why not use the gain adjustment built into the sample library's own UI? BBCSO has a volume / gain fader at the top right of its UI. That seems preferable to a Gain plugin, especially in Logic Pro, where starting with a Gain plugin feels clunky.
I was very glad to see you use 'pre-gain' (in Studio One, my version of HokeyCokey2000, it is 'input gain control') to bring your initial volumes up to around -18dB. This not only starts you off with each instrument about the same (good for a static mix), but it has you running into your plugins at levels consistently within the plugins' sweet spots. Super important if you want your plugins to work most like they are supposed to! 🙂
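As a back-of-the-envelope illustration of that gain-staging step (plain Python, nothing DAW-specific; the example numbers are only placeholders):

```python
def gain_to_target(measured_dbfs: float, target_dbfs: float = -18.0) -> float:
    """Gain (in dB) to dial into the pre-gain / input-gain control."""
    return target_dbfs - measured_dbfs

def db_to_linear(db: float) -> float:
    """Linear amplitude factor corresponding to a dB offset."""
    return 10 ** (db / 20.0)

print(gain_to_target(-42.0))   # a patch metering around -42 dBFS needs ~ +24 dB
print(db_to_linear(24.0))      # roughly a 15.8x amplitude boost
```

That +20 to +30 dB range lines up with what another commenter reports needing for BBCSO Core, so big offsets are not necessarily a mistake.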
exactly
Why wouldn't you use one reverb but change the pre-fader volume in order to bring things forward or back? Awesome as always - thank you so much!
Oh the great battle for volume and balance… great video Guy. What I do when I write is get the instruments to -12, compose and do a final balance when I go to mix. Also funny thing is I built a giant template and felt lost. So I went for an empty approach recently, but will definitely be taking some of these pointers and plan for a future template. Thank you 🙏
At the end of the day, bouncing everything to audio helps the mix enormously, for me at least - don't know why I can't get the same result with MIDI.
@@ThinkSpaceEducation yea I think the scripting in midi instruments has a lot to do with it. Once you print to audio it will always be that result if you know what I mean
Needs more Cowbells. All my templates start with 14 channels of various cowbell samples
With the BBC SO, and possibly other libraries, the shorts by default use key velocity and the mod wheel, while the longs use only the mod wheel. This can be changed to ignore the key velocity for all articulations.
Soooo this is WAAAAY better than how I've been doing it. I've just been grouping everything in Folders. WW folder has sub folders for each instrument and separate articulations, which are then all under a parent folder of MIDI. I then have a parent folder called Audio for when I render out all of the MIDI lol. Nightmare solved!
Thank you for sharing your knowledge. One question, did you assign the key commands you use to stream deck? I would like to do the same. Thank you again.
Very informative - thanks Guy. Did you programme all the keys on your Stream Deck yourself, or is it a bought set of key commands from a third party? It would be very educational and useful to find out what all your key commands are.
Brilliant. Thank you.
This is a pretty ram intensive approach. Thanks as always for sharing!
What he probably does that he didn't mention is that he disables all the tracks. That offloads the RAM. Then, when he wants to use an instrument, he enables it, it loads quickly, and is ready to go.
@@david.molinaHow does one disable a track to release the RAM? In GarageBand for instance?
@@stevhard No idea if it's possible in GarageBand. I'm talking about Cubase, which is what he is using and I use. Don't really know if other DAWs have it. Maybe Logic? Not sure.
@@david.molina Sounds good. Thank you.
@@stevhard For Logic, if you hit the power icon on the instrument name, that will disable Logic from rendering, thus saving RAM.
I'm about to get some new plugings and was thinking about re-doing my template. Couldn't have dropped this in a more timely fashion!
This is fantastic. The setup helped address a problem I was having with instrument volume. I am using BBCSO Core. Is it unusual that I would need to be adding 20-30ish dB to get my levels up to that -18 dB range you were mentioning? Appreciate these videos, thank you, they get me inspired to try and write something.
Thanks Guy! I am self taught (bad, but I'm trying) and have all free plug-ins, so I have almost a different plug-in for each instrument (I haven't found any good brass - the free BBC orchestra is the best I have, but it's limited). I rely on velocity for volume. I'm using FL Studio and I didn't know there was a standard orchestral scoring instrument order. How much RAM does your setup have?
More Piccolo!
Great video thanks 🙂
Could you at some point in the same way present your current set-up of Stream Deck?
Thank you for sharing! Two questions: how do you implement the negative track delay for each instrument/library? And how do you send this to Dorico or elsewhere if you need the score, using multiple channels instead of articulation maps?
Hi folks! (1) thanks again so much Guy you make things so clear with your explanations, but (2) I must be missing something obvious in 'neutralizing' the effects of velocity, mod wheel, CC7 and CC11 values per track, by sending common MIDI signals in the first bar - how does that neutralise their effect? Are they all somehow set to different CC7/CC11 etc. values within the samples/patches upon load, and these automations undo any such differences? I wish I had a better way to ask the question!
16:23 OMG. I was wondering what you were doing with stems. I didn't get it until this moment!
Guy. Surely you don’t need to isolate the reverbs by adding to each group channel. You can send to reverb pre or post fader in Cubase so just set it to post fader and job done. No reverb bleed.
Maybe it's because I am a productivity-obsessed animation composer! My clients always want each stem with reverb, and although I could bounce them one by one, the most time-efficient way is to record into the stem audio tracks all at once, hence the need for separate reverbs. But if you are time-rich and CPU-poor, there are lots of alternatives.
😆Great stuff Guy!
Great video! I was wondering how you handle the volume matching between different instruments, i.e. something like 2 woodwinds are equal to 1 string section, 2 horns are equal to 1 trombone (if played at the same dynamics), etc. A perceived volume ratio of 2 would be 6 dB, wouldn't it? Thinking aloud, that would mean that after setting e.g. all solo horn groups alike and all solo trombone groups alike, you would set the horn groups 6 dB below 1 trombone so that two horns would match a trombone... does this make any sense?
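For reference, the textbook dB arithmetic behind that question (nothing library-specific, and real orchestral balance is messier than this):

```latex
% Two equal, uncorrelated sources sum in power:
\Delta L_{\text{power}} = 10\log_{10} 2 \approx 3\ \text{dB}
% Doubling the amplitude of one signal gives:
\Delta L_{\text{amplitude}} = 20\log_{10} 2 \approx 6\ \text{dB}
% A subjective "twice as loud" is usually quoted as roughly +10 dB.
```

So whether "two horns equals one trombone" works out to about 3 dB or 6 dB depends on what is being doubled, and as Guy notes further down, the balance drifts after a few bars anyway and you end up rebalancing as you write.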
How do you tackle negative delay for various libraries? Do you set it on the track itself, or are you manually shifting the MIDI notes back and forward etc., please?
Yes, on each track - another reason not to use articulation IDs or expression maps.
Everything has changed since Synchron Flow just released... earth-shattering updates. I know you're a Spitfire guy, but man, it's like SWAM now - they made "performance patches". Divisimate 2 just released; use that with Synchron Prime Flow now and it's going to be a whole different world. I know not everyone is going to want to do this sort of thing, but going ahead I'm doing this for sure. My ideas are all in my brain and they fizzle out so quickly; they come and they go. The faster I can input them into the piano roll the better. This will be like piano sketching but with an entire orchestra, then going in and doing a revision and cleaning it up afterwards by adding keyswitches and modulation. The sketching will become so fun now.
Why is there no English Horn in any of the Spitfire instrument libraries? The English Horn in Cubase sounds like a toy synth... I am so glad Guy made this video. It really makes sense. Finally someone explained how templates should be set up. The way he sets up this template really reduces the CPU overload issues. Thanks Guy!!
I'm pretty sure there is an english horn in SSO
The BBC Symphony Orchestra does have an "English Horn" - it goes by another name, "Cor Anglais": en.wikipedia.org/wiki/Cor_anglais. Spitfire also has one in the Studio Woodwinds and Symphonic Woodwinds, and in the Spitfire Symphony Orchestra.
I'm always amazed by the use of separate articulations per track. I understand the purpose, but I'm pretty sure you will lose versatility in your composition. I personally use keyswitches because I often have several articulations in the same musical sentence, which, according to me, is always the case in real life... Doing this with separate tracks is just impossible for me. Maybe I'm wrong...
I agree. Track per articulation only works well if you don’t change articulation that often. If you do, it quickly becomes a mess. It also makes using the score editor a nightmare.
The thing you said about reverb/volume control - I kinda get what you mean, but you can overcome this by using a send and flipping the post/pre setting, and it should blend in as if it were a separate reverb. Or am I missing something?
Thank you so much, you are really helpful. I have a question - not speaking English well, maybe I missed something: after I set the mod wheel, expression, etc. to make it all seem like the same instrument, if I want to work with dynamics and expression, what do I do? And then, given that in a real orchestra there are many people playing and no one will have exactly the same timbre as another, for each instrument in the section do I set slightly different values?
Nice video Guy. Piccolo sounded much like violins though :)
One of the other posters mentioned this. Apparently it actually was a piccolo violin and not a piccolo flute, the word piccolo meaning small.
But it was in the woodwind group!
Piccolo, but it's Violin))
Hi Guy is it fine to use midi instruments for tv, film, ads etc? Thanks
Following Guy's example, I've tried setting Expression, Modulation and Velocity values for all tracks in the first bar within Logic and, although this works when balancing the articulations, it has a knock-on effect. All tracks now have a midi region on them with this information in it and it doesn't seem possible to "hide" tracks that are not actually used for the composition by using keyboard shortcuts. Anyone know of a way round this so that I can remove the clutter?
You can choose to show only active tracks between your set locators. (In the main edit windows bar- choose what you want to see) Everything else will temporarily disappear. This bypasses all those midi starter files.
Guy - quick question for you - how do YOU handle pre-delay on the various articulations? Do you look for a published pre-delay (in ms), or do you just 'listen' your way to something that sounds right?
Both. I look at the spreadsheet, but a lot of the time I bounce a section to audio so I can see how late it is.
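A rough sketch of that "bounce it and see how late it is" check (assuming Python with the numpy and soundfile packages; the file name and threshold are just placeholders): bounce a single note that starts exactly on the bar line, then measure where the audio actually begins and use that as the negative track delay.

```python
import numpy as np
import soundfile as sf

def onset_delay_ms(path: str, threshold_db: float = -40.0) -> float:
    """Milliseconds from the start of the bounce to the first sample above threshold."""
    audio, sample_rate = sf.read(path)
    if audio.ndim > 1:                       # fold stereo/multichannel to mono
        audio = audio.mean(axis=1)
    threshold = 10 ** (threshold_db / 20.0)  # convert dBFS threshold to linear
    above = np.flatnonzero(np.abs(audio) > threshold)
    if above.size == 0:
        raise ValueError("no audio above the threshold in this bounce")
    return above[0] / sample_rate * 1000.0

print(onset_delay_ms("legato_bounce.wav"))   # hypothetical bounce; e.g. ~120 ms late
```

Eyeballing the bounced waveform against the grid in the DAW gives you the same number; a script like this only earns its keep if you want to batch-check a pile of articulations.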
I'm puzzled why you'd have a track per articulation, when we can have articulation sets and change the articulation of note(s) from a dropdown instead of dragging them across tracks. I have worked with both approaches, and it's so hard to work with phrases that have a different sequence of articulations, e.g. long-stacc-stacc-trill etc.
What if things like the track delay vary by articulation?
@@williamscolaro1159 It's a) a shitty library or b) you move the notes in question a little in time. I use only VSL products and have no issue with one track approach.
Absolutely fantastic libraries used by pros, like CS Strings, have huge differences in track delays if you compare shorts to, let's say, expressive legatos - for good reasons. So you end up MIDI tweaking and tweaking if you can't use negative track delay.
@@williamscolaro1159 Considering the amount of work needed to program a phrase so it sounds realistic, dealing with delays looks painless compared to working with multiple tracks, each with their own CCs, for just one phrase. Also, the time one needs to experiment on which articulation to pick for one note is crazy when having multiple tracks. Of course it has to do with the way notes are entered, e.g. performed live or imported from a MIDI file.
@@jelster1208 Yes I do. Since you can't really switch articulations mid-playing with either approach, tweaking the MIDI is necessary regardless. When I'm putting in notes by mouse, I habitually place long notes a little before the grid. It works really well for me. I'm also able to cycle through articulations with keyswitches and see what fits best, instead of being tied to a fixed pattern of long, short, legato or whatnot. I tried the track-per-articulation method, and it didn't work out for me at all.
How does adding all these articulations for each instrument not stress out the resources on your machine? Is there a way to add the tracks but not have them stress the machine until you choose to use them, like freeze track or something? Or when you start the project, do you just delete from the template all the tracks you won't be using?
Any chance you can give access to the template for other DAW like Studio One?
Hi Guy, thanks. I am new to this template stuff... is there a video where you explain how to build a template from scratch?
I'm not sure where I'm doing something wrong, but when I set up the groups and the reverbs, the instruments sound kinda...bad? Specifically, it's due to adding them to groups. When I remove an instrument from a group and change it back to stereo out, it sounds just like the sample on its own. But when let's say violins1 is in the STR group, it sounds distant/muted and the mic balance seems off. What am I doing wrong? I'm using Spitfire Symphony Orchestra in Cubase.
I love you!
Volume balance has been a mystery to me since there is no standard reference. To be clear, you aim for the same dB level for all instruments using the same volume/dynamics/velocity?
That's how it starts, but the balance lasts about 5 bars, then it's out of whack. There's no way around this as far as I can see - you have to rebalance as you write.
I agree that -18 dB is usually a good starting point for most instruments.
What is the best way to mix various orchestral libraries in one program inside a DAW?
Use the same reverb for all
I want to do this with EastWest Composer Cloud and Opus... any hint how?
When I saw your typo in "WWE REV", I instantly heard in my head an over-reverberated version of the John Cena theme song.
Can you make a short breakdown of how you made Cubase look like that, and whether it's possible in Cubase 12?
That's the default look of Cubase 13.
Bizarrely, I just spent the afternoon making a template for the Orchestral Tools Berlin Berklee orchestra!
Maybe I missed something, but the export options in Cubase make all the "stem tracks" and separate reverbs unnecessary. But perhaps in other DAWs you want to do it this way.
If you want to "render in place" I can somewhat see the point, but just solo the track and it's solved.
I also have a question about adding a track for each articulation, instead of routing so that all articulations of a specific instrument go to one instrument output. With Guy's way, you have a lot of outputs to control for a single instrument, and if you're using instruments with various mics, or plugins, this will quickly become problematic.
If all you care about is the groups output, it makes a little more sense, but you still have the problem of fixing all the mics.
And about the articulations: expression maps solve it, and they are as fast as separate tracks, and you only have to work with one track per instrument. If you don't have expression maps, separate tracks are faster and better.
Professional delivery standards usually require stems. Final mix down might be in a video editor for example. It will involve foley, ADR and a lot of other audio.
Guy I just finished the template in a weekend course... It took me about 3 months 😅😅
Ha! Well, heaven help you with "Sampled Orchestration in a Month" then.
Lately, I've been hosting samples directly in Cubase, given how much faster machines have become. Do you still use Vienna Ensemble Pro for hosting?
I just read your answer below to someone else. Interesting how you no longer use it. It's the very same reason I haven't been using it - it takes toooo long to add instruments. That process does not support my tendency to add instruments on the fly.
thanks!
How much RAM does your working template use?
or perhaps for us mortals, what's the minimum recommended RAM on a 2024 orchestral template?
You can run it in 16GB if you disable tracks and just enable the tracks you need.
Do you have a video that explains the virtues of Cubase over Logic...for your way of working?
Yes, not a bad idea - I do use both, but mainly Cubase.
18:25 jononotbono 😎
If you've got a big orchestral tutti with strings divisi playing different articulations (so say about 15-20 different sounds) how are you supposed to fit these onto only 16 MIDI channels?
You don't need to have each sound on a different MIDI channel - in fact it's probably simpler to have them all on the same one. Just only record-arm the one you're currently working on, and the rest will ignore any new MIDI notes that come in (but play any that are already recorded on their track).
Doesn't it take a lot more RAM to load an entire instrument for each articulation? Or is there a way in most libraries to load separate articulations?
Some libraries are easier than others, and BBC is not easy to load separate articulations for, but you can disable unused tracks.
For me, I'm having issues with setting up volume (mod, vol, exp) CC 1, 7, 11. It works, sort of: I can press play and it will set the CCs, but as soon as I press record, CC1 drops to 0. On playback it pops back up to 90 and the world is fine. Move to an open bar, it drops to zero; place it over a track with data, it's back to 90. I noticed the downloaded template does not have the initial CC data. Can anyone help?
My biggest mistake was getting Waaay too many boards and VST's that left me spending most of my time previewing all the different sounds each is capable of. Focus, focus,focus.🎹👀
😁🎶🎹🎹🎶Play On
Tried to sign up for the template but it said my address was blocked. Love your videos
DAW challenge: Guy has to use Reaper only for week. 😂
That would be desperately sad.
I noticed Guy didn't mention Reaper at the start either. I've been using Cubase since its ancestors on the Atari ST, and currently have Nuendo 12. Over the years I've tried Reason and Ableton as well. I switched over to Reaper last year, and love its speed of startup, low memory footprint and customizability. The community is awesome too, not least the YouTubers it has. Obviously I love the cost and upgrade policy compared to all their competitors. All my Native Instruments, Steinberg, Arturia, etc. VSTs work great. Because you can free-trial for a lengthy time, there is no excuse not to see if it can work for you.
I use Reaper and I like it. Great and very versatile DAW especially for the bucks.
Me too. Used it since v4. Can't get other DAWs to do as much when I want it.
i use reaper btw
I'm still on Hokikoki 1000...Is it worth upgrading?
Definitely -
How come nobody mentions Studio One?
The Ctrl-Shift doesn't work for me - I'm on a PC, don't know if that matters?
shift+alt