Question! When composing, if you record a melody line using key switches, you can hear the result immediately - with a small edit here and there it should be fine. When you have every articulation split up: do you record everything legato and then start to copy-paste to the other articulations? Do you delete the notes from the other articulations, or do you silence them? And doesn't this way also 'lose' time, because you have to do the whole copy/paste/delete routine?
I play into whatever articulation fits best and then I copy the line over into the other tracks and mute the notes I don't need. If you're set up properly with a good template, then this only takes a few seconds, so not much longer than using key switches or other controllers.
@@AnneKathrinDernComposer plus, as mentioned in the video, using separate tracks allows for unique negative track delays per articulation, which is my main motivation for splitting rather than switching.
Glad you liked it! To be clear, everyone can absolutely use key switches, there's no right or wrong here. Whatever works for you is fine. Just personally I've found it to not be the optimal way for myself, especially in film work.
Excellently explained. Thank you very much for sharing your knowledge with all of us. All the details make sense and are very important for getting the entire job done in the right way 👍👍👍
Once more, a great video! I mostly tend to work with one articulation per track 😂 Horrible for the score prep, but it's really helpful when you have to deal with different offset timings. And I also don't really like keyswitches 😆
I don't know if it's horrible for score prep. Most orchestrators I've worked with actually preferred this over keyswitches. If I were an orchestrator, I think I'd prefer this too - it's more tracks but much clearer.
@@AnneKathrinDernComposer Just wondering about exporting to a score: when exporting each individual track - say, pizzicato on one track, legato on another - doesn't that become an issue? You have to merge the two onto a staff in Sibelius or Finale? When both are on one track and we are using key switches, is it not easier to edit in the scoring software once you've exported the MIDI? Or do you use the Merge MIDI function in Cubase before exporting? Sorry for the long message.
Thanks Anne, really nice and useful walkthrough. There is some MIDI controller software, such as Lemur, that helps with shortcuts, macros, expression maps, and even articulations.
The negative track delay is mostly for compensating for when the actual 'meat' of the sound takes hold... the part of the sound that you want on the beat. Like, a piano sample may contain the sound of the initial hammer movement before it strikes the string, but the actual timing of the note begins when the hammer strikes the string. So, if the MIDI info is aligned to the grid, then to have the actual timing of the note correct you have to pre-offset the sample by the duration between the sample start and when the hammer hits the string. Or, when a violinist begins to play a note, the full intent of when the note is supposed to be on the beat will usually be after you hear the initial bow touching the string, which needs to be compensated for in order to get the timing perceived correctly.
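As a rough sketch of the arithmetic described above (not taken from the video - the numbers here are made up for illustration): the idea is simply to shift every grid-aligned MIDI note earlier by the duration between the sample start and the perceived onset, so the "meat" of the sound lands on the beat.

```python
# Hypothetical sketch: compensating for a sample's pre-attack transient.
# 'attack_ms' is the assumed time between the sample start and the perceived
# note onset (e.g. the hammer actually striking the piano string).

def apply_negative_delay(note_starts_ms, attack_ms):
    """Shift every grid-aligned note earlier so the perceived onset lands on the beat."""
    return [start - attack_ms for start in note_starts_ms]

# Notes quantized to a 500 ms grid, played through a library with a 60 ms pre-attack:
print(apply_negative_delay([0, 500, 1000], 60))  # → [-60, 440, 940]
```

In a DAW this shift is what the negative track delay setting does for you, per track, which is why libraries with different pre-attack times are hard to mix on a single keyswitched track.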
Anne, as always... Thank you for the video. It has shed tons of light on understanding this topic. Now I'm gonna redo my template for 2021. :) Quick question, please: What do you see as the benefits of using a MIDI track over an instrument track for each instrument/articulation? Does it save more memory with a MIDI track, or is it just something you need to do because of VEP 7? Thank you very much!
Question for you Anne: what has been your strategy (emphasis on 'your' as I realize this is very subjective) with organizing all your articulation tracks? When going the route of separate tracks (vs expression maps or keyswitches) the number of tracks escalates very quickly. Do you have a track for, say, every articulation in Cinematic Studio Strings? If so, what organizational approach has worked for you?
I've been adjusting it depending on the capabilities of the libraries and based on my writing. So every library and every section is slightly different to complement my specific programming style. You'll have to figure this out as you go. Even now I'm still changing my track orders if I notice something.
Hey Anne, you said you have it set so regions (midi/audio) automatically get the name of the track. How did you set this up? Is it a Macro? (watching the vid as i type).
Negative track offsets are super useful after they've been set, but setting them is so hard. I wish more companies did what Cinematic Studio Series did in every library and Cinesamples did with CinePerc - that is, listing those in the manual (and also standardizing them; I hate when different round-robins need different offsets).
Personally I keep going back and forth with my opinion on keyswitches and expression maps. Negative track delay is definitely the one thing that bothers me. For now I have decided to go with individual tracks per articulation... and just embrace the chaos. By the way, you can offset the bar on top to start at -2 so 0 aligns with your Music Start marker.
Yes, obviously you can have negative bar numbers but this gets screwed up when importing into Pro Tools. And I’m not going to manually correct 60 cues and waste hours of my time.
Hi again Anne, so if I'm understanding correctly, VE Pro is only loaded onto the server/slave unit. But I was also told that in order to use/connect to an external computer you need the "dongle"! This (multi-system approach and VE Pro) is all so new… I at first thought VE Pro was loaded onto the host and the dongle was plugged into the server, and that gave access? Still in the process of working out the connection and loading issues. My host computer is ready and I'm waiting on my server unit to arrive in 2 days. Thank you
It's tricky that way. I have had to jump to a number of sources for clarification. You can load VE Pro on the host system or a networked system. The benefit of loading it on a networked system is that all of the instruments and effects are completely processed by the other system. The benefit of loading it locally is that, while not having all of the processing handled by another system, it still segregates the processing from your DAW. And being able to process them both separately can drastically improve load/save/import/export and other times, as not everything needs to be synchronized to Cubase, so you can generally unload what is extraneous. Good luck. Godspeed. Happy Holidays to you, your family, and friends.
If you're using just a Main computer running your DAW, and then another computer as a sample server for your libraries, you install Vienna Ensemble Pro 7 on your sample server computer, and use the dongle on it. You do not have to install VEP on your Main DAW computer, you simply use the VEP 'plugin' instance within your DAW. Once you have the two computers talking to each other via a network, your DAW will have that VEP plugin available within your DAW.
Great channel and very informative video! I have a question though, how do you get the CC data (expression, dynamics, breath and so on) to not be affected by the negative track delay? That's the most annoying thing with it for me. Am I missing something obvious?
Ma'am, thank you for such an exhaustive and comprehensive video. Just a question: do you create 2 separate instrument racks for each instrument and assign them to different outputs - like Flute Longs, Flute Shorts, or, let's say, routing Violins Longs and Shorts to different outputs? Wouldn't that make printing the stems and then dealing with the longs and shorts differently easier in the mixing stage?
Great! Thanks a lot! I was wondering if the TC display of the DAW would be the same as the TC of the movie if you start your project on bar 3; I think that is only possible in PT? Should you also mention that bar 1 is bar 3 in your score as well?
Yes, the TC in the movie and DAW always have to be the same. Every DAW has a function to set it up that way. It's easiest in Cubase actually, though PT in general has better TC functionalities. And no, the score looks exactly like the DAW session. So bar 3 is bar 3. Bar 1 is the click start.
Thanks for the great content! I assume there is some sort of latency between a Vienna Ensemble Pro Server and the DAW workstation. If that is the case, I would also assume you cannot use a VEP server for some samples and host other samples on the DAW workstation. Is that correct?
I notice that you set the track volumes with the Track Inspector, which to my understanding links directly to the audio output's fader in the mixer window. Since you're editing this parameter in a MIDI track, where will it link to? I opened up a MIDI track, quickly routed it to a VST, and wasn't even able to edit that volume parameter. Why do you prefer this method over setting CC7 in your MIDI flags?
The MIDI track volume does not link to the audio output's fader. It's a separate entity, not sure if other DAWs have this one. Basically, Cubase has an extra parameter for MIDI track volume that is independent from CC7 and CC11. If you open the MIDI track lanes you can see it there. It would also correspond with the main fader if you have Steinberg's CC121 unit. I use CC11 for automation (and in my MIDI flags) because CC7 links directly to the faders in Kontakt which I'd prefer at a static volume. The audio output fader is only for the groups, not the individual MIDI tracks. I did use CC7 for static volume and CC11 for automation back in Logic though since from what I remember it doesn't have this extra parameter.
@@AnneKathrinDernComposer makes sense, thank you! I've been setting CC7 in my flags and I never mess with it once it's set. Works well but I also don't have VEP (yet) and have been doing the disable instrument track method. Your channel has been so helpful for researching and planning future upgrades!
@@AnneKathrinDernComposer The Cubase MIDI track volume is sending CC7 - at least I strongly suspect that after a bit of testing. And that makes perfect sense; after all, the receiving VST only understands MIDI.
@@rustyshaffer That doesn't seem to be accurate, because CC7 controls the fader inside Kontakt, not the MIDI track fader. I can ride CC7 up and the MIDI track volume down and the two will cancel each other out, so they can't be identical. Likewise, when I raise CC7 up and then the MIDI track volume further up, the sound gets louder, so the two add upon each other - which also indicates they are separate from one another. Otherwise you wouldn't be able to go past value 127.
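A small sketch of why two independent volume stages behave this way (this is illustrative only - it is not Cubase's or Kontakt's actual implementation, and the linear 0-127 mapping is an assumed convention): two stages in series multiply, so raising one while lowering the other can keep the net gain constant, and raising both makes the result louder than either alone.

```python
def cc_to_gain(cc_value):
    """Map a 0-127 controller value to a linear gain (an assumed convention)."""
    return cc_value / 127.0

def combined_gain(stage_a, stage_b):
    """Two independent gain stages in series multiply."""
    return cc_to_gain(stage_a) * cc_to_gain(stage_b)

# Riding one stage up while pulling the other down cancels out:
print(combined_gain(127, 64) == combined_gain(64, 127))  # → True
# Raising both stages adds upon each other (louder than either alone):
print(combined_gain(127, 127) > combined_gain(127, 64))  # → True
```

If the two controls were the same parameter, the second print couldn't be true - a single 0-127 value can't go past its own maximum.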
They will always be the same no matter what. Bar 1 on the score always has to be bar 1 in the DAW as well. Same with Pro Tools session prep. Otherwise nobody can communicate at the sessions.
I understand why key switches are a massive pain. What is your opinion on the "Sound Variations" solution that Studio One came up with? I don't know what it is called in Cubase, but they have a system for keeping keyswitches out of the way of MIDI editing too. I think these features are relatively recent (in the past year). Cakewalk by BandLab has it too.
Damn, I wish I had seen this video a week ago. I was very much against using 1 articulation per track, but after hearing your explanation, it seems like the easiest thing to do. Nothing is more annoying to me than adjusting the MIDI notes off the grid so they sound in time.
There are certainly pros and cons to both ways of doing it but I do prefer split articulations - at the very least longs, shorts, and specials if not more.
Well explained Anne, thanks again. After setting up expression maps for CSS I have pretty much encountered all the problems you've discussed, so I'm with you! Articulations per track is so much less of a headache. Also I wasn't aware of the "parts get track names" preference, really grateful for that tip especially.
Glad this was helpful! Yeah, going through the preferences and key commands in detail really made my life easier. So many manual steps that I could automate. And yeah, articulations per track it is for me as well... a bit of a pain when programming a line that has different articulations but oh well - can't have it all.
@@AnneKathrinDernComposer Enjoyed your video. I was looking for info on vepro with Cubase which is how I found this. I personally find that working with separate articulations locks me into a rigid way of writing and if I want to have some nuances in my writing, I end up taking the easy way out. After setting up macros for CSS, CSB, and CSW, I now highlight my midi and it automatically does my legatos (it detects short, medium, fast velocities), shifting all the notes for me. Because the Cinematic Series uses the universal -60ms track delay, I set that for all tracks and then used the Cubase logical preset editor to create the different offsets, depending on velocity. I now find working with one track per instrument and using expression maps more elegant and as fast or faster than single articulations. I appreciate both ways of doing articulations. Thanks again!
With expression maps you cannot accidentally cut keyswitches. For the negative delays, you can split into tracks (also keeping longs/shorts distinct) with identical delays and continue to use expression maps on those tracks. No quarrel between camps or cornelian choice - just the best of both worlds.
Great insight into your process! When you have 3 flute patches going into FLUTE 1|2 AUX channel, how do you manage to process each of them separately? I'm not sure if External MIDI tracks allow you to process them individually.
As I said in the first part, this is how I choose my splits. If you want to process each flute individually, you’ll have to assign an output to each one in VEP. But I don’t see the point in doing that.
@@AnneKathrinDernComposer What if those 3 flutes are from different libraries (different halls, reverb settings, panning, articulations etc.). Wouldn't you want more individual control to blend them better?
Regarding track delays: do you use the Classic Legato patches in CSS or the ones with the more advanced legato engine? I find the latter ones to be not really usable with track delay and quantized MIDI notes, since the delay time varies between the different legatos, and even if you settle for the medium delay you'll end up with notes that come in too early if they're not using legato. I'm currently thinking about how I'd handle this since I'm reorganizing my stuff and workflow. I know there is a KSP script for it, but I'm not sure I really want to rely on that rather than use the Classic patches.
About Volume (CC7). I understand CC7 is overall volume, and CC11 Expression is another volume control within that. You show a red arrow next to the track to adjust the Volume balances of the tracks, which affects Fader position in the MixConsole view of Cubase. Yet when you show your Cubase setup in the MixConsole, all your Faders are in the nominal position. Am I missing something?
One question I've got: should I group folder tracks under the same name - for example, groups for strings, brass, etc.? I did it, and then when I put some reverb on each instrument, all the sliders move at the same time, which is so annoying... What am I doing wrong? Best regards from an amateur composer in London (but from Spain, lol)
Thank you for sharing, Anne, this is really helpful! I have a question: I set up the volume for each MIDI track in the quick editor as you did, but when I close the project and open it again, the numbers are correct (in the fader), yet when I listen to the music I notice that the volume has changed - it's as if it was reset to the default position. Do you know what the problem could be?
I have this problem with a handful of libraries that don’t seem to read these values at the start of the session unless I touch that button (CineStrings comes to mind). Usually re-saving a patch with that value enabled solves it for me (you only have to do this once). It does seem a handful of libraries have the volume default locked in the script which isn’t helpful…
@@AnneKathrinDernComposer Yeah, that's what it seemed like to me. I noticed that it only happened with some East West libraries, so I tried to fix it by changing the volume in the automation data. But it's a little bit annoying. I will try to re-save the patch to see if that works better. Thank you Anne!! :)!
Hi Anne-Kathrin, thanks for the video. I have a question regarding the group tracks. I get that a group track is established when you set the outputs (do you use rack or instrument tracks?), but why does the group track have a keyboard icon like an instrument? I see instrument icons on those tracks which are supposed to be group tracks. Please explain, thanks
Just wondering about the volume balance between tracks and libraries. If you use the track volume slider for this, aren't you blocking this option for the coming mixing stage in which you always adjust the mix sliders? I recently discovered the option to adjust the track Pre-Gain in the Channel Settings of a track, would this be good as well?
Is there a reason why you don't use bar offset in the Cubase project settings so your music starts on bar one? Does it have to do with timecode not matching up? It seems like that would be easier than having to subtract bars from the current cursor position when you're finding edits or revisions.
I am curious to know the specs of your VEPro server, the machine that hosts all the samples. I am thinking of switching from Disabled Cubase tracks template to VEPro.
So, with the Spitfire products there's a "tightness" control. Can I set the negative track delay to match the long notes and then use the tightness to match the short notes? I really, really like to have all the articulations on the same track.
Do you use Cinematic Studio Series? They only have multi-articulation patches, not individual ones. Sorry if you don't understand my question well, my English is bad.
My two cents on microtiming: I am not THAT meticulous about tweaking the note-on positions because, in my case, I produce final orchestral music which needs to sound as human as possible. I even run a macro that changes note-ons and velocities randomly. So don't think this is absolutely a MUST DO. It depends on what you have to deliver. The rest also works in my setup, and Anne explains it very well :-)
This has been debunked by many professional composers a long time ago. While on paper it seems logical, you don't actually gain "humanity" by de-quantizing. Modern sample libraries have so many samples and variations in them that there is already plenty of natural variation, especially when mixing different libraries made by different people, recorded in different spaces. These products are edited by human beings and therefore already are inconsistent to begin with. Whenever people de-quantize it often just sounds like a sloppy high school band that can't play in time. Any professional studio orchestra plays absolutely tightly together and very much closely in the grid. Their micro-timing is on point - something some of my team members had to learn the hard way when the orchestra played tighter than their mockups and they had to go back in to quantize their MIDI and re-print their stems.
So "working in the grid" - does that mean you're going to quantize all live played notes? Because then the notes will be snapped right in place but doesn't that sound too inhuman and artificial?
Working in the grid is very common. While on paper it seems logical, you don't actually gain "humanity" by de-quantizing. Modern sample libraries have so many samples and variations in them that there is already plenty of natural variation, especially when mixing different libraries made by different people, recorded in different spaces. These products are edited by human beings and therefore already are inconsistent to begin with. Whenever people de-quantize it often just sounds like a sloppy high school band that can't play in time. Any professional studio orchestra plays absolutely tightly together and very much closely in the grid. Their micro-timing is on point - something some of my team members had to learn the hard way when the orchestra played tighter than their mockups and they had to go back in to quantize their MIDI and re-print their stems. An argument can be made for solo parts or something like a piano for example. But you'll find that 99% of the time, it'll sound just fine.
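For anyone unfamiliar with the mechanics being debated above, quantizing just means snapping each note-on time to the nearest grid line - a minimal sketch (the grid size and timings are made-up example values):

```python
def quantize(note_starts_ms, grid_ms):
    """Snap each note-on time to the nearest multiple of the grid spacing."""
    return [round(t / grid_ms) * grid_ms for t in note_starts_ms]

# Loosely played notes snapped to a 500 ms grid (e.g. eighth notes at 60 BPM):
print(quantize([10, 495, 1012], 500))  # → [0, 500, 1000]
```

The argument in the thread is that the natural variation already baked into sampled performances supplies the "human" feel, so snapping the note-ons to the grid like this doesn't make the result sound mechanical.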
@@AnneKathrinDernComposer Thanks a lot for the answer! ;) Now that's quite an interesting aspect, as lots of people usually say: never quantize your recordings - it takes out the "soul". But I totally understand your point, and it seems so logical to me, to be honest.
Strange question, but sure. It's currently quite cold in LA and I recorded these videos at night when it's especially cold due to it being a desert (and I'm also close to the Pacific ocean so...). I try to use the heater only when necessary but I can't use it while I'm recording videos since it's very loud (built into the air conditioning unit). So there, that's why I'm wearing a scarf.
How do you rename your outputs in Vienna Ensemble Pro? For example, I'd like the output to read "Trumpets (1/2)" or something like that, rather than just the numbers.
Never mind, I don't think you can. In Cubase it looked like you had named your outputs, but actually those were just the track names that had been added on. Sorry if I'm being confusing - thanks for this wonderful video.
I also put the CC information of the mic positions in the MIDI flags, does anyone do the same? Is there a better approach to that?
The point is not the bar number, the point is to have at least two empty bars in the front. Whether that’s -1 and 0 or 1 and 2 doesn’t really matter (though you’ll find that the former won’t automatically translate into Pro Tools so you’re adding extra work later down the line and increase the chance for errors between PT session and sheet music).
It's too bad that you're not on Twitter, Anne. I'd be tagging you and sharing your videos left and right. That aside, I have a couple of questions (if you don't mind): 1.) Can VEPro Templates be saved and exported, Imported, moved across network to other machines? 2.) have you considered selling your templates at a relatively low rate ($10 - $17)? How might it benefit you?: Time and effort of developing templates begins to receive compensatory offsets Why would you charge the low rate for such valuable tools?: By including an agreement that the customer is receiving your largest templates "AS IS" and will receive no support (leaving customization their hands with a HEFTY headstart from an otherwise "Zero point"... there will be an understanding that this is a cost relative fair-exchange by which all parties benefit.
So a delay per sound slot would be a great feature for expression maps. There are already feature requests: - forums.steinberg.net/t/delay-parameter-for-expression-maps/121448 - forums.steinberg.net/t/expression-maps-improvements/130520 Given your status, maybe you could add more weight to these. By the way, expression maps already cover different velocities in different patches, so you do not need different tracks just for that issue.
I watched over 9000 tutorials and nobody ever mentioned negative. track. delays. EVER. I wonder if this "latency" I was having for the past 2 years wasn't just that... omgomgomg
Something special about you. You're very likable. Opportunities won't cease knocking. 🙂
Incredibly helpful overview! In hindsight, I feel a little silly I didn't think of some of these ideas earlier... Such as expression maps creating an issue with negative track delays. I think that may be enough to get me to switch over to loading individual articulations. It is a bit of a shame though! I love how convenient a good expression map is.
Thank you for this useful information - one of the best videos about templates I have ever seen!
I'm glad these were helpful! Thank you for watching!
@@AnneKathrinDernComposer Rewatched it - you were asking about sample libraries that use the same sample start for all articulations in order to be suitable for keyswitches. Afaik, Audio Imperia does this with Nucleus and Jaeger and basically all their libraries. The default setting is 125 ms and can be set from 0 to 250.
This solved so many issues and annoyances I've run into over the years. Great content!
I'm glad this was helpful to you!
I’m so happy to have found this video on setting up plugins in a template…all the articulations, groups, it can be daunting but you broke everything in excellent detail. Really all your tutorials I’ve seen so far are like this, pure gold
Thanks so much! I really appreciate all of your videos!
Thank you so much for the support!
Totally can relate to finding out about a key shortcut years after working with a software.
The pain is real... to be fair, some of them are also labeled very badly - as in it's not entirely clear at first sight what they do.
@@AnneKathrinDernComposer Cubase and Protools shortcuts are one thing. I spend most of my time in Sibelius. Fixed that rabbit whole finale with a good macro controller. Love the channel. It is great to see how other composers think/work.
Ah yes, it definitely also helps in notation software - especially Sibelius, where nothing is named the way you'd think and everything is scattered in sub-menus that aren't super logical.
wow...just wow. thank you. dreading implementing some of this stuff. I use only spitfire orchestra right now so expression maps are keeping things clean.. but the idea of adding other libraries is scary now! This is really helpful conceptually, thank you
Oh, yay, been waiting for this! I'm getting my popcorn.... whoohooo.
Haha, I know! I'm slow currently as work is ramping up but I'll get it all done eventually. :-)
Anne-Kathrin Dern Don’t ever, ever feel like you need to explain or apologize. You’re doing all this out of the goodness of your heart and your work obviously must come first. We appreciate all that you do for us.
I know this video is more than a year old but I want to complement something about key switches for new viewers.
Some DAWs have learned that key switches exist and created solutions for the problem of accidentally deleting them; I'll talk about the ones I know. Studio One has "Sound Variations", which is like a lane inside the piano roll where you input your articulations - it's very organized and I've personally never had problems with it. FL Studio and Reaper have a third-party free plugin called BRSO Articulate that lets you choose the articulation using the color of the note. I also think Cubase has something like Studio One, but I'm not a Cubase user so I'm not completely sure. So these are safe solutions we have. About the first problem she mentioned, the offsets being different - we key switch lovers can just sit and cry :(
I personally like key switches because of my style of composition, I write and orchestrate at the same time so it's much more practical for me. It's always about feeling comfortable while making what you like :)
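The color-based approach mentioned above can be sketched roughly like this (BRSO Articulate is a real plugin, but this mapping, these articulation names, and the keyswitch note numbers are all made up for illustration - the plugin's actual internals are not shown here):

```python
# Hypothetical mapping from a note's color index to an articulation keyswitch.
# MIDI note 24 = C0, a common register for keyswitches, well below playable range.
COLOR_TO_KEYSWITCH = {
    0: ("legato", 24),     # e.g. red notes
    1: ("staccato", 25),   # e.g. green notes
    2: ("pizzicato", 26),  # e.g. blue notes
}

def keyswitch_for(color_index):
    """Look up which articulation and keyswitch note a note of this color implies."""
    return COLOR_TO_KEYSWITCH[color_index]

print(keyswitch_for(1))  # → ('staccato', 25)
```

The appeal is that the keyswitch data lives outside the note lane entirely, so ordinary MIDI editing can't accidentally delete it.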
I really learn a lot of new things with your videos. Thanks a lot, Anne.
I’m glad these videos are useful!
Thank you for this. You have helped ease my fears of VEP. I wish I had seen these videos a year ago. You have a new fan!
Thanks Anne, this is great! Different workflows for everyone, but I really appreciated your explanation of why key-switching can be problematic, with regard to track delays. I previously used articulation mapping in an attempt to keep my setup 'clean' and/or flexible, but it actually created more issues than it solved. Another reason would be reverb, as I find shorts and longs need different amounts of early/late reflections - you can't really set this up on a single track.
No, but you could split the shorts out to a separate track.
Maybe you saw the link I provided earlier, maybe not... but believe me, it is so helpful, organized by brands and libraries, and still growing... thanks for everything, Anne.
Finally, someone with the right colors in their template!
Thank you Anne for such clear help. I’m needing even a bit more detail on the initial set up process but this was amazingly detailed. I’ll watch it several times I’m sure. Thank you
Glad this is helpful!
Aloha Anne-Kathrin,
As an older learner in the digital composition world, I deeply appreciate your tutorial videos. Clear, concise, and very informative, they have helped me immensely. Thank you!
At some point I would love to see a video on your key commands setup for Cubase. I have always been a pen-and-paper composer who now works in the digital realm. Your expertise would be welcomed!
Thanks for this video. I am constantly trying to optimize my workflow, but I am still learning a lot. Could you say, please, how much your royalty was for one soundtrack on average? How should one correctly price a minute of music - how much should it cost? And how does the process of discussing cost with a client go?
With most libraries I have found, using key switches or expression maps is fine as long as you split the longs and shorts. To me, it's much easier than dealing with a template of 1,200 tracks listing every single articulation. I personally split out the legato, longs, and shorts. I have also found that most of my sample libraries have a track delay adjustment within the interface which lines up all the shorts/longs, etc.
Very helpful, thank you!
Great Video, Anne. I'll be moving from Logic to Cubase this year.
Question! When composing, and you record a melody-line, by using key-switches you can hear the result immediately. With a small edit here and there it should be fine.
When you have every articulation split up:
- do you record everything legato and then start copying and pasting to the other articulations?
- do you delete the notes from the other articulations or do you silence them?
- and doesn't this way also 'lose' time because you have to do the whole copy/paste/delete thing?
I play into whatever articulation fits best and then I copy the line over into the other tracks and mute the notes I don't need. If you're set up properly with a good template, then this only takes a few seconds, so not much longer than using key switches or other controllers.
@@AnneKathrinDernComposer plus, as mentioned in the video, using separate tracks allows for unique negative track delays per articulation, which is my main motivation for splitting rather than switching.
Thank you for explaining so well why not to use key switches. I'm going to redo my templates
Glad you liked it! To be clear, everyone can absolutely use key switches, there's no right or wrong here. Whatever works for you is fine. Just personally I've found it to not be the optimal way for myself, especially in film work.
Excellently explained. Thank you very much for sharing your knowledge with all of us. All the details make sense and are very important to get the entire job done the right way 👍👍👍
Thank you for watching!
@@AnneKathrinDernComposer Your videos are a gold mine for everyone who wants to forge a path into the film music world. Simply "top-notch."
Thank you! I'm glad to hear that!
Fantastic info Thank you!
Thanks for watching!
Great Video I really like your channel thank you
Once more, a great video! I mostly tend to work with one articulation per track 😂 Horrible for the score prep, but it is really helpful when you have to deal with different offset timings. And I also don't really like keyswitches 😆
I don't know if it's horrible for score prep. Most orchestrators I've worked with actually preferred this over keyswitches. If I were an orchestrator, I think I'd prefer this too - it's more tracks but much clearer.
@@AnneKathrinDernComposer Oh, well that’s great to hear!
@@AnneKathrinDernComposer Just wondering about exporting to a score, when exporting each individual track, say pizzicato on one track, legato on another, doesn't that become an issue? You have to merge the two onto a staff in Sibelius or Finale? When both are on one track and we are using key switches, is it not easier to edit in the scoring software once you've exported the midi? Or do you use the Merge Midi function in Cubase before exporting. Sorry for the long message.
Thanks Anne, really nice and useful walkthrough.
There is MIDI controller software such as Lemur that helps with shortcuts, macros, expression maps, and even articulations.
The negative track delay is mostly for compensating for when the actual 'meat' of the sound takes hold... the part of the sound that you want on the beat.
Like, a piano sample may contain the sound of the initial hammer movement before it strikes the string, but the actual timing of the note begins when the hammer strikes the string. So, if the MIDI info is aligned to the grid, then to have the actual timing of the note correct you have to pre-offset the sample by the duration between the sample start and when the hammer hits the string.
Or, when a violinist begins to play a note, the full intent of when the note is supposed to be on the beat will usually be after you hear the initial bow touching the string, which needs to be compensated for in order to get the timing perceived correctly.
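The compensation described in the comments above can be sketched in a few lines. This is a hedged, purely illustrative example - the function name, times, and the 60 ms delay value are assumptions, not taken from any particular DAW or library:

```python
# Sketch: a negative track delay shifts note-ons earlier so the "meat"
# of each sample (hammer strike, bow catching the string) lands on the
# grid. All names and values here are illustrative.

def apply_track_delay(note_on_times_ms, delay_ms):
    """Shift every note-on earlier by delay_ms (a negative track delay)."""
    return [t - delay_ms for t in note_on_times_ms]

# e.g. a patch whose samples "speak" roughly 60 ms after the sample start:
grid_times = [0.0, 500.0, 1000.0]          # quantized note-ons on the beat
playback = apply_track_delay(grid_times, 60.0)
print(playback)  # [-60.0, 440.0, 940.0] -> the audible attack is on the beat
```

This is also why mixing articulations with different speak times on one keyswitched track is awkward: a single per-track delay can only be correct for one of them.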
So much work 🥴 Congrats on being able to handle all this 💙
Hi! Great video. Why do you have to leave couple of bars of headroom for the music start?
For the click track, a possible upbeat / pickup, a clean music entry, the MIDI flags... there are many reasons.
@@AnneKathrinDernComposer Thanks - Good to know!
Incredible! 🙌🏾
Anne, as always... Thank you for the video. It has shed tons of light on understanding this topic. Now I’m gonna redo my template for 2021. :)
Quick question please: what do you see as the benefits of using a MIDI track over an instrument track for each instrument/articulation? Does a MIDI track save more memory, or is it just something you need to do because of VEP 7? Thank you very much!
Question for you Anne: what has been your strategy (emphasis on 'your' as I realize this is very subjective) with organizing all your articulation tracks? When going the route of separate tracks (vs expression maps or keyswitches) the number of tracks escalates very quickly. Do you have a track for, say, every articulation in Cinematic Studio Strings? If so, what organizational approach has worked for you?
I've been adjusting it depending on the capabilities of the libraries and based on my writing. So every library and every section is slightly different to complement my specific programming style. You'll have to figure this out as you go. Even now I'm still changing my track orders if I notice something.
Hey Anne, you said you have it set so regions (midi/audio) automatically get the name of the track. How did you set this up? Is it a Macro? (watching the vid as i type).
I don't think it's a macro. I think it's a checkbox in the preferences menu.
@@AnneKathrinDernComposer Thanks! Love the vid.
I have the same juice bottles as you. Nice template btw
lol @ the overdubs “look ma, no hands”
Negative track offsets are super useful once they've been set, but setting them is so hard. I wish more companies did what Cinematic Studio Series did in every library and Cinesamples did with CinePerc, that is, listing those in the manual (and also standardizing them; I hate when different round-robins need different offsets).
Personally I keep going back and forth with my opinion on keyswitches and expression maps. Negative track delay is definitely the one thing that bothers me. For now I have decided to go with individual tracks per articulation... and just embrace the chaos. By the way, you can offset the bar on top to start at -2 so 0 aligns with your Music Start marker.
Yes, obviously you can have negative bar numbers but this gets screwed up when importing into Pro Tools. And I’m not going to manually correct 60 cues and waste hours of my time.
Hi again Anne. So if I'm understanding correctly, VE Pro is only loaded onto the server/slave unit. But I was also told that in order to use/connect to an external computer you need the "dongle"! This (multi-system approach and VE Pro) is all so new... I at first thought VE Pro was loaded onto the host and the dongle was plugged into the server and that gave access? Still in the process of laying out the connection and loading issues. My host PC is ready and I'm waiting on my server unit to arrive in two days. Thank you!
It's tricky that way. I have had to jump to a number of sources for clarification.
You can load VE Pro on the Host System or a network system.
The benefit of loading in on a network system is that all of the instruments and effects are completely processed by the other system.
The benefit of loading it locally is that (while not having all of the processing handled by another system) it still segregates the processing from your DAW. Being able to process them both separately can drastically improve load/save/import/export and other times, as not everything needs to be synchronized to Cubase, so you can generally unload what is extraneous.
Good luck.
Godspeed.
Happy Holidays to you, your family, and friends.
If you're using just a Main computer running your DAW, and then another computer as a sample server for your libraries, you install Vienna Ensemble Pro 7 on your sample server computer, and use the dongle on it. You do not have to install VEP on your Main DAW computer, you simply use the VEP 'plugin' instance within your DAW. Once you have the two computers talking to each other via a network, your DAW will have that VEP plugin available within your DAW.
Great channel and very informative video! I have a question though, how do you get the CC data (expression, dynamics, breath and so on) to not be affected by the negative track delay? That's the most annoying thing with it for me. Am I missing something obvious?
Ma'am, thank you for such an exhaustive and comprehensive video. Just a question: do you create two separate instrument racks for each instrument and assign them to different outputs, like Flute Longs and Flute Shorts, or say, route Violins Long and Short to different outputs? Wouldn't that make printing the stems and then dealing with the longs and shorts differently easier at the mixing stage?
Great! Thanks a lot! I was wondering if the TC display of the DAW would be the same as the TC on the movie if you start your project on bar 3; I think that is only possible in PT? Should you also mention that bar 1 is bar 3 in your score as well?
Yes, the TC in the movie and DAW always have to be the same. Every DAW has a function to set it up that way. It's easiest in Cubase actually, though PT in general has better TC functionalities. And no, the score looks exactly like the DAW session. So bar 3 is bar 3. Bar 1 is the click start.
Good video! :D
Regarding using key switches vs. using separate tracks per articulation, how does that work when it's time to export into a notation package?
No, you would not have an issue. I have a video on MIDI prep for orchestration that shows how it's done.
Great tutorial! Have you ever tried a Breath Controller for expression? Any opinions? Thanks for another informative vid :)
Thanks for the great content! I assume there is some sort of latency between a Vienna Ensemble Pro Server and the DAW workstation. If that is the case, I would also assume you cannot use a VEP server for some samples and host other samples on the DAW workstation. Is that correct?
Anne, remind me we need to have a big nerdy chat about maps & keyswitches & negative track delay sometime :)
Yes, absolutely!
I notice that you set the track volumes with the Track Inspector, which to my understanding links directly to the audio output's fader in the mixer window. Since you're editing this parameter in a MIDI track, where will it link to? I quickly opened up a MIDI track, routed it to a VST, and wasn't even able to edit that volume parameter. Why do you prefer this method over setting CC7 in your MIDI flags?
The MIDI track volume does not link to the audio output's fader. It's a separate entity, not sure if other DAWs have this one. Basically, Cubase has an extra parameter for MIDI track volume that is independent from CC7 and CC11. If you open the MIDI track lanes you can see it there. It would also correspond with the main fader if you have Steinberg's CC121 unit. I use CC11 for automation (and in my MIDI flags) because CC7 links directly to the faders in Kontakt which I'd prefer at a static volume. The audio output fader is only for the groups, not the individual MIDI tracks. I did use CC7 for static volume and CC11 for automation back in Logic though since from what I remember it doesn't have this extra parameter.
@@AnneKathrinDernComposer makes sense, thank you! I've been setting CC7 in my flags and I never mess with it once it's set. Works well but I also don't have VEP (yet) and have been doing the disable instrument track method. Your channel has been so helpful for researching and planning future upgrades!
That works perfectly fine too the way you’re describing. Plenty of ppl do it that way so don’t feel like you have to do it my way. :-)
@@AnneKathrinDernComposer The Cubase MIDI track volume is sending CC7 - at least I strongly suspect that after a bit of testing. And that makes perfect sense; after all, the receiving VST only understands MIDI.
@@rustyshaffer That doesn't seem to be accurate because CC7 controls the fader inside Kontakt, not the MIDI track fader. I can ride CC7 up and the MIDI track volume down and the two will cancel each other out so they can't be identical. Likewise when I rise CC7 up and then the MIDI track volume further up, the sound gets louder so the two add upon each other which also indicates they are separate from one another. Otherwise you wouldn't be able to go past value 127.
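The cancel-out test described in the comment above makes sense if you treat the two controls as independent gain stages that multiply. A hedged sketch - the linear 0-127 mapping is an assumption for illustration; the real scaling curves inside Cubase and Kontakt are not documented here:

```python
# Sketch: two independent volume stages multiply, so raising one while
# lowering the other cancels out, and raising both stacks louder.
# The linear cc/127 mapping is an assumption, not Cubase's actual curve.

def cc_to_gain(cc_value):
    """Map a 0-127 controller value to a linear gain of 0.0-1.0."""
    return cc_value / 127.0

def combined_gain(cc7, midi_track_volume):
    """Overall level if CC7 and the MIDI track volume are separate stages."""
    return cc_to_gain(cc7) * cc_to_gain(midi_track_volume)

# Riding CC7 up while pulling the track volume down roughly cancels:
print(round(combined_gain(127, 64), 2))   # ~0.5
print(round(combined_gain(64, 127), 2))   # ~0.5 - same result, as observed
```

If the two faders were the same parameter, the second value would simply overwrite the first instead of combining like this - which is the behavior the comment above reports against.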
I set the beginning to bar -3 or -4, so I don't need the music start marker. Thanks for your video ☺️
You still need the music start marker because every composer does it differently. Better safe than sorry, misunderstandings can be costly at sessions.
I do it like this because later I can easily switch between my Cubase session and my score editor and the bar numbers are the same ☺️
They will always be the same no matter what. Bar 1 on the score always has to be bar 1 in the DAW as well. Same with Pro Tools session prep. Otherwise nobody can communicate at the sessions.
I understand why key switches are a massive pain. What is your opinion on the "Sound Variations" solution that Studio One came up with? I don't know what it is called in Cubase, but they have a system for keeping keyswitches out of the way of MIDI editing too. I think these features are relatively recent (within the past year). Cakewalk by BandLab has it too.
Damn, I wish I had seen this video a week ago. I was very much against using one articulation per track, but after hearing your explanation, it seems like the easiest thing to do. Nothing is more annoying to me than adjusting the MIDI notes off the grid so they sound in time.
There are certainly pros and cons to both ways of doing it but I do prefer split articulations - at the very least longs, shorts, and specials if not more.
Well explained Anne, thanks again. After setting up expression maps for CSS I have pretty much encountered all the problems you've discussed, so I'm with you! Articulations per track is so much less of a headache. Also I wasn't aware of the "parts get track names" preference, really grateful for that tip especially.
Glad this was helpful! Yeah, going through the preferences and key commands in detail really made my life easier. So many manual steps that I could automate. And yeah, articulations per track it is for me as well... a bit of a pain when programming a line that has different articulations but oh well - can't have it all.
@@AnneKathrinDernComposer Enjoyed your video. I was looking for info on vepro with Cubase which is how I found this. I personally find that working with separate articulations locks me into a rigid way of writing and if I want to have some nuances in my writing, I end up taking the easy way out. After setting up macros for CSS, CSB, and CSW, I now highlight my midi and it automatically does my legatos (it detects short, medium, fast velocities), shifting all the notes for me. Because the Cinematic Series uses the universal -60ms track delay, I set that for all tracks and then used the Cubase logical preset editor to create the different offsets, depending on velocity. I now find working with one track per instrument and using expression maps more elegant and as fast or faster than single articulations. I appreciate both ways of doing articulations. Thanks again!
With expression maps you cannot accidentally cut keyswitches. For the negative delays, you can split into tracks (also keeping longs and shorts distinct) with identical delays and continue to use expression maps on those tracks. No Cornelian choice - just the best of both worlds.
Great insight into your process!
When you have 3 flute patches going into FLUTE 1|2 AUX channel, how do you manage to process each of them separately?
I'm not sure if External MIDI tracks allow you to process them individually.
As I said in the first part, this is how I choose my splits. If you want to process each flute individually, you’ll have to assign an output to each one in VEP. But I don’t see the point in doing that.
@@AnneKathrinDernComposer What if those 3 flutes are from different libraries (different halls, reverb settings, panning, articulations etc.). Wouldn't you want more individual control to blend them better?
No, that’s why I say to set up the patches properly in part 1.
@@AnneKathrinDernComposer Right...I got that. But for you personally, you don't prefer individual control on processing each library separately ?
No, personally I do not. Others do though. I haven’t found it to improve my sound.
Regarding track delays: do you use the Classic Legato patches in CSS or the ones with the more advanced legato engine? I find the latter not really usable with track delay and quantized MIDI notes, since the delay time varies between the different legatos, and even if you settle for the medium delay you'll end up with notes that come in too early when they're not using legato.
I'm currently thinking about how I'd handle this since I'm reorganizing my stuff and workflow. I know there is a KSP script for it, but I'm not sure I really want to rely on that rather than use the Classic patches.
About Volume (CC7). I understand CC7 is overall volume, and CC11 Expression is another volume control within that.
You show a red arrow next to the track to adjust the Volume balances of the tracks, which affects Fader position in the MixConsole view of Cubase. Yet when you show your Cubase setup in the MixConsole, all your Faders are in the nominal position. Am I missing something?
Hi again Kathrin, I see VST instruments in your template in addition to the MIDI tracks - what do you use those for?
Hello! Do you prefer CC-controlled (automated) staccato or velocity-controlled for strings and horns?
One question I've got is: should I group folder tracks under the same name, for example groups for strings, brass, etc.? I did that, and then when I put some reverb on each instrument, all the sliders move at the same time, which is so annoying... what am I doing wrong? Best regards from an amateur composer in London (but from Spain, lol).
Thank you for sharing Anne, this is really helpful!
I have a question, I set up the volume for each MIDI track in the quick editor as you did, but when I close the project and I open it again, the numbers are correct (in the fader) but when I listen to the music I notice that the volume changed, is like if it was reset to the default position. Do you know what could be the problem?
I have this problem with a handful of libraries that don’t seem to read these values at the start of the session unless I touch that button (CineStrings comes to mind). Usually re-saving a patch with that value enabled solves it for me (you only have to do this once). It does seem a handful of libraries have the volume default locked in the script which isn’t helpful…
@@AnneKathrinDernComposer yeahh. That's what it seemed to me. I notice that it only happened with some East West libraries so I tried to fix it by changing the volume in the automation data. But it's a little bit annoying. I will try to re-save the patch to see if that works better. Thank you Anne!! :)!
Hi Kathrin, thanks for the video. I have a question regarding the group tracks. I get that a group track is established when you set the outputs (do you use rack or instrument tracks?), but why does the group track have a keyboard icon like an instrument? I see instrument icons on those tracks which are supposed to be group tracks. Please explain, thanks!
Just wondering about the volume balance between tracks and libraries. If you use the track volume slider for this, aren't you blocking this option for the coming mixing stage in which you always adjust the mix sliders? I recently discovered the option to adjust the track Pre-Gain in the Channel Settings of a track, would this be good as well?
Is there a reason why you don't use bar offset in the Cubase project settings so your music starts on bar one? Does it have to do with timecode not matching up? It seems like that would be easier than having to subtract bars from the current cursor position when you're finding edits or revisions.
Hey Anne, just wanted to know if your studio is a home studio or a different place. Cheers!
It's currently a home studio since office spaces have been closed for nearly a year now.
I am curious to know the specs of your VEPro server, the machine that hosts all the samples. I am thinking of switching from Disabled Cubase tracks template to VEPro.
So, with the Spitfire products there's a "tightness" control. Can I set the negative track delay to match the long notes and then use the tightness to match the short notes? I really, really like to have all the articulations on the same track.
Is this the same setup for all the new videos? Thx.
Do you use the Cinematic Studio Series? They only have multi-articulation patches, not single ones. Sorry if my question isn't clear; my English is bad.
If we insert a MIDI note to activate a key switch at the beginning of a track, how do we activate another key switch in the middle of the track?
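In most keyswitched libraries, a keyswitch is just another note event, so you place one wherever the articulation should change, not only at the start. A minimal illustrative sketch - the keyswitch pitches, times, and helper function here are hypothetical, not from any specific library:

```python
# Sketch: keyswitches are ordinary note events placed just before the
# notes they should affect. Pitches and times below are illustrative.

KS_LEGATO = 25     # hypothetical keyswitch pitch (C#0)
KS_STACCATO = 24   # hypothetical keyswitch pitch (C0)
KEYSWITCHES = {KS_LEGATO, KS_STACCATO}

events = [            # (time_ms, pitch)
    (0,   KS_LEGATO),     # select legato at the start
    (10,  60),            # musical notes...
    (490, 62),
    (960, KS_STACCATO),   # second keyswitch, mid-track
    (970, 64),            # ...this note now plays staccato
]

def articulation_at(time_ms, events, keyswitches):
    """Return the most recent keyswitch at or before time_ms."""
    current = None
    for t, pitch in events:
        if t > time_ms:
            break
        if pitch in keyswitches:
            current = pitch
    return current

print(articulation_at(500, events, KEYSWITCHES))  # 25 -> legato
print(articulation_at(970, events, KEYSWITCHES))  # 24 -> staccato
```

This "most recent keyswitch wins" behavior is also why an accidental deletion or a region split can silently change every note that follows - one of the pitfalls discussed in the video.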
My two cents on microtiming: I am not THAT meticulous about tweaking the note-ons because, in my case, I produce final orchestral music which needs to sound as human as possible.
I even run a macro that randomly changes note-ons and velocities.
So don't think this is absolutely a MUST DO. It depends on what you have to deliver.
The rest is also working in my setup and Anne explains it very well :-)
This has been debunked by many professional composers a long time ago. While on paper it seems logical, you don't actually gain "humanity" by de-quantizing. Modern sample libraries have so many samples and variations in them that there is already plenty of natural variation, especially when mixing different libraries made by different people, recorded in different spaces. These products are edited by human beings and therefore already are inconsistent to begin with. Whenever people de-quantize it often just sounds like a sloppy high school band that can't play in time. Any professional studio orchestra plays absolutely tightly together and very much closely in the grid. Their micro-timing is on point - something some of my team members had to learn the hard way when the orchestra played tighter than their mockups and they had to go back in to quantize their MIDI and re-print their stems.
Have you tried the KSP script that auto delays for css/csb? I don’t think I could use css without it
So "working in the grid" - does that mean you're going to quantize all live played notes? Because then the notes will be snapped right in place but doesn't that sound too inhuman and artificial?
Working in the grid is very common. While on paper it seems logical, you don't actually gain "humanity" by de-quantizing. Modern sample libraries have so many samples and variations in them that there is already plenty of natural variation, especially when mixing different libraries made by different people, recorded in different spaces. These products are edited by human beings and therefore already are inconsistent to begin with. Whenever people de-quantize it often just sounds like a sloppy high school band that can't play in time. Any professional studio orchestra plays absolutely tightly together and very much closely in the grid. Their micro-timing is on point - something some of my team members had to learn the hard way when the orchestra played tighter than their mockups and they had to go back in to quantize their MIDI and re-print their stems. An argument can be made for solo parts or something like a piano for example. But you'll find that 99% of the time, it'll sound just fine.
@@AnneKathrinDernComposer Thank you very much for the answer! ;) Now that's quite an interesting point, as lots of people usually say: never quantize your recordings - it takes out the "soul." But I totally understand your point and it seems so logical to me, to be honest.
May I ask why the scarf? Is the room not heated? Off topic, I know, but I'm curious. And thanks for the video, of course.
Strange question, but sure. It's currently quite cold in LA and I recorded these videos at night when it's especially cold due to it being a desert (and I'm also close to the Pacific ocean so...). I try to use the heater only when necessary but I can't use it while I'm recording videos since it's very loud (built into the air conditioning unit). So there, that's why I'm wearing a scarf.
@@AnneKathrinDernComposer The scarf brand should sponsor you for making their products "POP"!!!
Why is there all that midi region at the start?
Millisecond-nudging MIDI tracks into the sweet spot has become a full-time job really... 😑🎶✨
Yeah... one I'm not willing to do anymore. I'd rather deal with split patches then. :-)
How do you rename your outputs in Vienna Ensemble Pro? For example, I'd like the output to read "Trumpets (1/2)" or something like that, rather than just the numbers.
Never mind, I don't think you can. In Cubase it looked like you had named your outputs, but actually those were just the track names that had been added on. Sorry if I'm being confusing - thanks for this wonderful video.
I also put the CC information of the mic positions in the MIDI flags, does anyone do the same? Is there a better approach to that?
You can start the music at bar 1 if you enter a positive number in Project | Project Setup... | Bar Offset.
The point is not the bar number, the point is to have at least two empty bars in the front. Whether that’s -1 and 0 or 1 and 2 doesn’t really matter (though you’ll find that the former won’t automatically translate into Pro Tools so you’re adding extra work later down the line and increase the chance for errors between PT session and sheet music).
@@AnneKathrinDernComposer I see. I didn't know about the potential issues. I am only in Cubase.
It's too bad that you're not on Twitter, Anne. I'd be tagging you and sharing your videos left and right.
That aside, I have a couple of questions (if you don't mind):
1.) Can VEPro Templates be saved and exported, Imported, moved across network to other machines?
2.) have you considered selling your templates at a relatively low rate ($10 - $17)?
How might it benefit you?:
Time and effort of developing templates begins to receive compensatory offsets
Why would you charge the low rate for such valuable tools?:
By including an agreement that the customer is receiving your largest templates "AS IS" and will receive no support (leaving customization in their hands, with a HEFTY head start from an otherwise "zero point"), there will be an understanding that this is a cost-relative fair exchange by which all parties benefit.
Wait... did you say you have around 700 tracks in your template? How come there are that many tracks?
Good job beautiful!!
F@%king key switches. I don't like them either. Pain in the a$$!!! 😂
So a delay per sound slot would be a great feature for expression maps. There are already feature requests:
- forums.steinberg.net/t/delay-parameter-for-expression-maps/121448
- forums.steinberg.net/t/expression-maps-improvements/130520
Given your status, maybe you could add more weight to these.
By the way, expression maps already cover different velocities in different patches, so you do not need different tracks just for that issue.
I watched over 9000 tutorials and nobody ever mentioned negative. track. delays. EVER.
I wonder if this "latency" I was having for the past 2 years wasn't just that... omgomgomg