For sending to be mixed you can always zip the files which will reduce the size, but not effect the quality, they will just need to unzip when they get them.
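To the zip point: lossless archiving really does give back the exact same bytes. A quick Python sketch (the "WAV" payload here is just stand-in bytes, not a real audio file):

```python
import os
import tempfile
import zipfile

# Stand-in bytes for a WAV file (a real file works the same way).
payload = bytes(range(256)) * 1000

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mix.wav")
    archive = os.path.join(tmp, "stems.zip")
    with open(src, "wb") as f:
        f.write(payload)

    # Compress with DEFLATE...
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(src, arcname="mix.wav")
    zipped_size = os.path.getsize(archive)

    # ...then extract and compare.
    with zipfile.ZipFile(archive) as z:
        restored = z.read("mix.wav")

assert restored == payload          # bit-for-bit identical: no quality loss
assert zipped_size < len(payload)   # smaller on disk (repetitive data here)
```

Note that real audio shrinks far less than this repetitive test payload; zipping WAVs mainly helps bundle them for transfer, while FLAC compresses actual audio much better.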
Ring the bell! It's a banner day! I did something right all on my own. I've been using 320 forever. I've been using it so long, in fact, my DAW calls it "Insane, 320 kbps"
I remember something I had that called 320 kbps "Insane". I have been using 320 since the days when I started creating digital files for my iPod. I use m4a for my music files now.
How about the fact that the major streaming platforms (RUclips, TikTok, etc.) transmit music at a maximum of 128 kbps? So isn't it better to use the Presonus converters, which are surely better than those platforms' converters, to get the best results at that bitrate and upload the video already at 128 kbps?
Hi Joe, when are you starting that program where you mix with us? I am a total beginner using Studio One 6.1, and it would be nice to learn from the current version of Studio One.
Joe. Thanks for the video. One other issue with mp3s getting sent instead of .wav files. The mp3 encoding process messes with file size (length) as well which will mess with the tempo and the relative positions of the file in the DAW. That causes a whole bunch of other problems. Blessings!
Thanks for a great video! The problem with MP3 is that the algorithm is very, very old now. It's the sound equivalent of the JPEG image format, which is also as widely used as it is antiquated. With MP3, 320 kbps is the highest bitrate you can go - and it still sounds noticeably worse than the source PCM. Meanwhile, newer formats do better: Apple's AAC sounds much better even at 256 kbps, and the open-source FLAC is lossless altogether. But they have slightly higher processing requirements... and of course, there's that annoying division. Apple devices will play AAC but not FLAC natively (there are ways to do it, but not for your average user), while Android and Windows will happily play FLAC but may leave the average user facing a "file format unknown" prompt for some AAC files. To make a long story short: MP3 is a destructive compression algorithm that runs at 16-bit resolution at the highest. And while it CAN go beyond 44.1 kHz, that's not often used. No matter the resolution and sample rate, the information is stored in compressed blocks that cut off inaudible frequencies (lower than 20 Hz, higher than 20 kHz) and then apply Huffman coding. And this is why I draw the comparison to JPEG images: they use a variant of the very same algorithm. What it does is look for common binary sequences and store them as something simpler. Think of words like "the" and "and". If I were to save "the" as simply "a" and "and" as "b" in a big book, I'd save a lot of space, right? But that's not destructive. What's destructive is when you find sequences that are ALMOST like something you've defined as a common sequence, and then just round them off to make the damn shoe fit. So "they" will also be saved as "a", and "ant" will also be saved as "b". Now I've saved even more space, but the meaning gets fuzzier...
Even 320 kbps MP3 - sometimes wrongly treated as "lossless" - is degraded ever so slightly, which becomes especially audible on precise audio equipment (such as good studio monitors or analytical studio headphones) in songs with a wide dynamic range. FLAC is much more intelligent: it analyzes each block, sees how it can be compressed most effectively, and stores that info for each block in a lookup table. By doing this, one segment of 16-bit audio can be compressed to 12 bits with no audible loss, because with 16 bits you have 65,536 grades of resolution, but if only 3,500 are being used, the 12-bit space with its 4,096 grades is enough. You just need to redefine high and low - and convert - and it's done without any degradation at all. Truly lossless. AAC, on the other hand, is still lossy - it must round down to accommodate the bitrate requirement - but it's much less intrusive and sounds much better than MP3. And oh yeah - they both fully support 24-bit, 96 kHz as well, although most (non-cinematic) music seems to be made in 24-bit 48 kHz these days.
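The bit-depth arithmetic in the comment above can be checked directly; a minimal Python sketch:

```python
import math

bits = 16
total_levels = 2 ** bits      # 65,536 possible values in 16-bit audio
levels_used = 3_500           # levels actually present in this block

# Smallest bit depth that can still index every level in use.
bits_needed = math.ceil(math.log2(levels_used))
print(bits_needed)            # 12 -> a 12-bit space (4,096 levels) is enough

assert total_levels == 65_536
assert 2 ** bits_needed >= levels_used
```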
Unfortunately S1 is not accurate with MP3 bounce in/out markers or loop ranges. When I export to WAV it's fine, but MP3 is not accurate and always adds some extra milliseconds. I work with broadcast and commercials, and clients always ask me for MP3 files, so I have to be very precise with audio length; I always have to do it outside of S1 because of this issue. Don't get me wrong, I love S1, it's my DAW of choice. I've already asked the people at Presonus about it and they did nothing. Hope they fix it some day! Cheers anyway.
Thank you for the video. I have this problem on the Project page: there is a 2-second gap at the beginning of the first song that I can never figure out how to remove. Can you also help with that? Thank you.
Even I know that anything under 128kbps is gonna sound bad. And, I hear no audible difference with anything over 128, so that whole “192/256/320kbps file size sounds better” is something that I dispute. (128 also makes rather beat up vinyl transfers sound slightly better, especially after doing a noise reduction in Audacity, so I’ll stick with the 128kbps file mp3.)
I pretty much always use VBR when it's available. CBR is fine if you're using a very high bitrate (256kbps+), but VBR allows more complex frames (yes, MP3 splits audio up into frames) to use more bits to better approximate their data. If you're using a reasonably high bitrate, it's probably not a very perceptible difference, but if you're just throwing test mixes at 128 or 160kbps onto a device to listen to a vocal comp or something in a car, VBR will generally give you a better result.
Variable bit rate is going help make the file smaller without losing quality. It essentially uses lower bit rate when the content is simple (silent or less dense parts) but expands to the higher bit rate when needed. I never use constant bit rate anymore.
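A toy illustration of that allocation idea (not a real encoder - the per-frame "complexity" scores are made up): under the same total bit budget, CBR gives every frame an equal share, while VBR weights busy frames more heavily.

```python
# Made-up per-frame "complexity": quiet intro, silence, verse, dense chorus, outro.
complexity = [0.2, 0.1, 1.0, 3.0, 1.7]
total_bits = 5000  # same overall file size either way

# CBR: every frame gets the same share of the budget.
cbr = [total_bits / len(complexity)] * len(complexity)

# VBR: frames get bits in proportion to how hard they are to encode.
vbr = [total_bits * c / sum(complexity) for c in complexity]

assert round(sum(cbr)) == round(sum(vbr)) == total_bits
assert max(vbr) > max(cbr)  # the dense chorus gets extra bits under VBR
```

Real encoders use psychoacoustic models rather than a single score, but the trade is the same: equal spending versus spending where the audio needs it.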
Thanks for the advice! And one question: is it all right to use a losslessly compressed format (or one that's supposed to be lossless) instead of WAV when sending tracks to someone?
Very useful MP3 export reminders, thanks Joe. Off topic a bit, but what’s the best way to include the Marker track events (I label part sections, solo sections using these) when Exporting via Stems? (Or updating a collaborator with a Mixdown?). Are these Marker events only stored in a Project, or the songs cache?
I don’t understand why I have software that suggest I will have “CD quality” exporting at MP3 128. CDs sound great so how come it takes MP3 320 to make the export sound best?
So I did everything suggested. But, my low end out of my DAW (Studio One) sounds great. But when I export to MP3 or WAV, the bass is considerably lower. Any ideas?
Cool, I have been doing this right, but I have seen and used 480kbps or 460kbps somewhere - I can't remember if it was the iTunes converter or the VLC converter.
Every time I export to an mp3, I have a limiter set in place. It seems to be the only way I can export without getting multiple clipping notifications. Is that bad? Exporting with a limiter?
But why would anyone's song start that many bars in anyway? It would drive me nuts to see that in my DAW 😂 One thing I didn't see you mention (maybe it's FL Studio specific) is how the song ends. So often people send me beats they didn't trim, so there's like 10 seconds of silence at the end. In FL there's an option that says "Leave Remainder" that should be set to "Cut Remainder"; then you can set where you want it to end by pressing Shift+T in the playlist and dragging the loop icon all the way to the end. It takes two seconds, but no one does it.
I only use mp3 because I cannot send a WAV file through Google Mail - it exceeds the size limit. Is there a simpler way to send a WAV file and avoid the whole mp3 issue (or the issues raised in the comments)?
Trying to save space when it comes to video or music is bad. MP3s, for example, are already compressed audio; WAV files are uncompressed, which is why they're much cleaner quality. In my younger days I would record USA Up All Night movies on VHS at SLP/EP. I could fit four 2-hour movies on a tape. The quality was horrible, but that is the price for trying to save money; only one movie could fit at normal recording speed, and the quality was good. The same thing happens with audio. Those who collect as much music as possible without upgrading their HDD space suffer, because they get MP3s at 48kbps rather than 192kbps or 256kbps - 1 megabyte per song versus 5 to 8 megabytes per song. Basically you are trying to cram all that beautiful sound into a small file, quite literally sacrificing quality for space. It's like printing a normal 300-page book so small it fits on 100 pages: you could maybe squint and reread to get the context, but imagine words so squished together that you can't make them out at all. That is quite literally what is happening with your sound.
Thank you for the video and information. Just Some constructive criticism perhaps? Can we do away with the goofy faces on the thumbnail thing. It could be me, but it actually makes me want to click on the video less, not more.
Why bother to spend all your time mixing and mastering to get the best possible sound only to export to any kind of lossy format? I refuse to do it. Mp3s made sense, I suppose, when storage space was small, but not these days.
The technical term for what these formats are doing is the Modified Discrete Cosine Transform (MDCT), which basically involves splitting the input signal into frames, further splitting those frames into overlapping blocks (generally representing anywhere from 192 to 576 samples in MP3), and analyzing what frequencies and amplitudes are present within each block. The encoder then approximates these frequencies by converting them into a series of cosine transforms (basically a set of compact, easy-to-represent equations) that can roughly approximate the samples in the block with only a small fraction of the bytes.
The MDCT will output a lot of frequencies even with a very tiny bitrate (just like JPEG will put a lot of noise into a very highly compressed file), but low bitrates affect the accuracy of the encoding. At low bitrates, only a small number of bytes can be used when applying the MDCT, so it will try to prioritize getting the louder and more noticeable frequencies right, but won't be able to store enough data to correct audible artifacts in the details. As you increase the bitrate, it stores more data and more transforms that affect the details of the encoding and reduce the audible artifacts. However, even at a theoretically unlimited bitrate, there would still be artifacts due to the way that MP3 and AAC work. The MDCT is still only an approximation and will never give a perfect sample-for-sample result. Furthermore, AAC and MP3 will always struggle to perfectly represent transients and rapidly changing sounds, because they split the input stream into frames and blocks and encode these individually.
You can hear these kinds of effects by playing a song on RUclips at 0.5x speed. It will sound much choppier and washed out, and you can hear the non-smooth transitions between blocks and frames at significantly slower speeds. While AAC and MP3 are great for what they do (given a good bitrate), even the most advanced lossy codecs still have some of these limitations: even at the highest bitrates, the approximations done by the MDCT will still be present and add artifacts.
The real problem comes when streaming services have to re-encode these files into their own formats. Just like re-encoding JPEG files over and over causes quality loss and more artifacts (even at the same quality settings), the same thing happens with lossy audio formats. If you encode an MP3 file at 320kbps and then Apple Music re-encodes it in its own format for streaming at 256kbps, it will sound worse than if it had just been encoded once at 256kbps to begin with, and will contain more compression artifacts. Furthermore, many Bluetooth devices actually re-encode their audio streams as 256kbps AAC anyway, so if you send a 320kbps MP3 to a mastering engineer who then submits it to Spotify, it gets re-encoded to Ogg Vorbis by Spotify and then AGAIN gets re-encoded to 256kbps AAC by the listener's device to stream to their earbuds. You've effectively re-compressed the data with a lossy format three different times!
This is why it's so critically important to always use lossless compression when submitting these kinds of things for mastering or release. These kinds of codecs are designed to be transparent (indistinguishable to the average listener from the original) when encoded ONCE at a good bitrate. However, they aren't really designed to do quite as good of a job when they're encoding stuff that has already been encoded repeatedly, and already had lossy compression applied to it. The result will always sound worse than if it were just encoded once, even if you use the highest possible bitrates.
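The block-transform idea described above can be sketched with a plain DCT in pure Python. This is a simplification: a real MDCT uses overlapping windows and psychoacoustic models, so this only illustrates the principle that keeping more coefficients - as a higher bitrate effectively allows - shrinks the error without ever making it zero.

```python
import math

def dct(x):
    """Type-II DCT: express a block of samples as cosine coefficients."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse (type-III) DCT: rebuild the samples from the coefficients."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

def encode(x, keep):
    """Crude stand-in for bitrate: keep only the `keep` largest coefficients."""
    X = dct(x)
    top = set(sorted(range(len(X)), key=lambda k: abs(X[k]), reverse=True)[:keep])
    return idct([X[k] if k in top else 0.0 for k in range(len(X))])

def rms_error(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# A toy "block": a loud tone plus a quieter, higher one.
N = 64
x = [math.sin(2 * math.pi * 3 * n / N) + 0.3 * math.sin(2 * math.pi * 13 * n / N)
     for n in range(N)]

exact = rms_error(x, idct(dct(x)))          # the full transform is invertible
low_bitrate = rms_error(x, encode(x, 2))    # few coefficients: big error
high_bitrate = rms_error(x, encode(x, 16))  # more coefficients: smaller error
assert exact < 1e-9 and high_bitrate < low_bitrate
```

Note how dropping coefficients hurts the quiet detail first: the largest coefficients (the loud tone) are kept even at the lowest "bitrate", matching the prioritization described above.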
THIS IS A REALLY GOOD EXPLANATION, NOW HOW DO YOU STOP PRE DELAY :)
One thing I do every time: Whenever I send an email with links, I ALWAYS click on every link to make sure they all work the way they're supposed to, particularly that they go to the right places.🔭 Guess it comes from my old broadcasting days.
You gotta love it when you get a multi-track file and it's all mp3s. I found some remix packs for a song with one having 8 stems while the other had 16 multi-tracks.... guess which one were the mp3s. Grrrrr.
Good video for beginners. Very important information. Now we can talk about dither. Well done.
Thanks for another great video Joe! I also watched your 5 step mix guide and took some helpful tips from there - even though I've been mixing for quite a number of years. We never stop learning. Subscribed and onboard!
Thanks for this! I didn’t even realize my bounces were exported at 128kbps all this time.🙇🏻♂️
Great advice, and I have a few things to add, having done experimental research on both subjects, all by myself.
1. When you prepare a mix or master for mp3 export, and the very last stage is exporting or converting to mp3, do NOT push your signal peaks all the way up to 0 dBFS; keep them at -1 dBFS at most. That's because clipping might occur in the mp3 file, and the same goes for all other lossy compressed audio formats. Doing experiments with different program material, I found that the now-forgotten WMA format had a safety margin at -3 dBFS. For your listeners, there's not much difference between a song mastered at 0 and a song mastered at -1 dBFS. Always check your resulting mp3 (as well as other lossy formats) for everything important, including clipping.
2. As Joe already mentioned in the video, do NOT send mp3 files for further mixing, and (I'd add) don't send mp3s of raw tracks while collaborating with other musicians/producers, for another reason. It's not only about low quality, but about timing, too. If you export/convert a WAV file to mp3, it will not be the same length as the WAV: an mp3 file's length has to fall on a certain "window length" (frame boundary). So the difference, albeit small, can pile up through the mixing process, and people often ask why their timing is wrong.
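The -1 dBFS headroom tip in point 1 amounts to a simple peak check; a minimal sketch, assuming float samples with full scale at 1.0:

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (full scale = 1.0), in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def has_headroom(samples, ceiling_db=-1.0):
    """True if the peak stays at or below the chosen ceiling."""
    return peak_dbfs(samples) <= ceiling_db

hot_mix = [0.0, 0.5, -0.999, 0.7]       # peaks just under 0 dBFS: risky for mp3
safe_mix = [s * 0.89 for s in hot_mix]  # roughly 1 dB of gain reduction

assert not has_headroom(hot_mix)
assert has_headroom(safe_mix)
```

This only measures sample peaks; decoded lossy audio can overshoot between samples, which is exactly why a safety margin below 0 dBFS helps.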
I would add one more point.
3. If you send your files to someone, be it mixing or collaboration, make sure you render all your files starting at the same point/bar. DO NOT just send your "Audio" folder to another person. Especially if you recorded or chopped up some tracks.
I've had files sent to me quite a few times where people would just send a folder of random stuff without telling me what DAW they were using, and without the session file from that DAW so I could work out what they'd actually done.
I want to correct you: timing differences occur when files have different sample rates (48 vs 44.1) and the program doesn't work in upsampling mode.
@@DeeKeyLP Well, now I want to correct you. That's a different problem and has nothing to do with what I said in my first comment.
@@zvuksvetla alright! never too late to learn, thanks
For full quality export for later mixing, FLAC is ideal, as it uses lossless compression. In principle, FLAC is to WAV what PNG is to BMP.
m4a is what I always go for... but I am not sure if other DAWs offer it besides Logic Pro.
Sample Rate = How many data points are analysed per second
Bit Rate = How precisely are those data points written down.
You can write with a soft crayon or a nice pen. The words are the same, but one will look better and be way more easy to read later.
Imprecise data writing with low bitrate = more noise and overall crappy sound in the final file.
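For a sense of scale behind the analogy, the raw data rate of CD-quality audio versus a 320 kbps MP3 is simple arithmetic:

```python
sample_rate = 44_100   # samples per second (CD standard)
bit_depth = 16         # bits per sample
channels = 2           # stereo

raw_kbps = sample_rate * bit_depth * channels / 1000
print(raw_kbps)        # 1411.2 kbps of raw PCM data

mp3_kbps = 320
print(round(raw_kbps / mp3_kbps, 1))  # ~4.4x less data for the same second
```

Even the highest MP3 bitrate has to describe each second of audio with less than a quarter of the raw data, which is where the "crayon versus pen" precision loss comes from.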
@Joe - the last tip was awesome.!! 👍👍
Bless up from Jamaica! Ever since I set up my private studio, you've really been a great help, especially since I am a Studio One user.
Loving the content, keep it up 💯
Roots and culture 🔥
you helped me man...my mentor from today onwards🤜
The space at the start of a song issue . . . back in the day, when people / bands sent demo songs to record companies (usually on DAT), I had the thankless task of listening to dozens of demos a day . . . and I can tell you as a matter of fact that in the industry, if your song is not playing within a few seconds, the DAT (or today MP3) is ejected / binned and they move on to the next song. It really doesn't matter how much work you've put into it, or that you have the best song in the world; people don't have time to scrub through songs looking for the start or waiting to see if it starts at all. They have a box/inbox of 200 other submissions to get through each day, and your song will be one of those 15-20 that never even get a listen.
I also want to point something out that I learned while creating music that loops for video games. Exporting in MP3 will create an extremely small gap at the beginning of the track, and you cannot remove it. It will make a perfect loop sound like there’s a micro space before it loops. It really gets on my nerves. I would use the OGG file format to make them smaller but not create the small gap. I don’t think that file format is supported much though.
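For anyone curious how big that gap is: assuming a typical encoder delay of 576 samples (the exact padding varies by encoder, and decoder delay adds more), the arithmetic looks like this:

```python
sample_rate = 44_100
encoder_delay = 576   # assumption: a common MP3 encoder delay, in samples

gap_ms = encoder_delay / sample_rate * 1000
print(round(gap_ms, 1))  # ~13.1 ms of silence prepended to the track
```

A dozen milliseconds is inaudible as a pause but very audible as a hiccup at a loop point, which is why gapless game music usually avoids MP3.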
I had the same thing some time ago! Went for OGG as well in the end. As I recall, this format is very well supported on iOS and Android (those were the platforms I was making music / sfx for).
never done that kind of work (although i have had a track of mine chosen by an indie dev for his steam game) but this is something i noticed a few years back now when exporting tracks to MP3.
I switched to Ableton from FL Studio almost a year ago now and my only options are WAV and AIFF. Well, MP3 is available but the defaults are WAV and AIFF lol
Wow - I've been exporting mp3 files, and they were sounding pretty good - I definitely will keep that in mind. Thanks.
Thanks, nice tut, I just can't see how to NAVIGATE TO the section where I can change the mp3 bitrate setting.
Thanks for the information Joe!
I use Cubase, and that DAW won't let you export anything if you didn't set your markers/loop. You'll just get a warning.
Now, here's the fun part: I very often loop parts of the song when I do a quick master and forget to change the range of the selection before exporting.
I can't even count how many times I had my friend asking me "Why there is only 30 seconds of the song, and why is it just chorus?" 🤣
Literally just did this the other day haha (also a Cubase user). That and accidentally leaving monitor correction on. People be like, why is this mix 6 dB quieter than the last one and sounds horrible? 😂
@@asthemoon8296 What Cubase version do you use? (Elements? Artist? Pro?) Do you have access to Control Room? If yes then put all room/speaker correction as an insert in Control Room. That part of signal chain is not rendered when exporting files so you won't have to worry if you turned it off or not. 😉
@@RudalPL Cubase Pro! I never use the control room but that's probably a perfect reason to start haha. Thanks!
I always use 320 thank God.
Thanks Joe, love the channel 👍
Mixing to 0dB. This could (and likely will) cause clipping when decoding. The AES standard for mastering to streaming/compression is -1dB (as well as -14 LUFS but who follows that?!).
I now nominally use -11 LUFS.
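For anyone curious, the headroom math behind the -1 dB recommendation is a one-liner. The 0.5 dB overshoot below is just an illustrative number, not a measured figure:

```python
# Why a -1 dBFS ceiling helps: lossy decoding can overshoot the encoded peak.
# dBFS to linear amplitude: amp = 10 ** (dB / 20)

def dbfs_to_linear(db):
    return 10 ** (db / 20)

print(round(dbfs_to_linear(-1.0), 3))  # 0.891 of full scale

# An illustrative 0.5 dB decoder overshoot on a 0 dBFS master clips (> 1.0),
# but the same overshoot under a -1 dBFS ceiling still fits:
print(dbfs_to_linear(0.0 + 0.5) > 1.0)   # True: clips
print(dbfs_to_linear(-1.0 + 0.5) < 1.0)  # True: safe
```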
Back in the old days, when people were less informed, I would get collaborators' tracks like guest solos etc. in .mp3 all the time, even when I asked for .wav. :p
Joe's use of a 44.1k sample rate (cd standard) also brings up the question of whether 48k is better for current recordings given the increased CPU power of current computers. I used 44.1 previously but have switched to 48 upon upgrading my interface and DAW.
There are a few Dan Worrall videos on YouTube on that topic. What I learned from them is to stick to one resolution if possible, and that a higher sample rate may create fewer artefacts during mixing, especially if you use a lot of plugins. That's why a lot of good plugins have an oversampling option.
I've heard that 48k is industry standard for sharing files. Not just music industry but film and any industry that uses audio. 48k and 24bit seems like a good standard to me.
48 kHz is the standard sample rate for video. I see no reason to use CD standard anymore, since CDs are essentially deprecated.
I generally use 48k nowadays when I can. It is standard for video, which means that upsampling won't be required for YouTube if one ever decides to upload to these platforms. It also gives the low-pass filters a little more headroom when removing frequencies above 20 kHz (allows them to be less steep), and can give a little more headroom if you perform audio quantization or any kind of pitch correction that might do any sort of time stretching on the audio.
I generally do get better results with 48K, especially if I’m ever doing any sort of timing adjustment or fine tuning tempo stuff after a bed track has already been recorded. The extra headroom noticeably reduces artifacts from some of these kinds of things.
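That filter headroom is easy to quantify: the reconstruction low-pass has to fit its transition band between 20 kHz and the Nyquist frequency (half the sample rate), and 48k nearly doubles that space:

```python
# The anti-alias low-pass filter has to fit between 20 kHz (top of the
# audible band) and Nyquist (half the sample rate).
for rate in (44100, 48000):
    nyquist = rate / 2
    transition_band = nyquist - 20000
    print(rate, transition_band)  # 44100 -> 2050.0, 48000 -> 4000.0
```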
Really helpful! Thanks!
For sending files to be mixed, you can always zip them, which will reduce the size but not affect the quality; the recipient just needs to unzip them on arrival.
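To back that up: zip is lossless, so the bytes come back bit-for-bit identical. A minimal Python sketch using random bytes as a stand-in for a WAV file:

```python
# Sketch: zipping audio is lossless -- bytes come back identical after unzip.
import io, os, zipfile

payload = os.urandom(4096)  # stand-in for WAV data (random bytes compress
                            # poorly; real audio usually shrinks more)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("mix.wav", payload)

with zipfile.ZipFile(buf) as z:
    restored = z.read("mix.wav")

print(restored == payload)  # True: no quality change, unlike MP3
```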
Ring the bell! It's a banner day! I did something right all on my own. I've been using 320 forever. I've been using it so long, in fact, my DAW calls it "Insane, 320 kbps"
I remember something I had also called 320 kbps "Insane". I have been using 320 since the days when I started creating digital files for my iPod. I use M4A for my music files now.
The disadvantage of being older. I can't hear those higher frequencies.
Thanks man. Good job. I'm gonna try this. YouTube: Neal Hathaway. Learning all the time. Doubling down @ 66.
Great advice 😊
How about the fact that the major streaming platforms (YouTube, TikTok, etc.) transmit music at a maximum of 128 kbps? So isn't it better to use the PreSonus converters, which are certainly better than the converters of these streaming platforms, to get the best results at that bitrate and upload the video already at 128 kbps?
Hi Joe, when are you starting that program where you mix with us? I am a total beginner using Studio One 6.1; it would be nice to learn from the current version of Studio One.
Joe. Thanks for the video. One other issue with MP3s getting sent instead of .wav files: the MP3 encoding process messes with the file's length as well, which will mess with the tempo and the relative position of the file in the DAW. That causes a whole bunch of other problems. Blessings!
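That length change follows from MP3's frame structure: MPEG-1 Layer III packs audio into fixed 1152-sample frames, so the decoded length gets rounded up to a whole number of frames on top of the encoder's own delay padding. A rough sketch of the frame rounding alone (the clip length here is made up):

```python
# Sketch: MP3 stores audio in fixed 1152-sample frames, so the decoded file
# is padded up to a whole number of frames (plus encoder delay).
FRAME = 1152
samples = 44100 * 10 + 37      # a 10-second clip that isn't frame-aligned

frames = -(-samples // FRAME)  # ceiling division
padded = frames * FRAME
print(padded - samples)        # 179 extra samples appended
print(padded % FRAME == 0)     # True
```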
Great stuff Joe!! Thanks for all you do!
I got to witness this... I wanted to ask for some help with my mixes. How can I reach out to you? I love your tutorials, man.
If you're leaving an 10-second gap at the beginning of your file, you probably nodded out from that last hit of smack you took.
Thanks for a great video! The problem with MP3 is that the algorithm is very, very old now. It's the sound equivalent of the image format JPEG, which is as widely used as it is antiquated.
With MP3, 320 kbps is the highest bitrate you can go, and it can still sound worse than the source PCM. Meanwhile, a newer format such as AAC (the one Apple favors) sounds better even at 256 kbps, and the open-source FLAC goes further still: it's fully lossless.
But they have slightly higher processing requirements... and of course, there's that annoying division. Apple devices will play AAC, but not FLAC (natively; there are ways to do it, but not for your average user). And Android and Windows will happily play FLAC, but not AAC (well, they will, but again, it requires that the user knows what to look for if they encounter a "file format unknown" prompt)...
To make a long story short: MP3 is a lossy (destructive) compression algorithm that runs at 16-bit resolution at the highest. And while it CAN go beyond 44.1 kHz, that's not often used. However, no matter the resolution and sample rate, the information is stored in compressed blocks that cut off inaudible frequencies (lower than 20 Hz, higher than 20 kHz) as well as apply Huffman compression.
And this is why I draw the comparison to JPEG images: they use a variant of the very same algorithm.
What it does is to look for common binary sequences, and then store them as something simpler. Think of words like "the" and "and". If I were to save "the" as simply "a" and "and" as "b", in a big book, I'd save a lot of space, right? Oh but that's not destructive. What's destructive is when you find sequences that are ALMOST like something you have defined as a common binary sequence, and then just round it to make the damn shoe fit. So "they" will also be saved as "a", and "ant" will also be saved as "b". Now I saved even more space, but the meaning will get fuzzier...
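That analogy can be sketched in a few lines of Python. This is a toy model, not how MP3 actually stores data, and the "same first two letters" rule below just stands in for rounding near-matches:

```python
# Toy model: dictionary substitution, lossless vs "lossy" rounding.
codebook = {"the": "a", "and": "b"}
reverse = {v: k for k, v in codebook.items()}

def encode_lossless(words):
    # only exact matches get replaced
    return [codebook.get(w, w) for w in words]

def encode_lossy(words):
    # round near-matches (same first two letters) onto a codebook entry
    out = []
    for w in words:
        match = next((k for k in codebook if w[:2] == k[:2]), None)
        out.append(codebook[match] if match else w)
    return out

def decode(tokens):
    return [reverse.get(t, t) for t in tokens]

words = "the cat and they ant".split()
print(decode(encode_lossless(words)) == words)  # True: fully reversible
print(decode(encode_lossy(words)))  # ['the', 'cat', 'and', 'the', 'and']
```

The lossy version saves more space but "they" and "ant" can never be recovered, which is exactly the meaning-gets-fuzzier effect described above.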
Even the so-called "lossless" 320 kbps MP3 is degraded ever so slightly, which becomes especially audible on precise audio equipment (such as good studio monitors or analytical studio headphones) in songs that have a wide dynamic range.
FLAC is much more intelligent. It analyzes each block and sees how it can be compressed most effectively, then stores that info for each block. By doing this, one segment of 16-bit audio can be stored in fewer bits with no audible loss: with 16-bit you have 65,536 grades of resolution, but if only 3,500 are being used, a 12-bit space with its 4,096 grades is enough. You just need to redefine high and low, convert, and it's done without any degradation at all. Truly lossless.
AAC, on the other hand, does still throw data away; it has to round things off to accommodate the bitrate you set. But it does so far less intrusively than MP3 and sounds much better at the same bitrate. And oh yeah, both fully support 24-bit, 96 kHz as well, although most (non-cinematic) music seems to be made in 24-bit 48 kHz these days.
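The repacking idea from the paragraph above is easy to demonstrate. This is only the intuition; FLAC's actual method uses linear prediction and Rice coding rather than a plain offset:

```python
# Sketch: 16-bit samples that span a narrow range can be stored in fewer
# bits by keeping one offset plus small deltas, and restored exactly.
samples = [30000, 30007, 30500, 31999, 30123]   # made-up 16-bit values

lo = min(samples)
bits_needed = (max(samples) - lo).bit_length()
print(bits_needed)                  # 11 bits instead of 16

packed = [s - lo for s in samples]  # store `lo` once, then the offsets
restored = [p + lo for p in packed]
print(restored == samples)          # True: truly lossless
```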
Something that is often confusing is the difference between bit depth & sample rate in recording versus mixing and exporting.
Sample Rate = How many data points are analysed per second
Bit Depth = How precisely those data points are written down.
You can write with a soft crayon or a nice pen. The words are the same, but one will look better and be way more easy to read later.
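The crayon-vs-pen difference can be made concrete with a tiny quantizer (illustrative only; no dithering or real converter behavior):

```python
# Sketch: same data points (sample times), different precision (bit depth).
import math

def quantize(x, bits):
    levels = 2 ** (bits - 1)   # signed full-scale steps
    return round(x * levels) / levels

t = [i / 8 for i in range(8)]
sine = [math.sin(2 * math.pi * x) for x in t]

coarse = [quantize(s, 4) for s in sine]   # the soft crayon
fine = [quantize(s, 16) for s in sine]    # the nice pen

err4 = max(abs(a - b) for a, b in zip(sine, coarse))
err16 = max(abs(a - b) for a, b in zip(sine, fine))
print(err4 > err16)  # True: lower bit depth means larger rounding error
```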
oops, you were speaking of recording vs mixing... my bad :)
@@manuelkoch375 No worries.
Unfortunately S1 is not accurate with bounced MP3 in/out markers or between the loop range. When I export to WAV it's OK, but MP3 is not accurate and it always adds some extra milliseconds. I work with broadcast and commercials, and clients always ask me for MP3 files, so I have to be very precise with the audio length; I always have to do it outside of S1 because of this issue. Don't get me wrong, I love S1, it's my DAW of choice. I've already asked the people at PreSonus about it and they did nothing. Hope they fix it some day! Cheers anyway.
Ha ha, my high frequency hearing loss finally provides an actual benefit! I can’t hear those chirpy bad MP3 export artifacts 😊
Thank you for the video. I have this problem on the Project page: there is a 2-second gap at the beginning of the first song that I can never figure out how to remove. Can you also help with that? Thank you.
Even I know that anything under 128kbps is gonna sound bad. And, I hear no audible difference with anything over 128, so that whole “192/256/320kbps file size sounds better” is something that I dispute. (128 also makes rather beat up vinyl transfers sound slightly better, especially after doing a noise reduction in Audacity, so I’ll stick with the 128kbps file mp3.)
I always called that the underwater sound with that crappy setting
Can you address the advantage of Constant Bitrate vs. Variable Bitrate?
I pretty much always use VBR when it's available. CBR is fine if you're using a very high bitrate (256kbps+), but VBR allows more complex frames (yes, MP3 splits audio up into frames) to use more bits to better approximate their data.
If you're using a reasonably high bitrate, it's probably not a very perceptible difference, but if you're just throwing test mixes at 128 or 160kbps onto a device to listen to a vocal comp or something in a car, VBR will generally give you a better result.
Variable bit rate is going to help make the file smaller without losing quality. It essentially uses a lower bit rate when the content is simple (silent or less dense parts) but expands to a higher bit rate when needed. I never use constant bit rate anymore.
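The trade-off in these replies can be sketched numerically. This is a toy allocation, not how a real encoder decides (real encoders use psychoacoustic models and a bit reservoir), and the per-frame complexity scores are made up:

```python
# Toy sketch of CBR vs VBR: same total bit budget, but VBR spends more bits
# on complex frames and fewer on simple ones.
complexity = [1, 1, 8, 9, 1, 4]          # made-up per-frame complexity
budget = len(complexity) * 160           # same average bitrate either way

cbr = [160] * len(complexity)            # every frame gets 160 kbps

total = sum(complexity)
vbr = [round(budget * c / total) for c in complexity]

print(vbr)                # [40, 40, 320, 360, 40, 160]
print(sum(vbr) == budget) # True: same overall size, better-placed bits
```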
24 bit depth is very good ok then great
Thanks for the advice! And one question: is it right to use a non-destructive (lossless, or supposed to be) compressed format instead of WAV when we want to send the tracks to someone?
We have to do 16-bit 44,100 Hz. Is a WAV file like this bad?
Very useful MP3 export reminders, thanks Joe. Off topic a bit, but what’s the best way to include the Marker track events (I label part sections, solo sections using these) when Exporting via Stems? (Or updating a collaborator with a Mixdown?). Are these Marker events only stored in a Project, or the songs cache?
Exporting labels (markers) is yet to come; it's already available in Audacity!!
Hi Joe - I mixed a few songs for a country artist and he said he can't upload 24 bit to Distro Kid - is that accurate?
I don’t understand why I have software that suggests I will have “CD quality” exporting at MP3 128. CDs sound great, so how come it takes MP3 320 to make the export sound best?
iTunes used to say that..... Whoops. Any time I exported stuff to MP3 format it was 320 all the time.
WTF, this weird sound I get every time I render... Guess you pointed out the MP3 issue for me. Thanks. ❤
So I did everything suggested. But, my low end out of my DAW (Studio One) sounds great. But when I export to MP3 or WAV, the bass is considerably lower. Any ideas?
Hi Joe. One thing I discovered is that after exporting MP3, some playback systems separate the audio. Why is that?
I'm shocked that there are people who will send out mp3s for mastering.
Cool, I have been doing this right, but I have seen and used 480kbps or 460kbps somewhere; I can't remember what it was on. Maybe the iTunes converter or the VLC converter, IDK, can't remember.
Hey Joe... I need a method to use reference tracks in the Project page.
Every time I export to an mp3, I have a limiter set in place. It seems to be the only way I can export without getting multiple clipping notifications. Is that bad? Exporting with a limiter?
i export in RealPlayer format
But why would anyone's song start that many bars in anyway? It would drive me nuts to see that in my daw 😂
One thing I didn't see you mention (maybe it's FL Studio specific) is how the song ends. So often I have people send me beats and they didn't trim them, so there's like 10 seconds of silence at the end. In FL there's an option that says "Leave Remainder" that should be set to "Cut Remainder"; then you can set where you want it to end by pressing Shift+T in the playlist and dragging the loop icon all the way to the end. It takes two seconds, but no one does it.
Thanks, Joe. You could have explained your three majors points more concisely. 🧐
I only use MP3 because I cannot send a WAV file by Gmail; it exceeds the size limit. Is there a simpler way to send a WAV file and avoid the whole MP3 issue (or issues, as in the comments)?
Joe is there a way to set 320 as a default???
In my experience in Cubase the export is always on the last setting, so yes.
Trying to save space when it comes to video or music is bad. MP3s, for example, are compressed audio, while WAV files are uncompressed and much cleaner quality. In my younger days I would record USA Up All Night movies on VHS at SLP/EP; I could fit four 2-hour movies on a tape. The quality was horrible, but that was the price of trying to save money. Only one movie could fit at normal recording speed, and the quality was good. The same thing happens with audio. Those who collect as much music as possible without upgrading their HDD space suffer because they get MP3s at 48 kbps rather than 192 kbps or 256 kbps: 1 megabyte per song vs. 5 to 8 megabytes per song. Basically you are trying to cram all that beautiful sound into a small file; you are quite literally sacrificing quality for space. It's like printing a normal 300-page book so small that it fits on 100 pages: imagine words being so squished together that you can't make them out. That is effectively what is happening to your sound.
Oops. Guess I'll be checking my Mixdown dialog and making sure that mp3s are 320 kbps in the future. My car will not recognize wav files at all.
The ONLY mistake you're making when exporting in MP3 is exporting in MP3
Why does this guy have to prolong his videos? I love the channel, but why so long? LOL
If he compacted it he would lose information.
Thank you for the video and information. Just Some constructive criticism perhaps? Can we do away with the goofy faces on the thumbnail thing. It could be me, but it actually makes me want to click on the video less, not more.
Why bother to spend all your time mixing and mastering to get the best possible sound only to export to any kind of lossy format? I refuse to do it. Mp3s made sense, I suppose, when storage space was small, but not these days.
Hey…. Don’t export to mp3….