For those of you wondering how I casually mention decapitation.....One of my principle voice teachers was pre-med in college before she turned to voice. During her pre-med stint, she observed cadavers as part of her study, and witnessed one of her professors demonstrate this as part of a larger context. She was the one who connected the card-on-bicycle spokes sound through the headless body with the pharynx being responsible for timbre, color and vowel of the voice back in the 1960's. This was before she had studied vocal pedagogy and voice science, which confirm this in less, er, graphic ways.
Thank you for explaining this! hahaha, it really was a surprise how suddenly we were talking about air being released through the throat-flaps of headless cadavers... ohhh too funny, you epiglott- me... epi-got-me, oh well i tried :/
The t-pain effect on "fly me to the moon" changes the premise of the song. It changes from romantic to an actual robot literally trying travel to Mars.
@@dondamon4669 lol... is this even a quote by either of them? it really sounds like bullshit facebook LiVe LaUgH LoVe tripe falsely attributed to someone famous to get shares
@@michaelwerkov3438 no it’s Kurt who also said “what I have in my heart and soul must find a way out , that’s the reason for music, and to play without passion is inexcusable “ Kurt and Beethoven were brethren’s
With Sinatra, his breathiness actually gives a bassy thickness to his vocals. Whereas the fixed one sounds anemic. Its insane how something so small can make a big difference. Wow.
@@leaveitorsinkit242 modifying anything about the voice will change how it's perceived, it's impossible for a program to simulate what the singer would have sung a couple cents higher or lower
I think that's an artifact of the AI stem separation software, not Melodyne. I would guess the lower frequencies of the human voice are more difficult to distinguish for the software, just as they are for us
This reminded me of a classic moment from MASH, where Charles, a man of means and culture, explains to an injured pianist the he always wanted to play the piano, saying "I have hands that can make a scalpel sing, but above all I wanted to play. I can play the notes, but I can not *make the music*." It's such a succinct way to express the imperfections or uniqueness of musical performance.
That's why Chopin composed so much on the black keys. His pianos sounded off in those keys (pre equal temperament). Nowadays classical musicians are very busy forgetting it was the case. ;-)
More Mama Neely!!! I run a recording studio in Western Kentucky and I had a young vocalist (~13-14) who was recording her first album. She would add these little diatonic embellishments at the front of phrases. Instead of a scoop into the phrase it was an actual pitch. The performance struck me as strange and it wasn’t until about tune 5 that I realized she was mimicking auto tune. It got really obvious on her more modern cover tunes. Great topic to go deep on.
Damn I just tested that on myself (I'm gen Z) and tried humming the melody of a song I know is definitely autotuned and then the Zeppelin song, I can totally hear the effect the song which had autotune had on my voice. Every note came in as on tune as I could make it (which wasn't very on tune since I'm not much of a singer). It almost felt like I was trying to play my voice like it was a piano, while with the Zeppelin song felt more like I was just singing normally... Really interesting effect.
I wonder if part of that piano feel was simply trying to match pitch. Try to match pitch with Plant and you may feel the same way. Don't "feel" it, just match his pitch. Curious how that feels to you.
Hey Adam, I wonder what would happen if you went the opposite direction: take a song that has already been pitch-corrected to death and use melodyne to bluesify it. It would be interesting to see if this makes the vocal more soulful or just renders it even more uncanny. Of course, this would be *way* more work, as you'd have to choose which notes to bend and how far instead of just jacking all the sliders to 11, but it would be super interesting to find out what happens.
I think an easier approach to finding out what a heavily pitch corrected song would sound like without pitch correction would be to see if the artist ever sang the song live without an autotune microphone and maybe had a few pitches off. You can also see if anyone did a cover of that song without pitch correction. Though, I think it would be hard to find heavily pitch corrected songs that would sound better without pitch correction because I would assume that music artists and producers kinda know what they are doing and they probably wouldn’t pitch correct a song where it really sounds worse with it. Unless the song is kinda bluesy or has a semi-conversational style of singing, it probably would sound either the same with pitch correction or better. I guess musical theater numbers and operas would probably sound awful with pitch correction too though. But, I think pop music uses pitch correction for a good reason.
Yeah, that’s not really possible, but I appreciate the sentiment. Like someone else said, all you can really do is try to find a live recording without pitch correction. Unfortunately, a lot of pop artists are forced to use it even in their live shows, so that might be difficult to find.
I was about to comment the same thing. I was wondering if the uncanniness if because the effect mangles the original tone / timbre of the voice. This experiment would reveal if it is the corrected tuning or the distortion caused by the effect that is making the singer sound "off" or "robotic".
If you look at any instrument, being 100% in tune is not desirable. That’s what vibrato is. It’s a rhythmic variation in the pitch. When you 100% correct this it will remove all the emotion.
@@TheMAU5SoundsLikThis What gets me is that the common Western equal temperament scale isn't isn't 100% in tune so "correcting" vocals to it makes even less sense. Sometimes the in between notes count.
@@garydiamondguitarist Nevermind how pianos are tuned. As you go towards one end of the keyboard, every note starts being "stretched." (called octave stretching)
This reminds me of what I learned about Indian singing traditions. Often how you get to the note matters more than the note itself because the little bends along the way can convey a lot.
Losing Aretha's pitch slide on the word "for" in "for a little respect" was the most jarring uncanny valley moment for me. It needs that slide to work!
I liked seeing the phrase "what you need" on the grid, you can see the way she slid up to the third 3 times and each iteration was slightly flatter; "need" was almost a semitone below "what" (at around 8:20)
I didn't even consciously notice a clear difference on these direct and short comparisons, but if listening to longer pieces, I'd definitely feel either enriched or dulled depending on whether it has life in it or is sterilized.
Ficaria legal dobrar algumas segundas ali pô. Principalmente nas terminações das frases. Mesmo usando esse pedal de harmônico, achei muito interessante harmonia
Pitch correction is a very useful tool when you have an epic performance filled with emotion and mojo but just missed those one or two notes that were off enough to mar an otherwise excellent track. A little snip snip fix in the mix and you save all the juicy goodness. Where it goes too far is snapping the entire vocal track to grid and losing all of the human element. Those scoops and slides are the secret sauce that separate an epic and memorable performance that touches our soul from a cold robotic one.
For me, it's those imperfections that create perfection. It's also why I usually prefer live performance. I'm not keen on everything being adjusted, artificially, towards some record industry 'standard' sound.
This is so true for most music but what if it's an integral part of the genre? I've heard a lot of heavily emotional performances where they purposefully use portamento with autotune to get that snappy "dodODODO" sound between notes and it's awesome in my opinion (and a lot of others). For example Fri3ndzone - streetview. It's really blowing up with the youngsters lately lol
@@abasement666Using it deliberately as an effect tool can have good results. But those who use it as that are good singers in the first place. They don't use it as a crutch. As an example there is someone here on RUclips who is analysing Chers 'Believe' which was one of the first commercial uses of autotune. And maybe to everybody's surprise - it isn't used throughout the whole piece but actually as an accenting tool.
I found Adam's mom's blog after this, and really enjoyed reading it. She had several posts about working with post-menopausal vocalists and the voice-changes they can experience. As a post-menopausal vocalist myself, I found it very encouraging ;) Also read her description of her father (= Adam's grandfather!), a brilliant organist. So interesting to read examples of how music can be passed down and transformed within a family.
yeah, music is a family thing)) my grandpa played the accordeon and the piano (without any formal education!), my grandma sang in a pop band, my mom is a singer and choir conductor, and I...um... I played some drums in high school ._.
@@doitnowvideosyeah5841 Everyone's grandma played piano at parties. There was no tv, radio, or recorded music back in the day. Just live music or if you had thirty grand a player piano, the cutting edge technology of the time.
Exactly. If you think this "fixes" Led Zeppelin, then you don't know ANYTHING about rock music. Next, let's get rid of all of those pesky bends that Jimmy page is playing.
Kinda whack you of all people don't understand splitting the stems creates terrible artificing and is in no way ever going to result in a good mix when re rendered after editing. When you do that things like reverb and echo get lost or tuned too and it's always gonna sound "off" This would only really work to prove your points (of which I admittedly agree with) if you did it with official stems, or with a singer directly. This is NOT how to showcase the loss of what inexplicable artistry you keep harping on. You are already damaging the vocal when you force it into a stem with an algorithm. The second you touch any of those stems, the entire mix suffers. I'd love to see a reiteration of these key points with actual stems or raw recordings with a vocalist that's part of the video.
As soon as I heard "an app separating vocals from any stereo recording" I scrolled the comments just to find this comment lol Yeah you can't do that without damaging the audio.
@@jaxnitsua1200 I agree with him completely just think it's not the ideal way to make these points. You should try pitch correcting a stripped stem sometime, the level of jank is at least 10 times what it otherwise would be with a pure raw recording. Especially if you're doing it to vocals extracted from music that old. I just found it an odd premise to make the points but it was certainly more entertaining this way I give Adam that!
how did you get 6k likes on this comment on a video from a year ago? every other comment on this vid is like, 300-600 likes. and you only have 2 replies. fishy ass comment.
All singers are worsened by pitch correction. It's not their voice, so by definition it's not a genuine performance. Might as well have an AI singer at that point.
@@LocrianDorian Nah, some people are so clearly out of tune (the more distorted the voice is, the more the pitch had to be corrected), that I'd rather listen to a robotic voice than their original mess.
It's interesting how pitch correction can have the inverse effect in blues adjacent music as opposed to the added expressiveness that's possible in kinds of rap and R&B with pitch correction. Cool how the context influences the outcome. Great video as always man
In rap and r&b they're using the pitch correction to add to the sound but in this they're effectively using it to subtract sound, to take all the variation and remove it
If you snap to the grid, it rarely sounds good. If a blues singer overshoots a slide from the flat third to the major third, pitch correction can help.
I think after being hammered with perfect pop vocals for years now, I actually crave something a bit pitchy. The vulnerability of singing, the cracks and timbre of the voice, all of that stuff adds colour and richness to a song. We’re here for the stuff that fall outside the lines!
Last week I got really stoned and listened to "Whole Lotta Love" on repeat on my new vinyl record player set up... and came to the conclusion it was the greatest rock song ever recorded, so to hear someone is "fixing" it is like nails down the chalk board of my soul. 🤣
@@kohaponx well when you get to this level of ‘f***ing around’ it doesn’t matter if you call it Autotune, Melodyne, or even Tune Real-Time whatever lmao
@@Guinneissik It does matter, because it's a different technique and sound. Especially when making jokes about "sounding like T-Pain" when using a different tool and purpose.
How many times have you had to use a mobile rig to record a sax player you get one day for overdubbing 10+ songs? Not every 'bap' is going to be perfectly in tune, and you won't catch every out of tune one because you're blazing through. Human beings get tired. 'Imperfections' are also 'mistakes' sometimes. Melodyne is subtle enough to fix pitch drift or going slightly sharp on the occasional note, without artifacts, which was ignored in this video. You cannot catch very error, so Melodyne is a lifesaver when you get a great performance that has a few tiny flaws that can be fixed other ways, but is far easier with Melodyne. Imperfections that make you wince are not the ones you want to keep. If you suck at using Melodyne, then you make a sax into a tone generator - and that's user error. It requires skill as an engineer to use it properly.
@@CHHuey There is a level to this debate about what constitutes "proper" use. In most cases I would actually prefer to hear the wince. Ever since I started buying records (because I am old enough to have had to save up for music on vinyl) I have preferred live albums over studio ones. There is a magic that happens when a live band is absolutely on it and hearing a slightly bum note or a glitch is what proves that it is live and not re-engineered. So that leads in to the issue of who decides what to leave in and what to take out. There is an artistic discretion being exercised between the musician and the engineer/producer but that inevitably creates tension. The businessmen want something that fits between the lines because there is a risk that the audience, who are now so used to perfection that they expect nothing less, will be turned off by something that has a few warts so they require what I would consider to be excessive correction. But the warts could be deliberate. Frank Sinatra could clearly sing in tune but chose to sing flat for the effect. Unless the person applying the "corrections" sits with the artist and goes through every change they want to make to see if the musician meant to create the wart then there are going to be places where the software is obliterating a deliberate part of the performance. An engineer should never try to be a better guitarist than the guitarist. Or singer. Or drummer. Or just about any other instrument. It may be that you and I are in agreement on this but I suspect that for some people "proper" means polishing out all the imperfections and with them all of the character and for me that taints the whole process.
@@Birkguitars Frank Sinatra wasn't singing flat. He knew where the note was. It just wasn't a piano note. Blues and jazz have a long tradition of using microtonal methods of singing. Melodyne isn't build with that mindset, but it can help if the occasional problem sneaks in and you just didn't catch it. You can say you like the mistakes at gigs but in all honesty, you don't hear the majority of them because the acoustics usually aren't good enough which is why bands will still 'doctor' live albums, because the bassist couldn't hear himself in the monitors and hit the wrong note, which no one in the audience noticed. There is no magic when you have the same person recording 3 different saxophones while wearing headphones, playing to a pre-recorded track with 2 people staring at him for hours on end. That psychological barrier is difficult for people. Not everyone is comfortable and it affects the performance. You're equivocating between the 'acceptable mistakes' and the kind of mistake that makes you wince and notice the mistake. Anything acceptable doesn't need to be fixed. You statements about engineers makes me think you've never had to work with one. The software doesn't 'obliterate' anything. I have had to fix notes on a pedal harp because a pedal harp is an instrument that goes out of tune in a very unpleasant way that no harpist likes. It's a problem with the instrument. Those discs make the strings go out of tune - design flaw. The engineer is working for the producer, who is making those decisions with the artist. That is the job of the producer, to listen to those mistakes and decide which ones to keep in and which ones to let go. I never said fix every problem, but the ones that make you wince are the ones you fix because you just didn't catch it in time. You've totally restated my argument into something that fits your world view instead of analyzing it for its own merit. When you get someone who is composer, producer, engineer and performer and in general in charge, you get to see the big picture. I have. There are mistakes you keep in and those you throw out. I never said you purge it of humanity, but there is a threshold of mistakes that you can tolerate, and those you can't. Do you really think that something like Frank Zappa's 'Waka Jawaka' or 'Hot Rats' is better with the wrong mistakes left in, because they weren't. He edited them out, but the guy edited so well, composed produced and engineered, that he knew what was supposed to be there and what wasn't. That's producing. I do not think we agree because I'm not sure you get that recording a live concert and a multi-track recording represent different things, and if you're doing it on a budget with limited time, this is not 'demeaning the music' or anything like that, it's bring out the intent of what the person playing meant to play. It removes a barrier that lets you focus on performance and not on all the weird technical problems. Most saxophone players don't WANT to hit the wrong note. That they do is not proof of 'humanity'. You can easily correct the pitch while leaving the intonation as it should be - Adam severely under utilized that aspect of Melodyne, which is one of the best features of Melodyne. The person who decides to leave it in or take it out is the producer, musical director, or the band if they're working with an engineer who really sticks to that strictly. If you want to go with 'live album', fair enough, but once you abandon that, you have to go with 'this isn't reality'. That's where Eddie Kramer, George Martin, guys like that come in. There was a Melodyne before Melodyne - it was called session musicians who wouldn't screw up and producers like Tom Wilson would often insist on it if the band wasn't cutting it. The person in charge of the production that the musicians hire makes the decision. That's why they exist. Maybe it's one band member, maybe it's someone running a group like a jazz group instead of the Beatles, but at the end of the day there's no tyranny of Melodyne trying to purge humanity of it's expression, just an ability to make things that in the past would've taken forever since an analog synth goes out of tune the longer you leave it one. Those PITA problems go away and the music sounds better. But of course a band like Black Flag isn't served like that the way that Frank Zappa was. But that's because someone MADE the decision, and that was Greg Ginn in Black Flag and Frank Zappa, the composers. The only reason I care to put all of this in writing is that someone who could do something extraordinary might not because they got peer-pressured into 'auto-tune is bad'. I'm tired of that crap just as much as I'm tired of bands that don't do pre-production demos, and practice while they record. Studio time is for studio stuff, and this is a blessing for fixing stuff when you're limited by time and other resources. That is ignoring the creative uses of it, too. Laziness is laziness, and this video was only about the laziness.
@@CHHuey I have actually seen a documentary on Sinatra that mentioned singing flat but I can't track it down at the moment so unfortunately I can't quote authority but I will hold to it. You mention microtonal methods but I think that is actually what I am talking about - being a few cents under true pitch rather than being randomly out of key. That aside I don't actually disagree with any of your other observations but I think I have a different reference. Years ago I saw an interview with Pete Waterman who said explicitly that he did not want any of "his" acts (note the possessiveness of the attitude) to sing live because he and his team spent so long in the studio making sure the vocal performance sounded as good as they could make it anything done live would be second best. In HIS hands Melodyne would be a purely business tool not a musical one. I am old enough to remember newspaper headlines about how the Bay City Rollers didn't play their own instruments so I am aware of the rent a band concept with session musicians. I went through University to the soundtrack of Relax by Frankie Goes to Hollywood on which Norman Watt-Roy came up with and played the iconic bass line, not the band member. And I have heard Glen Fricker screaming about bands that turn up to a studio without having done the practice to know how difficult it can be to tease out a recordable performance. Ultimately the choice of which points to change is an aesthetic one and that is where the differences in preference come in. A while back I was in a band doing a cover version of Rocking in the Free World and in researching options to create our own interpretation I found a version that Pearl Jam did live. The audio is straight off the video so there is no editing or correcting. One of the guitars is clearly out of tune by a significant margin but I love it as a performance because it captures so much energy. I have no doubt that others would think it unmusical garbage. I concede that this is a personal preference hence my understanding that Melodyne can have a place. My concern lies around the difference between what you describe, a tweak to correct something that is not on the button because of practical limitations, and the ability to take a voice that honks like a drunken goose and put it on tune and on time. I want to hear an authentic representation of what the performer is capable of and wants to project but with Melodyne I cannot be sure that this is what I am getting particularly when someone like Pete Waterman would be using it to create an electronic version of Milli Vanilli, hence my deep distrust of the outcomes.
This was so interesting to me - I especially noticed with the Aretha clip how much her subtle bending of the notes added emotion and “humanity” to her singing. Really fascinating, thanks!
I was always taught when singing thirds in a choral setting, you aim high or low on it depending on if you're ascending or descending, or depending on the chord and other harmonies. I understood it to be voices aren't limited by the tuning system that instruments are beholden to. AFAIK this is very important in barbershop too.
@@mikesmovingimages I believe you. But vocal and especially choral music is closer to my heart. And no one really told me to change colour of notes. It was always just about "right" intonation.
@@mikesmovingimages though the popularity of the technique of bending notes on guitar has been a more recent thing, you're technically right that fretless instruments such as violin and related instruments have been able to bend and/or play pitches outside of 12edo for a long time. Or if you meant "bending notes" as in using any tuning system other than 12edo, that's certainly true. 12edo has only had widespread adoption within the past couple hundred years, and before that, all sorts of other tunings have been used throughout history and in all different cultures.
@@bragtime1052 in addition to bending notes for melodic interest, the idea of being in tune is a subjective concept. In reality there are not 12 tones in a harmonic scale but 17. All the emharmonic notes are comprised of two. Eb and D# are not the same, for example. Which one to use depends on the context. The ratio of a third or sixth to fundamental tone yields an irrational number. Orchestras and choruses never needed equal temperament, and as Adam's video demonstrates, "fixing" art by forcing it to a grid is a dehumanizing exercise. There is much beauty beyond the cold algorithms.
quantizing larger tempo shifts also really throws the listener for a loop. I heard a version of SOAD's Know that had been corrected to a constant tempo across all sections for the purposes of being mashup-able and it's absolutely bizare sounding to not have the choruses slow down
Without elaborating, it reminds me of Photoshop on models. It's done so well that it's beginning to look natural and anything that doesn't quite match that ideal is seen as not as attractive. And it's setting a standard that we artificially have created for ourselves and will idk, I doubt it will be as damaging as Photoshop, but it'll have an impact for sure.
Kinda but no, like you said. In both cases, the tech is being used to take a product that isn't quite what they want it to be and get it there. But physical appearance isn't in the same realm culturally/psychologically as music. People seem to tend to reject when music is artificial, even with no prior reference. People would rather have a naturally good singer but commercial music prioritizes image (looks) over the talent, and thus we get an inferior sounding product. Interesting how the looks part is connected to this. Flawless personal appearance is something we seem to desire or aspire to, and not because of photoshop. Makeup and other personal physical attribute enhancements have been a staple of all cultures since before written history (interestingly this exists among the different cultures' varying ideals of what is attractive). So in the photoshop case, the market is supplying a demand (but I do agree that it becomes circular because that ideal then in turn does influence the cultural ideals).
I'm a former music major, and I hear nothing different except pitch center, like he chose a slightly higher approach, which as a classical singer, is exactly what you want to do. I think this is more a recording mix video, where people who work sound boards are more conditioned to hear those differences in headphones. On stage I can hear really subtle things, but in a recording mix, those subtleties just aren't as present, they're mixed out, and so hearing the post mixed sound takes a certain kind of practice.
So "Ain't No Sunshine" is really noticeable, but that's a very stripped down recording, so it makes sense. Pink Floyd as well, but I never liked that drifting Tom Petty vocal sound, so I'm ignoring it. With "Ain't No Sunshine", smoothing his pitch actually changes the rhythm of the song, bc he uses the pitch inflection to set up the attack of each word. By smoothing it out, the music looses it's 'swung' quality and it sounds more like he holds a note too long and has to play catch up. That's when losing expression actually fundamentally changes the song in my mind.
Everything’s fine until Those blues thirds start falling away, it’s kinda the thing that distinguishes vocals from other instruments-so the phrases sound similar in isolation but if one listened to the whole track “perfected” you’d never hear the vocals do “their thing” in the pitch landscape. Kinda like how the guitar is tuned just shapes the sort of phrases we hear from it, like “this is a guitar melody, I can tell by the notes” even when it comes out of piano.
this kinda reminds me of when people take old animation and "improve" it by using software to interpolate it to 60 fps, destroying all the nuance and detail of the animation
@@lolwutizit The original animation artists were constrained by the medium and lack of technology of the time. They implied motion. Computers, when trained correctly, can see that implied motion and fill in the detail. I won't argue that it's not a computer's "opinion", it certainly is. I will argue if you say it's objectively a bad thing. It's quite subjective and IMO it's a good thing for many people, myself included. I like it, and I like how the implied motion is automatically smoothed. "I feel like it adds detail and makes the nuance easier to appreciate." is my original statement and I stand by it. The interpolation is quite sophisticated, taking implied detail and realizing it. I appreciate that and believe it accentuates nuance.
Hahahhah ... I immediately thought of Lou Reed ... the chap wasn't a "singer" for sure, but he sang his lyrics as he felt on the day (and never the same in two performances). Among the best as a story-teller poet there was. Have a feeling he would have refused to use autotune ... he consistently eschewed "embelishments" of any form unless he felt the song needed tham.
I think being "off pitch" can sometimes sound better or more natural to us probably because it's closer to just intonation than equal temperament. Maybe?
got nothing to do with intonation or music theory, and everything to do with our organs and ears + what we are used to. it's just "how it is" and theorizing around it won't change that :P of course there's subjective taste as well
I think it sounds more natural because it just…IS more natural. Most people (and even a significant amount of instruments) don’t have absolutely perfect pitch and hearing something tuned so perfectly probably rings as unnatural not just because we have context for the original versions, but because we don’t usually hear natural examples of perfect tuning in that same vein.
@@paniccleo it depends on the instance. I don't agree with the person above who said that it difinitively has nothing to do with intonation, but I do think it depends on the instance. I am sure there are times where this is the case. There are plenty of songs I know of that are intentionally out of tune that sound better that way than they would *in tune*
As someone who got into music late and was born after 1985, I've found it took me a really long time to appreciate singers like Bill Withers, whom I found "pitchy". Then, when I really started listening to older music in general, I realized almost everyone was "pitchy" and pitch was an appropriate sacrifice to emotion and soul. Now, when I listen to modern music, I find it more devoid of character. I appreciate this video because it definitely affects how I'll correct my own voice and trust my ear more when something sounds right and isn't perfectly on the pitch grid. Thanks Adam!
I've noticed this with a lot of people who predominantly listen to contemporary pop. They perceive natural singing as pitchy. But of course perfection is rare in actual musical performance and it's also not aimed for by performers because it's boring and unnatural. Slight deviations from the perfect pitch are the norm with singers, violinists, cellists, etc. It's even used consciously for an emotional effect. The same thing is true for slight rhythmic variations. A lot of people have become used to music that has been dehumanized in many different ways.
That’s a dumbass quote lmao. Style is just honed by realizing your mistakes overtime. Clearly that quote was the dude trying to sound smart and it fails miserably
So by your logic, polyphias whole style is mistakes they can’t help but make, but their whole style is very technically demanding and has almost NO mistakes. So please. Make it make sense for me.
@@NBrixH I’m not saying polyphia sounds good. I don’t like them either, but to say their technical prowess are mistakes is insulting. They are more technically skilled than you or I more than likely ever will. They deserve respect due to the work they put in. I can acknowledge how outrageously hard something is, and acknowledging the talent needed to play it, whilst still thinking it doesn’t sound great. But some people do think it sounds great, so to say it sounds terrible is just your opinion, and you’re stating it as though it’s fact.
One thing Adam isn't considering is that the notes actually ARE perfect. They just don't align to this perfectly spaced grid that we 'say' is perfect. This...degree of separation that someone said is 'perfection'. Humans are perfectly imperfect. And those human imperfections are what Neely would describe as "mojo".
Another part of this is that pitch correction introduces audio artifacts. The software is altering the audio samples to produce a different pitch, which is in itself an imperfect process. The more the software alters the audio, the more artifacting there is. This further exacerbates the unnatural feeling of heavily-tuned vocals. Now, to your point, this can be desirable for an aesthetic purpose, but if you want to actually tune a vocal and still sound "natural", you have to apply it VERY delicately. In many cases, audio engineers are not strictly locking the audio to the desired pitch (as was done in this video), and are instead trying to get it "close enough" that it is read as "in-tune" (according to Western tuning systems) without introducing too much artifacting. The main goal is to eliminate distractions, fixing where a note is sour, rather than simply being bent for creative effect. It's a delicate dance to pitch correct a vocal without "ruining" it.
Also hot take: the best mainstream rock, blues, and RNB singers of the 70s are not representative of all the singers from the 70s. It kind of goes without saying that there are a lot of people who would have killed for this technology back in the day and these post hoc arguments are a product of validating nostalgia. Hell, as a producer, I work with plenty of people who I don’t think need any pitch tuning.
@@BradsGonnaPlay also, another hot take: moses is just not good enough for this. you need perfectly isolated vocals before room acoustics being applied for this to work at all. Software separated vocals with natural room reverb just ain’t it,
This is so wild. I was just listening to IV last night and noticed that in "Battle of Evermore" there's a moment where all the harmonies come together in unison and they are not at all on the same pitch...and that makes it infinitely better.
That had to be said. I love Adam Neely but if I had no idea about Melodyne and never used it before I would probably create a worse image about that tool. It is made to be able to keep imperfections more than other pitch correction softwares and its mostly how u choose to use it.
I personally tend to just slightly automate the pitch for little corrections, but this is probably a lot more convenient and visually reassuring. Every way of changing pitch without changing the speed also has it's artifacts when working with audio files, but that only seems to be on more drastic pitch changes, I could see myself picking up this plugin for that as well, if it's got a unique and cool algorithm.
The drift on vocals is what really kills things, but usually vocals are things you can recut until you get it right. There are numerous technical problems (making a 3 saddle telecaster stay in tune when doubling piano melody) that used to require detuning a string to get around the intonation issues, then only playing part, doing another pass after retuning, that you can simply fix after the fact now. It's not fix it in the mix, it's fix something that is inherently flawed in the instrument without wasting time. Why is it that Adam didn't do something I know he is probably annoyed by on a regular basis - fix a trumpet that is slightly sharp on an otherwise great take that you didn't realize was off? You nudge it in a few seconds. Beats cutting and pasting the note from another bar. It's a powerful tool and vocals are the worst example. You wouldn't do this to BB King, but Frank Zappa... that's where you see the usefulness. Blues singers have great control to begin with. A marimba and harp will fight each other - harps don't stay in tune. Melodyne make that go away quickly with no artifacts. It's not the enemy. It's your best friend for technical issues that aren't laziness.
A violinist friend of mine once asked me "What is the difference between a good musician and a great one?" I shrugged and he replied "The great musician knows when to get the notes wrong and by how much. The note has to serve the music not the other way around". I think that this is what your video is laying bare. Great work, by the way. Keep it up
There’s a certain inertia to the sounds of blues; like a heavy rock sliding on ice. You push it to change its direction but it wants to keep going straight. I think that sound gives a solo or melody much more authority over the other parts of the song.
@@Youngapollo47 I think the fuss is all about the "tune" part, not whether you do it manually or algorithmically. At least to listeners who are not producers. I mean, most people consider production itself as "computers doing it", and not as actual hard work, talent and artistic decisions that are involved.
@@Medytacjusz i dont think that’s relevant to the original comment or my reply, right as you may be lol i was just pointing out that it’s funny he’s using melodyne but calls it auto tune
I dunno if this sounds crazy, it’s like the soul - is in the moments the voice changes from one note to the other. The story telling is in HOW the voice goes from one note to the other. That’s what colors the emotion into it. Just like not all human emotion will ever be pitch perfect- so Auto tune will never be able to place true emotion into the song. Awesome video was engaged from the second second on thank you for making this.
The crazy thing is that none of them really sound bad per se; there’s some artifacting but that’s to be expected; they just sound off. They just sound a little less human, a little less expressive. As Alejandro Jodorowsky once said, “Too much perfection is a mistake.”
13:11 this, for me, is the difference between Melodyme and natural. The "corrected" Bill Withers sounds good, but when you played the uncorrected I immediately got goosebumps
The harmonizer is not the Tpain effect. T pain uses actual auto tune which, when the sensitivity is turned up all the way, locks the vocals to the note so precisely that it's causes an unnatural robotic sound.
9:10 When I heard the melodyne version I was like "yeah it still sounds decent" and then you played the original and I instantly got goosebumps. I could see her singing in my head as the original played, it instantly felt so vivid. It's crazy how you can get such a forceful emotional response from the original simply by hearing the doctored version first.
Lol You guys aren't taking into account nostalgia. I'm sure you guys would still get chills if the original vocals had pitch correction and it was the only version you knew.
I was joking - last year Adam made a big issue over our focus on "classical music" (and related theory) sidelining other music traditions and being inappropriate to analyse such musics.
David Bowie's classics just wouldn't sound the same without him continuously going flat and managing to correct himself without most people ever noticing. It's part of the beauty of not using autotune, those near misses are exactly the things that bring an emotion into the music. The art in music as a singer is to get back on pitch where you want to be. Autotune is basically relegating that personal art and skill to a computer, meaning, the singer will not learn or maintain that skill him or herself. Which is one of the reasons why some singers just can't perform without a laptop in the room. Worst case scenario, which sadly happens more and more these days, is that people are selected as singers based only on their looks and sense of rythm, without any skill in hitting a note even close, because autotune will fix that.
the singer of tomorrow are based purely on their looks, they will have no sense of rhythm, time, musical tune, Key and pitch. they will have a Apple Macbook with GarageBand Tools, Autotune, song writing software (AI produced lyrics) and the index finger to press start. sell out concerts watching someone sound fantastic (according to the tone deaf who cant hear Autotune Vocals), only to have a major software malfunction happen, having the once amazing angel like vocals turn into the sound of a cat being tortured as their claws are dragged across a blackboard, with the accompaniment of a dentist's drill grinding away at a root canal. the news headline the next day. "singer's career is over after his laptop failed to cover up his real voice, causing 25 people to suffer major injuries while trying to flee the venue" one of the injured said " all of a sudden, there was a spark, a popping noise in the speakers , then his real vocals came through.. it was so bad the front row started to vomit while the rest of the crowd screamed in agony, that sounded more in tune than the vocals did"
The most important part is what he has to say though, and what he has to say is always interesting and enlightening. The editing is merely at the service of the discourse.
A prime example of why I have loved your channel for years, This and the Coltrane Fractal 2 of my all-time favorite musicology videos on here. Also, your mom is awesome!
@@joewalker3166 I listen to Bob and I dont like his style. Anyways it adds atmosphere to his songs witch are unique and original and coresponds well with lyrics etc. And yes, he is well tuned. Its just about the taste.
For me Dylan had one of the best voices up to 78/9 then it was harder for me but before that his voice was out of this world. But I don’t like proper singers anyway I can’t even stand Mercury
@@joewalker3166 um you are aware he was drunk as hell on stage like 90% of the acts he performed right? I feel you are cherry picking with rose tinted nostalgia friend. do I love the songs, yes, but I can be hard on them with that love.
What melodyne does is make a microtonal voice instrument "pirch perfect", but music is not about perfect pitch (that is all about production value and conformity). The microtones are there for a reason. It's like removing the bends from a guitar solo.
Side tangent: T-Pain went into a big depressive episode after Usher told him that he "ruined music". After some time away, T-Pain went on to win the masked singer, to prove to himself and the world that he can sing without the "crutch" of autotune.
Yeah we all saw that Netflix doc. And even if we didn't its pretty well known at this point. That along with the tiny desk concert is why everyone loves T-Pain now.
lol it wasnt subtle, it sounds subtle but it was too aggressive, you must not take all the way up the settings and once you tune, you need to fix the vibrato and connect the note vibrato which he wont do it for a video, the vibrato its what makes a voice sounds natural and alive, also if you dont connect the notes it sounds unnatural in most cases, pop singers make it more subtle in most of the cases
I don't think it enhances it in any way. I feel like it ruins it. The imperfections are the beauty of pre-digital music. John Bonham from this same band always played a little behind the beat which makes Led Zeppelin who they are. To fix his drum beats to a precise bpm would ruin it even worse than Melodyne ruins Plant's vocals.
As a ballet dancer, this is very similar to what a teacher bestowed upon me...as an artist, we establish that we know the rules and express through breaking them. That is the definition of art.
Maybe because I've been a full time recording engineer for 30 years or so now, AutoTune/Melodyne/etc. always gives me the heebie-jeebies. It just removes all the emotional payload for me. I find it as engaging as being serenaded by a Speak and Spell would be. Uncanny valley indeed.
Ever recorded a pedal harp? I have. I love Melodyne. The instrument itself will not stay in tune by design. Using it on vocals like shown is just laziness. You may have 30 years, but you haven't had Melodyne for 30 years so I'd really suggest pulling up any instrument that is prone to intonation problems, or analog synths that drift in tuning as they heat up, and seeing how useful it is and how it doesn't remove 'soul' but 'technical error' so you can get a better performance without doing all the workarounds you likely know about. I don't tempo map, don't fix bad vocals, but a jazz chord on a 3 saddle uncompensated Tele around the 8th fret? A nudge fixes that problem. You're missing out.
You can watch the bonus video and the un-edited original cut over on Nebula, if you'd like. (curiositystream.com/adamneely) Also, yes, I use the term "Autotune" to mean many different things in this video, including simply "pitch-correction" and not the piece of software specifically. I used Melodyne in the video mainly since I don't own Antares Autotune, but they can do very similar things.
Great work fixing all those out of tune amateurs - but you missed a trick, I noticed that they are all off the beat, I mean really, it's like they just sing whatever they want whenever they want, lets lock it down and get them strictly in time for the next video.
It's no surprise that hours and hours of work honing the skill of playing a piano (an instrument that is pitch-quantized, incidentally) can produce far more sophisticated and musical results than simply 'banging on the keys'. Is it surprising that Melodyne (as an instrument in it's own right, incidentally) is any different?
It might make it mechanically and acoustically more perfect. But it loses the human aspect, the "soul" of the music. Those little imperfections and variations.
“If you were to cut off your head above the vocal chords and pass air through, it would sound like a baseball card in bicycle spokes”… uh… Thanks, Adam Neely’s Mom.
It's honestly impressive how subtle it is at the end of the day. Like, until you really crank it up and really process the sh*t out of the vocals, it sounds quite natural -- if clinical. So much so that it's likely impossible to tell if the vocals have been tweaked or not (assuming it's not a performance you're intimately familiar with). Weird.
@@markgriz It's those little imperfections and inaccuracies that make songs feel alive. In many cases, studio recordings get processed and cleaned up to the extent that I often find myself gravitating towards live performances over the studio stuff.
*Dynamics* and *tones of voice* are conveyed not only with volume and timbre, but also via tiny pitch "artifacts". Singing a loud sound has its own pitch signature applied on top of the note you hit, singing a growly sound has another pitch signature, and so on. Which means that flattening/"correcting" pitch strips away some of the perceived dynamics and tone as well. (Centering a note in Melodyne is essentially decreasing the amount of information in audio.) So tuning vocals needs very careful consideration - I often end up having tuned the vocals for a whole part with everything sounding great and smooth in isolation, but when playing it back it doesn't hit as hard since it's lost that _tiny_ bit of perceived dynamics. This is also one reason why synthetic vocals are so difficult. You can't just isolate factors like pitch and volume and tone and simply sum them together. You need a massively smarter algorithm in order to sound natural. Tweaking a Vocaloid to sound natural, for example, is super time-consuming and finicky with the current tech.
Imagine autotuning some even more out of tune masterpieces, like John Wetton in King Crimson (always sliding into and out of notes, usually fighting not to be flat, love him nevertheless), or Bob Dylan singing anything off of Blonde On Blonde. Goodness sakes that would be terrifying.
That was what was so much worse for me though! If the autotuned ones were played first I wasn't sure I was actually hearing anything different. It was only when the unaltered version was played that I noticed just how much was missing from the autotuned one. I feel like that's really what's so tragic about autotune being the default for every single vocal today. If that's all you listen to, you probably aren't even aware that there's anything missing because you've got nothing to compare it to.
13:48 Today I learned smething new (besides the autotune of vintage artists and the pitch correction): change is not always good change is not always bad change is change understanding change is the important thing Great quote. Thanks Adam.
I would be more insteresting comparing the "perfected" version and the original without knowing which one is one. I felt the same way as Adam, but I think placebo has a major role in the feeling of those melodies (at least in the ones that the original and the modified are very similar).
My 50-something ears honestly couldn't hear a huuuuuge difference. Maybe with longer samples. I think I'll download the trial version of this godawful program and and play around.
I would as well. I don't think the knowledge ruins my perception though. I've loved the sound of a raw voice & despised autotune for a vvveeeerrryy long time. There's so much subtly in the imperfections that make something sound human.
I wasn't paying attention to the screen while listening to this video, and, personally, I didn't notice any difference at all! Well, maybe except the very first correction. And I'm not sure if I would notice any difference even if swap relatively cheap speakers for ATH-M50 and super-duper-dedicate myself solely to finding what part of the video was actually corrected.
FYI everyone, that's not how we edit vocals. When the edited vocal sounds worse than the original, that's a pretty bad vocal edit job 😂🤣 A good vocal editor will leave in the imperfections if it makes the line great, and only brush up the areas where it's needed. So yes it is very subjective regarding what's good or bad and that's where the skill/taste of the editor comes in. In general, Melodyne is used to enhance the vocal performance, not ruin it. And when it comes to great singers, not much needs to be done. For other singers who are not as naturally perfect, more fixing is needed. Let's say we have a great artist, writes a great song, sings with emotion, but just a little shaky or off pitch at times. Do we say, "come back when you can sing better"? Of course not. That's when we use Melodyne. So Melodyne isn't 'bad', it's how we use it. And in the case of flattening and tuning vintage vocal performances? Yea its bad, because it doesn't need it. But I guess the point of the video is to show that the right natural imperfections can make a performance great. That's true. My point of this comment is just to educate and clear up some negative connotations around vocal tuning. Not salty, I promise 😁 And also, a BIG common misconception, Autotune is not Melodyne. Melodyne is more like 'Manual-tune' What was done in this video is manual editing with Melodyne, not Autotune. They both have different applications and serve different purposes. ✌
However, I should add, the the artistic intent of the creator must always be prioritized, if they decide to have a vocal performance corrected by a professional so be it, if not it should be left untouched.
Hi @Dinocaster thanks for explaining that. TBH I only found out about autotune/pitch correction - and producing in general. It's quite upsetting to me. That to be a good musician you need all this gear and all this time to manipulate recordings ....And to hear how recordings are manipulated .... kinda really stressed out about it. I'm trying to attain the impossible.
@@kegs5556 yeah it actually is in many cases. unfortunately. but as he mentioned, in most cases you would never know the difference. its akin to someone doing color grading on a film.
"I love Bill Withers' voice so much. It's so pure but there's so much soul and so much feeling when he sings AND WE'RE 'BOUT READY TO CORRECT ALL OF THAT"
I'm practically tone deaf... I couldn't really hear a difference in Robert Plant's or Frank Sinatra. But Aretha Franklin, Dave Gilmour and Bill Withers were ruined for me.
What's she's saying about how being trained by listening to digitally enhanced vocals is totally true and very insightful. I find myself naturally trying to naturally imitate fairly heavily autotuned vocals because I like the sound.
I got it! Was trying to figure out what the corrected vocals sound like. It's beginners that are too focused on pitch and/or "classically trained" singers. Groove is what happens when you're comfortable enough with an instrument that you can be playful and remove the training wheels. Tightening up these performances is like regressing their skill level. Timid singers also try to button down every now, so it also sounds like you're making the singer more timid.
It's kind of annoying that he put on that energy on I guess because he expects the audience to expect it? I'm not sure. He's obviously proud of his mother and he invited her on the show, so why not give her the respect and just treat her like a regular guest and not like she came to his school with his gym shorts because he forgot them at home.
i'm absolutely with you on the fact that some things don't need the pitch correction, or will even get worse if you apply it, but i must say that selecting all notes in melodyne and letting the algorithm tune it by itself isn't really an accurate representation of what melodyne is... to really see where it shines you gotta tune every note manually by ear, and ONLY those who sounded a little off to begin with, that way you can really keep something sounding natural but more in tune.
Yep, I genuinely winced when he selected everything and just auto-applied it. You want to go through and just pitch correct parts that might sound off. Some notes that are slightly sharp or flat will sound fine in sequence, and lend a natural quality when left as is.
It's just a way to make a more extreme example where the difference is easily heard. But in general almost all modern music I think sounds like crap mostly because its auto-tuned, auto-timed etc. though in videos like this they always make sure to say that auto-tune can be used successfully in the right circumstances but tends to be over used.
The phenomenon that you talk about with your mom is something I've heard about with guitar players. With the ease of being able to edit guitar performances to perfection in DAWs, you have a whole generation of guitar players who can play with almost robotic precision because they learned off of those records that were edited and time aligned to perfection.
It's true that if you hear today's djent metal bands, they definitely do struggle for more mathematic precision even in live shows compared to for example 80s trash metal bands. Today's prog rock values technical precision way more than 70s prog rock did. And it's noticeable in every genre. That's exactly what hyperpop mocks with it's cartoonish overproduction.
@@Posiman Cool that you mention hyperpop. On first discovery I found it to be revolting, but after a while I really started to get the parody and now I find it to be "on the other side of the uncanny valley".
It's late at night and I was dosing off while watching this and when the video ended with the "BASS" sound effect I almost shat. Thanks for the sheer spike in adrenaline, Adam.
For those of you wondering how I casually mention decapitation.....One of my principle voice teachers was pre-med in college before she turned to voice. During her pre-med stint, she observed cadavers as part of her study, and witnessed one of her professors demonstrate this as part of a larger context. She was the one who connected the card-on-bicycle spokes sound through the headless body with the pharynx being responsible for timbre, color and vowel of the voice back in the 1960's. This was before she had studied vocal pedagogy and voice science, which confirm this in less, er, graphic ways.
Often the most jarring image is the most memorable, which is a great pedagogical tool. Love your cameos on Adam's channel!
Your a hero Momma, thank you for your insight!
Thank you for explaining this! hahaha, it really was a surprise how suddenly we were talking about air being released through the throat-flaps of headless cadavers... ohhh too funny, you epiglott- me... epi-got-me, oh well i tried :/
Oh my, it's Adam's Mom!
Hey! Lol
I just had to come to the commentary section for that. That's a fact I'll never forget. Prrrrttt.
My psychologist: T-Plant is not real he cant hurt you .
T-Plant: Y0u n33d c00l111iiing
😂😂😂
🤣 this really tickled me
LMFAO BROOO HAHAHAHA
"Y0u n336 K001-A1d"
Fucking hilarious.
Aretha: "All I'm askin', just a little respect"
Adam: "That's wrong, let's fix that"
-Adam Neely Out Of Context
Sounds a lot in context, in fact
I mean he did just straight disrespect her right after.
The t-pain effect on "fly me to the moon" changes the premise of the song. It changes from romantic to an actual robot literally trying travel to Mars.
that’s hilarious 😂
To high smoke crack
the elon musk version
ye why actually use artists, when using vocoders gonna remove un-needed human element
Adam: Wow, what a huge difference!
Me: Hmm yes, I too notice difference. Much difference, indeed. Everywhere. Yes
Same :D
I feel like this everytime I watch on my phone. Adam: "This bass is very thick and rich." Me: "Ah yes, such low frequency."
Theres some parts where i dont think i could tell if i wasnt told it was corrected, but theres a few notes where im like "wait... tf was that"
Same
So I'm not the only one
“To sing a wrong note is insignificant, but to sing without passion is unforgivable.”
-Beethoven
Wonder why Beethoven would forgive a few wrong notes here and there
@@ZoSo42084 Maybe he didn't hear them? 😉
That’s what Kurt Cobain said as well. Kurt and Beethoven were exact same
@@dondamon4669 lol... is this even a quote by either of them? it really sounds like bullshit facebook LiVe LaUgH LoVe tripe falsely attributed to someone famous to get shares
@@michaelwerkov3438 no it’s Kurt who also said “what I have in my heart and soul must find a way out , that’s the reason for music, and to play without passion is inexcusable “ Kurt and Beethoven were brethren’s
With Sinatra, his breathiness actually gives a bassy thickness to his vocals. Whereas the fixed one sounds anemic. Its insane how something so small can make a big difference. Wow.
Melodyne doesn’t care about your breathiness though…
@@leaveitorsinkit242 modifying anything about the voice will change how it's perceived, it's impossible for a program to simulate what the singer would have sung a couple cents higher or lower
@@albertweber1617 not impossible, just not yet possible
I think that's an artifact of the AI stem separation software, not Melodyne. I would guess the lower frequencies of the human voice are more difficult to distinguish for the software, just as they are for us
it takes away the "tonewood"
They do not need
Auto tune
Their voice is freaking
POWERFUUUUULLLLL
MASTER EXPLODER 🔥🔥🔥🔥
I did not mean to blow your mind
@@PassoutProductions but that shit happens to me....all the timeeeeeeee
@@Leon-Coils19912 Now take a look (now take a look)
Tell me what you see!
“Perfection is not very expressive” …great thought to think on. 💭
This reminded me of a classic moment from MASH, where Charles, a man of means and culture, explains to an injured pianist the he always wanted to play the piano, saying "I have hands that can make a scalpel sing, but above all I wanted to play. I can play the notes, but I can not *make the music*." It's such a succinct way to express the imperfections or uniqueness of musical performance.
That's why Chopin composed so much on the black keys. His pianos sounded off in those keys (pre equal temperament). Nowadays classical musicians are very busy forgetting it was the case. ;-)
Get it perfect, first. Then, mess it up expressively.
says the man with the tilted frames on the background :-)
As a saying, it's right up there with "repetition legitimizes", or my corollary "legitimacy repeats"
More Mama Neely!!!
I run a recording studio in Western Kentucky and I had a young vocalist (~13-14) who was recording her first album. She would add these little diatonic embellishments at the front of phrases. Instead of a scoop into the phrase it was an actual pitch. The performance struck me as strange and it wasn’t until about tune 5 that I realized she was mimicking auto tune. It got really obvious on her more modern cover tunes. Great topic to go deep on.
Damn I just tested that on myself (I'm gen Z) and tried humming the melody of a song I know is definitely autotuned and then the Zeppelin song, I can totally hear the effect the song which had autotune had on my voice. Every note came in as on tune as I could make it (which wasn't very on tune since I'm not much of a singer). It almost felt like I was trying to play my voice like it was a piano, while with the Zeppelin song felt more like I was just singing normally... Really interesting effect.
I wonder if part of that piano feel was simply trying to match pitch. Try to match pitch with Plant and you may feel the same way. Don't "feel" it, just match his pitch. Curious how that feels to you.
I had no idea that there were recording studios out here in Western KY.
I honestly want to hear that. That sounds actually pretty interesting to look into!
@@drewburchett2824 We're set up in Paducah, KY. Our business is called Time On The String, feel free to look us up!
Hey Adam, I wonder what would happen if you went the opposite direction: take a song that has already been pitch-corrected to death and use melodyne to bluesify it. It would be interesting to see if this makes the vocal more soulful or just renders it even more uncanny. Of course, this would be *way* more work, as you'd have to choose which notes to bend and how far instead of just jacking all the sliders to 11, but it would be super interesting to find out what happens.
I think an easier approach to finding out what a heavily pitch corrected song would sound like without pitch correction would be to see if the artist ever sang the song live without an autotune microphone and maybe had a few pitches off. You can also see if anyone did a cover of that song without pitch correction. Though, I think it would be hard to find heavily pitch corrected songs that would sound better without pitch correction because I would assume that music artists and producers kinda know what they are doing and they probably wouldn’t pitch correct a song where it really sounds worse with it. Unless the song is kinda bluesy or has a semi-conversational style of singing, it probably would sound either the same with pitch correction or better. I guess musical theater numbers and operas would probably sound awful with pitch correction too though. But, I think pop music uses pitch correction for a good reason.
That’s not how stuff works
Yeah, that’s not really possible, but I appreciate the sentiment. Like someone else said, all you can really do is try to find a live recording without pitch correction. Unfortunately, a lot of pop artists are forced to use it even in their live shows, so that might be difficult to find.
@@AVL64 T-Pain’s Tiny Desk concert is maybe the best example you will find.
I was about to comment the same thing. I was wondering if the uncanniness if because the effect mangles the original tone / timbre of the voice. This experiment would reveal if it is the corrected tuning or the distortion caused by the effect that is making the singer sound "off" or "robotic".
It's amazing how a "better" vocal performance can sound alot worse than a more rough vocal.
💯
*a lot"
If you look at any instrument, being 100% in tune is not desirable. That’s what vibrato is. It’s a rhythmic variation in the pitch. When you 100% correct this it will remove all the emotion.
@@TheMAU5SoundsLikThis What gets me is that the common Western equal temperament scale isn't isn't 100% in tune so "correcting" vocals to it makes even less sense. Sometimes the in between notes count.
@@garydiamondguitarist Nevermind how pianos are tuned. As you go towards one end of the keyboard, every note starts being "stretched." (called octave stretching)
I wasn't ready for "T-Pain" Led Zeppelin. Wow. 😂
Ayyy, love your videos man!
i laughed soooo hard
Agreed, listening to it was “painful”.
@@HangsLopsided t-painful
Led Zep-pain
This reminds me of what I learned about Indian singing traditions. Often how you get to the note matters more than the note itself because the little bends along the way can convey a lot.
Adam's mum saying "imagine you had your head cut off". Real conversation starter
How does she know that 😱
@@mikew6840 former students
How did they figure out how human vocal cords sound without the head? 😶
@@Dowlphin By using Mathematical Anti Telharsic Harfatum Septomins
Reminds me of that line from Almost Famous: "Your mom kinda' freaked me out."
This is an absolute documentary of what it sounds like when the soul is removed from the music.
Well put. When it comes to autotune, all I can say is turn that shet off and learn how to play and sing.
Actually this vid shows how really today auto tune is not a correction or a tool anymore, but a real instrument by itself, like a vocal synth
@@raphaellerouxzielinski1731lmao no it certainly doesn’t
@@cured_bacon647fax
@@cured_bacon647 I mean Adam himself makes that point
Losing Aretha's pitch slide on the word "for" in "for a little respect" was the most jarring uncanny valley moment for me. It needs that slide to work!
I liked seeing the phrase "what you need" on the grid, you can see the way she slid up to the third 3 times and each iteration was slightly flatter; "need" was almost a semitone below "what" (at around 8:20)
Yeah, that really jumped out to me too.
It "works", but it is no longer Aretha. The rest is a matter of taste. This stuff is as bad as a click track.
I didn't even consciously notice a clear difference on these direct and short comparisons, but if listening to longer pieces, I'd definitely feel either enriched or dulled depending on whether it has life in it or is sterilized.
@@mikesmovingimages did you seriously just compare autotune with one of the most useful musical tools ever created? Please go away.
"Let's see, the name of that vocal teacher is...?" "Let me guess, mama Neely?" "My name is Cate Frazier-Neely." Called it.
The tension of this comment on iPad mini, a line cuts just before the “Neely”
Madam Neely
As long as it isn't like in the vtuber community where everyone's simping for the moms
So not only is the voice a naturally microtonal instrument, but that microtonality may be integral to many performances. Neat.
Ficaria legal dobrar algumas segundas ali pô.
Principalmente nas terminações das frases.
Mesmo usando esse pedal de harmônico, achei muito interessante harmonia
Only if your pleasure is a real, human vocal performance.
Imagine autotuning Korn. It would completely ruin their entire library of songs.
@@mikegamerguy4776 oh my god no
Well, Autotune is simply designed to “correct” pitches to a pitches used in European/western music. It would destroy eastern styles of music.
Pitch correction is a very useful tool when you have an epic performance filled with emotion and mojo but just missed those one or two notes that were off enough to mar an otherwise excellent track. A little snip snip fix in the mix and you save all the juicy goodness. Where it goes too far is snapping the entire vocal track to grid and losing all of the human element. Those scoops and slides are the secret sauce that separate an epic and memorable performance that touches our soul from a cold robotic one.
exactly correct
For me, it's those imperfections that create perfection. It's also why I usually prefer live performance. I'm not keen on everything being adjusted, artificially, towards some record industry 'standard' sound.
The correct take
This is so true for most music but what if it's an integral part of the genre? I've heard a lot of heavily emotional performances where they purposefully use portamento with autotune to get that snappy "dodODODO" sound between notes and it's awesome in my opinion (and a lot of others). For example Fri3ndzone - streetview.
It's really blowing up with the youngsters lately lol
@@abasement666Using it deliberately as an effect tool can have good results. But those who use it as that are good singers in the first place. They don't use it as a crutch.
As an example there is someone here on RUclips who is analysing Chers 'Believe' which was one of the first commercial uses of autotune. And maybe to everybody's surprise - it isn't used throughout the whole piece but actually as an accenting tool.
Adam: "The name of the vocal teacher is..."
Me: "Don't say your mom don't say your mom don't say your mom-"
Cool mum though.
@@normanfreund yeah she's amazing.
At this point I just assume any singing-focused topic is going to have a Mama Neely cameo but I still always love to see it. 😊
"How can I show off that my mom was cited in a book?"
I found Adam's mom's blog after this, and really enjoyed reading it. She had several posts about working with post-menopausal vocalists and the voice-changes they can experience. As a post-menopausal vocalist myself, I found it very encouraging ;) Also read her description of her father (= Adam's grandfather!), a brilliant organist. So interesting to read examples of how music can be passed down and transformed within a family.
Tell me more. (Also, I'm lazy.)
yeah, music is a family thing)) my grandpa played the accordeon and the piano (without any formal education!), my grandma sang in a pop band, my mom is a singer and choir conductor, and I...um... I played some drums in high school ._.
.My Mom went to school on a piano scholarship. I am a lifelong guitarist. Grandma played piano at parties ( they used to do that kids)
@@doitnowvideosyeah5841 Everyone's grandma played piano at parties. There was no tv, radio, or recorded music back in the day. Just live music or if you had thirty grand a player piano, the cutting edge technology of the time.
No doubt that Ma Neely's contributions to Adam's videos are always interesting, intuitive, and instructive.
“The next artist we’re gonna ruin…”🤣
I guess repetition legitimizes
Truth….
Exactly. If you think this "fixes" Led Zeppelin, then you don't know ANYTHING about rock music. Next, let's get rid of all of those pesky bends that Jimmy page is playing.
@@happyjack4813 that’s the point of the video. the “next artist we’re gonna ruin” quote is directly from the video lmao
@@happyjack4813 ... did you watch the video bro?
Kinda whack you of all people don't understand splitting the stems creates terrible artificing and is in no way ever going to result in a good mix when re rendered after editing.
When you do that things like reverb and echo get lost or tuned too and it's always gonna sound "off" This would only really work to prove your points (of which I admittedly agree with) if you did it with official stems, or with a singer directly.
This is NOT how to showcase the loss of what inexplicable artistry you keep harping on. You are already damaging the vocal when you force it into a stem with an algorithm. The second you touch any of those stems, the entire mix suffers. I'd love to see a reiteration of these key points with actual stems or raw recordings with a vocalist that's part of the video.
As soon as I heard "an app separating vocals from any stereo recording" I scrolled the comments just to find this comment lol Yeah you can't do that without damaging the audio.
True, but even beyond artifacts, I kind of agree with what he's saying about a lack of expressive range when limited to exact notes
@@jaxnitsua1200 I agree with him completely just think it's not the ideal way to make these points. You should try pitch correcting a stripped stem sometime, the level of jank is at least 10 times what it otherwise would be with a pure raw recording. Especially if you're doing it to vocals extracted from music that old. I just found it an odd premise to make the points but it was certainly more entertaining this way I give Adam that!
how did you get 6k likes on this comment on a video from a year ago? every other comment on this vid is like, 300-600 likes. and you only have 2 replies. fishy ass comment.
@@baker8981 Probably because people are music nerds and reacted to it 😅
It's so unironically incredible how the greatest singers are almost unaffected, if not worsened, by the use of Melodyne. Really fantastic artists.
All singers are worsened by pitch correction. It's not their voice, so by definition it's not a genuine performance. Might as well have an AI singer at that point.
@@LocrianDorian Nah, some people are so clearly out of tune (the more distorted the voice is, the more the pitch had to be corrected), that I'd rather listen to a robotic voice than their original mess.
What do you mean? The original sounds better. They should get rid of these computer software
@@vitorfernandes651 musicians should keep banging on 10th century pots and pans instead of utilizing new tools to make new sounds
@@vitorfernandes651 sometimes autotune can be used as a stylistic tool and sounds good
It's interesting how pitch correction can have the inverse effect in blues adjacent music as opposed to the added expressiveness that's possible in kinds of rap and R&B with pitch correction. Cool how the context influences the outcome. Great video as always man
To me, you really have to sing with the intention of hard tuning the vocals in order for it to sound correct.
@@GuyNamedSean yeah, it’s like you’re “playing” the pitch correction
In rap and r&b they're using the pitch correction to add to the sound but in this they're effectively using it to subtract sound, to take all the variation and remove it
If you snap to the grid, it rarely sounds good. If a blues singer overshoots a slide from the flat third to the major third, pitch correction can help.
@@1dkappe you can't grab a phrase and 100% it. You grab 3 or 4 words in a song and slightly shift them...or re-record
I think after being hammered with perfect pop vocals for years now, I actually crave something a bit pitchy.
The vulnerability of singing, the cracks and timbre of the voice, all of that stuff adds colour and richness to a song.
We’re here for the stuff that fall outside the lines!
I agree! that's something that got me into Rex Orange County. there are a lot of imperfections in his older stuff but it's really emotional and moving
This is why I love going to live performances and listening to old music.
Try flamenco -- listen to a bulería by Camaron de la Isla lol
Bent knee. If you haven’t heard them yet
You might like “Hear Between the Lines” (RUclips channel)
Last week I got really stoned and listened to "Whole Lotta Love" on repeat on my new vinyl record player set up... and came to the conclusion it was the greatest rock song ever recorded, so to hear someone is "fixing" it is like nails down the chalk board of my soul. 🤣
Adam Neely f***ing around with Autotune for 17 minutes is a whole vibe.
Yep.
He’s actually using Melodyne, not Autotune. The capitalization makes a difference in whether you’re referring to the process or Antares’ product.
@@kohaponx well when you get to this level of ‘f***ing around’ it doesn’t matter if you call it Autotune, Melodyne, or even Tune Real-Time whatever lmao
@@Guinneissik It does matter, because it's a different technique and sound. Especially when making jokes about "sounding like T-Pain" when using a different tool and purpose.
@@MrAfusensi it definitely doesn't matter as much as you guys seem to think lol
Imperfections are extremely important in music; if a saxophone was perfect, it would sound like a tone generator.
How many times have you had to use a mobile rig to record a sax player you get one day for overdubbing 10+ songs? Not every 'bap' is going to be perfectly in tune, and you won't catch every out of tune one because you're blazing through. Human beings get tired. 'Imperfections' are also 'mistakes' sometimes. Melodyne is subtle enough to fix pitch drift or going slightly sharp on the occasional note, without artifacts, which was ignored in this video. You cannot catch very error, so Melodyne is a lifesaver when you get a great performance that has a few tiny flaws that can be fixed other ways, but is far easier with Melodyne. Imperfections that make you wince are not the ones you want to keep. If you suck at using Melodyne, then you make a sax into a tone generator - and that's user error. It requires skill as an engineer to use it properly.
@@CHHuey There is a level to this debate about what constitutes "proper" use.
In most cases I would actually prefer to hear the wince. Ever since I started buying records (because I am old enough to have had to save up for music on vinyl) I have preferred live albums over studio ones. There is a magic that happens when a live band is absolutely on it and hearing a slightly bum note or a glitch is what proves that it is live and not re-engineered.
So that leads in to the issue of who decides what to leave in and what to take out. There is an artistic discretion being exercised between the musician and the engineer/producer but that inevitably creates tension. The businessmen want something that fits between the lines because there is a risk that the audience, who are now so used to perfection that they expect nothing less, will be turned off by something that has a few warts so they require what I would consider to be excessive correction. But the warts could be deliberate. Frank Sinatra could clearly sing in tune but chose to sing flat for the effect. Unless the person applying the "corrections" sits with the artist and goes through every change they want to make to see if the musician meant to create the wart then there are going to be places where the software is obliterating a deliberate part of the performance.
An engineer should never try to be a better guitarist than the guitarist. Or singer. Or drummer. Or just about any other instrument.
It may be that you and I are in agreement on this but I suspect that for some people "proper" means polishing out all the imperfections and with them all of the character and for me that taints the whole process.
@@Birkguitars Frank Sinatra wasn't singing flat. He knew where the note was. It just wasn't a piano note. Blues and jazz have a long tradition of using microtonal methods of singing. Melodyne isn't build with that mindset, but it can help if the occasional problem sneaks in and you just didn't catch it. You can say you like the mistakes at gigs but in all honesty, you don't hear the majority of them because the acoustics usually aren't good enough which is why bands will still 'doctor' live albums, because the bassist couldn't hear himself in the monitors and hit the wrong note, which no one in the audience noticed.
There is no magic when you have the same person recording 3 different saxophones while wearing headphones, playing to a pre-recorded track with 2 people staring at him for hours on end. That psychological barrier is difficult for people. Not everyone is comfortable and it affects the performance. You're equivocating between the 'acceptable mistakes' and the kind of mistake that makes you wince and notice the mistake. Anything acceptable doesn't need to be fixed.
You statements about engineers makes me think you've never had to work with one. The software doesn't 'obliterate' anything. I have had to fix notes on a pedal harp because a pedal harp is an instrument that goes out of tune in a very unpleasant way that no harpist likes. It's a problem with the instrument. Those discs make the strings go out of tune - design flaw.
The engineer is working for the producer, who is making those decisions with the artist. That is the job of the producer, to listen to those mistakes and decide which ones to keep in and which ones to let go. I never said fix every problem, but the ones that make you wince are the ones you fix because you just didn't catch it in time. You've totally restated my argument into something that fits your world view instead of analyzing it for its own merit.
When you get someone who is composer, producer, engineer and performer and in general in charge, you get to see the big picture. I have. There are mistakes you keep in and those you throw out. I never said you purge it of humanity, but there is a threshold of mistakes that you can tolerate, and those you can't. Do you really think that something like Frank Zappa's 'Waka Jawaka' or 'Hot Rats' is better with the wrong mistakes left in, because they weren't. He edited them out, but the guy edited so well, composed produced and engineered, that he knew what was supposed to be there and what wasn't. That's producing.
I do not think we agree because I'm not sure you get that recording a live concert and a multi-track recording represent different things, and if you're doing it on a budget with limited time, this is not 'demeaning the music' or anything like that, it's bring out the intent of what the person playing meant to play. It removes a barrier that lets you focus on performance and not on all the weird technical problems. Most saxophone players don't WANT to hit the wrong note. That they do is not proof of 'humanity'. You can easily correct the pitch while leaving the intonation as it should be - Adam severely under utilized that aspect of Melodyne, which is one of the best features of Melodyne.
The person who decides to leave it in or take it out is the producer, musical director, or the band if they're working with an engineer who really sticks to that strictly. If you want to go with 'live album', fair enough, but once you abandon that, you have to go with 'this isn't reality'. That's where Eddie Kramer, George Martin, guys like that come in. There was a Melodyne before Melodyne - it was called session musicians who wouldn't screw up and producers like Tom Wilson would often insist on it if the band wasn't cutting it.
The person in charge of the production that the musicians hire makes the decision. That's why they exist. Maybe it's one band member, maybe it's someone running a group like a jazz group instead of the Beatles, but at the end of the day there's no tyranny of Melodyne trying to purge humanity of it's expression, just an ability to make things that in the past would've taken forever since an analog synth goes out of tune the longer you leave it one. Those PITA problems go away and the music sounds better. But of course a band like Black Flag isn't served like that the way that Frank Zappa was. But that's because someone MADE the decision, and that was Greg Ginn in Black Flag and Frank Zappa, the composers.
The only reason I care to put all of this in writing is that someone who could do something extraordinary might not because they got peer-pressured into 'auto-tune is bad'. I'm tired of that crap just as much as I'm tired of bands that don't do pre-production demos, and practice while they record. Studio time is for studio stuff, and this is a blessing for fixing stuff when you're limited by time and other resources. That is ignoring the creative uses of it, too. Laziness is laziness, and this video was only about the laziness.
@@CHHuey If a sound engineer can't tell something is out of tune it isn't significantly out of tune.
@@CHHuey I have actually seen a documentary on Sinatra that mentioned singing flat but I can't track it down at the moment so unfortunately I can't quote authority but I will hold to it. You mention microtonal methods but I think that is actually what I am talking about - being a few cents under true pitch rather than being randomly out of key.
That aside I don't actually disagree with any of your other observations but I think I have a different reference. Years ago I saw an interview with Pete Waterman who said explicitly that he did not want any of "his" acts (note the possessiveness of the attitude) to sing live because he and his team spent so long in the studio making sure the vocal performance sounded as good as they could make it anything done live would be second best. In HIS hands Melodyne would be a purely business tool not a musical one.
I am old enough to remember newspaper headlines about how the Bay City Rollers didn't play their own instruments so I am aware of the rent a band concept with session musicians. I went through University to the soundtrack of Relax by Frankie Goes to Hollywood on which Norman Watt-Roy came up with and played the iconic bass line, not the band member.
And I have heard Glen Fricker screaming about bands that turn up to a studio without having done the practice to know how difficult it can be to tease out a recordable performance.
Ultimately the choice of which points to change is an aesthetic one and that is where the differences in preference come in. A while back I was in a band doing a cover version of Rocking in the Free World and in researching options to create our own interpretation I found a version that Pearl Jam did live. The audio is straight off the video so there is no editing or correcting. One of the guitars is clearly out of tune by a significant margin but I love it as a performance because it captures so much energy. I have no doubt that others would think it unmusical garbage.
I concede that this is a personal preference hence my understanding that Melodyne can have a place. My concern lies around the difference between what you describe, a tweak to correct something that is not on the button because of practical limitations, and the ability to take a voice that honks like a drunken goose and put it on tune and on time.
I want to hear an authentic representation of what the performer is capable of and wants to project but with Melodyne I cannot be sure that this is what I am getting particularly when someone like Pete Waterman would be using it to create an electronic version of Milli Vanilli, hence my deep distrust of the outcomes.
You gotta appreciate Sinatra's rythmic placement. Even with lot of autotune he swings like crazy
he is spot on
The robotic voice reminds me a bit of Mills Brothers.
Best ever.
He's feeling the ands and the uhs
I'm sure that could be "corrected" digitally as well.
This was so interesting to me - I especially noticed with the Aretha clip how much her subtle bending of the notes added emotion and “humanity” to her singing. Really fascinating, thanks!
I was always taught when singing thirds in a choral setting, you aim high or low on it depending on if you're ascending or descending, or depending on the chord and other harmonies. I understood it to be voices aren't limited by the tuning system that instruments are beholden to. AFAIK this is very important in barbershop too.
That's cool, I knew barbershop quartets change tempered tuning. But I've never heard it from any conductor I sung under.
@@emilciapaa9077 Instrumentalists have been bending notes since instruments were invented.
@@mikesmovingimages I believe you. But vocal and especially choral music is closer to my heart. And no one really told me to change colour of notes. It was always just about "right" intonation.
@@mikesmovingimages though the popularity of the technique of bending notes on guitar has been a more recent thing, you're technically right that fretless instruments such as violin and related instruments have been able to bend and/or play pitches outside of 12edo for a long time. Or if you meant "bending notes" as in using any tuning system other than 12edo, that's certainly true. 12edo has only had widespread adoption within the past couple hundred years, and before that, all sorts of other tunings have been used throughout history and in all different cultures.
@@bragtime1052 in addition to bending notes for melodic interest, the idea of being in tune is a subjective concept. In reality there are not 12 tones in a harmonic scale but 17. All the emharmonic notes are comprised of two. Eb and D# are not the same, for example. Which one to use depends on the context. The ratio of a third or sixth to fundamental tone yields an irrational number. Orchestras and choruses never needed equal temperament, and as Adam's video demonstrates, "fixing" art by forcing it to a grid is a dehumanizing exercise. There is much beauty beyond the cold algorithms.
Every time he said he was going to “fix” something, all I could think was “why do you hate us, Adam?”
Using Autotune on Robert Plant is akin to Sauron creating Orcs from Elves.
i'm about to be that guy, but Morgoth created orcs bc he was big mad that he couldn't create life (with sentience) from nothing like Eru
Sauron was his subordinate
@@frybabyofficial 👍 thanks for the correction
@@frybabyofficial actually Melkor :p
Why didn't Frodo use a drone to drop the ring in Mordor?
This is why most pop music made since the 2000s makes me feel like i’m being forced to eat rubber.
Hahahahahahahaha! Good one.
Next episode: Quantising a J Dilla beat/D'angelo track
We are not ready for such blasphemy
I don't know if you're serious but I think that would be super interesting !
quantizing larger tempo shifts also really throws the listener for a loop. I heard a version of SOAD's Know that had been corrected to a constant tempo across all sections for the purposes of being mashup-able and it's absolutely bizare sounding to not have the choruses slow down
Disgusting
YES AHHAHAH
Without elaborating, it reminds me of Photoshop on models. It's done so well that it's beginning to look natural and anything that doesn't quite match that ideal is seen as not as attractive. And it's setting a standard that we artificially have created for ourselves and will idk, I doubt it will be as damaging as Photoshop, but it'll have an impact for sure.
Kinda but no, like you said.
In both cases, the tech is being used to take a product that isn't quite what they want it to be and get it there.
But physical appearance isn't in the same realm culturally/psychologically as music.
People seem to tend to reject when music is artificial, even with no prior reference. People would rather have a naturally good singer but commercial music prioritizes image (looks) over the talent, and thus we get an inferior sounding product. Interesting how the looks part is connected to this.
Flawless personal appearance is something we seem to desire or aspire to, and not because of photoshop.
Makeup and other personal physical attribute enhancements have been a staple of all cultures since before written history (interestingly this exists among the different cultures' varying ideals of what is attractive).
So in the photoshop case, the market is supplying a demand (but I do agree that it becomes circular because that ideal then in turn does influence the cultural ideals).
"All I'm asking is for a little respect..."
**disrespectfully autotunes the life out of the voice**
Couldn't agree more
sometimes its the raw live imperfections that create a classic masterpiece.
course it is
Me while not really hearing a difference: "Yes, quite, I agree."
Honesty right there, people.
I'm a former music major, and I hear nothing different except pitch center, like he chose a slightly higher approach, which as a classical singer, is exactly what you want to do. I think this is more a recording mix video, where people who work sound boards are more conditioned to hear those differences in headphones.
On stage I can hear really subtle things, but in a recording mix, those subtleties just aren't as present, they're mixed out, and so hearing the post mixed sound takes a certain kind of practice.
So "Ain't No Sunshine" is really noticeable, but that's a very stripped down recording, so it makes sense. Pink Floyd as well, but I never liked that drifting Tom Petty vocal sound, so I'm ignoring it.
With "Ain't No Sunshine", smoothing his pitch actually changes the rhythm of the song, bc he uses the pitch inflection to set up the attack of each word. By smoothing it out, the music looses it's 'swung' quality and it sounds more like he holds a note too long and has to play catch up. That's when losing expression actually fundamentally changes the song in my mind.
Everything’s fine until Those blues thirds start falling away, it’s kinda the thing that distinguishes vocals from other instruments-so the phrases sound similar in isolation but if one listened to the whole track “perfected” you’d never hear the vocals do “their thing” in the pitch landscape. Kinda like how the guitar is tuned just shapes the sort of phrases we hear from it, like “this is a guitar melody, I can tell by the notes” even when it comes out of piano.
I'm with you. I also watched on my phone speakers though, so I don't know how much I trust my ears atm.
this kinda reminds me of when people take old animation and "improve" it by using software to interpolate it to 60 fps, destroying all the nuance and detail of the animation
But but 120hz 4k tom and jerry is the way it was meant to be seen!!!
I love it when people do that. I feel like it adds detail and makes the nuance easier to appreciate.
@ian m can't and shouldn't are very different things
@@AfonsodelCB No, like, you literally can't automatically add extra detail.
@@lolwutizit The original animation artists were constrained by the medium and lack of technology of the time. They implied motion. Computers, when trained correctly, can see that implied motion and fill in the detail.
I won't argue that it's not a computer's "opinion", it certainly is. I will argue if you say it's objectively a bad thing. It's quite subjective and IMO it's a good thing for many people, myself included. I like it, and I like how the implied motion is automatically smoothed.
"I feel like it adds detail and makes the nuance easier to appreciate." is my original statement and I stand by it. The interpolation is quite sophisticated, taking implied detail and realizing it. I appreciate that and believe it accentuates nuance.
The Sinatra one with the robo-harmony was actually really awesome
Agreed! This could become its own genre.
Yeah, especially given the lyrics! It feels like the effects really blended together with what he was singing about in a really cool way
@@ping163 yep!
@@andybaldman Croon-o-tune
@@digitaljanus Auto-croon!!
It would be so interesting to hear you give Bob Dylan and Lou Reed this treatment.
Bob Dylan would first need to learn how to sing.
Niel Young too.
@@Blinknone *speak
Hahahhah ... I immediately thought of Lou Reed ... the chap wasn't a "singer" for sure, but he sang his lyrics as he felt on the day (and never the same in two performances). Among the best as a story-teller poet there was.
Have a feeling he would have refused to use autotune ... he consistently eschewed "embelishments" of any form unless he felt the song needed tham.
Bob Dylan said it himself he hits all the notes. He doesn’t need auto tune lol
I think being "off pitch" can sometimes sound better or more natural to us probably because it's closer to just intonation than equal temperament. Maybe?
got nothing to do with intonation or music theory, and everything to do with our organs and ears + what we are used to. it's just "how it is" and theorizing around it won't change that :P
of course there's subjective taste as well
@@genesises yes, well just intonation is determined by our ears & organs, so.
I think it sounds more natural because it just…IS more natural. Most people (and even a significant amount of instruments) don’t have absolutely perfect pitch and hearing something tuned so perfectly probably rings as unnatural not just because we have context for the original versions, but because we don’t usually hear natural examples of perfect tuning in that same vein.
@@paniccleo it depends on the instance. I don't agree with the person above who said that it difinitively has nothing to do with intonation, but I do think it depends on the instance. I am sure there are times where this is the case. There are plenty of songs I know of that are intentionally out of tune that sound better that way than they would *in tune*
@@genesises How so when intonation is minor pitch changes?
As someone who got into music late and was born after 1985, I've found it took me a really long time to appreciate singers like Bill Withers, whom I found "pitchy". Then, when I really started listening to older music in general, I realized almost everyone was "pitchy" and pitch was an appropriate sacrifice to emotion and soul. Now, when I listen to modern music, I find it more devoid of character. I appreciate this video because it definitely affects how I'll correct my own voice and trust my ear more when something sounds right and isn't perfectly on the pitch grid. Thanks Adam!
I've noticed this with a lot of people who predominantly listen to contemporary pop. They perceive natural singing as pitchy. But of course perfection is rare in actual musical performance and it's also not aimed for by performers because it's boring and unnatural. Slight deviations from the perfect pitch are the norm with singers, violinists, cellists, etc. It's even used consciously for an emotional effect. The same thing is true for slight rhythmic variations. A lot of people have become used to music that has been dehumanized in many different ways.
There is a great Neil Gaiman quote which goes something like "Style is the mistakes you can't help but make"
That’s a dumbass quote lmao. Style is just honed by realizing your mistakes overtime. Clearly that quote was the dude trying to sound smart and it fails miserably
So by your logic, polyphias whole style is mistakes they can’t help but make, but their whole style is very technically demanding and has almost NO mistakes. So please. Make it make sense for me.
@@SolaceWhorethat is their style, cause it sounds terrible!
@@NBrixH I’m not saying polyphia sounds good. I don’t like them either, but to say their technical prowess are mistakes is insulting. They are more technically skilled than you or I more than likely ever will. They deserve respect due to the work they put in. I can acknowledge how outrageously hard something is, and acknowledging the talent needed to play it, whilst still thinking it doesn’t sound great. But some people do think it sounds great, so to say it sounds terrible is just your opinion, and you’re stating it as though it’s fact.
@@SolaceWhore oh no no, they are definitely talented. And their skill can not be ignored. It just think it’s wasted skills cause it sounds like shit.
One thing Adam isn't considering is that the notes actually ARE perfect. They just don't align to this perfectly spaced grid that we 'say' is perfect. This...degree of separation that someone said is 'perfection'. Humans are perfectly imperfect. And those human imperfections are what Neely would describe as "mojo".
To translate this idea to guitar. Try playing "Scar Tissue" by RHCP on a 'perfectly tuned' guitar. It just won't sound good.
I thought that was the whole point when he said it was incompatible with the blues scale used!
Another part of this is that pitch correction introduces audio artifacts. The software is altering the audio samples to produce a different pitch, which is in itself an imperfect process. The more the software alters the audio, the more artifacting there is. This further exacerbates the unnatural feeling of heavily-tuned vocals. Now, to your point, this can be desirable for an aesthetic purpose, but if you want to actually tune a vocal and still sound "natural", you have to apply it VERY delicately.
In many cases, audio engineers are not strictly locking the audio to the desired pitch (as was done in this video), and are instead trying to get it "close enough" that it is read as "in-tune" (according to Western tuning systems) without introducing too much artifacting. The main goal is to eliminate distractions, fixing where a note is sour, rather than simply being bent for creative effect. It's a delicate dance to pitch correct a vocal without "ruining" it.
Also hot take: the best mainstream rock, blues, and RNB singers of the 70s are not representative of all the singers from the 70s. It kind of goes without saying that there are a lot of people who would have killed for this technology back in the day and these post hoc arguments are a product of validating nostalgia.
Hell, as a producer, I work with plenty of people who I don’t think need any pitch tuning.
Yeah, is it bad that I can hear the quantization artifacts like 1000 times easier than I can tell the original is out of tune?
@@BradsGonnaPlay also, another hot take: moses is just not good enough for this.
you need perfectly isolated vocals before room acoustics being applied for this to work at all. Software separated vocals with natural room reverb just ain’t it,
This is so wild. I was just listening to IV last night and noticed that in "Battle of Evermore" there's a moment where all the harmonies come together in unison and they are not at all on the same pitch...and that makes it infinitely better.
There is a reason why Melodyne is usually applied manually instead of just pushing everything to 100%
Exactly. It's a way to save an otherwise great take.
That had to be said. I love Adam Neely but if I had no idea about Melodyne and never used it before I would probably create a worse image about that tool. It is made to be able to keep imperfections more than other pitch correction softwares and its mostly how u choose to use it.
I personally tend to just slightly automate the pitch for little corrections, but this is probably a lot more convenient and visually reassuring. Every way of changing pitch without changing the speed also has it's artifacts when working with audio files, but that only seems to be on more drastic pitch changes, I could see myself picking up this plugin for that as well, if it's got a unique and cool algorithm.
The drift on vocals is what really kills things, but usually vocals are things you can recut until you get it right. There are numerous technical problems (making a 3 saddle telecaster stay in tune when doubling piano melody) that used to require detuning a string to get around the intonation issues, then only playing part, doing another pass after retuning, that you can simply fix after the fact now. It's not fix it in the mix, it's fix something that is inherently flawed in the instrument without wasting time. Why is it that Adam didn't do something I know he is probably annoyed by on a regular basis - fix a trumpet that is slightly sharp on an otherwise great take that you didn't realize was off? You nudge it in a few seconds. Beats cutting and pasting the note from another bar. It's a powerful tool and vocals are the worst example. You wouldn't do this to BB King, but Frank Zappa... that's where you see the usefulness. Blues singers have great control to begin with. A marimba and harp will fight each other - harps don't stay in tune. Melodyne make that go away quickly with no artifacts. It's not the enemy. It's your best friend for technical issues that aren't laziness.
A violinist friend of mine once asked me "What is the difference between a good musician and a great one?" I shrugged and he replied "The great musician knows when to get the notes wrong and by how much. The note has to serve the music not the other way around". I think that this is what your video is laying bare.
Great work, by the way. Keep it up
There’s a certain inertia to the sounds of blues; like a heavy rock sliding on ice. You push it to change its direction but it wants to keep going straight. I think that sound gives a solo or melody much more authority over the other parts of the song.
Adam: *calls pitch correction "Autotune"*
Antares: Wait that's illegal
Underrated comment.
it's the popculture term for pitch-correction, now
it’s funny cause it’s kinda the opposite of autotune, it’s more manutune since it involves a more manual approach😂
@@Youngapollo47 I think the fuss is all about the "tune" part, not whether you do it manually or algorithmically. At least to listeners who are not producers.
I mean, most people consider production itself as "computers doing it", and not as actual hard work, talent and artistic decisions that are involved.
@@Medytacjusz i dont think that’s relevant to the original comment or my reply, right as you may be lol i was just pointing out that it’s funny he’s using melodyne but calls it auto tune
I love how the corrected version of Wish You Were Here sounds like a Coldplay song
….or Joe Satriani
God damn, I thought the same, it really does.
I thought the same haha Just like Coldplay
I wonder if running the program backwards would make a Coldplay song sound like Pink Floyd
ROFL
I dunno if this sounds crazy, it’s like the soul - is in the moments the voice changes from one note to the other.
The story telling is in HOW the voice goes from one note to the other. That’s what colors the emotion into it. Just like not all human emotion will ever be pitch perfect- so
Auto tune will never be able to place true emotion into the song.
Awesome video was engaged from the second second on thank you for making this.
The crazy thing is that none of them really sound bad per se; there’s some artifacting but that’s to be expected; they just sound off. They just sound a little less human, a little less expressive. As Alejandro Jodorowsky once said, “Too much perfection is a mistake.”
He also once said "I am r*ping Frank Herbert", so take that with a grain of salt ;)
"I've conquered the Holy Mountain.... horizontally!"
@@aldeayeah "But with love... with love." LOL, okay Jodo, thanks for that. But seriously, major kudos for pulling that quote. :)
13:11 this, for me, is the difference between Melodyme and natural. The "corrected" Bill Withers sounds good, but when you played the uncorrected I immediately got goosebumps
All the mojo.
"If you were to cut off your head above the vocal cords..."
What experiments have you been running Adam's mum?
"Would sound like a card being hit by bicycle wheel spokes" I think there might be some gurgling going on there too.
It's Adam's Family after all :D
Momma Neely
@@stevierv22 The best comment :)
She’s in The Suicide Squad as Polkadot Man’s Mom. Lmfao
The harmonizer is not the Tpain effect. T pain uses actual auto tune which, when the sensitivity is turned up all the way, locks the vocals to the note so precisely that it's causes an unnatural robotic sound.
9:10 When I heard the melodyne version I was like "yeah it still sounds decent" and then you played the original and I instantly got goosebumps.
I could see her singing in my head as the original played, it instantly felt so vivid. It's crazy how you can get such a forceful emotional response from the original simply by hearing the doctored version first.
Aretha is God! Blasphemous indeed.
oh man i felt the same thing at 13:15. bill wither gave me goosebumps with the original vocals
Lol You guys aren't taking into account nostalgia. I'm sure you guys would still get chills if the original vocals had pitch correction and it was the only version you knew.
It wasn’t that crazy lol
this is the beginning of adams rick beato arc
Adam tomorrow: why today’s music is trash
But 13:48 is what will save him from that terrible fate.
@@GabeWilliams why Western 18th century music is trash
I hope not...
I was joking - last year Adam made a big issue over our focus on "classical music" (and related theory) sidelining other music traditions and being inappropriate to analyse such musics.
10:45 "If you were to cut off your head above the vocal cords". Momma Neely goes full Metal.
I came to say pretty much exactly this haha
🤣🤣🤣
A headless person sounding like bicycle spokes is terrifying
How does she know this? SCARY!
David Bowie's classics just wouldn't sound the same without him continuously going flat and managing to correct himself without most people ever noticing. It's part of the beauty of not using autotune, those near misses are exactly the things that bring an emotion into the music. The art in music as a singer is to get back on pitch where you want to be. Autotune is basically relegating that personal art and skill to a computer, meaning, the singer will not learn or maintain that skill him or herself. Which is one of the reasons why some singers just can't perform without a laptop in the room. Worst case scenario, which sadly happens more and more these days, is that people are selected as singers based only on their looks and sense of rythm, without any skill in hitting a note even close, because autotune will fix that.
the singer of tomorrow are based purely on their looks, they will have no sense of rhythm, time, musical tune, Key and pitch. they will have a Apple Macbook with GarageBand Tools, Autotune, song writing software (AI produced lyrics) and the index finger to press start.
sell out concerts watching someone sound fantastic (according to the tone deaf who cant hear Autotune Vocals), only to have a major software malfunction happen, having the once amazing angel like vocals turn into the sound of a cat being tortured as their claws are dragged across a blackboard, with the accompaniment of a dentist's drill grinding away at a root canal.
the news headline the next day. "singer's career is over after his laptop failed to cover up his real voice, causing 25 people to suffer major injuries while trying to flee the venue"
one of the injured said " all of a sudden, there was a spark, a popping noise in the speakers , then his real vocals came through.. it was so bad the front row started to vomit while the rest of the crowd screamed in agony, that sounded more in tune than the vocals did"
Aretha's voice compared to the harmonies when autotuned was a good example of that "something's off" indescribable artifact.
“...Your Scientists Were So Preoccupied With Whether Or Not They Could, They Didn’t Stop To Think If They Should.”
Scientists: "We do what we must because we can."
@@fredashay This was a triumph!
@@PavloPravdiukov I'm making a note here: _Huge success!_
history show again and again how nature points out the folly of men.
@@fredashay for the good of all of us... except for those who are dead.
Your video editing style is as elaborate as your musical education is. The best part is you're always growing.
Würde ich bei ihnen auch so unterschreiben Herr Fischer
Schön, dass wir uns das selbe anschauen 😊 Hab mir grad gestern das neue pass it on mit fewjar gegönnt 💜👍🏼
Is that an insult?
@@thegoodlord6518 im lost as well
The most important part is what he has to say though, and what he has to say is always interesting and enlightening. The editing is merely at the service of the discourse.
A prime example of why I have loved your channel for years, This and the Coltrane Fractal 2 of my all-time favorite musicology videos on here. Also, your mom is awesome!
"If you were to cut off your head..." Mom's metal as fuck.
never piss off mrs neely
I absolutely love that Adam calls his mom.
...AND she has intelligent things to add to the conversation! What a great resource!
He is very lucky to be born out of a such wise mom
I'd like to hear what Autotune would do with Bob Dylan's singing -- as an experiment only,
@@joewalker3166 Dylan scoops in a lot of his songs.
@@joewalker3166 there was an early review of his first record that said he's a great singer but won't be known as a good songwriter
@@joewalker3166 I listen to Bob and I dont like his style. Anyways it adds atmosphere to his songs witch are unique and original and coresponds well with lyrics etc. And yes, he is well tuned.
Its just about the taste.
For me Dylan had one of the best voices up to 78/9 then it was harder for me but before that his voice was out of this world. But I don’t like proper singers anyway I can’t even stand Mercury
@@joewalker3166 um you are aware he was drunk as hell on stage like 90% of the acts he performed right? I feel you are cherry picking with rose tinted nostalgia friend. do I love the songs, yes, but I can be hard on them with that love.
What melodyne does is make a microtonal voice instrument "pirch perfect", but music is not about perfect pitch (that is all about production value and conformity). The microtones are there for a reason.
It's like removing the bends from a guitar solo.
Side tangent: T-Pain went into a big depressive episode after Usher told him that he "ruined music".
After some time away, T-Pain went on to win the masked singer, to prove to himself and the world that he can sing without the "crutch" of autotune.
Yeah we all saw that Netflix doc. And even if we didn't its pretty well known at this point. That along with the tiny desk concert is why everyone loves T-Pain now.
@@nedisahonkey I didn't know that, I don't even know who TPain is.
@@dragons_red Thankfully the internet and Google exist so no one needs to live in ignorance!
@@dragons_red come on, yes you do, even if you don't know his name I'm sure you know a song/that sound
His NPR Tiny Desk concert proved to many that he is a talented singer. Everyone should check it out if they haven't.
It's kind of scary how autotune kills the soul of the voice in such a subtle, yet all-encompassing manner.
lol it wasnt subtle, it sounds subtle but it was too aggressive, you must not take all the way up the settings and once you tune, you need to fix the vibrato and connect the note vibrato which he wont do it for a video, the vibrato its what makes a voice sounds natural and alive, also if you dont connect the notes it sounds unnatural in most cases, pop singers make it more subtle in most of the cases
@@raufmeister
Your reading comprehension needs work. Try not to be so presumptuous, it makes you look pretty arrogant.
@@raufmeister Or you go all the way and use it as effect like Cher or T-Pain do.
i blame the vocal isolation for a lot of it, but he was also way too aggressive on the settings.
Agree; even with the other “fixes” others are suggesting, it just sounds like an audio robot desert to me
Slight adjustments may enhance the studio performance, but that semi-perfection of the organic vocals is so charming
Why would you want to fix pure artistry with " auto tune " ? Auto tune ? Really ?! Try learning how to play and sing .
I don't think it enhances it in any way. I feel like it ruins it. The imperfections are the beauty of pre-digital music. John Bonham from this same band always played a little behind the beat which makes Led Zeppelin who they are. To fix his drum beats to a precise bpm would ruin it even worse than Melodyne ruins Plant's vocals.
As a ballet dancer, this is very similar to what a teacher bestowed upon me...as an artist, we establish that we know the rules and express through breaking them. That is the definition of art.
Maybe because I've been a full time recording engineer for 30 years or so now, AutoTune/Melodyne/etc. always gives me the heebie-jeebies. It just removes all the emotional payload for me. I find it as engaging as being serenaded by a Speak and Spell would be. Uncanny valley indeed.
"Serenaded by a Speak & Spell" needs to become a common expression for when a performance lacks any emotion and passion :)
Ever recorded a pedal harp? I have. I love Melodyne. The instrument itself will not stay in tune by design. Using it on vocals like shown is just laziness. You may have 30 years, but you haven't had Melodyne for 30 years so I'd really suggest pulling up any instrument that is prone to intonation problems, or analog synths that drift in tuning as they heat up, and seeing how useful it is and how it doesn't remove 'soul' but 'technical error' so you can get a better performance without doing all the workarounds you likely know about. I don't tempo map, don't fix bad vocals, but a jazz chord on a 3 saddle uncompensated Tele around the 8th fret? A nudge fixes that problem. You're missing out.
You can watch the bonus video and the un-edited original cut over on Nebula, if you'd like. (curiositystream.com/adamneely)
Also, yes, I use the term "Autotune" to mean many different things in this video, including simply "pitch-correction" and not the piece of software specifically. I used Melodyne in the video mainly since I don't own Antares Autotune, but they can do very similar things.
Dm7, Bbm maj7, Fmaj7
Great work fixing all those out of tune amateurs - but you missed a trick, I noticed that they are all off the beat, I mean really, it's like they just sing whatever they want whenever they want, lets lock it down and get them strictly in time for the next video.
It's no surprise that hours and hours of work honing the skill of playing a piano (an instrument that is pitch-quantized, incidentally) can produce far more sophisticated and musical results than simply 'banging on the keys'. Is it surprising that Melodyne (as an instrument in it's own right, incidentally) is any different?
Were any of the singers effectively using just intonation rather than equal temperament?
1) Please pin this.
2) Please link to the nebula vid or your nebula creator page.
I had to scroll through so many comments to find this.
I feel like we, as a culture, have become obsessed with perfection. Maybe to the point that it is unproductive. It's making music less perfect.
It really puts me off singing tbh.. I don't want to use auto tune but because I won't be "Perfect" people wont like it....
Im not part of your culture, I dont care about perfection and i dont listen to music anymore
@@thoticcusprime9309 Who asked you?
@@thoticcusprime9309 wow so brave going against the grain.
It might make it mechanically and acoustically more perfect. But it loses the human aspect, the "soul" of the music. Those little imperfections and variations.
I fell for the rage bait so hard. Love the video. Well done :)
“If you were to cut off your head above the vocal chords and pass air through, it would sound like a baseball card in bicycle spokes”… uh… Thanks, Adam Neely’s Mom.
So... an oscillator for a biological synthesizer???
@@snowmanplan Arguably, that's what singing is. Chopping off your head to alter your voice would be biological circuit bending.
I was going to say, she goes "what that tells us is..." and I was afraid of what she'd say next...
It's honestly impressive how subtle it is at the end of the day. Like, until you really crank it up and really process the sh*t out of the vocals, it sounds quite natural -- if clinical. So much so that it's likely impossible to tell if the vocals have been tweaked or not (assuming it's not a performance you're intimately familiar with). Weird.
It is subtle, and also why most modern music is so soulless and horrible
@@markgriz this is a boomer take lol
are you deaf? even a few cents of tuning sounds horrible and processed
@@markgriz It's those little imperfections and inaccuracies that make songs feel alive. In many cases, studio recordings get processed and cleaned up to the extent that I often find myself gravitating towards live performances over the studio stuff.
@@cataleast That was the point of my comment. I guess preferring actual vocals over fine-tuned perfect vocals makes me a "boomer" (not even close BTW)
Being a little bit "out of tune" at the right time, adds tension, IMO. Kind of like throwing a second or a fourth into a chord.
Yeah, it keeps you listening as if the note didn't fully resolve yet.
*Dynamics* and *tones of voice* are conveyed not only with volume and timbre, but also via tiny pitch "artifacts". Singing a loud sound has its own pitch signature applied on top of the note you hit, singing a growly sound has another pitch signature, and so on. Which means that flattening/"correcting" pitch strips away some of the perceived dynamics and tone as well. (Centering a note in Melodyne is essentially decreasing the amount of information in audio.) So tuning vocals needs very careful consideration - I often end up having tuned the vocals for a whole part with everything sounding great and smooth in isolation, but when playing it back it doesn't hit as hard since it's lost that _tiny_ bit of perceived dynamics.
This is also one reason why synthetic vocals are so difficult. You can't just isolate factors like pitch and volume and tone and simply sum them together. You need a massively smarter algorithm in order to sound natural. Tweaking a Vocaloid to sound natural, for example, is super time-consuming and finicky with the current tech.
Imagine autotuning some even more out of tune masterpieces, like John Wetton in King Crimson (always sliding into and out of notes, usually fighting not to be flat, love him nevertheless), or Bob Dylan singing anything off of Blonde On Blonde. Goodness sakes that would be terrifying.
Yeah I'd love to hear Visions of Johanna ruined lol
Interesting. Even when I feel something was lost, even when it seems a bit uncanny...it doesn't feel *wrong* to my ears.
Maybe it's because we're so used to it nowadays you know? It is interesting though I felt the same.
The led zeppelin one felt terrible, but sinatra kinda worked, a bit uncanny, but if i didn't know it was autotuned i wouldn't notice
That was what was so much worse for me though! If the autotuned ones were played first I wasn't sure I was actually hearing anything different. It was only when the unaltered version was played that I noticed just how much was missing from the autotuned one. I feel like that's really what's so tragic about autotune being the default for every single vocal today. If that's all you listen to, you probably aren't even aware that there's anything missing because you've got nothing to compare it to.
_"Autotune was invented in 1996, and as we all know all songs prior to that were out of tune"_
- Adam Neely (2021)
They also used to speed up/slow down tape to alter pitch!
13:48 Today I learned smething new (besides the autotune of vintage artists and the pitch correction):
change is not always good
change is not always bad
change is change
understanding change is the important thing
Great quote. Thanks Adam.
I would be more insteresting comparing the "perfected" version and the original without knowing which one is one. I felt the same way as Adam, but I think placebo has a major role in the feeling of those melodies (at least in the ones that the original and the modified are very similar).
My 50-something ears honestly couldn't hear a huuuuuge difference. Maybe with longer samples. I think I'll download the trial version of this godawful program and and play around.
I would as well. I don't think the knowledge ruins my perception though. I've loved the sound of a raw voice & despised autotune for a vvveeeerrryy long time. There's so much subtly in the imperfections that make something sound human.
I wasn't paying attention to the screen while listening to this video, and, personally, I didn't notice any difference at all! Well, maybe except the very first correction. And I'm not sure if I would notice any difference even if swap relatively cheap speakers for ATH-M50 and super-duper-dedicate myself solely to finding what part of the video was actually corrected.
Adam: "Okay, let's fix up this Jacob Collier track..." Melodyne: "I am nothing." [weeps]
Someone, please make this video. XD
Jacob uses so much microtonality Melodyne would completely obliterate his stuff.
FYI everyone, that's not how we edit vocals. When the edited vocal sounds worse than the original, that's a pretty bad vocal edit job 😂🤣
A good vocal editor will leave in the imperfections if it makes the line great, and only brush up the areas where it's needed.
So yes it is very subjective regarding what's good or bad and that's where the skill/taste of the editor comes in.
In general, Melodyne is used to enhance the vocal performance, not ruin it. And when it comes to great singers, not much needs to be done.
For other singers who are not as naturally perfect, more fixing is needed.
Let's say we have a great artist, writes a great song, sings with emotion, but just a little shaky or off pitch at times.
Do we say, "come back when you can sing better"? Of course not. That's when we use Melodyne.
So Melodyne isn't 'bad', it's how we use it.
And in the case of flattening and tuning vintage vocal performances? Yea its bad, because it doesn't need it.
But I guess the point of the video is to show that the right natural imperfections can make a performance great. That's true.
My point of this comment is just to educate and clear up some negative connotations around vocal tuning. Not salty, I promise 😁
And also, a BIG common misconception, Autotune is not Melodyne. Melodyne is more like 'Manual-tune'
What was done in this video is manual editing with Melodyne, not Autotune.
They both have different applications and serve different purposes. ✌
However, I should add, the the artistic intent of the creator must always be prioritized, if they decide to have a vocal performance corrected by a professional so be it, if not it should be left untouched.
Hi @Dinocaster thanks for explaining that. TBH I only found out about autotune/pitch correction - and producing in general. It's quite upsetting to me. That to be a good musician you need all this gear and all this time to manipulate recordings ....And to hear how recordings are manipulated .... kinda really stressed out about it. I'm trying to attain the impossible.
THANK YOU. Was pretty hard to see him complaining about the singer's intent when he was smoothing the whole recording's pitch drifting to 100%🤦🏽♂
@@MarkVank this vid really hurt my soul when it first came out
@@kegs5556 yeah it actually is in many cases. unfortunately. but as he mentioned, in most cases you would never know the difference. its akin to someone doing color grading on a film.
The reason for the flatness when you melodyne sinatra, is that he has a slight vibrato (pitch goes slightly up and down.) that the melodyne flattens.
"I love Bill Withers' voice so much. It's so pure but there's so much soul and so much feeling when he sings AND WE'RE 'BOUT READY TO CORRECT ALL OF THAT"
I'm practically tone deaf... I couldn't really hear a difference in Robert Plant's or Frank Sinatra. But Aretha Franklin, Dave Gilmour and Bill Withers were ruined for me.
Should've done Biz Markie's "Just A Friend" would have been a nice tribute to a late legend.
That song being so out of tune that it's almost painful is what makes that song so iconic.
@@dangerkeith3000 yes, it being "corrected" woukd be the joke.
What's she's saying about how being trained by listening to digitally enhanced vocals is totally true and very insightful. I find myself naturally trying to naturally imitate fairly heavily autotuned vocals because I like the sound.
I dont lmao
I got it! Was trying to figure out what the corrected vocals sound like. It's beginners that are too focused on pitch and/or "classically trained" singers. Groove is what happens when you're comfortable enough with an instrument that you can be playful and remove the training wheels. Tightening up these performances is like regressing their skill level.
Timid singers also try to button down every now, so it also sounds like you're making the singer more timid.
That "hi mom" has the energy of 'mom didn't tell me she was subbing my class today'
It's kind of annoying that he put on that energy on I guess because he expects the audience to expect it? I'm not sure. He's obviously proud of his mother and he invited her on the show, so why not give her the respect and just treat her like a regular guest and not like she came to his school with his gym shorts because he forgot them at home.
I think you're greatly exaggerating, Michael.
Ay my mom's a sub too.
@@ultimadum7785 i know
@@Jack-cw8bw lol
i'm absolutely with you on the fact that some things don't need the pitch correction, or will even get worse if you apply it, but i must say that selecting all notes in melodyne and letting the algorithm tune it by itself isn't really an accurate representation of what melodyne is... to really see where it shines you gotta tune every note manually by ear, and ONLY those who sounded a little off to begin with, that way you can really keep something sounding natural but more in tune.
Yep, I genuinely winced when he selected everything and just auto-applied it. You want to go through and just pitch correct parts that might sound off. Some notes that are slightly sharp or flat will sound fine in sequence, and lend a natural quality when left as is.
It's just a way to make a more extreme example where the difference is easily heard. But in general almost all modern music I think sounds like crap mostly because its auto-tuned, auto-timed etc. though in videos like this they always make sure to say that auto-tune can be used successfully in the right circumstances but tends to be over used.
The phenomenon that you talk about with your mom is something I've heard about with guitar players. With the ease of being able to edit guitar performances to perfection in DAWs, you have a whole generation of guitar players who can play with almost robotic precision because they learned off of those records that were edited and time aligned to perfection.
It's true that if you hear today's djent metal bands, they definitely do struggle for more mathematic precision even in live shows compared to for example 80s trash metal bands. Today's prog rock values technical precision way more than 70s prog rock did. And it's noticeable in every genre.
That's exactly what hyperpop mocks with it's cartoonish overproduction.
@@Posiman Cool that you mention hyperpop. On first discovery I found it to be revolting, but after a while I really started to get the parody and now I find it to be "on the other side of the uncanny valley".
@@Posiman i find most of them cannot do vibrato or bends properly for toffee though. Which is a traversty.
It's late at night and I was dosing off while watching this and when the video ended with the "BASS" sound effect I almost shat. Thanks for the sheer spike in adrenaline, Adam.