torr3nt
  • Video 1
  • 72,084 views
Scientifically Tested: Caedrel's Draft Evaluations
he cancelled it
Completely unbiased ChatGPT review: "This video delivers an eye-opening review of Caedrel's renowned draft evaluations, questioning just how accurate his insights truly are. Known as the 'CEO of Good Takes,' Caedrel has built a reputation for sharp predictions and strategic analysis, but do his draft evaluations consistently lead to winning outcomes in pro League of Legends?
Through in-depth analysis of his takes on team compositions and champion picks, this video reveals surprising insights, showing where Caedrel's predictions hit the mark and where they fall short. Using real game data from the LEC, LCK, and LPL, this review dives into the science behind his predictions an...
Views: 73,326

Videos

Comments

  • @jera1376
    @jera1376 11 hours ago

    He started a pro team now, so... that would be a good way to quantify his ability

  • @Aleksy-jt6mj
    @Aleksy-jt6mj 1 day ago

    I think a good draft wins even when behind on gold, so using gold diff is fair, but it's also pretty general and not 100% accurate

  • @jakubsafko
    @jakubsafko 1 day ago

    Gold diff isn't really that good for evaluating team comps. The reason is that some comps simply scale/teamfight better - it can be the case that the better comp wins with less gold; in fact, that would be evidence that the team comp is better, because it needs less gold.

  • @commonsense660
    @commonsense660 2 days ago

    Simply saying that the favourite has a better draft would give way better results. To actually evaluate his draft analysis statistically you would need a team strength model or to rely on the pre-draft odds (also including red v blue side). An interesting comparison could then be the post-draft odds and the Gamba when one was done.

  • @jamesburman9784
    @jamesburman9784 3 days ago

    I feel like this would become more significant if we took into account the relative strength of the teams at the time of the game. For example, even if T1 are playing BDS and BDS has the better draft, T1 is just the stronger team, so his draft analysis will be far less significant for predicting the outcome of the game. Compare this to a game of T1 vs GenG where Caedrel says GenG has the better draft; there the quality of the draft will obviously impact the outcome far more. Just an idea, 'cause it would be really cool to see a pt. 2!

  • @ranenap
    @ranenap 4 days ago

    Hey, data science student here, still learning a lot but this was super cool to watch and see applications of stuff I’ve learned on hobbies I have. Well done!

  • @eter68
    @eter68 4 days ago

    I think this research means close to nothing. Firstly, a team comp is just one small variable amongst hundreds of others; secondly, the mechanic of adding and subtracting game gold is very, very stupid for many reasons

  • @Koko-vw6rj
    @Koko-vw6rj 4 days ago

    Not me watching this before my methodology test

  • @bencegyalus4637
    @bencegyalus4637 4 days ago

    It was 6%, right? We're talking about the highest level of competition. A +6% winrate is HUGE in that case (from the draft diff alone). Not in my soloQ😂... but in LCK, LCS, MSI or Worlds matches. Statistics is a very interesting subject because we can see the SAME results in very different ways.

    • @torr3nt_loading
      @torr3nt_loading 4 days ago

      It's not a 6% winrate. Favoured comps (all 3 categories) had a roughly 51% winrate. Highly favoured comps had a 66.67% WR, but the data set for those is only 30 games.

  • @yoake-2919
    @yoake-2919 4 days ago

    Wow, uni classes coming in handy

  • @fringorn3553
    @fringorn3553 5 days ago

    Love the mathematical approach and the hard work. Also, 6% may not seem like a lot, but imagine if every team were at exactly the same level and you were able to make your team win 6% more games than the others; that's not bad at all!

  • @castcode3321
    @castcode3321 6 days ago

    To an extent I guess the results confirm the common knowledge that "the better team wins". There's a chance, as LS often implies (though not explicitly), that if the game is played perfectly (i.e. if the other factors are equalized), draft matters more.

  • @zaczac343
    @zaczac343 7 days ago

    I know this isn't really serious, but a few thoughts on improving the approach: gold diff as a measurement varies with the length of the game, and some compositions convert their gold into game state and items disproportionately well, so that you might be barely ahead in gold but hugely advantaged because your champs scale better with the gold they have. Also, as you mentioned, a draft isn't favoured in a void. Teams are favoured over others, so if Caedrel favours, say, BDS's draft vs T1's, it's hardly fair to treat that the same as him favouring T1's draft vs BDS's. In cases like that, the difference in team skill might be large enough that the analysis was correct but the draft wasn't enough to overcome the skill difference.

  • @MrAlonsoRabbits
    @MrAlonsoRabbits 7 days ago

    Interesting approach. I like that you went further than the binary correct/incorrect results, but I think there are other factors that could be interesting to look at, like game duration. For example, a game can be an absolute stomp that ends at min. 22 but "only" have a gold diff of 5 or 6k. Perhaps it would be more representative to look at the gold difference per minute rather than the total gold diff. It would also be interesting to take into account objectives that give no gold, like dragons. Another interesting factor would be the expected outcome: for example, if Rogue play Fnatic, the game is expected to go to Fnatic regardless of whether Rogue has the more favorable draft. I understand this is in part what the R2 value means, but it might make Caedrel's take seem bad if he said he liked the worse team's draft better and then they got stomped by the better team. Either way, great video, very interesting take, and I love to see some numbers on whether all these people claiming to know a lot about league actually know their shit hahaha
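The per-minute normalisation suggested in this comment can be sketched in a few lines of Python. The game numbers below are invented for illustration, not taken from the video's data set:

```python
# Two hypothetical games: a 22-minute stomp with a modest raw gold diff,
# and a long slugfest with a large one. Dividing by game length makes
# the stomp's advantage visible.
games = [
    {"label": "22-min stomp", "gold_diff": 7000, "minutes": 22},
    {"label": "50-min slugfest", "gold_diff": 15000, "minutes": 50},
]

def gold_diff_per_minute(game):
    """Final gold difference divided by game duration in minutes."""
    return game["gold_diff"] / game["minutes"]

for g in games:
    print(g["label"], round(gold_diff_per_minute(g), 1))
```

Under these made-up numbers the stomp comes out ahead on a per-minute basis (roughly 318 vs 300 gold/min) even though its raw gold diff is less than half as large.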

  • @jevonoryvil3012
    @jevonoryvil3012 7 days ago

    Thing is, when his predictions and takes do miss, he can recognize that the players played it differently than he imagined, or that there were other factors and variables he didn't account for, and when confronted about it, he addresses it and takes it with grace (as much grace as a rat can have, at least). Meanwhile some other streamers, especially *cough*LS*cough*, are so tunnel-focused on their own takes and ideas of what the meta is that they don't think they can be wrong. They can't comprehend that the players and teams have their own preferences, strategies and playstyles regardless of the meta, and that League is a game with thousands if not millions of variables that can't possibly be perfectly accounted for by just a draft. And when confronted about it, they just don't address it, evade it with something else, or make excuses.

  • @Real_extra_1
    @Real_extra_1 7 days ago

    Would a percentage gold diff improve the data set over the total gold diff? If you have a 20k gold lead, with one team at 90k gold and the other at 110k, is that 20x more significant a factor than a 1k gold lead when one team has 5,500 and the other 4,500? (Definitely extreme ends both ways, but that 1k gold lead should at LEAST be considered just as significant as the 20k one.) If the draft picks were amazing, you might expect an early stomp, which might have a similar percentage gold diff but a vastly underperforming total gold diff.
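The commenter's own numbers can be checked with a tiny sketch. The `pct_gold_diff` helper and the choice of total gold as the denominator are illustrative assumptions, not the video's method:

```python
def pct_gold_diff(gold_a, gold_b):
    """Team A's gold lead expressed as a fraction of total gold in the game."""
    return (gold_a - gold_b) / (gold_a + gold_b)

# 20k lead late (110k vs 90k) vs a 1k lead early (5.5k vs 4.5k):
late = pct_gold_diff(110_000, 90_000)
early = pct_gold_diff(5_500, 4_500)
print(round(late, 2), round(early, 2))
```

Both leads come out at 10% of the total gold in the game, supporting the point that the raw 20x difference overstates the gap.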

  • @theojanisaac4913
    @theojanisaac4913 8 days ago

    Procrastinating instead of reviewing for a stats exam I have in around 3 hours, and watching this vid without knowing what would be in it, is an experience.

  • @kovacsarpad1382
    @kovacsarpad1382 8 days ago

    Commenting for the algorithm. Great video!!

  • @phamminh483
    @phamminh483 8 days ago

    There are quite a few covariates that affect the dependent variable, here the gold diff. I like the gold diff, but gold diff as a metric diminishes over time; a dependent variable calculated as gold diff/time would take time into account. There are also scaling comps whose value is not reflected in gold diff, i.e. Veigar, Smolder. Of course, personal bias (i.e. the rat's preferences for DK and WBG, or sometimes for underdogs like KT vs GenG) might also account for his takes. Players' ability to get the most out of the comp can also confound these results, so an R-squared of 0.06, where 6% of the variance is explained by draft, is actually quite good. Anyhow, good video

  • @carlovankarlson3718
    @carlovankarlson3718 8 days ago

    Gold difference at the end of the game is such a terrible metric. It is so composition-dependent that it is basically worthless, and not scaling it with game length makes it even sillier. A 50-minute slugfest in which the scaling draft barely survives mid game and then slowly chokes out the enemy team until they have a 15k gold lead is the same as a 20-0, 19-minute win in this metric.

    • @carlovankarlson3718
      @carlovankarlson3718 8 days ago

      Some drafts with a 70% win rate might actually produce 30% absolute stomp losses and 70% wins with slight gold advantages. There is no reason to assume that a greater gold lead and a higher average chance of winning are logically equivalent.

  • @johnortiz7999
    @johnortiz7999 8 days ago

    Caedrel > LS. LS only tries to be different but fails; the rat king's analysis is much better most of the time

  • @shadchu3o4
    @shadchu3o4 8 days ago

    yo the random stray at caedrel's ori mid i love it

  • @timwildauer5063
    @timwildauer5063 9 days ago

    It's really hard to do an analysis like this, as you note. The one idea I have to get past the "hands diff" and "build inting" is to get access to all League analytics. When he identifies a comp that's good, look for all games played with that same draft, hopefully on the same patch. It's essentially multiple simulations of how the two comps do against each other, with enough random variables that the "correct" answer pops out. Though I'm not sure how you'd get access to that data.

  • @Xull042
    @Xull042 9 days ago

    Honestly, I'm not sure gold is the best value, since it's the outcome that matters. For instance you could prefer a late-game comp that gets itself 10k behind but still wins and finishes the game at -5k. Also, looking at the graph (and supposing that -gold is a loss and +gold is a win), I think the pattern is interesting:
    - When he thinks it's slightly favored for a team, he is more often wrong. Maybe meaning the team comp is a bit worse but they picked comfort, etc.
    - Seems pretty equal for the "favored" games.
    - Highly favored, though, is where it's most clear. About a 20-7 ratio. So the draft, when clearly won, does have a huge impact, according to those few samples.

  • @yurifan2537
    @yurifan2537 9 days ago

    A rat essay? Highly fucking approved, liked and subbed LETSGO

  • @aaronzhuo352
    @aaronzhuo352 9 days ago

    More of this content as someone who is pursuing an analytics degree! Doing analytics with something you find personally fun is much more rewarding :)

  • @jakobweisbrod5378
    @jakobweisbrod5378 9 days ago

    Cool Video!

  • @denny8360
    @denny8360 9 days ago

    Super interested in the dataset to do some work with! Wonder if you are willing to open source the work behind it. Really cool video!

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      Sure, but I'm afraid it won't be of much use, because I was a little lazy during data collection. I didn't even record which team won or was favoured, or when the game was played. docs.google.com/spreadsheets/d/1uuA30j5P7nYlBND8v4jXfpzYSN_JlmQxD124qPj10TY/edit?gid=0#gid=0

  • @ixplay1519
    @ixplay1519 10 days ago

    Prediction: It's good.

  • @AtlasAdvice254
    @AtlasAdvice254 10 days ago

    The problem with predicting games is that individual skill and the cohesiveness of a team are more indicative of how well a team will perform than draft is. There are only a handful of truly great teams in League; if those teams draft well, then the gap between them and everyone else only gets bigger.

  • @dave1123
    @dave1123 11 days ago

    His draft analysis is actually really analytical and makes sense... the only factors that make the predictions and analysis wrong are the players and the team playing the comp... Even if FOX got the better draft, they would still get stomped by GenG...

  • @rachityczny6364
    @rachityczny6364 11 days ago

    I think that for this simple a model, linear regression is like shooting a cannon at a bird; you could use a parametric or non-parametric correlation and the result would be the same. I think you should model things such as team strength (for example, probability of a win based on MMR or average winrate from recent seasons, possibly weighted), or how good the players actually are on the selected champions. Also, I'm not sure that game time isn't a better measure of stompiness, but that's up for discussion.

  • @Lopez-my7us
    @Lopez-my7us 11 days ago

    Did I just watch a league research presentation

  • @Kugatsu009
    @Kugatsu009 11 days ago

    myfraud o7

  • @Snoui
    @Snoui 12 days ago

    7:00 The draft_eval variable is a categorical, ordinal variable, not a quantitative one. Running a linear regression model on this data is not going to give good results for regression variance/fit.
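One rank-based alternative this comment points toward is Spearman's rho, which only assumes the categories are ordered, not equally spaced. A stdlib-only sketch; the `draft_eval` scale and the gold numbers are invented, not the video's data:

```python
def ranks(xs):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

draft_eval = [-2, -1, 0, 1, 2, 2, -2, 1]                  # ordinal favour score
gold_diff = [-8000, -2000, 500, 3000, 9000, 4000, -1000, 6000]
print(round(spearman(draft_eval, gold_diff), 2))
```

With real data, `scipy.stats.spearmanr` would do the same job with significance testing included.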

  • @AstaAnalysis
    @AstaAnalysis 12 days ago

    Before I saw the results, I predicted that it would be near zero (indifference); it seems that was correct. Your work here is a good clue into the hidden problem with draft forecasting and analysis in League as a whole. Analysts think they are making predictions about game outcomes, and they focus on this because they think it is the only game in town (unfortunately, it is). But what they are really doing for the most part, when you really listen to them, is determining the capabilities of champions and team compositions in the context of specific phases of the game and doctrinal adherence, not the outcome of the game itself.

    For example, one may say that composition A is favored over composition Z during the laning phase, and composition Z beats composition A in teamfights, but these inferences do not generalize to answering the question, "which team will win the game, and by how much?" This is related to problems in, for example, military science, where people may have good predictions about the outcome of a specific battle, but it's much harder to predict the outcome of a long war. Some talented analysts in the LoL community are good at assessing compositional favorability for specific operational moments, but this does not mean they are oracles who can predict game outcomes from draft alone.

    What does this mean? League analysts (I believe) have legitimately useful game knowledge, but it does not translate to forecasting ability. Why? Because players are actually primarily concerned with winning battles, not winning wars; these are not the same thing. However, players and analysts confuse themselves into thinking that they are. I have not seen anyone recognize this key distinction. Moreover, the theories in this discipline (if you can call it that) are underdeveloped to nonexistent. Thus, every analyst resorts to forecasting, because there are no real models in League, nor any attempt to make one.

  • @juangil5680
    @juangil5680 12 days ago

    Gold lead is a misleading variable, as a team could have a major advantage in draft by, for example, playing passively, so that winning the game by 3k gold was actually a stomp. E.g. for certain teams with a Kalista, being even or slightly behind at a certain point in the game means they are further behind than the gold lead can suggest.

  • @minigaming12
    @minigaming12 12 days ago

    I might be misunderstanding, but him explaining "only" 6% of the variance sounds like A LOT, considering the different strengths of teams and "random" chance. Let's say a team's draft determines 10% of the outcome, meaning it only skews the odds a medium amount, which I would assume is the case. If he were to always guess the team with the better draft (and the team with the better draft were expected to win, for example, 55% of the time), would he then "only" explain 5% of the variation?

    • @minigaming12
      @minigaming12 12 days ago

      Since I feel like even being able to predict anything is impressive, considering both teams are extremely talented and pick comps they themselves think they are good at. Seeing how much he predicts correctly would almost be an indicator of how much better he is than the pros or coaches

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      No, R^2 is only connected to the winrate of the favoured team insofar as the winning team has more gold. His favoured teams had a WR of about 51%. Still, Caedrel's draft evaluation predicts 6% of the variance. I agree that this is an advantage, but as others have also pointed out, team strength is likely way more important.

  • @noreadingcomprehension
    @noreadingcomprehension 12 days ago

    I disagree with using gold diff as an evaluator of which comp is better; for example, in a scaling vs tempo scenario, the tempo comp could still perform well on gold diff, but the scaling comp uses its gold more efficiently. Instead of a linear regression, you could use a logistic regression on whether a team won or lost. I also think you focused too much on the R^2 value when there are more important factors in a game's outcome, such as the head-to-head record between the teams; perhaps you could have included the head-to-head win% of the two teams as another variable in the regression. R^2 measures the variability explained by the model, so it's no surprise you can't 100% predict the outcome of a game from draft evaluation alone. I think it would be interesting to compare the R^2 of a model with draft evaluation against a model without it. Overall though, interesting video - I admire the number of hours spent looking through Caedrel VODs to collect this data
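The logistic-regression suggestion can be sketched without any libraries. The toy data below is invented, and the single-predictor stochastic-gradient fit is a deliberate simplification (in practice `statsmodels` or `scikit-learn` would be used):

```python
import math

def fit_logistic(x, y, lr=0.05, steps=4000):
    """Fit P(win) = sigmoid(a + b*x) by stochastic gradient ascent
    on the Bernoulli log-likelihood."""
    a = b = 0.0
    for _ in range(steps):
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(a + b * xi)))
            a += lr * (yi - p)
            b += lr * (yi - p) * xi
    return a, b

draft_eval = [-2, -2, -1, 0, 1, 1, 2, 2]   # ordinal draft favour (invented)
won        = [0,   0,  0, 1, 1, 1, 1, 0]   # 1 = the favoured team won
a, b = fit_logistic(draft_eval, won)
print(b > 0)   # positive slope: higher favour, higher win probability
```

The dependent variable stays the binary win/loss, which answers the "only the outcome really matters" objection raised elsewhere in these comments.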

  • @williamkyaw516
    @williamkyaw516 12 days ago

    Do LS next?

  • @tomathin2102
    @tomathin2102 12 days ago

    Great video! I think what this shows is that draft, although an important starting point, is just not the whole battle in League. League, much like football or soccer, is a game of execution; even with the perfect setup and the right players, you need to make sure people are buying the right items and playing the right way.

  • @nicowrathz2243
    @nicowrathz2243 12 days ago

    CEO of rat takes

  • @radiochango
    @radiochango 12 days ago

    Say what you will, no one in the world predicts drafts like him. And I kind of dislike his antics

  • @penttierareika4837
    @penttierareika4837 12 days ago

    While I commend you for actually doing this, because this is an area we sorely need data in, I think the methodology is too flawed to draw any conclusions beyond "draft influences the game", and I don't think anyone would contest that statement. Some other thoughts of mine:
    1. Could you make the data public? Not only would it allow other people to verify your analysis, but also to do their own without having to collect the same data. If you have more data than shown in the video, even better.
    2. I think %gold would simply be better than gold diff, but there might be even better criteria (please comment your ideas), though these would probably have to differ depending on how the analysis is done. The winning team's gold lead seems to be around 10k regardless of the prediction, which should tell us that this metric is not useful. Gold lead might be dependent on the type of game, but the type of game is not dependent on the gold lead.
    3. As pointed out by other comments, this should be combined with a power ranking for the teams. Ideally you would do your own, since Riot's doesn't account for roster changes and values internationals differently from nationals. However, an accurate team Elo system is probably impossible given the amount of variation over any span of time long enough to get any significance. Therefore I would just use the official ratings, which are based on a modified Elo system as explained here: lolesports.com/en-GB/news/dev-diary-unveiling-the-global-power-rankings
    4. Calculate a prior probability for the win, and then calculate how much Caedrel's prediction affects the result.
    5. As pointed out, using 5 categories instead of 3 might yield better results; however, each new category increases the influence of bias in the data collection, since that would already mean having, for example, 50-50, 60-40, 70-30, 80-20 and (90-100)-(10-0).
    6. Having multiple groups with different attitudes towards, and familiarities with, Caedrel do the categorization, then analyzing those categories separately and comparing the results, should yield the most interesting results.
    7. As you point out, this bias could be reduced by having multiple people do the categorization, though here we run into the question of what we want to measure.
    8. If we want to analyze purely the accuracy of the words he is saying, then no access to information such as teams and champions should be given. On the other hand, if we want to analyze how much his comments help us understand the game at hand, then it might be better to give access to champs and teams. This would run into problems of plausible foreknowledge of how the game ended, and older games would suffer from recency bias.
    9. Important distinction: we are not measuring the effect of draft, we are trying to measure CAEDREL'S GUESS based on the draft. To even get at the effect of draft we would need multiple people to evaluate the drafts, and even then there are multiple problems.
    10. When analyzing the results we cannot be sure that Caedrel evaluated all the drafts by the same criteria; especially those on the lower end might be based more on comfort/signature picks for the players.
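The power-ranking idea in point 3 of this comment can be illustrated with a bare-bones Elo sketch. The K-factor of 32, the 1500 starting rating, and the match results are all invented, and Riot's actual Global Power Rankings system is more elaborate than this:

```python
def expected(r_a, r_b):
    """Expected score of team A against team B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=32):
    """Return post-match ratings after one game between A and B."""
    e = expected(r_a, r_b)
    s = 1.0 if a_won else 0.0
    return r_a + k * (s - e), r_b - k * (s - e)

ratings = {"T1": 1500.0, "BDS": 1500.0}
# T1 beats BDS twice: T1's rating rises, and so does the prior P(T1 wins).
for _ in range(2):
    ratings["T1"], ratings["BDS"] = update(ratings["T1"], ratings["BDS"], True)
print(round(expected(ratings["T1"], ratings["BDS"]), 3))
```

A prior like this could then be combined with the draft evaluation in a single regression, separating "stronger team" from "better draft".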

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      Sure, but I'm afraid it won't be of much use, because I was a little lazy during data collection. I didn't even record which team won or was favoured, or when the game was played. docs.google.com/spreadsheets/d/1uuA30j5P7nYlBND8v4jXfpzYSN_JlmQxD124qPj10TY/edit?gid=0#gid=0

  • @madshebsgaard7905
    @madshebsgaard7905 13 days ago

    Hi torr3nt,
    First off, good job on recording Caedrel's draft sentiment and regressing it on the results of the game; that's pretty cool!

    Your regression would likely improve if you divided the gold difference by the game length, as some of the variance in gold difference stems from game duration, and the relationship is probably close to linear. However, I would also strongly advocate showing the results of regressing the win/loss dummy variable on your draft sentiment metric.

    Are you willing to share your dataset? I would also recommend recording additional features, such as game length, team identities (even if you don't use that variable, as it would require scraping team Elo from the day prior), gold difference, tower counts, dragons and barons for each team (to calculate differences later), each team's KDA (later adjusted by game length), inhibitors taken, First Blood, and nexus towers. Maybe even vision control, levels and cancelled spells, but these are likely more time-consuming to scrape manually. The added time to record all these features wouldn't be too significant once you have the games loaded. The hardest part is probably listening to Caedrel for 100+ games, though his stream-casting is enjoyable!

    With these variables, you could perform your original regression and additionally analyze how much impact features like First Blood have on Caedrel's draft signals. Standardizing variables is crucial here, and your gold metric should now also include towers. I believe a lot of the gold difference variation is tied to game length, so dividing by game length is important; the same goes for the other feature variables.

    Once again, good job on the video. Kind regards, Mads

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      Hey Mads, thanks for the feedback. I'd love to do a more complex analysis, but I'm afraid I'm kinda limited by my lazy data collection. I didn't even record which team won or was favoured, or when the game was played. If I do this again in the future, I'll do better. docs.google.com/spreadsheets/d/1uuA30j5P7nYlBND8v4jXfpzYSN_JlmQxD124qPj10TY/edit?gid=0#gid=0

  • @mlekozmaslem8647
    @mlekozmaslem8647 13 days ago

    Draft can help win a game, but we've seen what happens when a team like KC in spring gaps the enemy in comp but ints the game many times. It also depends on how comfortable the team is on the picks, and on their skill. I like Pedro's predictions of what will be picked more than his calls on whether it's strong. Btw, there ain't no way he only ate 6 times in 100 games. He eats like 6 times in one Bo3. Also, it would be interesting to see whether his predictions are better in particular regions, like the LCK or LEC.

  • @rotcivgibeif1008
    @rotcivgibeif1008 13 days ago

    Nice analysis! Just out of interest: did you also run the regression of the prediction on the pure outcome? I get the intuition behind your argument that this loses variance in the outcome, but after all, only win/loss "really matters"

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      Just using the binary variable as DV leaves you with R^2 = .05, if I remember correctly.

  • @yesyeayepnonope716
    @yesyeayepnonope716 14 days ago

    monkaOMEGA OVERANALYZING

  • @MrUltrapyro
    @MrUltrapyro 14 days ago

    I have been wondering about this since earlier in the year, thanks for taking the time to do it!

  • @patrickwienhoft7987
    @patrickwienhoft7987 14 days ago

    (1) You didn't take into account prior win rates (i.e. which team is more likely to win, e.g. by Riot's power rankings). A better methodology would be comparing the predictive power of only using that prior against the predictive power of the prior plus a Bayesian update on the draft.
    (2) Gold is not a good indicator of a stomp, for several reasons:
    (a) Total stomps end earlier, potentially leading to a smaller gold lead since there's less time to build it.
    (b) Scaling comps don't need huge gold leads to win. The Malphite draft is a good example: the Malphite team can be down 5k gold but still be favored against a full-AD comp.
    (c) Gold distribution is important. If your Alistar gets all the kills, your gold lead is meaningless. Also, champions like TF and Pyke skew gold statistics.
    In general I think just using a binary variable is fine. If you use anything else you shift the goalposts, which inevitably devalues a certain way of winning the game.
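The prior-plus-update methodology in point (1) of this comment can be made concrete with a tiny odds-form Bayes sketch. The 0.70 prior and the 1.3 likelihood ratio for "Caedrel favours this draft" are invented placeholders, not estimates from the video:

```python
def bayes_update(prior_p, likelihood_ratio):
    """Update a win probability: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.70                            # e.g. from a power-ranking model
posterior = bayes_update(prior, 1.3)    # the draft signal also favours them
print(round(posterior, 3))
```

Comparing log-loss of the prior alone against the prior-plus-draft posterior over all games would then quantify how much information the draft call actually adds.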

    • @torr3nt_loading
      @torr3nt_loading 9 days ago

      Just using the binary variable as DV leaves you with R^2 = .05, if I remember correctly.