The scandal that shook psychology to its core

  • Published: 25 Dec 2024

Comments • 1.3K

  • @WhichDoctor1 · 1 year ago · +1037

    My brother's PhD was originally replicating a quite well-known study about the gut microbiome of rats affecting their brains. He went to great efforts to make sure all his rats were exactly the same strain as the original study, and actually found a higher-quality source for the feed with less nutrient variability, and was super careful with all his methods. And he found absolutely no results whatsoever. He was quite upset and anxious for a while that his whole PhD had been a failure and no one would publish his study, because it contradicted this famous one and everyone would think he'd just screwed up somehow.

    • @TheDrFMG · 1 year ago

      One of my PhD students conducted a genomics study, funded by venture capitalists who invested more than $650k. We discovered not only that the hypothesis didn't work but that the field is largely gobbledygook. He presented his findings at a conference at a world-leading genomics institute in Europe. He was hired before the day ended, largely due to his identifying where everyone was using flawed probabilistic inference and limited methodologies. Please relay this story to your brother. A PhD is merely a training programme and he should not doubt his ability. Negative studies are important, and gut-brain theory is like squinting at the stars and picking your preferred asteroid as being important. Best of luck to him.

    • @cosmicHalArizona · 1 year ago · +51

      What did the study indicate? A consistent diet affects the gut but seems to have little or no effect on brain function.

    • @k-miz3683 · 1 year ago · +14

      Which study did he try to replicate?

    • @christopherellis2663 · 1 year ago · +17

      Well, someone screwed up

    • @AnaBanana-yk9px · 1 year ago · +39

      Or someone lied.

  • @homofloridensis · 10 months ago · +133

    In the Navy we used to say, "You don't get what you expect, you get what you inspect." If you reward publications, you get publications; if you reward research you get research.

    • @huveja9799 · 8 months ago

      @user-ox6nc6ly7f
      I would be careful with popular expressions; warfare is one of the most difficult arts. I would like to see how someone conducts the logistics of an operation with thousands of troops without a certain logic.

    • @huveja9799 · 8 months ago · +2

      The problem is that inspecting already supposes a certain predisposition: you are inspecting with a certain objective in mind (i.e. some "expectations"), and that objective is precisely what can introduce biases (both in the one who produces the data and in the one who inspects).
      On the other hand, if I reward research, research is certainly going to proliferate like weeds, but how do you discriminate between the bad weeds and the good?

    • @sunnyadams5842 · 4 months ago

      ​@huveja9799 Schrodinger's Cat...

  • @MasterPeibol · 2 years ago · +1389

    The problem is not the science, the problem is the system. Publishers hold too much power, and researchers depend too much on grants that do not consider much more than the number of publications. It is a vicious cycle that can spiral downwards quickly: no positive results -> manuscript not accepted in any relevant journal -> no grants -> no money to do more research -> no more results -> no more publications, etc. Many researchers depend on grants not only for their projects but also for their salary. So you can easily see the point of failure: if your way of life depends on positive results and you don't have them, there is only one way out.

    • @pepperachu · 2 years ago · +36

      Unfortunately you could also argue that many take advantage of said grants

    • @raroonchandranadv5 · 2 years ago · +17

      Both are true, and both theories are conceived by some human mind. One thing that is my personal opinion: humans have stopped thinking. I just vaguely recollect an Oscar Wilde quote, which went something like "most people are other people; their thoughts are someone else's opinions." When the author felt that to be true in his time, at least someone else's opinion was relevant and true, which gave some mileage. Now that most of us humans have stopped thinking and have exhausted even the conclusive thoughts of genuine thought or research, we truly are in a crisis of fresh thoughts and fresh actions. Thanks to AI, thanks to augmented reality and other things that make the impossible possible. I really did enjoy the talk on this subject and wish you well.

    • @JonnyD000 · 2 years ago · +52

      Right, the problem isn't with science or statistics, it's with the economic incentives in the publishing system.

    • @Thekarateadult · 1 year ago · +23

      The process isn't flawed. The agendas behind it are.

    • @mr1nyc · 1 year ago · +15

      No wonder the hard sciences want to guard against getting grouped with soft sciences.

  • @seeranos · 1 year ago · +776

    Instead of “publish or perish,” psychology needs a culture of “replicate or retire”.

    • @rangersmyth · 1 year ago · +62

      We need to stop publishing for a few years and instead go back over the old work and replicate it all; then we can get a better understanding of what is true.

    • @Stretesky · 1 year ago · +15

      People mustn’t be forced to retire when they find fudged research.

    • @seeranos · 1 year ago · +38

      @@Stretesky Yeah, my phrasing is quippy, but probably too aggressive. I just want a world where replication studies are rewarded as much as exploratory-research studies.
      It seems like replication studies would be the perfect kind of work for grad students to do.

    • @rangersmyth · 1 year ago · +6

      @@seeranos I understand and want the same.

    • @andrewharrison8436 · 1 year ago · +17

      @@seeranos We live in a sound-bite world. I am backing "replicate or retire"; it might be snappy enough to get replication studies funded and published.

  • @samdog_1 · 2 years ago · +675

    I saw a lot of this behavior while working on my PhD. There is such strong pressure to get positive, confirmatory results that researchers will twist the data any way necessary.

    • @mignonhagemeijer3726 · 2 years ago · +15

      It's incredible how easily and quickly it happens. I've been a TA for many courses, and I have always hammered on doing things as planned out (and first really planning things out properly): not adjusting the hypothesis afterwards, not suddenly adding variables, etc. But it is kinda brushed off by some others. I also feel like some of the researchers I worked with don't seem to value careful planning.

    • @odst2247 · 2 years ago · +7

      Because science is about trying to find out the truth of the world, of reality, of fact, of overall knowledge, there is no room for anything other than positive confirmatory results.

    • @dabtican4953 · 1 year ago · +4

      @Mary Bean Would certainly be ironic if he actually does that but that's unconfirmed

    • @albertmockel6245 · 1 year ago · +19

      I encountered that in my master's thesis in ecology. I was not asked to change data, but given my skills in statistics, I was asked to torture the numbers further. This is what most tipped my decision not to continue to a PhD despite the offers.

    • @albertmockel6245 · 1 year ago

      @@odst2247 No, science is not about finding the truth; that's the work of philosophers or theologians. And that confusion is really the core of the crisis: many people confuse science and religion, to the point that what they call science is indeed their religion.

  • @holmavik6756 · 1 year ago · +382

    Professor in Statistics here. Just two comments: firstly, one would expect researchers to involve statisticians in their statistical analysis, but that is rarely the case; and secondly, I dare say research in empirical economics is generally much more erratic than in psychology.

    • @ztukariansevuri · 1 year ago

      They can't and won't, ever. I am not educated, but I really don't have to be in 2023 to understand what is taking place. One would expect any form of "validated information" to be cross-referenced through virtually every other highly specialized field in order to rule out the possibility of the data being cross-contaminated by a source that the researcher has zero knowledge about. I will give an example: mental illness. Generally, if you're suffering from any of the infinite symptoms of mental illness, you will be sent to a specialist and given a magical pill (because there seems to be a pill for everything). Well, if you compare the nutritional data over the past, say, 100 years, you realize the general population consumes very unbalanced diets; and if you then compare the symptoms of nutrient deficiencies, you're confronted with the realization that nutrient deficiency has the exact same symptoms as clinical mental health issues... There are far more examples, but if my logic isn't sound please let me know. I have actually been meaning to sit down with someone widely accepted as more intelligent than I am in order to figure out my intelligence lol... I mean, you're a professor; Marilyn Vos Savant is a perfect example of incredible intelligence being attacked by PhDs, scholars and mathematicians, and she turned out to be correct...

    • @vaska1999 · 1 year ago · +65

      Not surprised to learn that economics research is iffy, to put it politely.

    • @Heyu7her3 · 1 year ago · +12

      👋🏽 If I make it on the other side of this durn program, I will GLADLY hire a statistician! I even told my Quant Methods professor this last year 😅

    • @path2source · 1 year ago · +9

      That must be why it's not uncommon for economists to publish in the Journal of the American Statistical Association! Economics as a discipline is unparalleled in how carefully it engages in causal inference. If you were a statistician, you'd know that.

    • @jadegrace1312 · 1 year ago · +21

      @@path2source Lmao actually defending economic research

  • @cyrilio · 2 years ago · +1116

    We need more people brave enough to publish ‘non results’.

    • @hikashia.halfiah3582 · 1 year ago · +106

      "Bravery" alone doesn't put food on the table. A more apt approach would be change the incentive structure of science.

    • @WhichDoctor1 · 1 year ago · +41

      But then the brave and honest scientists don’t get the research grants, and only those researchers willing to hush up failures and fudge the numbers to get exciting results have successful careers. Which is pretty much what we have now

    • @ztukariansevuri · 1 year ago · +23

      Publish them to whom exactly? You would be better off having a Twitch streamer release the findings as their own, because intelligence is suppressed in this society.

    • @ProfDCoy · 1 year ago · +11

      Maybe there should be journals that only publish null results? Like, short of fundamentally changing the incentive structure (which should totally happen, but is an enormous project), is it possible, at least in the here and now, to promise that there will be a home for null results?

    • @tientruong2007 · 1 year ago · +4

      We need more poor people, you mean. If you want to make money, you have to fudge your results; otherwise no company will fund you. What needs to be done is to have a government body fund and conduct all research, or to impose severe consequences when it's clear research has been tampered with.

  • @jcskehan · 2 years ago · +761

    I firmly believe that the first year of a PhD, or at least for master's capstone projects, students should be spending their time validating other people's experiments.

    • @vio1583 · 1 year ago · +64

      Makes sense. Why not demand at least one study from a PhD to be a replication study?

    • @inregionecaecorum · 1 year ago · +73

      @@vio1583 There is a big elephant in the PhD room, which is the requirement to demonstrate original research in order to have one's proposal accepted. Sometimes the research is so original that nobody ever takes up the challenge of trying to replicate it.

    • @ChristoffelTensors · 1 year ago · +61

      My wife did this and found that both previous PhDs had major flaws in their work and procedures that don’t do anything. She is a chemical engineer…

    • @birdmusic1206 · 1 year ago · +29

      This is a problem because receiving your PhD becomes contingent on "validating" work; in other words, as a newcomer you will be afraid to question the seniors and might just propagate errors.

    • @9xixix9 · 1 year ago · +43

      @@birdmusic1206 I respectfully disagree; the proposition is not that you receive your PhD by validating other publications. The proposition is: a contributing factor to receiving your PhD is doing the actual work needed to validate or disprove a publication. And if many students find errors over and over again, then a spotlight would be put on that specific publication. This helps put a check and balance in place.

  • @wintersking4290 · 1 year ago · +276

    I feel like we need a catalogue of the null hypothesis, where scientists can publish a two-to-three-page report on failed or inconclusive studies. Make it searchable, and it would be an amazing resource for future scientists to see what pitfalls to avoid. And it would probably take fewer resources to create and run than Wikipedia or other such websites.

    • @snailmailmagic · 1 year ago

      It will benefit academia but not industry, and no industry wants that, as it would leave proof of the lies they try to build.

    • @margodphd · 1 year ago · +12

      I think that's a decent step.
      Let's document studies that have been repeated over and over with non-significant results so they aren't repeated multiple times over.
      Still, with the way research is financially motivated, that will not be enough.

    • @funkster2009 · 11 months ago · +3

      agreed!

    • @eveningstar1 · 11 months ago · +3

      Great idea!!

    • @jeallen10x · 10 months ago · +2

      Great idea

  • @LouisaWatt · 8 months ago · +9

    “I thought I had something there but it wasn’t significant” 😆😆🤣 that was the wittiest p-value joke of them all.

  • @Seraph.G · 1 year ago · +58

    I remember in a Linguistics class I took as an undergrad, our final project was a research project based on language data and my hypothesis was completely unable to be supported one way or the other based on the data we had to work with. My professor was SO THRILLED when I stood up there, described my hypothesis and research design, all the effort I took towards proving it, and then went "and I still have no idea." It was honestly a great learning experience.

  • @jguenther3049 · 1 year ago · +64

    In my Psych 101 course, when asked, our professor admitted that the field attracts people with mental problems, but stated that exposure to the field during their studies often gave students sufficient insight to drop out. This is no longer true. Over many decades, the requirement for personal therapy to attain a degree in Psychology has dropped from 40 hours to 24 hours to 8 hours to, at many institutions, "highly recommended," i.e., zero. At the same time, tuition costs have gone up by a factor of ten or more. Fewer students drop out and kiss goodbye to tens of thousands of dollars already invested in their education. The field is in deep trouble, and the consequences in education, research, and clinical psychology are outrageous.

    • @MrJCerqueira · 10 months ago · +2

      So your comment is that you believe “personal therapists” have

    • @jguenther3049 · 10 months ago · +7

      @@MrJCerqueira Vs decades ago, as stated.

    • @ElizabethMillerTX · 9 months ago · +1

      I got my undergrad with no such requirement, decades ago. Was that not the norm?

    • @jguenther3049 · 9 months ago · +1

      @@ElizabethMillerTX It may have been fairly common, depending on how their graduate degrees were structured.

    • @nicholascarter9158 · 9 months ago · +6

      @@MrJCerqueira I think it means "people used to have to get therapy on their own personal issues before starting therapist training"

  • @StingrayMk1 · 1 year ago · +159

    I remember studying statistics for psychology and thinking "Wow, so I can say my findings either disprove the null or don't disprove the null according to how I slide this p-value between .05 and .01!" It was a WTF moment.

    • @gseeman6174 · 1 year ago · +15

      Absolutely... the alpha level should be set in the study plan prior to running the subjects... call it pre-procedural alpha... it should rarely ever change after the study has been run... the people who change it are misconstruing alpha level with effect size... Many researchers have told me that my idea on this is nuts.

    • @Thetarget1 · 1 year ago · +20

      In modern statistics education, people are moving away from the idea of having a cut-off for the significance level at all. Instead, the p-value simply gives you the probability of seeing data at least this extreme if chance alone were at work. You should then interpret that p-value on a case-by-case basis.

    • @katja6332 · 1 year ago · +13

      Yes, same here. That's why we always need access to the real data and to how they calculated things, and we need to read it. Is this tiresome? Yes. Is this science? Yes :)

    • @ThePathOfLeastResistanc · 1 year ago

      Same here

    • @amazinggrapes3045 · 1 year ago · +15

      I had a bit of a crisis when I found out that the threshold/criterion for "statistically significant" was completely arbitrarily decided.
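
The sliding-threshold worry running through this thread can be made concrete with a short simulation of one common form of p-hacking: measuring many noise outcomes and reporting only the best one. Everything below (the 30-subject, 10-outcome setup, the normal approximation) is an illustrative sketch, not taken from any study mentioned here:

```python
import math
import random
import statistics

def p_value(sample):
    """Two-sided p-value for H0: mean = 0, using a normal
    approximation to the one-sample t-test (fine for illustration)."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = abs(statistics.mean(sample)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def study_finds_effect(n_outcomes):
    """One 'study': measure n_outcomes pure-noise variables on 30
    subjects and keep only the smallest p-value."""
    best_p = min(
        p_value([random.gauss(0, 1) for _ in range(30)])
        for _ in range(n_outcomes)
    )
    return best_p < 0.05  # a 'significant' result from pure noise

random.seed(1)
honest = sum(study_finds_effect(1) for _ in range(2000)) / 2000
hacked = sum(study_finds_effect(10) for _ in range(2000)) / 2000
print(f"false-positive rate, 1 outcome tested:  {honest:.2f}")
print(f"false-positive rate, best of 10 tested: {hacked:.2f}")
```

With one outcome per study the false-positive rate stays near the nominal 5%; cherry-picking the best of ten pushes it several times higher. This is why a pre-registered alpha, as suggested above, only helps if the number of comparisons is also fixed in advance.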

  • @TheInfinityzeN · 9 months ago · +13

    My ex-wife is a biological research scientist (meaning she designs drugs and medical treatments), and from her I can tell you that the problems found in psychology research are actually pretty common across all medical research. The process of handling and interpreting the data is often manipulated to give the desired result to ensure funding.

    • @Someone-ji6ni · 7 months ago · +2

      It is; quite a few physics and cancer research papers are poorly done or just flat-out lies/manipulation. It's pervasive in every aspect of scientific research, and the whole standard of publication and research needs to change.

  • @chroniclesofpickles · 1 year ago · +33

    Former PhD candidate here (with a background in psychology). Publish or perish, p-hacking, and even the p-value itself were extensively discussed within the field while I was doing my PhD (in cognitive neuroscience, or specifically neuroeconomics). Personally, I preferred using Bayesian inference when it comes to statistics. But there are many reasons why scientists face the replication crisis:
    1. The sugar daddy of science: in theory, you don't need to be a good scientist. You need to be a good marketer and a good salesperson to get the grant. Yes, I was part of an EU grant with billions of €€ but bullshit research plans.
    2. Academic politics: following what I stated above, when the higher-ups are focused on using the €€ to buy fancy new tools, the lower-ranked PhD researchers such as myself (previously) become tools as well. We had no say in anything; we were just means to an end.
    3. No one cares: with €€, power, and politics involved, who'll care whether anyone is doing good science? It's about publishing results, right? So who cares about the PhD student; assign a shitty supervisor who doesn't care, write bullshit things in the reports.
    P.S. The general rule for my PhD was to publish 3 papers in 3 years. And the two studies I was involved in did not have any "positive" results, so to speak.
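
The Bayesian alternative mentioned above can be sketched with the textbook Beta-Binomial comparison; the 60-successes-in-100-trials example and the uniform prior are illustrative choices of mine, not anything from the commenter's studies:

```python
from math import comb

def bf01(k, n):
    """Bayes factor BF01 for k successes in n binomial trials:
    point null theta = 0.5 versus a uniform Beta(1,1) prior on
    theta. Under the uniform prior every k has marginal
    likelihood 1/(n+1), so the ratio has a closed form."""
    return (n + 1) * comb(n, k) * 0.5 ** n

def two_sided_p(k, n):
    """Exact two-sided binomial p-value against theta = 0.5."""
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

# 60 successes in 100 trials: hovering at the usual 0.05 threshold,
# yet the Bayes factor says the data barely favor either hypothesis.
print(f"two-sided p: {two_sided_p(60, 100):.3f}")
print(f"BF01:        {bf01(60, 100):.2f}")
```

The contrast is the point: a result sitting near p ≈ .05 can correspond to a Bayes factor near 1, i.e. essentially no evidence either way, which is one reason Bayesian reporting is often proposed as a guard against threshold-chasing.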

  • @rogermenendez4052 · 1 year ago · +34

    Many years ago I submitted a research paper, my first major research project. It was rejected outright. A few months later, I ran into a senior colleague at a research meeting who told me, "I rejected your paper because I didn't think you could actually do the analyses." And I thought, "Why would he think that?" Now I know he was projecting his own dishonesty. I left academia after that.

    • @MichaelCzajka · 11 months ago · +8

      I was expected to accept my lecturers' claims at face value when doing psychology, and I could see that the research seemed to lack strong empirical evidence and replication.
      The rest of the class seemed to eat it up.
      Today most of what they taught us has been proven wrong... but some people are still spouting it.
      I've always been wary of psychology because of how many people it has misled.
      🙂

    • @truthaus6840 · 8 months ago · +2

      He didn't think YOU could do the analysis, because HE couldn't have. I have seen professors in engineering outright lie that they personally observed a phenomenon, due to their great insight, with a great story of how it happened to them, when in fact a student or research assistant observed it and told them.

  • @EvilEustace · 1 year ago · +44

    I'm a 1st-year psych student. Here in the UK they have been talking a lot about how journals should incentivize people differently. Some journals are trying to publish research no matter what p-value comes out, by reviewing the methodology and accepting the paper before the research is completed.
    I am excited about this, as it seems to be a step in the right direction.

  • @hermiegana · 10 months ago · +22

    This is so weird to me. In medicine, in my experience, people publish null results all the time. "Is this treatment better than this other one?" "There is no difference." "OK, publish it in the New England Journal of Medicine."

    • @mattgilbert7347 · 8 months ago · +3

      Right. It's so odd, this prejudice against (or fear of) null results.
      I mean, it's not *that* odd considering what the personal career stakes are.

    • @Brian_S_O_Tuireann · 7 months ago · +2

      Yes, because medical journals test treatments on humans, so there is an incentive to share with the community the results for any drug that is not better than the current standard of care. They also have a long history of money being put in through clinical trials and the basic research that led up to these types of therapy being tested. In basic science, however, people only want to know the new thing that can work, so the idea can be tested in humans.

    • @miscellaniac3367 · 6 months ago · +1

      @mattgilbert7347 It's possible the fear of null results in psych has to do with the fact that it's a soft science, and a relatively new one at that. The scientific method started being applied to medicine maybe as early as the Renaissance, whereas psychology didn't start having the method applied until maybe the mid-1800s.

  • @monissiddiqui6559 · 1 year ago · +99

    I remember when I was back in university we used to make fun of psychology and medical science for being "easier" than math and the harder sciences. We were never convinced by many of the results coming out of psychology, because of the lack of rigour and lack of quantitative literacy among those who studied the softer sciences.
    We were always made to seem like assholes for pointing these things out and maybe we could have framed things more nicely, but I'm glad to see more gradual mainstream recognition of the failure within the softer science fields.
    One thing we were completely wrong about, though, is that the softer sciences were "easier" than the hard sciences. The complete opposite is what is true. The hard sciences are what are easier since we can arrive at more well-defined answers in those fields. The amount of complexity involved in the softer sciences is mind-numbing to say the least. I have a lot of respect for anyone who ventures in search of the truth and I have no doubt that, as the softer science fields mature and develop more tools and methods to aid in their study, we will arrive at some amazing insights.
    I'm grateful to be witnessing this inflection point in these fields and I'm excited to see what we uncover. The other possibility is that we are approaching the limits of what we can understand with science and mathematical modelling and that makes me feel differently, but still grants me a sense of awe at the wonders of the universe. Maybe we will need something completely different from traditional science and math to gain further understanding of the world. We can never know everything there is to know and that's hard to accept, but not unfathomable. We already have examples of things we will never know from studies in Cosmology.

    • @Xamufam · 1 year ago · +1

      Scanning the brain, simulating the brain, and AI might be a solution in the future for creating rigorous testing in psychology.

    • @gogowshagenai6139 · 11 months ago · +1

      I'm not sure about this, because these tools are put on a pedestal while they're easy to misuse and can be the source of quite a lot of problems... plus the human sciences are way too complex to rely solely on them
      (we all know that it would be an issue).

  • @IanZainea1990 · 1 year ago · +14

    As a member of the public: we are often told/taught that science is strong because it is replicable and that what the data shows is the result. Now, I knew you could abuse the data to get whatever result you want, but I didn't realize how widespread all this is.

  • @yana.mp4 · 2 years ago · +300

    It's so important to spread this topic beyond psychology and academia so people are not swayed by glamorous and exciting-sounding results.
    I'll be starting a PhD in personality psychology soon, and part of the reason I chose my lab was its commitment to open science.

    • @gms9608 · 2 years ago · +13

      Yes!! Science needs to better serve society and people, not society having to serve (and be intimidated by, and misconstrue) science. A world with open science is not only a world full of knowledge to understand itself but a world full of knowledge to help make itself better.

    • @star_blazer · 1 year ago · +8

      What you say gives me hope, Yana. We really need more ethical researchers like you to eventually bring about permanent change to the current perverse incentive structure.

    • @daisyviluck7932 · 10 months ago · +2

      Serious question, though: how do you make people reject excitement and glitter? Most people *intellectually* assent that sound research and simplicity and hard work are good things, but put that same person in a room full of shiny objects and watch their heads spin.

    • @yana.mp4 · 10 months ago · +2

      @daisyviluck7932 That is a difficult question! I found that in my field people are pretty sceptical, and there's almost a culture of holding people to a standard of openness, reanalysing and reproducing results (which apparently is a complete 180 from, say, 10 years ago). I think it helps that some of the big names in open science (e.g. Simine Vazire!!) came from personality psychology and are pretty vocal about changing norms.

  • @hollywoostars · 2 years ago · +155

    Math is excellent at revealing the truth. It's even better at hiding it.

    • @mikeolsze6776 · 1 year ago · +13

      We are finally recognizing, or more so acknowledging, our true human nature: we will manipulate anything and everything to achieve our desires. And as far as math is concerned, it is a factitious construct, taking many forms. When you really analyze how it has come to be, you can readily perceive to what extent it has been, and continues to be, in far too many scientific precedents, twisted, bent, etc. to fit or force our human agendas to work out. It is not that math cannot be legitimately applied and utilized; it is that we bias and even corrupt it to achieve our means and ends. I have gone down many a rabbit hole, only to develop great consternation upon discerning the extraneous manipulations of the applied mathematics. Many would like to believe math is everything. Nature knows no factitious math, but nature does holistically apply information, and the two are not exactly the identical language. They can be close or similar, but only if we don't place our foot on the scale.

    • @andrewharrison8436 · 1 year ago · +9

      A couple of my favourite quotes:
      Lies, damned lies and statistics.
      He uses statistics the way a drunk uses a lamppost, for support not illumination.
      That's why you need proper review including a statistician.

  • @konyvnyelv. · 1 year ago · +218

    My issue with psychology is that it's fully arbitrary when it comes to finding the difference between health and illness. Potentially, any behaviour society doesn't like can be considered a disease. An example is homosexuality, which was an illness in the past and now no longer is, since society accepted it. Some pacifists were accused of being insane for not wanting millions of people to kill each other in WW1.

    • @matthewkopp2391 · 1 year ago · +31

      Freud and most of his early following considered homosexuality a "normal" variation, and Kinsey confirmed its prevalence. When it came to GB and the USA, the DSM was written up redesignating it as a disease.
      But the reason for that came specifically from Freud's idea of "societal norms", so they could recategorize based on society.
      Freud's position was that, because it in and of itself did not inhibit adaptive functioning, it should be considered normal.
      It is not arbitrary in the true sense of the word. It does, however, border on a type of ethical philosophy in the ancient sense of that idea, for example the Stoics.

    • @konyvnyelv.
      @konyvnyelv. Год назад +23

      @@matthewkopp2391 Still, defining as abnormal whatever inhibits you from adapting to society is arbitrary.

    • @babybobo1231
      @babybobo1231 Год назад +35

      Not all mental illness designations are arbitrary; psychosis is one example. But there is always a risk of pathologizing completely normal behaviors.

    • @matthewkopp2391
      @matthewkopp2391 Год назад +25

      @@konyvnyelv. Arbitrary by definition means random or on a whim, as opposed to reasoned. There are other types of reason besides scientific material reductionism.
      I don't believe making it a mental illness was either random or based on whim. Perhaps a better term for your argument would be relative ethos.
      My position is that much of psychology is not science; it is a philosophical ethos which negotiates the well-being of the individual in relation to society.
      The fact that people think it is hard science is problematic. It is informed by science, and constrained by the probabilities of science, but it is not science.

    • @relight6931
      @relight6931 Год назад

      @@matthewkopp2391 I see your point; it's hard not to find validity in it, and in all the "soft" sciences while we're at it.
      But personally I would find a world without the field of psychology to explain some of our behaviors even more prone to various forms of self-deception. Just imagine having severe depression and everyone in your life who knows of you constantly calling you lazy, without knowing that it is an illness you might battle through discipline, a specific diet, medication, or all of them together. Without even a bit more will and hope, your self-worth would just deflate, and negative thought patterns would multiply until the only logical solution would be the one that is no solution at all.
      I find human minds, and our ability to imagine and then try to engineer something into existence, one of the main points of being human, beyond the purely biological drives to survive, conserve energy, and reproduce. The better we understand how minds work, what they are capable of, and what is good and bad for them, the more robust we become, not just as individuals but as a species. We are just scratching the surface so far.
      I just wish we could have science for science's sake: to satisfy our curiosity and enrich humankind in ways other than the purely material. In the end, it might be the difference between surviving the next 100 years and slowly going extinct.

  • @Townsavage
    @Townsavage 9 месяцев назад +4

    There is absolutely no way to replicate EVERY variable. If we were smart enough to never say ANYTHING was an absolute truth, we'd begin on the path of honest science.

  • @izaaguilo
    @izaaguilo Год назад +42

    "No significant results, negative results etc", are also results and should be published. Those findings are important too. If we are really capable of learning, those findings at the very least, can show researchers what not to do next time. Just a thought 🤔.

  • @inregionecaecorum
    @inregionecaecorum Год назад +108

    As a PhD candidate who had previously undertaken statistical research in a non-academic context, and who had formally studied statistics (which involved actually understanding the concepts), I found that the research training for PhD candidates left a lot to be desired, particularly given the predominance of computer packages such as SPSS. Younger students simply did not have the formal understanding to know what they were doing, in my opinion. As you say, you can bludgeon almost anything into significance. What matters most in reading research results critically is not what they say but how the conclusions were drawn; above all, of most significance (in the non-mathematical sense) are a sound methodology and thorough, logical consideration of all the possible confounding factors. It's an old trope that most psychology is based on white middle-class college kids, who form an availability heuristic of willing research subjects.
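    The "bludgeon almost anything into significance" point can be demonstrated with a short simulation (a hypothetical sketch, not taken from the video or this comment): if a researcher peeks at the data as participants come in and stops as soon as the test crosses p < .05, the false-positive rate climbs well above the nominal 5% even when there is no effect at all.

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def one_study(peek):
    # Both groups come from the SAME distribution, so any "effect" is a false positive.
    a = [random.gauss(0, 1) for _ in range(10)]
    b = [random.gauss(0, 1) for _ in range(10)]
    for _ in range(8):  # grow each group from 10 to 50 in steps of 5
        if peek and abs(t_stat(a, b)) > 1.96:  # roughly p < .05, normal approximation
            return True  # stop early and report the "significant" result
        a += [random.gauss(0, 1) for _ in range(5)]
        b += [random.gauss(0, 1) for _ in range(5)]
    return abs(t_stat(a, b)) > 1.96  # honest single test at the final n = 50

runs = 2000
fixed_n = sum(one_study(peek=False) for _ in range(runs)) / runs
peeking = sum(one_study(peek=True) for _ in range(runs)) / runs
print(f"false-positive rate with one planned test: {fixed_n:.1%}")
print(f"false-positive rate with peek-and-stop:    {peeking:.1%}")
```

    The peek-and-stop procedure tests the same noise many times, so its false-positive rate is necessarily higher than the single planned test's, which stays near the nominal 5%.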

    • @lovingdemon2932
      @lovingdemon2932 Год назад +1

      If you wouldn't mind looking into this little theory: some of the world's well-known geniuses were malicious, since the mathematical odds of all of them being non-malicious are probably very low. At the very least it's extremely suspicious.

    • @Cloudsurfer69
      @Cloudsurfer69 Год назад +2

      Damn, this is a 🔥 comment. You're so smart you intimidate me 😂🥰

    • @Heyu7her3
      @Heyu7her3 Год назад +4

      But wasn't that a goal of early statistics -- to produce believable numbers? Many of those theorists were social Darwinists/ eugenicists who wanted to make the differences between people significant.

    • @kwarra-an
      @kwarra-an 11 месяцев назад +1

      @@lovingdemon2932 Not sure what you mean by "malicious", but you're making a logical error. "It's unlikely all of them were not malicious" =/= "it is likely most of them were malicious". Isn't it just as unlikely that most of them were malicious, whatever that means? "It is unlikely all dogs are not rabid" =/= "it is likely most dogs are rabid".

    • @bluesunquake
      @bluesunquake 11 месяцев назад +2

      This! I desperately wanted to take an advanced stats class during my PhD, but my supervisor didn't think it was necessary. I worried that, since I didn't have a deep understanding of stats, I might well be using the wrong tests, etc.

  • @charcoal8
    @charcoal8 Год назад +43

    It's worse when doctors are then advised to treat patients based off of flawed studies.

    • @ckwind1971
      @ckwind1971 Год назад +7

      See: the psychopharmaceutical industry

    • @lukecarey613
      @lukecarey613 Год назад

      Then never go to a doctor.

    • @NoNono-o3h
      @NoNono-o3h Год назад

      @@ckwind1971 In pharma it's more a case of pure monetary leverage.

    • @NoNono-o3h
      @NoNono-o3h Год назад

      @@lukecarey613 Exactly. There's a much longer history and body of work that has passed the test of time. If people want to stop going to the doctor because of outliers, they are free to get sick.

    • @denofpigs2575
      @denofpigs2575 10 месяцев назад +8

      @@lukecarey613 This is unironically good advice. Do not ever go to a doctor unless you know exactly what it is you want looked at and are willing to challenge the doctor.
      You probably didn't intend that though lol.

  • @innocentsmith6091
    @innocentsmith6091 Год назад +85

    There's nothing wrong with studies that fail to replicate; the failure just means there was a flaw somewhere. The real problem is studies where replication hasn't even been attempted being put forward as knowledge and used to make decisions.

    • @dshe8637
      @dshe8637 8 месяцев назад +3

      There are many reasons why a study may fail to replicate; it doesn't always mean there was a flaw.
      Exploring those reasons is where it starts to get interesting.

  • @paulwary
    @paulwary Год назад +99

    They should publish the no-result studies. It doesn't have to take up much space: just the title, abstract, result summary, and a link to the data.

    • @tientruong2007
      @tientruong2007 Год назад +19

      I firmly believe that if you're an honest researcher you will struggle to get work. Companies will turn to researchers who give them the results they pay for. The incentive system is fucked.

    • @stavroskarageorgis4804
      @stavroskarageorgis4804 Год назад +4

      There is no such thing as "no result".

    • @paulwary
      @paulwary Год назад +16

      @@stavroskarageorgis4804 True. How about "null result" or "hypothesis not supported". Still, I think you know what I mean.

  • @baruchben-david4196
    @baruchben-david4196 Год назад +65

    It's not just psychology. This issue has been discovered in other disciplines as well...

  • @78deathface
    @78deathface Год назад +147

    I love the obnoxious hubris of humanity: we assume that we know so much and that we've figured it all out, until one guy tries to prove psychic phenomena and all hell breaks loose.

    • @theunbreaking
      @theunbreaking Год назад +12

      Right?!!? Why is this one of the human practices we see throughout almost all previous civilizations, you know?? Clearly there's something there.

    • @michaelnurse9089
      @michaelnurse9089 Год назад

      Scientists have the highest hubris of all. They generally treat non-scientists like absolute morons. Meanwhile, back at the ranch, their beloved ideas are 90% p-hacked.

    • @antonyjh1234
      @antonyjh1234 Год назад +2

      How do you build a machine to test for something that we don't know is there?

    • @joem5615
      @joem5615 Год назад

      Well, like, every one of these jackasses with a PhD has to cling to the system they're indoctrinated into and worked their ass off to serve, so it's kinda hard for those fuckers to think outside the bun.

    • @danwall3102
      @danwall3102 Год назад +10

      @@antonyjh1234 That's actually what a lot of particle accelerator experiments are all about. The math says we should see a certain particle, but we don't know if it's there, so they smash protons together at near light speed and hope to see one of these particles as a result.

  • @ckwind1971
    @ckwind1971 Год назад +33

    I was a psych major around 1998-2002. I started learning about unscrupulous studies from pharmaceutical companies, specifically about the SSRIs I had taken for 20+ years. Great essay, I learned and marveled.
    Edit: I took Statistics twice (2 different schools) and barely passed with Cs both times. If I'd had to produce a whole study it might have been all over 😂

  • @dustysoodak
    @dustysoodak Год назад +34

    It would be nice if there were a place where people could post all the negative results. Even if it wasn't officially peer reviewed, there would still be feedback in comments from other researchers, and there would probably be minimal intentional fraud.

  • @PinakiGupta82Appu
    @PinakiGupta82Appu Год назад +11

    During a lecture on the chapter on standard continuous probability distributions, someone asserted, 'But the numbers don't lie, as we all know.' In reply, she said, 'The data must be fed to the system of equations appropriately.' Now I see that this also applies elsewhere, in all disciplines other than raw, hardcore pure mathematics and statistics. Truly amazing!

  • @euchale
    @euchale Год назад +23

    The same problem exists in other fields. I remember going to our bioinformatics department and asking which clustering algorithm to use for a certain type of data, and I was asked how many populations I wanted to have...
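    That "how many populations do you want?" exchange is easy to illustrate (a hypothetical sketch using a minimal hand-rolled k-means, not the commenter's actual pipeline): k-means returns exactly as many groups as you ask for, even on pure noise with no structure, so the number of clusters has to be justified by the data rather than chosen to fit the story.

```python
import random

random.seed(0)

# 200 points of pure uniform noise in 2-D: there are no true clusters here.
points = [(random.random(), random.random()) for _ in range(200)]

def kmeans(pts, k, iters=50):
    """Minimal Lloyd's algorithm: returns one cluster label per point."""
    centers = random.sample(pts, k)  # initialize centers at k random points
    for _ in range(iters):
        # assign every point to its nearest center
        labels = [min(range(k),
                      key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
                  for p in pts]
        # move each center to the mean of its assigned points
        for j in range(k):
            members = [p for p, l in zip(pts, labels) if l == j]
            if members:  # keep the old center if a cluster empties out
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels

for k in (2, 3, 5):
    print(f"asked for {k} populations -> got {len(set(kmeans(points, k)))}")
```

    Validation measures such as silhouette scores exist precisely because the algorithm itself will never object to your choice of k.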

  • @LouisaWatt
    @LouisaWatt 8 месяцев назад +2

    I'm currently doing honours in psychology, and the Australian university I go to teaches the replication crisis as part of learning about systematic reviews. It is possible to detect publication bias through statistical analysis during the review process, but obviously that doesn't solve the problem of it happening in the first place.
    Fudging data to get published, and journals publishing selectively, need to be called out more quickly.

  • @rcnhsuailsnyfiue2
    @rcnhsuailsnyfiue2 2 года назад +109

    Great and detailed video - two thoughts to add from me...
    (1) You opened with Daryl Bem's "Feeling the Future" study, but it was never mentioned again?! After hearing he rigorously replicated it himself 9 times, I was expecting to hear how he either did it "wrong" (and if so, how), or did it "right" (and if so, what a can of worms that opens up). Dean Radin and others are consistently shunned for investigating psi phenomena, despite taking rigorous research approaches. I'd love to hear more to tie this all together.
    (2) I believe researchers should be required to make ALL their data open and public. You argued that this might not be a good idea because researchers might not have the ability to properly organise their data for public release... I appreciate you followed that up joking it's not too much to expect, and I totally agree! Why can't every study require its data to be released publicly, regardless of how nicely formatted it might be? Just adding the requirement would encourage researchers to ensure their data is well-organised (thus improving its accuracy), but also allow for anyone to make their own analyses after the fact. There should be NO expectation on academic journals to conduct their own replications on the data - leave this for the public or other researchers to do after-the-fact. Journals should still demand the same rigorous standards, just ensuring that data is made open as a requirement too. When unbelievable studies are published based on meeting publication requirements, other researchers can then attempt to replicate them with access to the same underlying data too. This is imperative to computer science, where open-source research has advanced the field exponentially in recent years.

    • @neurotransmissions
      @neurotransmissions  2 года назад +59

      Good point! I probably should’ve explained the issues with Bem’s research. It essentially boils down to engaging in the questionable research practices that I was talking about. So he constructed his hypothesis after collecting data (HARKing) and even his statistical analysis is highly suspect (possible p-hacking). If you run a different test with the same data, the results disappear. As for open science, yeah, I think it should become the norm to make it all public. I think we’d need widely-available, free sites to host the data. That might make it easier.

    • @sinfinite7516
      @sinfinite7516 Год назад +11

      Yeah I was very surprised Bem’s research wasn’t brought up again in the video! I was looking forward to a resolution to it, but never got it. Good thing I found it down in the comments :)

    • @onorth5615
      @onorth5615 Год назад +7

      Great point. I believe Brian Nosek started such a project about a decade ago, called The Center for Open Science. I believe his very goal was to create a platform that would allow researchers to upload raw data sets and make this available to the public for independent analysis. I remember keeping an eye on this endeavour and was shocked that it never gained the momentum it deserved. In retrospect, it makes only perfect sense.

    • @gnoelalexmay
      @gnoelalexmay Год назад +3

      Great points.
      I would be very interested to see some analyses of the replication crisis phenomenon in relation to its 'political importance'.
      I get the impression that 'politically expedient outcomes' requiring scientific papers to point to use all of the above methodologies to construct a faux scientific basis.
      Access to the original data is an absolute necessity for a trusted scientific institution. Maybe a real-time online platform documenting the progress of studies is possible?

    • @saaah707
      @saaah707 Год назад +2

      Bro, his paper wasn't the point; the point was that he used their own scientific methods to "show" that ESP exists.
      It was like a reductio ad absurdum for the entire field.

  • @its_jess4321
    @its_jess4321 2 года назад +47

    As an outpatient social worker I was also intrigued by the need for positive results and the ignoring of unfounded or negative results, because when you're working with a client there's wisdom in anything and everything they say. Meeting goals gives information, and there are reasons why a client doesn't meet their treatment goals; there's information in that too. It's "unsuccessful", but there's still a why.

  • @freemindas
    @freemindas Год назад +47

    There are only two options for psychology: it either has to step up its game or fade into oblivion. A psychology student here!

    • @DavidAndrewsPEC
      @DavidAndrewsPEC Год назад +4

      Stepping up its game would require just one thing: lopping off every bit of it that carries an increased risk of bullshit being proclaimed truth. We would lose psychoanalysis (good) and humanism (also good), because neither is amenable to experimental examination. Much of cognitivism would go, since it requires inferences to be made on the basis of behaviour alone, be it at the organism-environment level or at the level of cellular activity.
      We would be left with behaviour analysis and neurology.

    • @freemindas
      @freemindas Год назад +1

      @@DavidAndrewsPEC I agree with you. If it remains in this state then it doesn't deserve to be categorized as "science". It should be nothing more than literature.

    • @DavidAndrewsPEC
      @DavidAndrewsPEC Год назад +2

      @@freemindas Exactly.
      I gave up on psychoanalysis and humanism... you cannot do science on them, and they refuse to accept the operational definitions that would make counting possible.

    • @freemindas
      @freemindas Год назад +1

      @@DavidAndrewsPEC I once asked one of my professors to elaborate on the replication crisis in psychology research. All she said was that it's also a problem in other disciplines' research, thus dodging my direct question! There's a lot to be done in this field. It's far from being a real science, whether we like it or not.

  • @Corporis
    @Corporis 2 года назад +35

    "the p-value sounds like incontinence medicine" had me laughing real hard

    • @neurotransmissions
      @neurotransmissions  2 года назад +10

      I had another one about it being a urine-based cryptocurrency, but I cut it 😂

    • @marksegall9766
      @marksegall9766 2 года назад +11

      8:31 "I thought I had another... but let's move on it wasn't significant" is the best of the p-value comments.

    • @NiHaoMike64
      @NiHaoMike64 2 года назад

      @@neurotransmissions One of my friends recorded videos of herself peeing standing up, then traded them for Bitcoins. (That was in 2014 or so.) Truth is stranger than fiction...

  • @harleythesalami6956
    @harleythesalami6956 2 года назад +59

    God damn can Netflix give this channel a call. This is just golden content.

    • @neurotransmissions
      @neurotransmissions  2 года назад +10

      Aw, thanks! 😊

    • @wovasteengova
      @wovasteengova 2 года назад +1

      Please put some respect to His name.

    • @The_Questionaut
      @The_Questionaut Год назад +2

      Do you think Netflix would pay them well or not?
      I bet he could make more by not being limited to Netflix.

    • @themacocko6311
      @themacocko6311 Год назад +1

      Netflix? Lol, you think you could find a worse platform? Netflix is a damn joke.

  • @irinaphoenix2169
    @irinaphoenix2169 10 месяцев назад +5

    What I HATE is the system that equates being a good researcher with being a good teacher. It's one of the many things on the long list of things that are broken in American universities.

  • @Manullus
    @Manullus 2 года назад +10

    I'm so happy I discovered your channel. What a brilliantly put together video! Thank you for this, I think you're spot on with everything.

  • @limolnar
    @limolnar 2 месяца назад +2

    THANK YOU for this video! In university I was taught that an insignificant result is as important as a significant one, but as a newbie researcher the opposite (in both respects) was given air. It's turning around, thankfully even in the last two years, and those with a ruthless "publish or perish" mentality are being scrutinised more intensely.
    It's funny that in all areas of society we're moving away from corruption and polarisation towards greater transparency and compassion. 🥰

    • @Gvsvillages
      @Gvsvillages 2 месяца назад

      Can you decode my comment?

  • @mh8704
    @mh8704 10 месяцев назад +4

    I think the problem begins in grade school and high school, where learning is overly focused on right and wrong answers that fit testing methods instead of teaching people to be broad-minded and to see learning as exploration.

  • @ruthhorowitz7625
    @ruthhorowitz7625 Год назад +2

    As someone who was diagnosed with MDD at 42, PTSD at 55, and ASD at 57, I can say with certainty that psychology and psychiatry are broken. I had PTSD and ASD at age 42. I was hospitalized for 6 weeks, had meltdowns and severe sensory issues, couldn't eat the food, and no one tried to connect the dots. I could have been diagnosed with Asperger's at the time, but they couldn't be bothered to look deeper than the 15-minute evaluation the original psychiatrist did.

  • @moorejim13
    @moorejim13 Год назад +9

    Idk why, but the idea that everything we might know about psychology is wrong sorta excites me. As someone who is studying psychological science, one of the things I love about experiments is the ability to replicate them and see if anything is different; if you find something different, you can go on a trip to find out why, or predict a phenomenon that no one has looked at yet. There are so many possibilities for what you can learn, and from a different perspective.

    • @uniquenewyork3325
      @uniquenewyork3325 Год назад +1

      It's such a fun science bc the brain is constantly changing and we're always getting more information on how people react in different situations

    • @michaelslaughter5237
      @michaelslaughter5237 Год назад

      Yeah, it's so fun, is it?!? Messing with people's lives with unfounded and inaccurate theories?? Do patients a favor and stay in the lab until science figures out how to instill empathy and humanity. But look on the bright side... they'll never pick out the bodies from the carnage of the opioid epidemic. Shhhh!

    • @bloops444
      @bloops444 10 месяцев назад +2

      Thanks for sharing this perspective! I’m a prospective psych masters student so the current state of the field seems a lil bit bleak but you framed this in a really optimistic and exciting way. I appreciate it!

    • @moorejim13
      @moorejim13 10 месяцев назад

      @@bloops444 No problem, I’m glad I was able to offer a new perspective on your field of study. I think it’s important to remember while we operate in patterns, human behavior is often nuanced, and it doesn’t hurt to question established research. That’s what science is about! Questioning what’s been found to make it stronger and update the work done by scientists who didn’t have access to as much information as we do! You’re doing important work even if it seems bleak

  • @davidschreiner6667
    @davidschreiner6667 Год назад +11

    If you had two books, one containing every provable thing we really know about psychology and the other containing everything we merely think we know, the first would be about the size of a typical comic book and the second would have about as many pages as all the books in the local library.

  • @jlmonolith
    @jlmonolith 2 года назад +21

    It kinda feels like journals should add a special stamp/seal/mark on studies that replicate or have been replicated by other studies, so that everyone can know to take those studies more seriously, compared to studies without that stamp/seal/mark.

  • @benmcreynolds8581
    @benmcreynolds8581 9 месяцев назад +2

    We learn just as much from not getting results; they can eventually lead us to what does produce results. This issue in our system is also causing problems in cosmology, physics, psychology, astrophysics, etc. I really appreciate this video because I absolutely love science, and I hope we can find ways to improve the system we use so we can actually improve and evolve all the different sciences.

  • @Aiphiae
    @Aiphiae Год назад +13

    This is why the only people who take the social sciences seriously are people *in* the social sciences.

  • @mignonhagemeijer3726
    @mignonhagemeijer3726 2 года назад +17

    I remember there is a journal/website for flopped experiments, as many biology and chemistry (?) studies also fail because of replication issues. It can help PhD students figure out which methods for obtaining certain protein structures don't actually work.

    • @mazebean5200
      @mazebean5200 9 месяцев назад

      Do you remember its name by any chance? It would be useful due to the area of research I'm dedicated to but if you don't remember it's fine

  • @ryanmccrary1880
    @ryanmccrary1880 Год назад +9

    I am way too late to this video but this was incredible. Very informative and very funny. This is fantastic content.

  • @lukaurshibara5837
    @lukaurshibara5837 Год назад +14

    1:09 "ESP, the ability to sense the future".
    That's not what ESP means; ESP is an acronym standing for "Extra Sensory Perception" and is an umbrella term for everything involving a sixth sense.
    The alleged ability to specifically sense the future is called Precognition.

  • @2MileHighTsunami
    @2MileHighTsunami Год назад +8

    The fact is that scientific advance and knowledge is a national security issue. The best science is kept under wraps until it can't be held secret anymore. Lesser science struggles for funding and bad results are allowed to stand because even though it doesn't benefit us, neither does it benefit our adversaries.

    • @ztukariansevuri
      @ztukariansevuri Год назад

      OMG, THIS!!! This is what people are failing to grasp, and the brainwashing isn't helping the populace grasp this concept... We are in the age of intellectual suppression: if you're an innovator who doesn't follow the strict ignorance taught in traditional schooling, and you actually innovate, then you're going to have failing health quickly, or vanish lol... If you really dig deep into the science from the first industrial revolution up to the 50s, the technological advancements are MIND BLOWING, simply Star Trek-level competence; it's astounding how truly genius those inventions were. And that was during a time when someone labelled a genius was shot straight into celebrity status, but since the third industrial revolution that has ceased to be the case... Hell, the man who theorized the God particle had to sell his Nobel Prize to pay his medical bills, while people like Taylor Swift will live forever on their incredible wealth... I promise the private sector is hundreds of years more advanced than we are today, and until they have to release technology, they won't, unless it is to control the world...

  • @wanderingsoul2758
    @wanderingsoul2758 2 года назад +14

    Notes on main ideas:
    2011 - Psychologist Daryl Bem reported that it's possible to sense the future. He sent the results to the Journal of Personality and Social Psychology, which resulted in a reckoning in psychology.
    2015 - Reproducibility Project: Psychology: 27 scientists discovered that only 36% of studies had replicated results.
    2017 - Nature and Science: 62% had replicated results.

  • @roncarlin3209
    @roncarlin3209 Год назад +12

    I had a professor back in the old days who said the way you tackle the p-hacking problem is to allow only the testing of hypotheses which have an a priori reason justifying further investigation. I know, too vague to apply in practice.
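    The professor's a-priori rule targets the multiple-comparisons problem: test enough hypotheses with no prior justification and chance alone will produce "findings". A minimal sketch (hypothetical numbers, assuming independent null tests at α = .05):

```python
import random

random.seed(1)

ALPHA = 0.05
N_TESTS = 40  # fish through 40 unrelated hypotheses with no a priori reason

# Under the null hypothesis, a p-value is uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(N_TESTS)]
false_hits = sum(p < ALPHA for p in p_values)
print(f"'significant' findings from pure noise: {false_hits} of {N_TESTS}")

# Analytic family-wise error rate: chance of at least one false positive.
family_wise = 1 - (1 - ALPHA) ** N_TESTS
print(f"probability of at least one false positive: {family_wise:.0%}")  # ~87%
```

    This is why corrections like Bonferroni divide α by the number of tests, and why preregistration asks you to name the hypothesis before seeing the data.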

    • @kirito3082
      @kirito3082 Год назад +4

      That's only one layer of the problem. This video didn't even get to the most serious ones, such as the government selectively funding research that gives the results it wants and defunding what it doesn't, leading to the production of studies that rely entirely on fake data for their conclusions, with no one attempting to replicate the findings.

    • @Heyu7her3
      @Heyu7her3 Год назад +1

      ​@@kirito3082 what are some examples of those studies?

  • @Stroheim333
    @Stroheim333 9 месяцев назад +2

    Every PhD student should be given the task of replicating older studies, never of trying to prove their own new hypotheses. That would give them a proper respect for the scientific method, and it would be a good way for scientific research to sanitize itself.

  • @StrongMed
    @StrongMed 2 года назад +4

    Another awesome video!

  • @pampoovey6722
    @pampoovey6722 Год назад +16

    Most branches of 'scientific' psychology are complete bullshit. Personality/social takes the cake, though. Even as an undergrad I was being sent to earn credits on PhD studies that harvested most of their data with 5-point questionnaires. I really enjoy the more analytic strains like comparative and developmental, but there's always room for debate there.
    The desire for positive results came down even to the undergraduate level. They weren't interested in studies that had 'problems'; it had to prove an effect. The most successful ones usually involved making people do tasks while under the effect of coffee. It didn't help that we had a media-famous psychology department that drew interest through stunts and shows.

  • @katja6332
    @katja6332 Год назад +15

    From my experience as a coach for PhD students, the statistical BS going on in computer science and economics is 🤥. I had more than one student wanting to quit science because the professor was pushing them to "make the effects better for publication" 😅. Usually those professors had a tremendous list of different "A"-journal entries, to the point where I am now suspicious of such "unbelievably successful" professors. 😮
    I had a hard time telling those students not to quit science but to change professor. One is a professor himself now 😊, and a good, honest one.
    But organizationally, my hands are tied to this day when it comes to exposing those "successful" professors' questionable research practices...
    Another annoying thing is how the sample is collected: it's either Amazon Mechanical Turk or students (usually first semester), hence not representative at all...

  • @Tryptic2x
    @Tryptic2x 10 месяцев назад +1

    I lost all confidence in psychology as a science when this stuff first came to light. To this day, I am deeply skeptical of the field as a whole.

  • @The_Questionaut
    @The_Questionaut Год назад +3

    The video discusses the "replication crisis" in psychology, where many influential studies have failed to replicate or have much smaller effect sizes when replication is attempted. This calls into question what research in psychology we can actually rely on.
    One major contributing factor is publication bias - journals favor publishing positive, exciting results rather than null or negative results. This skews the literature.
    There are also issues with researchers engaging in "p-hacking" - manipulating data or analyses until significant results are obtained. This leads to false positives.
    Questionable research practices are very common, often unintentionally. Things like flexibility in data analysis, lack of transparency, and small sample sizes contribute to non-reproducibility.
    Solutions involve focusing more on study preregistration, sharing data, placing less emphasis on positive results, and changing incentives in academia to value replication more. Overall more rigor, transparency, and replication is needed.
    The host argues psychology has made a lot of progress in the last decade on these issues, and this "existential crisis" may ultimately improve the field by leading to more robust and reproducible research.

  • @sinfinite7516
    @sinfinite7516 Год назад +7

    It was never addressed whether the 2011 ESP research ended up being replicable or not. I was kinda hoping there would be a resolution, like his study being unable to be replicated or the professor being found to be p-hacking, etc., but it was never brought up again.

  • @BanFamilyVlogging
    @BanFamilyVlogging 10 месяцев назад +3

    This is a systemic issue across all sciences, not just psychology. But it is worrisome, regardless. I side-eye everything now.

  • @The_CrackedPot_Christian
    @The_CrackedPot_Christian 1 year ago +2

    I think a study measuring the effect of psychological experiments on a psychologist's psychology should be undertaken

  • @WillyOrca
    @WillyOrca 1 year ago +3

    I remember starting my job and having total imposter syndrome for the first 2 months. I was stressing every single day, losing sleep at night, in a perpetual state of fear that each day might be my last there. I was essentially just waiting for the day I got called into my boss's office to be fired, but when I finally got what I thought was THAT call, it ended up being a call to pick up a $200 gift card. I got employee of the month because I had the best numbers in my department that month lol. They also gave me a $2/hour raise. I was FOR SURE underestimating my performance because I had nothing to compare it to nor any sort of metric to serve as a standard.

  • @waschell1
    @waschell1 1 year ago

    My new favorite YouTube channel! Why did it take so long to find you? Thanks for being brave enough to talk honestly and encourage critical thinking about subjects that matter.

  • @quantumfineartsandfossils2152
    @quantumfineartsandfossils2152 1 year ago +4

    5:45 + You are an absolutely brilliant researcher and producer, thank you for working on & making this for us.

  • @devinnemeyers4008
    @devinnemeyers4008 2 years ago +3

    What a wonderful video. Looking into neuroscience as a career path. I’m glad I watched this.

  • @albeit1
    @albeit1 1 year ago +4

    Tenure should not even be a thing.
    I can’t think of any professional I would hire whose performance would be better if they couldn’t be fired.

  • @DeepSweetSoulful
    @DeepSweetSoulful 1 year ago +2

    How incentives are structured is a major issue. As long as there is an economic/career incentive to publish, there will be manipulation.

  • @sotpunkkatt158
    @sotpunkkatt158 2 years ago +7

    In 6 months I'll be doing my master's. I've already floated some ideas about doing a replication master's, but my supervisor argues that it will hinder my career possibilities. In other news, I'll be doing a replication master's in education.

  • @rebeccaalvarado1838
    @rebeccaalvarado1838 4 months ago

    SO well spoken. Thank you for discussing this issue.

  • @sandollor
    @sandollor 1 year ago +9

    It's absolutely a good thing this all is being brought to light and continues to be, and I'm not just saying that because I didn't do well in my psych stats class. :P There is a large disconnect between research/lab and clinical work too.

  • @bellapayne
    @bellapayne 7 months ago

    As an undergrad in Sociology, it was easy to see how this could be possible. It soured my feelings about the field...

  • @petrairene
    @petrairene 1 year ago +8

    I have a biologist friend, and he told me that from his time at university, he found the studies in the psychology department shockingly amateurish in their methodology. He had no respect for the psychology department and its people at all, finding them very unscientific.

    • @johntang605
      @johntang605 1 year ago +4

      In my graduating class, 9 out of the 13 highest GPAs were psych majors.

  • @gamingandgunpla
    @gamingandgunpla 1 year ago

    As a computer science major, I want to personally thank you for emboldening my disdain for the psychological and social sciences.

  • @sendtosw
    @sendtosw 1 year ago +16

    I've always considered psychology to be a bogus "science." I find the idea that human personality can be "studied" with the same level of confidence as other sciences like chemistry or geology or even biology to be ridiculous. There are too many smoky glasses to barely see through, too many fuzzy, uncategorizable factors, changes and variables to be able to reach any kind of reliable finding. As far as I'm concerned, all the major theories are no more than speculation masquerading as science. They hover around ideas and concepts that might be closer to the real truth than to falsehood, but that's about the best one can say for them. They aren't science.

    I've had this feeling ever since "psychology" and what it means was first made known to me all the way back in elementary school, and in university courses I never encountered any reason to change my mind. This is all just another example of how our culture has been so impressed with the results of the scientific method that we have very, very wrongfully assumed it could be applied to all aspects of reality -- and if reality doesn't cooperate, well, reality can just go pound sand. Sorry. Some areas are just murky. They don't lend themselves to the level of certainty we demand, and if we try to force it anyway, we end up in the situation this video is describing, with a lot of people revealed to have been naive at best and dishonest and disrespectful of truth at worst.

    Know what? After listening to this whole video, here's what I think the answer to all this is: humility.

    • @elvingearmasterirma7241
      @elvingearmasterirma7241 1 year ago +4

      You can see things like depression, OCD, anxiety and so forth in MRI scans.
      We can physically map out how these issues affect the brain. Psychology isn't just throwing random stuff at a wall; it's using modern medical technology to try and make sense of the brain and its chemistry.
      It's like slagging off the whole field of sleep studies just because you personally think it's too wishy-washy, without understanding the way they study the electrochemical signals produced in the brain.

  • @SilviaHartmann
    @SilviaHartmann 1 year ago +1

    The problem with psychology as a science is that it is sitting on a foundation of quicksand, rather than some well understood and well proven "laws of the universe." The first and oldest "studies" are the very worst from their most basic pre-suppositions to their methodology through to their "results." Trying to build on these old studies and/or taking them as any kind of foundation has directly led to psychology being completely useless in providing any kind of meaningful advances for giving people a better life today than they could have expected in the dark ages.

  • @jannetteberends8730
    @jannetteberends8730 2 years ago +7

    Problem with using statistics is that the underlying mathematical model must be good.

    • @jannetteberends8730
      @jannetteberends8730 2 years ago

      @@ALIEN_SENTIENT the testing is done with the t-distribution, and to use that distribution the data must have certain properties.
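
To make that point concrete, here is a small sketch of my own (not from the thread): the data below are perfectly normal, but the analyst uses the wrong reference distribution, comparing a tiny-sample t statistic against the standard-normal cutoff of 1.96 instead of the t(4) critical value of about 2.78. The nominally 5% test then rejects far too often.

```python
import random
import statistics

def pooled_t(a, b):
    # Pooled two-sample t statistic (equal group sizes)
    n = len(a)
    sp2 = (statistics.variance(a) + statistics.variance(b)) / 2  # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * 2 / n) ** 0.5

random.seed(7)
trials, n = 20000, 3  # n = 3 per group -> only 4 degrees of freedom
false_pos = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Wrong model: this statistic follows t(4), not N(0, 1), so 1.96 is too lenient
    if abs(pooled_t(a, b)) > 1.96:
        false_pos += 1
print(f"Nominal 5% test actually rejects {false_pos / trials:.1%} of the time")
```

With the correct t(4) cutoff of 2.776 the rejection rate returns to about 5%; the statistic didn't change, only the assumed distribution did.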

  • @JayDeeWifeBoy
    @JayDeeWifeBoy 8 months ago +1

    Current psych undergrad (and postmodern degenerate in my spare time) here. Undergraduate courses are really only beginning to address this. I'm taking a class specifically on techniques developed to address the replication crisis, yet in basically all my other classes it seems like the department is moving along business as usual.
    Another thing I've always wondered about the replication crisis: psychology is affected by where you are in history. It could be that some of the psychological effects they described were cultural effects that have since shifted.

  • @TheMegLolDon
    @TheMegLolDon 1 year ago +5

    I didn't even realise that making a hypothesis based on the findings is unethical until now. I was thinking of doing that for my small research project (more of a passion project) because I thought it was acceptable, but now I won't. Thank you for letting me know.

    • @ChaoticNeutralMatt
      @ChaoticNeutralMatt 1 year ago +2

      I read this initially as you using data from elsewhere for your initial hypothesis and then gathering your own data, but I suppose you were talking about the way HARKing is used here.
      I'll add that it's more of a problem when it's not disclosed and researchers pretend that they came up with the hypothesis first.

    • @TheMegLolDon
      @TheMegLolDon 1 year ago

      @@ChaoticNeutralMatt I see, so that's why it's wrong. Thanks for clarifying!

  • @marcmckenzie5110
    @marcmckenzie5110 1 year ago +11

    40 years ago I wrote an unpublished undergrad psych paper covering most of the issues you so eloquently cover here. My research methods professor pulled me into his office and soundly chewed me out. I forced him to accept the paper, and of course he gave me a poor grade, which I took before the academic dean. After the academic committee reviewed both sides, the paper was returned the second time with a perfect score. I'm sure people smarter than I, at the post-doc level, raised similar issues back then, so the harder question is how we as a society demand that academia get their act together. Pretty tough in a society which seems increasingly to only value profit and wealth. Those are ultimately the pressures behind a great deal of this. We can't even get our congress to cooperate on agreeing on bathroom protocol (tongue thoroughly planted in cheek). Sigh.

  • @TheSapphireSprit
    @TheSapphireSprit 1 year ago +14

    The people that exposed this are heroes, and so are you! Extremely admirable to be frank about a period of your life that you're not necessarily proud of! We all have those times, and more of us need to see a rockstar who can admit to mistakes!

  • @austinlinford5698
    @austinlinford5698 2 months ago

    He shows who he is when he says that "people are looking for anything to disbelieve the research", when he could also have said "people are looking for anything to prove the research", which is exactly why no one trusts research anymore

  • @edwardharvey7687
    @edwardharvey7687 1 year ago +4

    Seeing as a lot of attitudes and public policy are based on findings from psychological studies, this is rather disturbing.

  • @dheerajvanarasa910
    @dheerajvanarasa910 1 year ago +1

    " Let's move on, it wasn't significant "
    Good stuff.

  • @deja-view1017
    @deja-view1017 1 year ago +5

    There's a very good reason for the saying (quote?) 'Lies, damn lies and statistics'. Certainly the data should be accessible for ALL published studies so that the results and statistical analysis can also be queried. I found it quite surprising when doing my research methods courses that not all peer reviewers review the datasets. We should also know who the reviewers are (some areas of science are quite cliquey) and their reports (and, perhaps, the reply) should be published. Psychology does get a bad rap for dodgy science (perhaps because of the type of people attracted to it?) but there are numerous cases in other sciences.
    Great video though!

  • @KyleHarmieson
    @KyleHarmieson 8 months ago +1

    Journals should publish at LEAST one replication for every novel study

  • @leothecrafter4808
    @leothecrafter4808 2 years ago +28

    Am I the only one who expected there to be hidden text or maybe some context or something at 12:32, only to read "Media offline"? Anyways, great video and an important topic. Psychology is plagued the worst, but many things, from publication bias and p-hacking to straight-up making data up, happen in every field and can affect actual human beings if conclusions for clinical diagnosis and treatment are made based on them. Even the pressure on academics and the publish-or-perish mentality has more or less directly led to the loss of some great scientists; the statistics on suicide rates back it up.

    • @neurotransmissions
      @neurotransmissions  2 years ago +10

      Argh!!! I thought I got rid of all of that. I tell ya, Adobe was really getting on my nerves with all these rendering issues. Anyway, I agree with you 100%!

  • @dahlia695
    @dahlia695 8 months ago +2

    I did a study in my research methods class and the p value was really tiny, something like 0.00002 or so (can't remember exactly but there were many zeroes). This invited much scrutiny from several professors in the psych department, but after I gave them my data and they looked everything over, they agreed I had done it properly. I ended up being invited to present my paper at a large APA conference, but I declined because I was only taking psych for something to do and wasn't serious about pursuing it much further.

  • @teogonzalezcalzada5812
    @teogonzalezcalzada5812 2 years ago +6

    Came here from Nebula just to say that this video reminds me a lot of the controversy around Dream and Minecraft speedruns.
    The themes of p-hacking and statistics were deeply analyzed because statistics were used to prove a Minecraft player was cheating (he later admitted to doing it), and the maths were tested by a lot of experts.
    "How lucky is too lucky?" from Stand-up Maths is a good analysis and summary of how this works, with a scientific approach to a way less serious topic.
    Btw, thanks for the great video! Will return to binge-watch the rest of the channel on Nebula.

  • @robertklem2638
    @robertklem2638 1 year ago +2

    Very informative, thank you for your work. Cheers!

  • @Trident_Gaming03
    @Trident_Gaming03 2 years ago +10

    So I'm guessing the guy who "proved" ESP exists was p-hacking hard

    • @neurotransmissions
      @neurotransmissions  2 years ago +5

      Yes! I forgot to mention this later in the video! But essentially he was engaging in HARKing and was coming up with his hypothesis after he knew the results of his data and had some other questionable research practices.

    • @Trident_Gaming03
      @Trident_Gaming03 2 years ago +3

      @@neurotransmissions It's as if he was running a blind study, but his participants were a publishing board, very meta, amazing

    • @neurotransmissions
      @neurotransmissions  2 years ago +2

      @@Trident_Gaming03 Totally. Another detail that didn’t make it into this video was that Daryl Bem was actually an editor for the very journal he submitted his paper to. Weird stuff.

  • @Townsavage
    @Townsavage 9 months ago

    THE ORIGINAL sin was not seeking godlike knowledge, but misrepresenting the findings.
    Hear, hear! My friend, you are a true scientist of the most noble intentions. Keep at it!

  • @megan5074
    @megan5074 2 years ago +3

    You summed up my advanced research methods psychology class really well. Well done!

  • @hanskrakaur9830
    @hanskrakaur9830 10 months ago +1

    This video is a work of art (and a valuable piece of science communication). Congrats!

  • @NCRonrad
    @NCRonrad 2 years ago +18

    My favorite story in this vein comes from the measurement of the electron charge - Feynman discusses this beautifully

  • @hermanhale9258
    @hermanhale9258 8 months ago +1

    When I was in college in the seventies, an anthropology teacher told the class it was the most common thing in the world for a grad student to check the records of a famous scientist and find his famous findings had been faked.