How to catch a bad scientist

  • Published: Sep 30, 2024

Comments • 358

  • @CELLPERSPECTIVE
    @CELLPERSPECTIVE 3 месяца назад +249

    It's crazy how my view has changed over the course of my education; in my undergrad, I automatically assumed that anyone "smart enough" to publish a paper must be a pillar of the field, only to enter grad school and find out that scumbags exist in every field :)

    • @Im0nJupiter
      @Im0nJupiter 3 месяца назад +14

      The two people I know who got published in undergrad did jack shit to get on the paper. It was "thanks for cleaning the dishes and attending meetings, here's a pub".

    • @Skank_and_Gutterboy
      @Skank_and_Gutterboy 3 месяца назад +7

      @@Im0nJupiter
      And being a good-looking chick helps IMMENSELY.

    • @jshowao
      @jshowao 2 месяца назад

      It's pretty much how the world works in general, not just academia.
      The US rewards a scam economy.
      The problem is the incentive structure. Reproducing studies? No, find the next "big thing". Publish in a lower-impact journal? No, it must be in Nature. Access to raw data? No, too much work.

    • @mirzaahmed6589
      @mirzaahmed6589 2 месяца назад +1

      I realized that while I was an undergrad and volunteering in a lab. The sheer amount of work needed to publish anything of value would be way beyond the physical limit of most people. Just doing literature review alone is more than a full time job. A very well organized lab, with people specializing in specific things (lit review, benchwork, data analysis, etc) would maybe churn out 2 or 3 good papers a year. Most seemed to be doing way more, but when you read the papers, you realize it's essentially the same work and data that has been republished over and over in different journals. Never mind that all the citations are from prior work done in the same lab, or another closely related lab in the same university/institute.

    • @mirzaahmed6589
      @mirzaahmed6589 2 месяца назад +1

      @@Skank_and_Gutterboy that's true in every field, not just science.

  • @l.w.paradis2108
    @l.w.paradis2108 3 месяца назад

    I would think one significant paper per year as lead author, or a few closely related ones, is the maximum.

  • @sdwysc
    @sdwysc 3 месяца назад +105

    Frauds can also be detected if the following points are all true to some extent:
    a) fails to keep MS/PhD students in their lab,
    b) has lots of collaborations, gets ideas from collaborators, makes students do everything with zero help, and secures the corresponding-author position while doing essentially nothing,
    c) hires too many postdocs,
    d) reaches out to prominent professors and does their side projects with the help of highly paid postdocs, thereby landing a co-PI position on the next grant,
    e) is co-PI on many grants but PI on none.

    • @crimfan
      @crimfan 3 месяца назад +7

      This is a good list, though your point (e) can be wrong depending on the person's specialization. For example, someone who's a technical specialist, e.g., a statistician or specialist in certain data gathering methods, is quite frequently a co-PI but quite rarely a PI.

    • @sdwysc
      @sdwysc 3 месяца назад +1

      @@crimfan I agree with you. The points I mentioned are all linked together. So, the points are all "AND"s

    • @pranaypallavtripathi2460
      @pranaypallavtripathi2460 3 месяца назад +1

      This should be the pinned comment.

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад +6

      Even without fraud that's definitely a shitty workplace

    • @crimfan
      @crimfan 3 месяца назад +4

      @@samsonsoturian6013 Sure is. I was a co-PI with someone who was a PI like that. Terrible. There was massive turnover that definitely affected the work. Bleh. What a shitshow.

  • @pheodec
    @pheodec 3 месяца назад +103

    Based on my observations: A lot of people don’t care about scientific fraud

    • @User-y9t7u
      @User-y9t7u 3 месяца назад

      A lot of people don't care about fraud

    • @Novastar.SaberCombat
      @Novastar.SaberCombat 3 месяца назад +13

      They won't even check. Essentially, if a news blurb comes out: "Dark chocolate now proven to make you wealthier", you can *GUARANTEE* that dark choco sales will increase. Guaranteed.

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад

      A lot of people WANT fraud to go unnoticed. It's like the Olympics, we want to entertain power fantasies about what is possible, also fraud is the only way to change demographic trends in academia

    • @crimfan
      @crimfan 3 месяца назад +10

      They're happy to have their pre-existing views supported, yep.

    • @sssspider
      @sssspider 3 месяца назад

      They have literally been trained and coerced, with the threat of social ostracism, to listen to The Experts and trust The Science, no matter what. Acknowledging that scientists may be not only wrong, but *willfully* corrupt and serving a non-scientific agenda - is basically sacrilege of The Science, the modern church that has little to do with the scientific method.

  • @robmorgan1214
    @robmorgan1214 2 месяца назад +43

    5 is a major eyebrow raiser... and that guy had better be a workaholic with 15 rock-star grad students, ZERO personal/family/social life, and at least 4 PhDs graduating per year. Even then, they can only keep up that pace for a few years at a time.

    • @hypothalapotamus5293
      @hypothalapotamus5293 2 месяца назад

      It depends on how people operate.
      I'll use the example of group A: a lab in a saturated field, fully independent, where the grad students have nearly identical skills but are working on competing projects. The publication rate isn't going to scale very well for the grad students or the PI. In this case, a sustained rate of 5 publications a year is impressive.
      Consider group B: the PI has developed a very specific measurement technique that almost nobody else can do and has a reputation for being easy to work with. Already well-vetted experiments will pop up from other groups (who do most of the work and suffer most of the costs from the failures), and publications will scale well for everyone in the group if the PI is fair. I think you should be able to get 10 publications a year at least.
      Group C: experimental particle physics or astronomy groups... Everyone blobs together with massive author counts and inflates publication/citation rates so much that nobody looks at H-index anymore.

  • @glennchartrand5411
    @glennchartrand5411 2 месяца назад +13

    The main thing that determines if someone gets tenured at a college isn't their ability to teach.
    It's how many times they've been published.
    You can be the best teacher in your field and get passed over in favor of an incompetent teacher with no communication skills whatsoever if his bibliography is bigger than yours.
    And once you become tenured, it doesn't end, because your bibliography is still the most important metric for advancement and funding.
    As long as "Publish or Perish" is the norm at colleges, you're going to have rampant academic fraud.

    • @DrTWG
      @DrTWG 2 месяца назад +2

      Yep - it's baked in. Being a fraudster is practically a requirement.

  • @mjdally82
    @mjdally82 3 месяца назад +24

    What is the point of academic journals if they don't already do all of this??!! All this time, effort and resources that could be used to do ACTUAL RESEARCH have to be wasted on this - not to mention all the funding they take from honest academics!! For the autism article ALONE, The Lancet and everyone involved should go "straight to jail!" 😡

    • @sirgavalot
      @sirgavalot 2 месяца назад +6

      Money dude, to make money

    • @Parrot5884
      @Parrot5884 2 месяца назад +3

      Academic journals are for-profit publishers. It's the crux of the entire issue, really

  • @titaniumteddybear
    @titaniumteddybear 3 месяца назад +26

    The survey you did could have benefited from lower response options. 70% of respondents choosing a single option is absurdly high. It is possible that many of your viewers consider more than 5 papers a year to be too many, but there wasn't an option for that, so they chose the nearest one: 10+. However, this does not contradict your conclusions in any way. I just felt it important to comment on it, given that the video is about scientific accuracy. Keep fighting the good fight.

    • @abicaksiz
      @abicaksiz 2 месяца назад

      I fully agree with you sir. Anyone publishing more than one article a year raises my eyebrows.

    • @mirzaahmed6589
      @mirzaahmed6589 2 месяца назад +3

      I agree. Anything more than 2 or 3 is already suspicious. Over five is definitely a strong indication of something fishy.

  • @Batmans_Pet_Goldfish
    @Batmans_Pet_Goldfish 3 месяца назад +83

    "Right to jail."

  • @Astroponicist
    @Astroponicist 3 месяца назад +59

    @2:22 "the manpower or the infrastructure" If you can popularize lawsuits against scientific fraud, it will become a self-sustaining mechanism.

    • @Novastar.SaberCombat
      @Novastar.SaberCombat 3 месяца назад

      That will NEVER happen. The wealthy cannot possibly allow it; it would destroy them at the very core.

    • @crimfan
      @crimfan 3 месяца назад +15

      Be careful what you wish for, because lawsuits can be and are used by bullies who want to cause problems. Most scientists don't have deep enough pockets to ward that off.

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад +10

      Or just criminalize it as fraud. Technically no new laws are needed, as anyone can reasonably expect financial gain from getting published

    • @_human_1946
      @_human_1946 3 месяца назад +7

      ​@@crimfanYeah, it's easy to imagine corporations abusing it to suppress studies they don't like

    • @gregorycolonescu6059
      @gregorycolonescu6059 3 месяца назад +3

      @@_human_1946 then include something similar to SLAPP but with more teeth.

  • @charlesmanning3454
    @charlesmanning3454 3 месяца назад +10

    I think the number one way to catch bad science ought to be replication.
    Requiring independent replication of results would be a much more reliable way of finding bad science and bad scientists than counting publications, auditing for conflicts of interest, or investigating images. Theories built on bad data will not stand up under replication, regardless of whether the data were fraudulently made up, the result of unintentional methodological problems, or bad luck.

    • @kagitsune
      @kagitsune 2 месяца назад

      Unfortunately replicating results doesn't get big fancy grant money. 🙃 So now we have 20 years of dodgy claims with no incentive to check them, except in the big fields like cancer research

  • @nnonotnow
    @nnonotnow 3 месяца назад +24

    We are seeing how people who lack integrity and honesty can easily game the system. And that's not just in publishing scientific papers; it's epidemic throughout our society. Great video!

  • @tugger
    @tugger 3 месяца назад +27

    the violin doesn't work with speech

  • @diffpizza
    @diffpizza 3 месяца назад +73

    I think it's time to treat scientific papers like we treat open source code: Something that EVERYONE can audit and analyze, without any paywalls or any other impediment. Also, I think that each university should host its own server for provisioning data about the publications and have them authenticated through a decentralized blockchain method.

    • @arabusov
      @arabusov 3 месяца назад +1

      For my data analysis, I've preselected 1 TB of real and simulated data. There's no way my university is going to keep this data longer than I'm working on it; after that it will be removed, because the cost of storing such big data for decades without any further use is just ridiculous.

    • @BenjaminGatti
      @BenjaminGatti 3 месяца назад +5

      ​@@arabusovseriously? 1 terabyte is like what $40?

    • @wintermute5974
      @wintermute5974 3 месяца назад +2

      Universities do increasingly have long-term storage for research data. You can always reach out to the researchers and ask if they'd share their data with you or if it's being hosted somewhere. Of course, lots of data can't actually be shared even when it is stored, for reasons related to privacy laws, ethics policies or commercial agreements.

    • @arabusov
      @arabusov 3 месяца назад

      @@BenjaminGatti it has to be done by the universities themselves; they need to dedicate people and hardware just to this task. It's expensive

    • @TheThreatenedSwan
      @TheThreatenedSwan 3 месяца назад +1

      Important genome data like from the UK biobank is kept under lock and key so people don't get any unapproved ideas about things like race

  • @enemyofYTemployees
    @enemyofYTemployees 3 месяца назад +32

    After hearing about Harvard’s disgraced ex-dean, I am not surprised scammers fill the top-rung of the healthcare industrial complex.

    • @lauraon
      @lauraon 3 месяца назад +1

      👍Stanford too

    • @RillianGrant
      @RillianGrant 2 месяца назад +1

      Healthcare?

  • @MFMegaZeroX7
    @MFMegaZeroX7 3 месяца назад +9

    The number really depends a lot on authorship level, field, and subfield. For example, Erik Demaine, who is in my field and overlaps with some subfields of mine, publishes around 20 papers a year and is well respected. Since it is CS theory (AKA math stuff), there is no data to be faked, and I haven't read any flawed proofs of his. But certainly, the more papers someone publishes, the more suspicious I am. For social sciences and physical sciences, I would be very suspicious of anyone even matching Erik's publication rate.

    • @agnesw7680
      @agnesw7680 3 месяца назад +5

      It also depends on what kind of authorship we're talking about. Being the first author on 10+ papers might be suspicious, but if you're the PI of a larger research group then you might be entitled to co-authorship on 20-30+ papers, or even more. The PI's actual contribution could be put into question, but academia is, in some ways, a bit of a pyramid scheme, i.e. the people at the top get most of the resources, such as publications.

  • @atmamaonline
    @atmamaonline 3 месяца назад +21

    The music is a tad loud at the start of the video, just fyi

  • @gmonkman
    @gmonkman 3 месяца назад +3

    Do you mean first author? I find it very surprising you didn't specify this. Last authors, for example, are typically lab/science area heads who will appear on every paper authored by their team.

  • @davidbingham4348
    @davidbingham4348 2 месяца назад +4

    In cases where you see people putting out tons of papers, there’s a simple answer. They are the head of the department and are putting their name on every paper that comes out of their department, even if they never even saw the paper.
    And the underlings who actually do the work don’t complain because a) they have no power and no choice and b) having their famous professor’s name on the paper increases their chances of actually getting published.
    This is extremely common in the medical field.

  • @Draconisrex1
    @Draconisrex1 3 месяца назад +172

    My wife is a scientist. I think more than four as primary author is suspicious.

    • @kheming01
      @kheming01 3 месяца назад +14

      what do you mean by primary author? if you only mean author, I don't think so... because sometimes you need collaboration across fields for a piece of research, for example biologists and physicians; then, if it is the work of students, you need to include the supervisors and PI. if you need a technology only available in another lab, you'll have to include whoever conducted the experiment in the other lab... I'd say a max of 8-10 authors is sufficient for most research

    • @kheming01
      @kheming01 3 месяца назад +59

      this type of mindset is what excluded me from 2 papers to which I had significantly contributed. They thought there were too many authors listed, so they excluded the undergrad students

    • @DaLiJeIOvoImeZauzeto
      @DaLiJeIOvoImeZauzeto 3 месяца назад +20

      @@kheming01 He means "equally contributing authors", those considered to have done the critical parts of the backbone of the paper.

    • @crimfan
      @crimfan 3 месяца назад +16

      It depends a lot on the area. In some areas like particle physics, you'll see A LOT of authors. In other areas, like math or philosophy, multiple authors is rare. That said, I do agree with your wife that author lists have gotten pretty out of hand in some areas... social psychology, I'm talking about you.

    • @DrJTPhysio
      @DrJTPhysio 3 месяца назад

      @Draconisrex1 Yeah, 4 is def a large amount as a PI, especially if it's experimental research and if it's human subjects.

  • @Tortellia
    @Tortellia 3 месяца назад +26

    7:45 That is the single most true statement I’ve encountered so far in academia. I’m a high schooler (or rather was, I graduated as of June 2024) and I had to write a couple of papers for the International Baccalaureate programme.
    I was writing a paper on breast cancer rates, and since I couldn't conduct a study on that myself, I needed to find a database - an unaltered one. Using an analyzed or altered database (i.e. the data that would be present in a paper with all the conclusions) would result in me failing, as it would be considered plagiarism. So, I contacted many authors of many papers asking for their raw datasets that were said to be available upon request. I was either told to ask someone else, ignored, or straight up denied.
    In the end, I didn’t get any of the datasets I wanted and spent about 3 months until I found in a very roundabout way a database that could offer me the data I sort of needed… not exactly, but it worked.

    • @KitagumaIgen
      @KitagumaIgen 2 месяца назад +2

      Cc your reply/reminder about their refusals to provide the data to the journal editors.

  • @00Jman2000
    @00Jman2000 3 месяца назад +11

    Great summary. Would be interesting to see how journals react to recent developments and fraud cases.

  • @jeffersondaviszombie2734
    @jeffersondaviszombie2734 3 месяца назад +23

    The ones with this number of papers published per year could be working in large physics collaborations, like those at CERN: ATLAS, CMS, and even the progressively smaller ones such as LHCb, ALICE and so forth. For example, last year the ATLAS collaboration published 111 papers, each one having an author list of about 3000 people. So far in 2024, there have been 69 ATLAS papers published. Of course, these large physics experiments are a bit funny that way, because they couldn't even happen if they weren't run this way. But I don't doubt that there are a lot of fraudsters claiming to publish hundreds of papers per year in small teams or on their own. That's a completely different story.

    • @crimfan
      @crimfan 3 месяца назад +1

      Yes indeed that is quite true.

    • @Anton-tf9iw
      @Anton-tf9iw 3 месяца назад +1

      And why should that bad habit be acceptable?

    • @jeffersondaviszombie2734
      @jeffersondaviszombie2734 3 месяца назад +5

      @@Anton-tf9iw did you miss the part where I said that this is the only way to run this type of physics experiment? In high-energy physics, experiments aren't tabletop stuff. They are giant apparatuses, some of the most complex devices ever created. You think people would contribute to building them without any recognition? Out of those 3000 ATLAS authors, a few dozen are actual physics theoreticians, a few hundred do physics data analysis (which could be any other type of data analysis in most cases), and the vast majority are doing engineering research, development, manufacturing, logistics etc. And all of this is done by people working in research institutes, where publications are essential for career advancement. So nobody would contribute if their contributions were not recognized. And without all those contributions, nothing would be accomplished. So criticizing the way things are done only shows you don't understand how things are done.

    • @01Aigul
      @01Aigul 3 месяца назад +1

      @@Anton-tf9iw Whether you think it's acceptable or not, it doesn't mean the papers are fake. Those are two different issues.

    • @satunnainenkatselija4478
      @satunnainenkatselija4478 3 месяца назад +1

      @@jeffersondaviszombie2734 Engineers, developers, machinists, and warehouse operators do not need career advancements in physics. They may have other ways to advance their careers or they could also be happy to continue to work in their positions as professionals. I think it is ludicrous to include them as authors in physics publications even if they all work in an institution.

  • @sandyacombs
    @sandyacombs 3 месяца назад +43

    Pete, you should do a statistical analysis on all types of scientific fraud to establish a general understanding on the extent and magnitude of this problem.

    • @DrJTPhysio
      @DrJTPhysio 3 месяца назад +7

      That is a massive undertaking for one person

    • @Aro9313
      @Aro9313 3 месяца назад +6

      ​@@DrJTPhysio shoot I'll help if he shows me what to do lol

    • @crimfan
      @crimfan 3 месяца назад +5

      That would be something that would require a pretty massive research grant to do and would be field-specific as he said, but I think the basic signals are there: Unbelievable sustained publication rate is, in my view, the sine qua non. Unfortunately, universities and granting agencies really do like their rock stars, not us more ordinary working faculty who are the ones who make the university run and publish at a more reasonable rate. Some years we have a lot of articles (my best year had 7 articles) and other years are more down years (uh... 0, or 1).

    • @ColdHawk
      @ColdHawk 3 месяца назад +2

      @@DrJTPhysio - start with a pilot study. Sample of N papers selected at random from all published original studies within X field (or from c search term), with primary authors university faculty, between Jan. Xxxx and dec. Xxxx
      Or
      N number of primary research papers selected from total pubs new pharmaceutical agents in 1989 versus 2024 that demonstrate conflict of interest as defined by yada yada

    • @DrJTPhysio
      @DrJTPhysio 3 месяца назад +2

      @@ColdHawk there's a bit more to it, but I get where you're going. I'm working on a study that implements a similar research design. Doing this alone would be pure pain.

  • @abrahamstuvwyz6693
    @abrahamstuvwyz6693 2 месяца назад +1

    "And occasionally these things do happen in SCIENCE, right by NATURE they have to."
    Was this pun intended when talking about rare and surprising results making it to high-impact journals?

  • @theondono
    @theondono 3 месяца назад +2

    The question depends not only on field but on what “publication” is.
    There’s a “tradition” of putting as authors people who have contributed to the paper, so if you built some machine that say 30 grad students are using, and they each publish 2 papers, it wouldn’t be rare to have 60 publications without you having to do pretty much anything.
    I hate how academics trade and game publications, but I wouldn’t jump to conclusions based solely on the number of publications.

  • @davidmitchell3881
    @davidmitchell3881 3 месяца назад +5

    Data availability. For one paper I wanted to look at the data myself. It was apparently a PDF. However, the data was protected by a password. I was able to copy the results by hand, but this was a nasty trick.

    • @DavidJBurbridge
      @DavidJBurbridge 3 месяца назад +1

      It's funny when they say "Upon reasonable request," as if there is such a thing as a request to see data (personally identifying information notwithstanding) that isn't reasonable.

  • @willemmagney1327
    @willemmagney1327 2 месяца назад +2

    I've read through the code of one published paper that was gibberish. There were functions called that weren't defined in the script and did not exist in any of the packages. I'm familiar with the language (R) and the packages used. I even looked through older versions of the same packages and couldn't find the functions they had called. If they've provided the code, it's often worth looking over.
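
    For illustration, a minimal Python analogue of that kind of audit (the commenter's case was R, and the script path and module names below are hypothetical): pull every function name called in a script and flag the ones that are neither defined in the script, built in, nor exported by the packages it claims to use.

```python
import ast
import builtins
import importlib

def undefined_calls(script_path, module_names):
    """Return function names called in a Python script that are neither
    defined in the script, built-in, nor exported by the listed modules.
    Only bare-name calls like foo(...) are checked, not module.attr(...)."""
    tree = ast.parse(open(script_path).read())

    defined = {n.name for n in ast.walk(tree)
               if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))}
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

    available = defined | set(dir(builtins))
    for name in module_names:
        available |= set(dir(importlib.import_module(name)))

    return sorted(called - available)

# Hypothetical usage: audit the paper's analysis script against the
# packages its methods section claims to use.
# print(undefined_calls("analysis.py", ["numpy", "pandas"]))
```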

  • @erkishhorde
    @erkishhorde 2 месяца назад +3

    I went to a college that was known for hands-on work, and in my last couple of years they were making a shift toward wanting professors to publish, which had never been an issue before. I had my thesis advisor pushing me to ignore problematic data just because he wanted to publish, and it really rubbed me the wrong way.

  • @manyaafonso
    @manyaafonso 3 месяца назад +5

    Not a red flag but an orange one: someone getting too many successful grant applications in fields other than their own, for example, AI applied to some topic in the humanities.

    • @Novastar.SaberCombat
      @Novastar.SaberCombat 3 месяца назад

      The wealthy always win. Always. 💪😎✌️ No exceptions. Facts don't matter. Science is for geeks. MONEY rules all.

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад +1

      That's just fraud/embezzlement

  • @cfromnowhere
    @cfromnowhere 3 месяца назад +5

    Governmental conflicts of interest definitely deserve more attention. When most people talk about conflicts of interest, they often only talk about corporate conflicts of interest. Governmental grants are often seen as a public expense and represent the interest of taxpayers, in other words, you and me. But in reality, the ruling class often holds the ultimate decision of who gets the fund, which can go against the public interest in a lot of ways.
    While it is not strictly academic, are you interested in fraud in healthcare reviews from governmental institutions (e.g. the notorious Cass Review and others)? Speaking of corporate conflicts of interest, I think another neglected area is weight loss research, which almost always makes their results more appealing than they actually are.

  • @ColdHawk
    @ColdHawk 3 месяца назад +5

    Graphic distortions, e.g. a truncated axis on a graph with an uninterrupted plot. Not fraudulent so much as deceptive.
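
    As a quick illustration with made-up numbers: the same two values plotted with a full y-axis and with a truncated one, where the second view makes a tiny difference look dramatic.

```python
import matplotlib.pyplot as plt

groups = ["Control", "Treatment"]
values = [98.0, 99.5]  # made-up numbers; the absolute difference is tiny

fig, (ax_full, ax_trunc) = plt.subplots(1, 2, figsize=(8, 3))

# Honest view: the y-axis starts at zero, so the bars look nearly identical.
ax_full.bar(groups, values)
ax_full.set_ylim(0, 110)
ax_full.set_title("Full axis")

# Deceptive view: same data, but the truncated axis exaggerates the gap.
ax_trunc.bar(groups, values)
ax_trunc.set_ylim(97, 100)
ax_trunc.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```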

  • @NamatronNi
    @NamatronNi 3 месяца назад +2

    The first thing that should be done is to pay reviewers for their time and have a grading system for reviewers. Reviewers are incentivized to spend as little time as possible reviewing a paper. Reviewing time should be mandated at an institutional level and paid for out of the publication fees charged by the for-profit publishing industry. Yes, this may lead to higher publishing costs and slower publishing, but also fewer fraud cases and a lower retraction rate.

  • @kaoticgoodkate
    @kaoticgoodkate 3 месяца назад +3

    My understanding is it could be very challenging to convince journals that the original dataset must be published, because it could be an ethical liability if participants have consented to scientists looking at their data but not to it being published unabridged, and it'd make anonymising data more complicated in terms of how much information makes a participant identifiable versus removing necessary context.

  • @halneufmille
    @halneufmille 3 месяца назад +1

    7:40 I think this is unfair. Sometimes the dataset is just too large to host online. Sometimes the data is confidential. As for the code, the ideal is indeed to release it. From experience though, well-documented working code that reproduces all the results, with graphs, is sometimes almost as much work as writing the manuscript itself. But most of the time when I wrote to authors for their code, they were kind enough to share it with me.

  • @danielschein6845
    @danielschein6845 3 месяца назад +6

    Regarding the scientists publishing a ridiculously high number of papers. Is that really a sign of fraud or does it just mean that the scientist is running a huge lab and all the junior researchers are putting his name on their work?

    • @l.w.paradis2108
      @l.w.paradis2108 3 месяца назад +2

      That would be a possible sign of overreaching. What could be his contribution? Some of these "authors" never read the paper.

    • @AndrewJacksonSE
      @AndrewJacksonSE 3 месяца назад +1

      Even with a big group, 200 papers is way out of reasonable. There is no way that person has seriously contributed to those papers. I’d be surprised if they had even read them all.

    • @danielschein6845
      @danielschein6845 3 месяца назад +1

      @@AndrewJacksonSE True - But signing your name on to a paper you haven’t even read is a very different issue from research fraud.

    • @AndrewJacksonSE
      @AndrewJacksonSE 3 месяца назад +1

      @@danielschein6845 indeed, but seems to be conflated in a lot of comments on this type of video. Also, it inflates the risk of errors or fraud by the main author or contributor not getting spotted.

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад

      Except that is also fraud since it isn't their work

  • @luisdmarinborgos9497
    @luisdmarinborgos9497 2 месяца назад +4

    Sciencing like a TRUE scientist. This is godtier work

  • @spshkyros
    @spshkyros 2 месяца назад +1

    I know plenty of legit researchers who manage more than 10 papers a year. Now, 10 papers as a first author? That's insane. Those claiming 10 papers a year is too many simply haven't seen a wide enough swath of science, if I'm being charitable about it. Personally I have never managed more than 3 or 4 in a year, but each of those was a MASSIVE amount of my own work being published. There are lots of scientists who become "consultants" for others on their work, and come in to do a substantial but not all-consuming piece of work on a paper. This is essentially what a corresponding author is doing, in fact. So for a lab with 5 or 10 grad students in it, the PI SHOULD easily hit 10 papers a year.
    So yeah, calling 10 ridiculous, and saying "they should be put under investigation" for that, is insane. Stop it.

  • @mikeschneider5077
    @mikeschneider5077 2 месяца назад +1

    #6: genealogical connections to peerage. (These are invariably the wastrel children of hidden elites gifted sinecures in prestigious fields, with ghost-drones creating "their" work; this is easy to get away with in fields where big cheeses have lots of assistants under them, and where "findings" generally do not have any immediate practical benefits such that there's an incentive to attempt duplication during the course of product development. E.g., much if not most of theoretical physics has been treasury-soaking fraudulent hokum for nearly a century.) #7: any "public scientist" (which should be an oxymoron by now) should be heavily scrutinized.

  • @dotnet97
    @dotnet97 2 месяца назад +1

    10 as primary author, yeah, would be suspicious, but I think my advisor averages around 10 overall per year, with only 1 or 2 of them as primary author, not counting invited articles. Nothing suspicious about it: we do a very niche kind of physics simulation, and he ends up being cited for fine-tuning and advising other groups on planning their simulations.
    50+ is very blatantly sketchy, though, even as a secondary author.

  • @dvirkes1
    @dvirkes1 2 месяца назад +1

    I used to love publishing papers. But...
    There is science, and there is science. One is pursuing truth, and the other is pursuing paper quotas. While you may pat yourself on the back for catching the bad scientists, you'll find that those are found in the latter branch, but in the process you're also punishing the former, truth-seeking branch, raising suspicion of everything. Now, it may come as a surprise to you, but only institutionalised scientists are under constant pressure to publish or perish. So I, as an independent scientist with a PhD in electrical engineering in my pocket and no pressure at all to play a rigged game - won't play it. Without any institutional backup, and with the ever-rising participation fees, it is also much easier on my pocket. Now, whose loss would that be?
    Also, the small matter of contribution. A single paper is good with a single contribution, however insignificant. Real researchers used to pour their whole research into a paper, with many significant contributions in a single paper. But if you're pursuing quotas, you'll make sure that 20 or so people milk a single contribution to death and back. Such skimmed papers are now the norm, and real research is now suspicious as out of place and too bombastic. Go figure.

  • @Basil-the-Frog
    @Basil-the-Frog 2 месяца назад +1

    Many of the papers in computer science are now providing the data and the programs used to create the data so the results can be reproduced.
    However, it is problematic to recreate some of these things because of the amount of time and knowledge it takes to set up the actual "experiment". In the case of biology, providing the data usually means providing the readings taken in the lab. In the case of computer science, providing the program and the results means you can redo the lab experiments. Setting up these experiments and doing them takes a lot of work; however, this is the classic "reproduction" that we expect from people who do pure science. That is, if you say you created a cure for a disease, we usually don't trust that you have a perfect cure until other people have seen it.
    Unfortunately, with many drugs, there is no "100% cure", so people can fudge the data and then say, well, my results were different because of the statistical sample. I had a 20% remission rate and you did not see it because of this other factor.
    When there is no clear answer, i.e., 100% cure, it is much easier to misrepresent the results and get away with it.

  • @tatianaalinatrifan
    @tatianaalinatrifan 3 месяца назад +2

    Single authorship on all or most papers someone publishes is another red flag.
    Another eye should be kept on journal editors, as they can help their buddies publish their research before someone else simply by rejecting a paper that tested that hypothesis first. Also, journal editors gatekeeping new directions in the field as a favour to their buddies is a form of scientific misconduct.
    As someone else pointed out, checking how many harassment complaints have been filed against a researcher can be an indication that they have also engaged in scientific misconduct.

  • @incorrectbeans
    @incorrectbeans 3 месяца назад +5

    Those are some great points to look out for, and it's quite nice how easy it is to check for these red flags.

  • @crimfan
    @crimfan 3 месяца назад +4

    Very good discussion.
    I think point 5 can be a problem in some areas of research, where privacy requirements pull against data sharing. For example, getting permission to release data from actual patients or minors can be quite onerous.
    That said, bad faith researchers can and do make use of these requirements as a shield. One of the worst examples I've encountered in my professional existence---won't name names but I'm talking B grade horrid, not A grade like the ones that have been getting outed to substantial publicity, such as that real peach Francesca Gino---loved to hide behind the IRB. In my time working with that team, I didn't see any data made up, but I did see quite a lot of really shady data practices.

    • @User-y9t7u
      @User-y9t7u 3 месяца назад

      You should name them or at least the practices

    • @crimfan
      @crimfan 3 месяца назад +1

      @@User-y9t7u I'm not naming names on YouTube, but I filed complaints with the relevant authorities, including the grant agency.
      As to the practices, basically the PI would state that data would not be released due to FERPA (educational data privacy law) but then was perfectly happy to do things that pretty clearly violated FERPA, such as give out datasets to people on the team who had no business having things like participant names. Simply put, she didn't want to have anyone else seeing the data. I wasn't convinced she understood the difference between research and data analysis done for course improvement, quite honestly. She's one of those people who's charismatic and clever, not actually smart.

    • @User-y9t7u
      @User-y9t7u 3 месяца назад

      @@crimfan send the deets to Judo my man

  • @theresegalenkatttant
    @theresegalenkatttant 3 месяца назад +1

    Can you please make a video on how to spot AI-generated text in papers? My professors are saying "it's so obvious from the way it is formatted", but I can't really see it. Can you please share your thoughts on how to spot it?

  • @npc239
    @npc239 3 месяца назад +2

    Great video as always, thanks!
    I wonder, though, if you could also look into the darker fallacies of science? The ones that happen more on an unconscious level. The obvious examples are (and I think you already touched upon some of these in past videos):
    - Statistical correction for multiple testing. In biology, p < 0.05 is widely accepted as "significant", after Bonferroni (etc.) correction for multiple testing. That still means that, on average, 1 in 20 tests of a true null hypothesis will come out "significant" (see the small simulation after this comment). Well, in certain fields, hundreds of papers are published every year - so we end up seeing all the false positives, while the negative results are much less likely to reach the reader's eye.
    - Statistics in general (I won't go into cohorts here, that's a topic all by itself). Statistics is based on a random distribution, whatever that distribution might be. Biologists especially, who are not experts in statistics, working with statisticians, who are not experts in biology, can easily produce way too optimistic p-values without being aware of it. In my experience, reviewers only rarely catch that, because they are either biologists or statisticians.
    - Reproducibility. This is particularly apparent in GWAS studies: only a tiny fraction can ever be reproduced by anyone else, and even then, never exactly. Yet the claims of GWAS ("you have a 14% higher chance of getting this or that disease if...") are often widely reported by the media, hence the incentive to do these studies. Even if the statistics used are sound, this comes down to case vs. control selection, sampling bias, etc., which invalidates the underlying assumptions. I think there is a need to also publish negative reproducibility results, as well as positive ones. Because the proof is ultimately in the pudding.
    All in all, yes, there are the obvious bad apples in the scientific community, the ones who willfully falsify results. I think these are in the minority though, and that most false results are published out of (tolerated) ignorance.
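
    A minimal simulation of that first point, with assumed parameters: run many t-tests where the null hypothesis is true by construction, and about 1 in 20 come out "significant" at alpha = 0.05 unless a Bonferroni-style correction is applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group, alpha = 1000, 30, 0.05

# Every comparison is between two samples from the same distribution,
# so any "significant" result is a false positive.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=n_per_group),
                    rng.normal(size=n_per_group)).pvalue
    for _ in range(n_tests)
])

print("uncorrected false-positive rate:", np.mean(p_values < alpha))            # ~0.05
print("Bonferroni-corrected rate:      ", np.mean(p_values < alpha / n_tests))  # ~0.0
```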

    • @kagitsune
      @kagitsune 2 месяца назад

      I think about this all the time. Great description of p-values, thank you for the sanity check. Accidentally rejecting the null hypothesis 1 in 20 times is pretty damn high!

  • @jaykay9122
    @jaykay9122 3 месяца назад +2

    I don't agree with you on the Data Availability Statement (suggestion below). Data in biochemistry etc. is typically very simple, so there it can be standard practice. However, the raw data is often impossible to handle if you're not an expert in the field - typically there are maybe 100-1000 true experts in each specific subfield.
    Therefore, first, randomly uploading data for the community won't really benefit anyone. The chance that people will mishandle the data for lack of expertise is just really high. Second, by making everything available we will just teach AI how to produce convincing raw data, which asshole scientists will then use to create and deposit fake raw data.
    Therefore, another suggestion:
    First, here in Germany, the German Research Foundation requires universities to store ALL data. If you don't do that, you won't get funding.
    Second, nowadays every journal provides paper statistics. Just add one for "Requests / Data shared / Not shared". That's even better, because honest scientists (who are the majority) love it if somebody shows interest, uses their data, and has to cite them!
    In this way data is available, support is provided by the authors, the community communicates, and we will easily spot the assholes.

  • @UnKnown-xs7jt
    @UnKnown-xs7jt 3 месяца назад +2

    How are people who watch your YouTube video qualified or educated enough to determine how many publications per year are too many?
    If one person publishes 10 papers in one year and then publishes only one paper for the next two years, how can an essentially uneducated populace decide that this person has committed fraud?
    Initially your channel was wonderful and brought to light many failings of the system.
    Since then it appears that you have become extremely enamored with yourself and are putting out information that is clickbait and dubious at best.

  • @misterhat6395
    @misterhat6395 3 месяца назад +2

    I'm in the process of submitting a manuscript, and yes, you simply have to declare that you have no conflict of interest. Basically you pinky promise.

    • @aeroeng22
      @aeroeng22 3 месяца назад

      what's your solution to this?

    • @ehjapsyar
      @ehjapsyar 2 месяца назад

      @@aeroeng22 Sanctions: fines, banishment from any academic position, temporary systematic refusal of funding when the person is involved, etc.
      Also having the editor clearly mark on the paper that a conflict of interest was detected. And maybe forcing a footnote onto the first page of any paper the author publishes, stating that they have committed fraud in the past.

  • @heliopunk6000
    @heliopunk6000 3 месяца назад +1

    Is statcheck or similar software regularly used? As the name suggests, it checks the reported statistics for plausibility.
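
    For anyone curious, a minimal sketch of that kind of plausibility check (this is not statcheck itself, and the numbers are made up): recompute the p-value implied by a reported test statistic and degrees of freedom, and compare it with the p-value the paper reports.

```python
from scipy import stats

def check_t_report(t, df, reported_p, tol=0.01):
    """Flag a reported two-sided t-test p-value inconsistent with t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    return recomputed_p, abs(recomputed_p - reported_p) <= tol

# Hypothetical report: "t(28) = 2.10, p = .001" -- the p-value doesn't match.
recomputed, consistent = check_t_report(t=2.10, df=28, reported_p=0.001)
print(f"recomputed p = {recomputed:.3f}, consistent with report: {consistent}")
```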

  • @omarbahrour
    @omarbahrour 3 месяца назад +3

    "right to jail" hahahahaha, great video

  • @felixjohnson3874
    @felixjohnson3874 12 часов назад

    10 published papers a year means a single month and a single week per "study". If you do even 2 more than that, you're looking at a study per month. Is this sort of rapid-iteration approach potentially viable? Sure, but that's not what modern science is considered to be, nor how it behaves.
    Society overall needs to take a page out of programmers' books and learn to use proper version control. Your *_entire_* country's laws, down to the city you live in, should be viewable in a simple, organized HTML/markdown file, created using something like git and stacked diffs. Managing plain-text information for complex environments and rapid asynchronous iteration is a solved problem, yet no non-programmers do it. So science is not treated as a field where entire studies take a month to complete; it's treated as a field where single studies take *_years_* to complete and are the culmination of countless hours of work, with checking, double checking and triple checking of their validity by 25 third parties. This is an incorrect reputation, but it is its reputation. As Veritasium put it, "Most published research is wrong", so when people say things like "this damages the credibility of scientists", *_good. You already have more than you deserve._* (speaking in aggregate of course)

  • @Jon-cw8bb
    @Jon-cw8bb 2 месяца назад +1

    The speech over violin doesn't work at all

  • @banana9494
    @banana9494 3 месяца назад +2

    The way I catch them is by watching Pete Judo's channel

  • @FatFrankie42
    @FatFrankie42 3 месяца назад +3

    *_comment for the algorithm gods_*
    ~ Thanks! Winston Salem, NC

    • @robertharper3754
      @robertharper3754 3 месяца назад +1

      Oh wow, glad to see someone else in this town have critical thinking skills! It becomes more rare each year here!

    • @FatFrankie42
      @FatFrankie42 3 месяца назад +1

      @@robertharper3754 It's a sad fact of reality here in W-S, with many unfortunate down-&-upstream effects. Thanks for the neighborly show of solidarity; your reply gave me a much-needed boost of optimism in these otherwise dark & unpleasant times!! Perhaps W-S isn't an entirely blasted toxic wasteland, after all!🤷‍♂️ 😁💞✌️

  • @sethtrey
    @sethtrey 3 месяца назад +3

    Do you consider results that are very socially popular as falling in the "conflicts of interest" category? Maybe affecting the works of the likes of Roland Fryer and others who research such touchy topics?

    • @Novastar.SaberCombat
      @Novastar.SaberCombat 3 месяца назад

      If a pseudo-scientist hits on results which the general public WANT to see as being valid and true, you can bet that the P-hacking will commence, baby! 💪😎✌️ There's HUUUGE money in telling billions of people what they already hoped was "true". 😂

    • @samsonsoturian6013
      @samsonsoturian6013 3 месяца назад

      If the results support existing political rhetoric then it's fraud. I find this when reading history all the time, no further study necessary: Fraud.

  • @sphakamisozondi
    @sphakamisozondi 3 месяца назад +1

    200 papers a year?!!! That's beyond suspicious

  • @aladd646
    @aladd646 2 месяца назад +1

    I am not a scientist. The discouraging aspect is that this undermines my confidence in any research. Even if I try to look into a paper, it would be almost impossible for a layman like myself to establish validity.

    • @kagitsune
      @kagitsune 2 месяца назад

      Thus, social media's fascination with citing academic papers to support their opinions, yet matched with revulsion to the science establishment as a whole. You get people who simultaneously swear that anti-parasitics must work against covid because the FDA has approved them for human use, and yet that the same FDA must have rushed the vaccine emergency use approval. 🙃


  • @Shawkster6
    @Shawkster6 3 месяца назад +1

    I agree most strongly with the last point. Data needs to be more easily available throughout the scientific community, and should really be available to the public too

  • @vulpo
    @vulpo 3 месяца назад +6

    Isn't "Peer Review" supposed to catch most of these things? Isn't that the whole idea of peer-reviewed studies?

    • @fluffymcdeath
      @fluffymcdeath 3 месяца назад

      Peer review is a scam, more or less. It is an invention of publishers to raise the perceived status of their journals, but it all depends on the "peers", who are unpaid, busy with their own work, and motivated not to piss off peers who might block their own papers.
      To see how bad peer review can get, look up the grievance studies affair. Admittedly not science, but illustrative of the problems with peer review.
      Before peer review there was adversarial review, where competing scientific minds would tear at each other's theories, but that functioned in a very different system where scientists were fighting for glory rather than grants for their institutions.

    • @chrisumbel3132
      @chrisumbel3132 3 месяца назад +3

      So, I'm not a scientist. Heck, I spent the first 44 years of my life without a bachelor's degree. It was while I was wrapping up my undergrad that a psychology professor covered a thing or two about spotting not only bad study design, but also fraudulent study design. What alarmed me was the sheer quantity of examples she could cite.
      Naive me just assumed that peer review worked, but if that professor and the YouTubers these days are correct, it's absolutely broken. It must be done cheaply, if at all.
      I do think you're raising the right point. Peer review is supposed to be a knowledgeable, human safeguard. We have to be missing incentives to do it and do it well, no?

    • @kevc5532
      @kevc5532 3 месяца назад +6

      ​@chrisumbel3132 Peer review does somewhat work, but it has some significant issues. It is completely unpaid, which limits the time that people can justify spending on it for a start. For the issues in this video, peer review is often blinded, so they couldn't look up to see if the author is publishing a suspicious amount, or consistently finding surprising results. Any individual can also only expect to find so much- if the data is faked but the interpretation of that faked data is consistent then the peer reviewer may not be in a good position to raise much of an issue

    • @wintermute5974
      @wintermute5974 3 месяца назад +3

      Peer review is just getting somebody else to look over some work, tell the editor if it's worth publishing, and maybe make some suggestions to the authors. Detecting fraud is outside what it's meant to do, and very few peer reviewers would be on the lookout for it. Even if they were, detecting likely data manipulation is generally going to be much more time-consuming than can reasonably be expected for what is basically volunteered time that actively takes away from their actual jobs. On top of all that, you have to remember that a lot of fraud isn't going to be detectable from the paper itself; if they're making up data, you may need to actually look into their research process to discover this.

  • @ralphlorenz4260
    @ralphlorenz4260 3 месяца назад +1

    I publish a dozen or more papers a year, though only a few are first/sole author. In planetary science we often work in broad, flat teams of individual 'independent' scientists associated with space missions (without the more feudal lab-head/minion hierarchy associated with a lab). The mission data are contractually obliged to be publicly available.

  • @diegomardones6651
    @diegomardones6651 3 месяца назад +1

    The solution is to make it irrelevant to have large numbers of papers. Use a logarithmic metric, with a maximum at --say-- 12 papers/year and decreasing for larger numbers of papers per year, down to zero if you publish --say-- 24 papers or more per year. Actually, I think the numbers should be closer to half of those.
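
    One way to read that suggestion (the exact shape and cut-offs are of course up for debate, and this is only an illustrative sketch): credit grows like log(n) up to a peak at 12 papers/year, then decays linearly to zero at 24 or more.

```python
import math

def publication_credit(papers_per_year, peak=12, cutoff=24):
    """Reward output up to `peak` papers/year and penalize it beyond."""
    if papers_per_year <= 0:
        return 0.0
    if papers_per_year <= peak:
        return math.log(papers_per_year)          # grows, but ever more slowly
    if papers_per_year >= cutoff:
        return 0.0                                # publishing machines get nothing
    # Linear decay from log(peak) at `peak` down to 0 at `cutoff`.
    return math.log(peak) * (cutoff - papers_per_year) / (cutoff - peak)

for n in (1, 6, 12, 18, 24, 40):
    print(n, round(publication_credit(n), 2))
```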

  • @garyc1384
    @garyc1384 3 месяца назад

    Really annoying background music (too loud) places this vid firmly in the TLDW category (Too Long, Didn't Watch).

  • @douginorlando6260
    @douginorlando6260 2 месяца назад

    Here's how to tell: follow the money. For example, Sam Bankman-Fried gave the researchers millions of dollars he embezzled from FTX immediately AFTER they announced ivermectin was not effective.

  • @fcolecumberri
    @fcolecumberri 2 месяца назад

    Another way is looking at how diverse the fields of publication are. If someone publishes 40 papers a year, but all of them are LLM research, OK: LLMs are exploding right now, and with enough GPUs you can automate lots of experiments, some of which will get interesting results. But if they publish 10 papers - one in AI, one in cybersecurity, one in robotics, etc. - you may wonder how one person can be an expert in so many fields.

  • @blue-neutrino
    @blue-neutrino 2 месяца назад

    ...academic research? Who cares! . . . Industrial research, where scientists promote crappy stuff to be put out into the real world - poor analysis, poor products, and more - THAT is of concern. Industrial publications are typically internal to the company, and then there are patents 🤣😂 I know... I worked in the industrial realm long enough to know a few things . . . oh, BTW, yes I have a PhD in chemistry from a prestigious university in case you cared to know who the f**k I am to opine ... ... yes I also got patents and all that jazz

  • @Lithilic
    @Lithilic 2 месяца назад

    Administration takes +40% off of any incoming grant, claiming administration costs to support research efforts. "There just isn't the manpower or infrastructure doing this kind of work." Ah yes, if there's anything universities are light on it's the infrastructure and oversight...

  • @b0r0g0ve
    @b0r0g0ve 2 месяца назад

    The people with 200 papers in 2024 - did they publish on their own, or were those co-authored, with different people writing and submitting the publications?

  • @Rondoggy67
    @Rondoggy67 2 месяца назад

    We need a "day zero" reset before AI completely undermines conventional fraud detection. Removing all bad actors now would act as a strong deterrent, but it would also remove the commercial market for fraud. The last 20 years of papers should be checked for fraud using image-checking and similar software. Then, not only should all affected papers be retracted, but all authors, labs, and companies involved in the fraud should be blacklisted from future publication. That blacklisting should only be lifted with increased scrutiny of publications on an individual basis.

  • @nickhockings443
    @nickhockings443 2 месяца назад

    It depends hugely on what the "author's" role is in producing the paper. A senior prof may have a peripheral mentoring role for a large number of PhD students and post-docs. BUT it is a red flag that warrants extra scrutiny.

  • @jonconnor0729
    @jonconnor0729 2 месяца назад

    I'd say 2-3 is average but 5 is pushing it. Several papers published within a year is not only a sign of fraud but also of predatory tactics and exploitation, such as taking undeserved credit, forcing publication quotas on post-grad students, and blatant authorship selling/trading within circles.

  • @ehjapsyar
    @ehjapsyar 2 месяца назад

    The 10 papers per year limit is too low imo; it really depends on the scientist's position and career stage. For young scientists, 3-4 would already be a high number, since they typically run the studies. However, renowned scientists are often collaborating with many different labs and supervising multiple people. Many of them end up co-authoring 20 papers without any cheating involved.
    The "reasonable" number of publications depends a lot on what work is being done, and how well-established the person is. Professors typically do a lot of *reviewing* of nearly finished papers, which takes much, much less time than *conducting and writing* the study. The number of studies presented to them increases as they gain more connections within their field. Some people (especially professors) are also workaholic human machines who will review your work as soon as you ask them, even if they're in the middle of a vacation in a remote mountain range.
    Now if we are talking about 10 first-author papers, I would agree that the number is high regardless of the position.

  • @nickhockings443
    @nickhockings443 2 месяца назад

    A red flag that should prompt investigation would be cultivating loyal followers and building celebrity status. These are invalid reasons for believing an individual's hypotheses.
    BUT there is a big complication: some very good scientists have also produced false theories and suppressed rivals' better theories,
    e.g. Georges Cuvier's suppression of Lamarck's evolution, and Karl Pearson's suppression of research into causal inference.
    In both cases, men who had made major contributions to their field also held their field back for three generations.

  • @sephgeodynamics9246
    @sephgeodynamics9246 2 месяца назад

    Number of papers a year, as first author or not? The difference is fundamental. If you are a group leader with several PhD students, 10 papers a year is not uncommon

  • @qerupasy
    @qerupasy 3 месяца назад

    A software solution that sweeps for things that aren't straight up copies sounds very difficult and potentially problematic. I think that, at best, something like that should flag stuff for human review. It could be difficult to use unless it can give the human reviewer an account of its reasoning (which modern AI systems mostly cannot).

  • @Loreweavver
    @Loreweavver 3 месяца назад

    Do they refer to themselves as a scientist? Bonus points for additional modifiers such as "behavioral", "political", "social", etc.

  • @frafstet3835
    @frafstet3835 2 месяца назад

    It's important to change the whole apparatus built around the scientific method. Having to pay to be part of the scientific discussion is not acceptable, peer reviewers who review badly are unacceptable, and Publish or Perish is unacceptable.

  • @SpencerPaire
    @SpencerPaire 2 месяца назад

    I've read many papers where the abstract reads more like a Billy Mays ad than a summary of the research, and I've read plenty of papers with spelling/grammar/formatting mistakes. I have no idea if there's a higher incidence rate of fraud in such papers, but it always makes me way more suspicious.

  • @Boahemaa
    @Boahemaa 3 месяца назад

    Data should definitely be made available to readers, because how else will we verify results? 200 papers a year by 22 academics is crazy. Must be AI-generated.

  • @STR82DVD
    @STR82DVD 2 месяца назад

    Hey Pete. Given the sensitive nature of some of these personalities that you profile I'm extremely surprised that you haven't been sued for slander/defamation. I'm assuming you have a legal team behind you vetting what you can and can't say.

  • @davidmurphy563
    @davidmurphy563 3 месяца назад

    In your image manipulation section you bring up Elisabeth Bik's work in the context of fraud. I haven't watched your previous videos, and it gives the impression that she was someone who had committed image manipulation. Only later do you make clear that she's exposing fraudulent use of images by others.
    You need to be more careful and not assume everyone watching your videos has seen the previous ones; make it clearer who is committing the fraud and who is exposing it.

  • @TheBladzAngel
    @TheBladzAngel 2 месяца назад

    More than an average of 2 papers per research staff member in your lab, where you are the first or corresponding author, is where I would say you have to be gaming the system somehow. I have 7 people in my lab, 5 of whom I would consider actual researchers, and 10 would be absolutely insane from our group.

  • @christopherg2347
    @christopherg2347 2 месяца назад

    They allow publishing without the dataset?
    That should not even be considered a scientific study.
    The average flat earther can make a claim without any data.

  • @chrizzbenyon3993
    @chrizzbenyon3993 2 месяца назад

    From personal experience in the role: if expert reviewers of publications have to go through all the original data, then a far larger pool of reviewers will be needed. Experts already have to devote quite a lot of time to reviewing just the bare publication - work which is usually unpaid by the journal editorial boards, I should add. Where are these extra experts going to come from?

  • @AthosRac
    @AthosRac 3 месяца назад +3

    Great topic, thanks

  • @fburton8
    @fburton8 2 месяца назад

    I'm surprised that no one has created software to generate Western blots showing any desired result, with just the right (credible) amount of artefact. As a programmer who has written image processing code I imagine it is quite easy.

  • @fluffymcdeath
    @fluffymcdeath 3 месяца назад

    I'm aware of Wakefield's conflict of interest and the problems with his work but I am perpetually amused by all the bleating labeling him AntiVax when he was actually trying to hawk a competing vaccine all along.

  • @euchale
    @euchale 3 месяца назад

    Regarding the "10 papers a year" thing: the reason I'm raising this is that I assume that to be an author you need to at least have "read" and "understood" the paper. Both of these take time. If you are publishing a paper a day, when are you actually doing the data generation part?

  • @mattabesta
    @mattabesta 2 месяца назад

    I've published a few papers and the group I am in publishes many. Journals will not host our data as it is too large. Some fields have community solutions for these issues, most don't.

  • @kylebowles9820
    @kylebowles9820 3 месяца назад

    Bro wtf they're trying to kill your video, it's buffering every few seconds on 480p for no reason

  • @qbatmobile
    @qbatmobile 2 месяца назад

    10 publications is not suspicious at all.
    As a PhD student I was able to publish 10 without any issues. But it's all about the field you are working in.

  • @Nah_Bohdi
    @Nah_Bohdi 3 месяца назад

    I....publish at the rate of 200 per year, but I take days off, sometimes weeks.

  • @luisdmarinborgos9497
    @luisdmarinborgos9497 2 месяца назад

    Anything above 2 papers is already suspicious. It has taken me 10 years to do my thesis 😥

  • @LostArchivist
    @LostArchivist 2 месяца назад

    Coming at this from a Computerphile angle: in addition to publishing the original data, I think an academic standard akin to hash-fingerprinting data to visuals ought to be developed, if such a thing is possible, along with provenance standards for data and visuals.
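
    A minimal sketch of the fingerprinting half of that idea (file names and the manifest format are made up): hash the raw dataset and the figure derived from it, and publish the digests so anyone can later re-hash the files and check provenance.

```python
import hashlib
import json

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for a paper's raw data and one of its figures.
manifest = {
    "dataset": {"path": "raw_measurements.csv", "sha256": sha256_of("raw_measurements.csv")},
    "figure":  {"path": "figure_2.png", "sha256": sha256_of("figure_2.png")},
}

print(json.dumps(manifest, indent=2))  # published alongside the paper
```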

  • @AlexRedacted
    @AlexRedacted Месяц назад

    If you are a lab boss, you will be an author on everything from your lab. Publication rate isn't that simple a signal, especially for collaborations and big institutions.

  • @Nikkifrenchbulldog
    @Nikkifrenchbulldog 3 месяца назад

    If movies have taught me anything, it’s that you should automatically distrust male scientists with long hair.

  • @maxbrooks5468
    @maxbrooks5468 2 месяца назад

    I'm relatively new to academia and struggling to get one paper published a year because I spend too long sweating the details. It really disheartens me to see people publishing loads and loads every month!

  • @smeegy1
    @smeegy1 2 месяца назад

    You have 100k subs; please get a mic that doesn't make you sound like you're in an empty concert hall.

  • @necrodrucifver
    @necrodrucifver 3 месяца назад

    Like Dr. Nefarious, or like climate change scientists?