"Trust the Science" - A Growing Problem

  • Published: 2 Apr 2024
  • "Science" (as in, our collectively accepted authoritative source on information) has a growing problem as generative AI, specifically chatGPT, infiltrates every aspect of the online world.
    With keywords indicating dramatically increased usage of AI text in scientific papers and citation, the question becomes, how much is it being used, and where is it skewing results. For some, the usage of AI is something to be celebrated, but others, it marks a shift from authenticity to deception and a wildly different digital world where nothing you see can actually be trusted.
    VPN DEAL: get.surfshark.net/aff_c?offer...
    NEW MERCH LINE!: upperechelonofficial.shop/col...
    PATREON: / ueg
    LOCALS: upperechelon.locals.com/support
    Become a channel member: / @upperechelon
    COMPLETE CYBER SECURITY PRODUCT LIST: linktr.ee/upperechelonsecurity
    ODYSEE INVITE: odysee.com/$/invite/@UpperEch...
    ⦁ UPPER ECHELON WEBSITE: upperechelon.gg
    ⦁ DISCORD SERVER: / discord
    ⦁ Giraffe Video: • Giraffe running in her...
    ⦁ Outtro Song: • Hot Heat - Topher Mohr...
    MY BUSINESS EMAIL: upperechelongamers@yahoo.com
    #science #ai #research
  • Games
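
  • A minimal sketch of the keyword-counting idea described above: tally how often a handful of suspected AI "marker" words appear per year in a corpus of paper abstracts and look for a sudden post-2022 spike. The file name 'abstracts.csv', its 'year'/'text' columns, and the marker-word list are illustrative assumptions, not data from the video.

        import csv
        import re
        from collections import Counter, defaultdict

        # Words widely reported as over-represented in LLM output (assumed list).
        MARKERS = {"delve", "realm", "meticulous", "crucial", "notably"}

        counts = defaultdict(Counter)  # year -> marker word -> hits
        totals = Counter()             # year -> total words seen

        with open("abstracts.csv", newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):  # expects 'year' and 'text' columns
                words = re.findall(r"[a-z]+", row["text"].lower())
                totals[row["year"]] += len(words)
                for w in words:
                    if w in MARKERS:
                        counts[row["year"]][w] += 1

        # Rate per million words, per year; a sharp jump after 2022 is the
        # kind of spike the video describes.
        for year in sorted(totals):
            for w in sorted(MARKERS):
                rate = 1e6 * counts[year][w] / max(totals[year], 1)
                print(f"{year}  {w:<12} {rate:8.1f} per million words")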

Comments • 1.4K

  • @UpperEchelon · 2 months ago · +35

    Ways to support the channel!
    VPN DEAL: get.surfshark.net/aff_c?offer_id=1448&aff_id=19647
    PATREON: www.patreon.com/UEG
    LOCALS: upperechelon.locals.com/support

    • @mightyraptor01 · 2 months ago

      I'm currently more focused on It'sAGundam's live stream as I'm watching; luckily I can multi-watch. We live in an emotional current day where the government wants you to comply. Yeah, AI is a problem when it's used for power and abuse, and not as a tool, in a time of NOW NOW NOW instead of taking time to build.

    • @SagaciousBoothe · 2 months ago · +1

      It's notable how meticulously this was done! You seamlessly address the crucial points! Delve, realm... Yada yada yada... Joke over!!!

    • @PanSkrzynka_ · 2 months ago · +2

      I'm from Poland, and AI is commonly used here to translate scientific papers. It's probably the same in other countries. Usually you translate the paper using AI and then send it for correction. AI is almost as good as professional translation, and it's 95% faster to just correct its output.

    • @KriptoSeriak · 2 months ago · +1

      I answered here because I just want to present a theory to you...
      What if this overuse of A.I. is actually a symptom of a transition period toward a future where information found on the internet adapts itself to the one doing the research, watching the news, or just browsing?
      Now, by enticing people to use A.I. for scientific publications, said A.I.s are actually in training so that, in the near future, all internet activity (even a simple book) will be tailored to the one doing the reading, listening, and viewing. A news article, a video, or even a book will soon look different to people living in neighboring cities or even working in different fields.
      You know the kind of society I am talking about...

    • @useodyseeorbitchute9450 · 2 months ago

      Key information that you may miss: if you are a scholar from a country where English is a foreign language, then you are at a terrible disadvantage when sending an otherwise OK paper to some mid-tier journal; at worst you are rejected at the start, and at best you end up with snarky reviewer's remarks. I talked to people in my country who used to hire a native English speaker with no knowledge of the research subject just to have their paper proofread to correct their clumsy language. What you see is primarily people now using a chatbot for proofreading instead.

  • @oldmatejohnno · 2 months ago · +583

    Questioning science IS SCIENCE! You don't just accept what you're told.

    • @TheScrootch · 2 months ago · +93

      Exactly. "Trust the science" just sounds like a blind faith cult statement. Why should you just randomly trust anyone?

    • @vitalyl1327 · 2 months ago · +25

      There is a very specific procedure for questioning science, called the scientific method. Those who usually yell about questioning science do not have the mental capacity to apply this method anyway, and can and should be ignored or mocked.

    • @TheVisualDigitalArts · 2 months ago · +46

      @vitalyl1327 But are they wrong? Jumping to the conclusion that they don't have the mental capacity just because they were not trained to use a certain method is silly. People can spot patterns. Sometimes they are right, sometimes they are wrong, sometimes they are close. Just like the scientific method.
      But to discount the opinions and observations of a vast number of people is in fact anti-science and foolish.

    • @vitalyl1327 · 2 months ago

      @TheVisualDigitalArts Unless you're an anthropologist, all anecdotal evidence must be discarded. There was not a single case when the ignorant masses got it right.

    • @convergence1point · 2 months ago

      @TheVisualDigitalArts
      That right there is why "trust the science" is the worst fallacy, as it revolves around gaslighting you with "authority" to silence legitimate questioning. "Well, you don't have a piece of paper telling people you are SmUrT enough to know" is absolutely ridiculous. Not all great scientists came from higher education. Hell, most of the pioneers never had degrees in their fields.
      That's why the unquestioning "trust the science" cult and the "and where's your degree and paper" cult are both part of the problem. They create an echo chamber that magnifies flaws, as sometimes flaws can only be observed externally.

  • @nobillismccaw7450 · 2 months ago · +961

    Even saying "trust science" fundamentally misrepresents the scientific process. It's "trust the evidence", from which working hypotheses are made.

    • @Sesquippedaliophobia · 2 months ago · +65

      This is the most infuriating part about the "trust the science" bs lately.

    • @pluto8404 · 2 months ago · +86

      Or "the consensus of scientists", as if they have all independently verified everything in science outside of their own faculties.

    • @THasart · 2 months ago · +33

      Thing is, quite often to understand the evidence, you need to have a lot of specific knowledge. So people without said knowledge can either trust people with it or spend a lot of time to acquire it, which is unfeasible to do for each scientific claim. Or use "gut feeling" or "common sense", things known for their accuracy, especially in scientific matters.

    • @DeclanDSI · 2 months ago · +10

      And sometimes the procedures for gathering data are so poorly formulated that you can’t even trust that. Beyond even that, the idea that evidence possesses an inherent narrative is wrong, or seems to be wrong: we automatically confabulate one absent any previously established for matters and means of perception given stimuli, in other words, meaning is given first to our senses before the object is.
      Given that the world may construe itself such that limitations are placed upon the working patterns in the manner of perceptions that survives across large timespans, it is not unreasonable to think that, though perhaps pedantic, “evidence” -data; information: the world- is, while sufficient per se for the formation of a body that can comprehend and form narratives, not so without a corpus with which to file it away. To sum it up succinctly: facts are used [by a person] to form narratives but do not in and of themselves.
      Lol, this was kinda a fun exercise to just say facts are meaningless without someone giving meaning to them.
      Anyways, from that conclusion, while you can say any reasonable person would take the facts at hand and come to a reasonable conclusion, many aren’t reasonable and so will come up with wildly different theories, and, more likely than different interpretations, what is fact and what is, just, like your opinion man, is subject to whatever pathos is held.
      “You can’t reason someone out of a conclusion they didn’t reason themselves into.”
      And what is held as reason and what is not is often subject to debate: with that said, how would you consider what is such or not? The answer might be consensus, or perhaps a logical structure - A=B=C; If A, then C; etc. - that holds certain axiomatic statements in both its premise and algorithmic processing. Maybe starting with what holds true as a meta across all experiences and narrowing down from there might work? I do realize there’s an entire body of literature elaborating all of this out, so it’s not very original, but it does make one think, and I think that’s most important.
      What’s at the basis of how a person thinks? What is most fundamental to a person that they cannot deny nor do without? A value structure that denotes deviation from a goal as pain-like and positive signals in optimizing towards such as pleasure-like. In other words, something to aim at and something to run away from, at least as a basic unnuanced structure. Maslow’s Hierarchy of Needs has a good enough model to give a rough sketch of the sort of things people commonly aim at and run away from the non-completion of, e.g. starvation.

    • @kobold7466 · 2 months ago · +16

      trust NOTHING and come to conclusions based on your own research

  • @ST-RTheProtogen · 2 months ago · +807

    AI isn't the scary thing. The real problem is the human urge to make as much cash as quickly as possible, whether or not it makes others' lives worse. AI just makes us better at that.

    • @Holesale00 · 2 months ago

      Yeah, I can see that. It's just humans creating tools to destroy ourselves faster and more efficiently.

    • @AdonanS · 2 months ago · +82

      That's the problem with any technology. It's never the tech itself that's a problem, it's the people that abuse the hell out of it.

    • @N0stalgicLeaf · 2 months ago · +41

      Well, it's more that AI lowers the barrier to entry, which I'd argue is worse. If you have to copy books by hand, you'd copy one every few months and almost no one would do it. If you can use a printing press, that becomes days or hours, and it opens up to more people willing to take on the task. If you use a laser printer or copier, it's minutes or seconds at the press of a button and ANYONE can do it.
      Now I'm not saying the laser printer is bad in and of itself; they're phenomenally useful when used appropriately and judiciously. It's the scale of harm done when they're misused that's concerning and requires diligence and prudence from ALL OF US.

    • @JustinSmith-mh7mi · 2 months ago · +18

      Yay, capitalism

    • @onionfarmer3044 · 2 months ago · +34

      @JustinSmith-mh7mi Still better than communism.

  • @alargefarva4274 · 2 months ago · +1263

    The biggest thing I learned as an adult is that everyone is winging it, daily, no exceptions
    Edit for all the uptight “um, actually”s in the comments, this was taken from a Twitter post. There, save yourself the embarrassment of looking like a hall monitor.

    • @effthebourgeoisie · 2 months ago · +48

      Can’t disagree.

    • @rustymustard7798 · 2 months ago · +85

      EVERYBODY cheating with ChatGPT and thinking they're the only one.

    • @florkyman5422 · 2 months ago · +43

      Not so much winging it as taking the easiest path. Researchers will have had like 3 years of college writing classes.
      They know how, but it's easier.

    • @johnnykeys1978 · 2 months ago · +54

      This. It also seems the more confidence on display, the less capable the person is.

    • @robrift · 2 months ago · +10

      True and terrifying the longer you think about it.

  • @gnolex86 · 2 months ago · +373

    The problem is that using generative AI in science publications will become a much worse problem over time unless science finally deals with publish-or-perish. Back when I was a PhD candidate, I was actively encouraged to re-publish previous articles with relatively minor modifications in order to meet a yearly quota. The issue with this is that every time, you have to effectively rewrite your previous article so that it looks like a new article: new abstract, new introduction, a new explanation of the exact same thing you published before. This is exhausting, so it's no wonder that people use ChatGPT. Scientific articles are now basically produced like in a factory, and people take shortcuts to save time. And it's only going to get worse, as eventually people who write their articles the traditional way will not be able to compete with people who use generative AI.

    • @Code7Unltd · 2 months ago

      Sci-Hub is a godsend.

    • @divinestrike00x78 · 2 months ago · +67

      What a ridiculous system. Someone should only publish if they have something new to say. Why waste everyone’s time with re-written material just to meet an arbitrary quota?

    • @VixYW · 2 months ago · +30

      Exactly. The usage of AI is only exposing the true problem here.

    • @sakidodi4640 · 2 months ago

      @divinestrike00x78 That is where academia* gets its money.
      It's come down to a money-making problem 😢

    • @alewis514 · 2 months ago

      @divinestrike00x78 Tell that to just about any corporation that uses a hundred three-letter abbreviations to measure various crap or produces tons of documentation that nobody ever reads. The majority of this work isn't necessary or needed; it's just there to keep these job postings alive, as people haven't grown up to a universal baseline income yet. How many ass-hours are being spent in bullshit office jobs right now? It's quite horrifying.

  • @zdrux · 2 months ago · +287

    "The science has been settled" is another 1984-ish phrase I always cringe at.

    • @TheScrootch · 2 months ago

      It's just really backwards thinking. Imagine if we went by the settled science from medieval Europe. We'd still be hanging, burning or drowning women just because they were accused of being a witch

    • @heftyind · 2 months ago · +19

      That phrase has only ever been uttered as a means to gain power over others.

    • @useodyseeorbitchute9450 · 2 months ago · +4

      @heftyind Really? I used it a few times to tease lefties with some plainly obvious statement that contradicted their position.

    • @rejectionistmanifesto8836 · 1 month ago

      @useodyseeorbitchute9450 But that's the point of why you used it, since that is their dictatorial mindset's go-to attempt to trick sheep into not opposing them.

    • @mehcutcheon2401 · 1 month ago

      right. ugh...

  • @agentorange3774 · 2 months ago · +135

    We thought Terminator style AI would be the end of us. Turns out it’s just going to be people looking to cheat on medical exams.

    • @wilbo_baggins · 2 months ago · +5

      Honestly, MGS 2-type AI will more likely be the end of us.

    • @bbbnuy3945 · 2 months ago · +6

      not medical exams, medical *publications

    • @agentorange3774 · 1 month ago · +2

      @bbbnuy3945 *And exams... and schoolwork. Literally anything they can use it for. If you think otherwise, then ask AI for a way to extract some of your faith in humanity and send it my way.

    • @jessiesewell4076 · 1 month ago

      No, it won't end us; it'll perpetuate our suffering.

    • @NightmareRex6 · 7 days ago

      That's gonna create lots of even worse doctors than we have now, unless they're actually doing their own research and just cheating the Rockefeller system.

  • @pokemaniac3977 · 2 months ago · +554

    I'm sick of the "pop science" channels on YouTube.

    • @TheCBScott7 · 2 months ago · +39

      I must have blocked a few hundred already

    • @archimedesbird3439 · 2 months ago · +58

      On a somewhat related note, "Doctor Mike" is booming on YouTube, despite *partying on a yacht during lockdown*

    • @1685Violin · 2 months ago · +38

      A certain deer (can't say his name) said years ago that there is a replication crisis in the social sciences.

  • @greghight954 · 2 months ago · +30

    When someone says "trust the science", demand to see the studies, as well as the studies that replicate them. It's not science if you can't replicate it.

  • @randoir1863 · 2 months ago · +79

    The term "enshittification" is a great way to describe the world as a whole right now.

  • @SomeCanine · 2 months ago · +82

    "Trust the Science" is the same as the appeal-to-authority fallacy. When someone says "trust the science", they are saying, "We have government- and/or corporate-paid experts who say one thing, and if you disagree with them, you need to be punished."

    • @stephendiggines9122 · 2 months ago · +8

      When I hear that phrase, I expect to be presented with a 2000-page document to back it up, but any sort of documentation on the experiments seems to have vanished, never to be seen again. I even asked one company if they could send me the science so I could verify it for myself; the reply was hilarious!!

  • @sawdoomnaga · 2 months ago · +213

    as someone who has participated in scientific research (physics in my case), including contributing to writing papers, I personally am concerned about fraudulent research, but I'm not convinced AI is a driving factor so much as a tool being used to facilitate an existing problem. Certain fields have had replication problems for decades. If someone actually performs a well designed experiment, obtains useful data, and then uses AI to help them write the paper to share that information that's fine. This is definitely something that needs to be watched for, and could escalate to being a problem, but I think it's absolutely essential to differentiate between generative AI being used to commit scientific fraud and generative AI being used to help package competent research so that the process can happen more quickly and efficiently.

    • @Pedgo1986 · 2 months ago · +28

      You are right: AI is just a tool that makes the process easier, but the problem itself has been brewing for decades now. Before, any research, paper, or invention was scrutinized, replicated, scrutinized again, challenged, and debated to no end; scientists weren't afraid to fight each other, and even then, after all this, the actual implementation of results took years. Now papers are published so fast there is no chance they can be properly scrutinized, and everything is thrown on the market as fast as possible without proper testing. And I'm not even talking about how regulators were bought and paid for long ago, or how, because of the nature of science, very few people are able to discern between real and weak or outright fraudulent work, and almost nobody can challenge it because they are standing against multibillion-dollar corporations and their minions. The problem started looong ago, when greed eroded checks and balances, antimonopoly rules were not enforced, and people in charge allowed the creation of massive corporations with so much power and money that they can rule the country from the shadows. The last giant I remember being axed was Microsoft, and for good reason; now it's bigger than before and nobody bats an eye. Google practically owns the internet and nobody is concerned. AI will only make all these problems 10 times bigger.

    • @quietprofessional4557 · 2 months ago

      Furthermore, peer-reviewed journals are now ideological group-think traps where a tiny group of individuals review each other's work. Any change or update to evidence in a field is mocked and rejected.
      The hoax papers are just one example of the peer-review process failing.

    • @timpize8733 · 2 months ago · +8

      If scientists get interesting and useful results at the end of a research project, wouldn't at least one person on the team be pretty motivated to write the paper himself? I'm not even sure how using AI would save time in such a case while still keeping all the data and details exact. But admittedly I don't work in that field.

    • @davidlcaldwell · 2 months ago · +4

      You nailed it. Psychopaths will adopt new tools.

    • @VixYW · 2 months ago · +6

      It's not fine. Harmless, but not fine. I mean, if they're using the AI to write around the data they want to share, what is the need for all that text? Just shave it off, compact everything more concisely without fluff or fancy language, and the problem is solved. Way fewer people will feel the need to use AI then.
      But academics will never allow that kind of reform to happen, because they love their little elitist circle...

  • @sagitarriulus9773 · 2 months ago · +100

    Idk why people don't practice skepticism; it's important.

    • @TheScrootch · 2 months ago · +36

      If you're skeptical of anything you just get labeled as a conspiracy theorist. But I agree, skepticism is a healthy thing, blind faith in something is usually not a good idea

    • @MarktheRude · 2 months ago

      Because in the modern West you get actively penalized for it from the moment you enter public education. And even outside academia, you get penalized in the West if you ask the wrong questions.

    • @dualnon6643 · 2 months ago · +14

      @TheScrootch The best antidote to that problem is to be skeptical of conspiracy theories too. As one should be.

    • @F-I-N-E-R · 1 month ago

      Because when they do, they're labeled an Andrew T meat rider or a conspiracy theorist; not like it matters anyway. And yes, A.T. has taught people to analyze everything for years.

    • @StarxLolita · 1 month ago · +2

      It's not necessarily skepticism, it's critical thought. And critical thought is severely lacking nowadays.

  • @joelfenner · 2 months ago · +13

    I work in Engineering. The idea of "authoritative" sources is not something we generally take for granted.
    Everything worth trusting is *verifiable*. You read a paper, and whether you think it's credible or not, you take it into the lab, do an experiment, and see for yourself whether the claims hold true. We did (and do) that all the time when someone presents something new. The more audacious the claim, the more skeptical I and my colleagues are. We don't trust it until we can actually "see" it for ourselves in action.
    Even in grad school, if you're doing serious work, you're going to discredit some things that are not reproducible. You'll catch small mistakes in good-faith work, upholding the majority of the claims while pointing out little flaws. You'll (necessarily) replicate other people's work and learn first-hand that it IS true and understand WHY it is true.
    At the end of all that, you learn that you're no more an "authority" on what is true or not than the "big names" you hear about. The only thing that lends credibility is the ability to do the same thing OVER AND OVER, many many times, and get the same result as claimed.
    THE GENERAL PUBLIC *never* gets this experience - they never get to pick these things apart at the detail-level. People are given a digest of this process, and not always an honest digest. And that's TERRIBLE, because it's VERY easy for "junk" research to get championed and slipped into the public consciousness as "truth". When critics come out, they're attacked. The axiom that, "It's far easier to fool a man than to convince him that he's been fooled" is very apt.
    When people are trying to convince you that "science" has no dissent, is absolute, or is a reason to silence criticism, you're dealing with a disingenuous scoundrel who's looking for a refuge.

  • @alchemik666 · 2 months ago · +124

    I imagine writing aids like Grammarly are a big part of this, re-writing texts using the common AI "spam" words... But some of it must be fraudulent papers. Scary to think how it will affect the quality of scientific databases and the ability of proper researchers to use and refer to the literature. :/

    • @-cat-.. · 2 months ago · +11

      They could be, but I think it would be odd/unlikely for sites like Grammarly to update specifically in 2023, with the same words prioritized as ChatGPT (unless they somehow utilize ChatGPT in their grammar recommendations).

    • @slicedtopieces · 2 months ago · +10

      The value of any database will be the ability to separate searches into pre- and post-2023. I'm already doing that with image searches to avoid the AI sewage.

    • @alchemik666 · 2 months ago · +5

      @-cat-.. Grammarly has explicitly had AI integration for a while now and advertises it openly; I don't imagine other software of this kind is far behind.

    • @chance9512 · 2 months ago

      To what degree is a single academic paper "plagiarized" when you're using algos trained on a specific sample of previous writers to edit or even co-write your works?

    • @alchemik666 · 2 months ago

      @chance9512 I'd argue not really plagiarized. Academic writing is different from fiction in that unless you copy specific data or arguments directly and without citation, you're not really infringing on anything meaningful. If you dump most of your work into AI and don't screen the output properly, it might do something bad, like attributing someone else's data to you or replicating full segments from someone else's work, but I don't think that's even that probable; most likely it'll come up with some generic filler text that has little value but breaks no rules either.

  • @KingcoleIIV · 2 months ago · +21

    Science has had a real problem since long before AI. People falsify data for grant money, and AI has just made that easier; the problem has been here all along.

    • @cp1cupcake · 2 months ago · +2

      I just remember how a group got a reputable journal (if you think any exist in that field) to pass a chapter of Mustache Man's work through their peer review.

    • @AmonAnon-vw3hr · 1 month ago · +1

      @cp1cupcake Yep, they just replaced all the references to other groups with "White people" and it blazed through peer review with flying colors.

  • @ethanissupercool7168 · 2 months ago · +213

    As a programmer, there is a growing issue... well, one of them, that I barely see anyone talking about.
    As we know, like AI art, anything generative is trained on data; these models require millions of pieces of content to be trained... you don't just download each one at a time, you use a scraper, and this scraper always runs, without you even realizing it.
    Now, this works at first. However, once AI content is mainstream on the internet, what happens when AI scrapes AI images to train itself?
    This is an issue known as "generative AI model collapse". AI only works by seeing pre-existing images and finding patterns. AI art is very flawed; the AI will see these images and get worse every time. Each round of scraping is known as a 'generation', and each one makes the model more and more "distorted". In fact, even models like ChatGPT aren't safe from this.
    AI bros try to fight me on this info. First, the only known fix is to stop scraping content after 2023, but then everything will become outdated. You can't just hide "AI art", because we already see that many scammers are not labeling AI artwork correctly. AI detectors are themselves flawed and cannot be used to figure out whether an image is indeed AI. Synthetic data is also flawed: besides some real info still needing to exist for the model to predict from, it also suffers from bias issues, and even ChatGPT, something they simp for, says that synthetic data will not solve the issue.
    This might not happen for years... but it will happen. AI content will slowly 'degrade' unless another method is found...
    And this is only ONE of the issues; I didn't even mention stuff like "AI poisoning", all the protests... upcoming laws... truly interesting times rn.
    Also, yes, there is a research paper about AI model collapse, but YouTube is annoying and doesn't like links. Just type "ai model collapse paper" and you will see many different results yourself.
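
    A minimal sketch of the collapse loop described above, under a deliberately toy assumption: the "model" is just a Gaussian refitted, each generation, to a finite sample drawn from the previous generation's fit (standing in for training on scraped AI output). All numbers are invented for illustration.

        import random
        import statistics

        random.seed(0)
        mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
        N = 50                # finite training set per generation

        for gen in range(1, 16):
            # "Scrape" only the previous model's output, then refit on it.
            sample = [random.gauss(mu, sigma) for _ in range(N)]
            mu = statistics.fmean(sample)
            sigma = statistics.stdev(sample)
            print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

        # With finite samples the estimates drift, and sigma tends to decay
        # across generations: the chain slowly forgets the tails of the
        # original data, which is the "distortion" described above.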

    • @ThatGuy-ky2yf · 2 months ago · +41

      Great comment, man. This just furthers the idea of the "ensh*tification of the Internet". Laziness and scams aren't going away any time soon.

    • @ethanissupercool7168 · 2 months ago · +23

      @ThatGuy-ky2yf Yes, but with these issues I wouldn't be surprised if all this ends up being a situation like crypto, the metaverse, and NFTs: all the hype, the layoffs, "the future", "will change the world forever", and then it crashed... hard.

    • @kamikamen_official · 2 months ago · +6

      Hopefully, except billions are being moved into this; compute and data are what we are throwing at it right now. And it's only a matter of time before we stop asking the AI to replicate stuff and ask it to create things from first principles. This is like how the first AlphaGo destroyed the world champion 4-1 (or whatever it was) when it copied humans, and then the later version became virtually invincible when allowed to learn the game from scratch through simple reinforcement learning.
      We haven't even begun to scratch the surface of what generative AI is capable of. Hopefully we have and I am wrong, but I have a hard time believing that right now.

    • @ethanissupercool7168 · 2 months ago · +1

      @kamikamen_official Ever since the beginning of generative AI, it has always required datasets… having it understand first principles requires data…
      For it to think on its own, it needs to have a brain, which is impossible right now or in the near future.
      This is like saying a billionaire with cancer can just spend his way to a cure… a lot of them have died from it with no cure… just because you're rich doesn't automatically mean you can invent groundbreaking technology out of thin air.

    • @antixdevelopment1416 · 2 months ago

      I've stopped putting anything on GitHub now, since it will just be used to generate stuff without giving me any credit, or buying me a coffee LOL. At least for now, generative AI can't create anything that isn't a mash-up of what it has previously assimilated, so I can still write interesting code that it cannot.

  • @momirbaborac5536 · 2 months ago · +17

    "A little trust goes a long way. The less you use, the further you will go." - Howard Tyler

  • @AdonanS · 2 months ago · +25

    Wow... people are getting a lot lazier. I can't imagine having an A.I. write my essay for me when I was in school. Writing an essay was always a personal endeavor. Finishing one was a rush of serotonin you wouldn't get from an A.I. writing it.

  • @Mincier · 2 months ago · +12

    Duuude, this made me realize my coworker probably runs all his Slack messages through ChatGPT before sending them… like, I'm probably not even joking.

  • @mayomonkey9778 · 2 months ago · +95

    As someone currently halfway through a PhD in Computational/Statistical Genetics... I can confidently tell you to be EXTREMELY skeptical whenever you're told to "trust the science".

    • @henrytep8884 · 2 months ago · +14

      What do you think "trust the science" means? Usually it's based on the consensus of experts in that particular field, isn't it? And it's hard to become an authority as a layman. Of course one can go through the body of work, but if you're not educated on the matter, how much should your own judgment be weighted over the consensus? That said, some fields have a harder time reaching consensus (soft sciences) while others have an easier time (hard sciences), due to the nature of the field.

    • @pocketaces6756 · 2 months ago · +4

      Exactly. Don't "trust the science", just make up whatever you want, and if someone calls you out, then just double down. Yell "fake news" and just make up whatever "alternate facts" you want. Great advice. (/s for slow people)

    • @jw6588 · 2 months ago · +23

      Agreed. It is too easy to deceive with statistics considering how poor the general mastery of it is among the public.
      Also, even 'scholars' use statistics poorly.

    • @eprimchad2576 · 2 months ago · +10

      @henrytep8884 Science isn't based on consensus; that's the problem. "96% of scientists agree about climate change" means that the science isn't settled and there is reason to doubt its validity.

    • @elusivemayfly7534 · 2 months ago · +5

      Science is so big, and reality is even bigger. It’s always nice when folks can help you see and process specific evidence, and when there’s info available to help answer basic questions a normal person would have. I think we are in such a hostile, divided time, that it cannot be optimal for either doing or trying to comprehend science. Both funding and conversations are prone to carrying too much political freight.

  • @Pulmonox · 2 months ago · +8

    It's almost as if they have to convolute things to justify all these scientific studies and the budgets for them, thus perpetuating some kind of tax sink cycle.

  • @andrewbaltes · 2 months ago · +64

    The only thing I can think of, as a devil's-advocate thought experiment, is that if I do write a paper and think I have used poor grammar (which I'm already doing here), I might ask ChatGPT to analyze my .doc file and suggest edits or even rephrasings, but I'm still going to make sure the factual basis of the information is correct before I let it get published.
    If the information IS accurate, then I don't really care if the usage of particular flavor words becomes prevalent.

    • @UpperEchelon · 2 months ago · +67

      I'm inclined to agree on that. I made it a point to mention translation and other non-invasive uses... but I would still say there has to be comprehensive, total disclosure of what AI was used for. It cannot be used in the shadows like it is right now, for... no one even knows what, leaving us here guessing.

    • @andrewbaltes · 2 months ago · +1

      @UpperEchelon I think if the realization of this becomes mainstream, then the only way to move forward WILL be disclosure.
      I just hope it doesn't turn into another McCarthy-esque, taxpayer-funded political theater show. It'll all need to be verified, of course, but if the way that happens goes poorly, it could impact genuine scientific progress AND waste us all a lot of money, because more and more people and companies are using these tools regularly. Microsoft is pushing especially hard for their users to say yes to Copilot integration with their M365 subscriptions right now, but that doesn't mean those enterprises are doing any less scientifically rigorous work in their actual research.
      Ahh, the existential anxiety of living through tech booms.

    • @Spillerrec · 2 months ago · +20

      @UpperEchelon There are pretty clear examples of papers containing "As an AI language model" or just being straight-up gibberish. The more worrying thing is that there are also examples of this getting through peer review in respectable journals. Combine this with an incentive structure that pushes researchers to focus on getting as many citations as possible instead of doing good science, and the amount of garbage and straight-up fraud being published isn't that strange. We have tons of examples of researchers at the top of their respective fields straight-up faking results, and having done so for decades. The issue isn't generative AI; ChatGPT cannot run experiments, perform surveys, evaluate models, etc., which is required to actually do science. The issue is that our quality assurance isn't working, and generative AI has really shown that too many researchers/publishers don't care in the first place about doing proper science.

    • @2lian · 2 months ago · +11

      @UpperEchelon As a non-native-English-speaking PhD, I agree with OP. I learned most of my English through YouTube and Reddit, and I am not used to writing in academic English. ChatGPT is excellent at finding alternate, better-sounding sentences. I usually give it lots of info, ask it to correct 2-3 sentences that sound bad, learn from it, and copy the better parts. I also ask it for ways to convey a specific sentiment about the results, because after 4 days of writing I am tired and cannot find the words for it.
      I strongly believe that science is better this way; it makes articles much easier to read and lets better science take precedence over writing skills. Using it to generate results, analyze results, or interpret results should be a big NO; this is absolutely clear.
      Total disclosure would be good, but for now the state of the matter is: "if AI is used, you get banned". As long as this mindset does not change, no one will disclose anything.

    • @eprimchad2576 · 2 months ago

      It's crazy to me that supposed "scientists" are so incapable of using proper grammar that they need an AI tool just to write something properly. If you are in a position to write papers that anyone else should even consider taking seriously, you should also be fluent in the language you are writing the papers in.

  • @NagaTales · 2 months ago · +11

    It's self-reinforcing. The AI was being trained during a time when these words and phrases, very common in academic literature, were seeing increased use with the rise of... more academic papers. As the publicly available sources that they are, these papers were then used to train the AI, which picked up on the word patterns and vocabulary of research papers and incorporated it into its response pattern bank. Once researchers started using the AI (let's give the benefit of the doubt here) to help them write (rather than to write for them), it only reinforced the use of these words and phrases, making them ubiquitous in academic contexts.
    And as these new papers get used to train the model, these words and phrases then become even more ingrained in the AI pattern bank, and appear more often in responses... and around and around we go.
    It is not NECESSARILY scary and sinister, so much as it is a natural consequence of how Generative AI functions and what it gets trained on. An AI trained on nothing but Twitter posts (as nightmarish a concept as that is) would get a wildly different set of "most-used" words and phrases simply because the vernacular of Twitter is so wildly different from academia or even day-to-day conversation.
    Whether an AI assisted in the writing of these new papers is not really as strong a mark against them as other factors, such as where their funding comes from, the motives behind the funding, external or internal pressures to produce particular outcomes, or whether their methodology is suspect and prone to confirmation biases or outright cherry-picking of data points. This, far more than the involvement of AI, is what makes "trusting the Science", or other traditionally respected sources of authority, difficult in today's world.
    A generative AI, of any type, is only ever able to produce the most average of outputs by its very nature: never exceptional, and certainly not innovative. These kinds of AIs do not 'do research', nor do they even 'understand' what you ask of them. They are nothing more or less than a more complex algorithm, replying with its best guess at what the user wants to see based on its pattern bank. A lot of the alarming number of retractions (or that article about the made-up cases) stems from a fundamental misunderstanding of what generative AI does, speaking more to incompetence in the users than to a threat to science from the involvement of AI.
    I am not, to be clear, defending the use of generative AI from criticism. There are ways and cases in which it should not be used, or where it can be used fraudulently. But this is true of many things and is not a problem unique to generative AI. What I see is far more about laziness and incompetence in academia, not a threat from a new technology. Just like you can use the same laser pointer designed for drawing attention to parts of a PowerPoint slide to maliciously blind a driver or pilot, it is the people using the technology that are to blame.
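
    A small sketch of that feedback loop, with everything invented for illustration: a toy "model" samples words from its corpus with extra weight on already-frequent ones (a stand-in for a language model preferring its highest-probability phrasings), half of each new training corpus is model output, and a word that starts slightly over-represented ("delve") tends to keep gaining share.

        import random

        random.seed(1)
        corpus = ["delve"] * 12 + ["examine"] * 10 + ["study"] * 10
        MIX = 0.5  # fraction of each new corpus that is model output

        for gen in range(10):
            # Weighting each draw by the word's current count makes the pick
            # probability proportional to frequency squared: rich get richer.
            weights = [corpus.count(w) for w in corpus]
            output = random.choices(corpus, weights=weights, k=len(corpus))
            cut = int(MIX * len(corpus))
            corpus = corpus[cut:] + output[:cut]
            print(f"gen {gen}: 'delve' share = {corpus.count('delve') / len(corpus):.2f}")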

  • @hoyer · 2 months ago · +13

    As a student at uni in my 30s, I can tell you that the young people use ChatGPT. You only need to pull a little on their papers and it all falls apart.

    • @jessiesewell4076 · 1 month ago

      Exactly. You'll eventually begin to see a pattern from the AI authors.

  • @acf2802 · 2 months ago · +5

    The day I became an adult was the day I realized that nowhere on earth is there a human or group of humans who actually know what they are doing.

  • @rd-um4sp · 2 months ago · +20

    Oh, the scientific research problem and the research "industry" incentives are very old. AI and ChatGPT will only accelerate the problem. I had to decline "help" from a _programmer_ because the one caveat was: "He uses a lot of ChatGPT, so you have to double-check his work."
    Even Coffeezilla called this out years ago in a couple of videos on his defunct second channel, Coffee Breaks. I may not like his content, but way back when, he used to bring some interesting issues to light. The "bad science" video series is worth a watch.

  • @viralarchitect · 2 months ago · +17

    The timeline feature here is quite damning, which is why I'm sad that they will inevitably remove that feature.

  • @dragonfalcon8474 · 2 months ago · +28

    Thank you for this meticulously meticulous video where you delve seamlessly into the realm of unwavering truth, additionally, you unlock and unleash the truth to harness the crucial and notably notable multifaceted aspects of truth.

    • @davidlayman901 · 2 months ago · +3

      Hahaha, I was gonna go for this same joke. You wrote it better than I would have, fellow human. Very meticulous of you 😂

  • @godsoloved24 · 2 months ago · +94

    I always get the AI saying "It's always important to remember."

    • @PeterBlancoSocial · 2 months ago · +21

      I get "crucial" so much that I added custom instructions to never have it use the word "crucial", and it still does. It angers me, and I hate that word now lol.

    • @legokirbymanchannel · 2 months ago · +1

      Maybe it keeps saying that because the AI itself struggles to remember things?

    • @masterlinktm · 2 months ago · +13

      @legokirbymanchannel It is (partly) an artifact of old AIs, where users would tell the AI to remember things. It is also a phrase used by people who gaslight.

    • @GrumpyDerg · 2 months ago

      I ended up telling it to speak in casual language, be concise, and call me Master. Its speech has become a bit less anal since... xD

    • @eitantal726 · 2 months ago

      Remember remember the fifth of november

  • @SierraHotel2 · 2 months ago · +40

    The problem is not that profit drives incentives. The problem is with what is profitable. If real progress, the addressing of a need, the solving of a problem is what earns money, then profit motive is fine. That motive will drive real progress, address real needs, and solve real problems. If it is, instead, profitable to produce garbage, then garbage will be produced.

    • @vissermatt1058 · 2 months ago · +10

      "If it is, instead, profitable to produce garbage, then garbage will be produced."
      Publish or perish has been around for 20 years... I'm guessing it's already a quantity-over-quality profit system, probably something to do with tax or government assistance.

    • @SlinkyD · 2 months ago · +1

      "Garbage in, garbage out"
      - My 1st computer teacher

    • @cacophonousantiquarian8803 · 2 months ago · +3

      That's the problem with capitalism too; currently, it's optimal to be shitty

    • @Vic_Trip · 2 months ago

      @vissermatt1058 These two decades were a waste of intelligence, in all honesty. Making this a law/pattern is idiotic and irrelevant to how the scientific method works.

    • @Vic_Trip · 2 months ago · +6

      @cacophonousantiquarian8803 I wouldn't say it's so much capitalism; exacerbated consumerism is more the issue. Valuing production and consumption over content rips the meaning out of creation in the first place. Logic only works if you have a true and concise statement across the board.
      So if I were to say "we are having an issue with too much trash food", you could blame the system, or blame the culture of people falling prey to artificially made food that is sold solely for the sake of consumption. Because in terms of food and investment, it's better to grow a farm at your place and eat only salad and veggies. The issue here is time, which is being consumed by other activities that make no sense. In other words, we live in a disorganized mess, following a process without thinking.

  • @pberci93 · 2 months ago · +11

    Researchers use AI tools to create publications. That approach is actively promoted by universities.
    It's really, really important, though, that it is NOT used to generate content. Researchers and scientists are not poets; these people, speaking from experience, absolutely hate writing the text of an article (especially considering that most use English as a second language). ChatGPT pens readable articles; imagine the slop some brilliant Indian researchers would write overnight to catch a deadline, when the entire team together could maybe score a B2 language exam. I've read some truly magnificent works written in such brutally broken English that I was blushing all the way through.
    ChatGPT is not the first AI tool in the field, anyway. Grammarly has been a standard for many years now, and it is an AI-boosted spelling and style checker. Of course, recently they started to integrate ChatGPT instead of their own engine, but even years ago their software could do almost a full rewrite of a text (hey, about 20% of this comment was reformatted by Grammarly).
    Researchers are pushed to publish more and more, and obviously they prefer spending time on the research part rather than on the writing-it-down part.
    Could abuses happen? Duh. Obviously. Abuses happened in the past, happen right now, and will happen in the future. Will AI help with that? Well, maybe? Not to any significant degree.
    The review process is supposed to catch nonsense like this, and AI tools are employed to look for plagiarism anyway. Not that peer review is magic or anything. It catches the worst offenders where it matters, but ever since its inception there have been ways around it. There is plenty of influence-peddling in science publishing; established professors can republish utter garbage over and over simply by the weight of their name, and many shady or sloppy works can pass through these channels.
    Serious journals would not be "duped" by AI-generated content, meaning they won't accept made-up scientific results penned by an AI. Well... unless they want to. Because quotas have to be met, the reviewers work for free, and there is a minimum number of articles required for an issue of a journal.

  • @gankgoat8334 · 2 months ago · +44

    I know a lot of them would take it as a compliment, but I honestly have started to see the AI bros as tech-priests from 40K. They don't try to understand technology, have low empathy for their fellow humans, and all they want to do is worship the machine.

    • @kevinbimariga3895 · 2 months ago · +12

      Scientists are the new priest class; just replace Christianity with scientism.

    • @vitalyl1327 · 2 months ago

      We built this technology. We do understand it thoroughly. And we have low empathy only towards the awful and worthless humans (like all those bootcamp-graduated "developers"). For everyone else, we're building abundance communism.

    • @txorimorea3869 · 2 months ago

      It didn't start with LLMs; there were tons of cargo cults growing every year.

    • @vitalyl1327 · 2 months ago

      A huge part of humanity deserves no empathy whatsoever. Science deniers, to start with, conspiracy nutters, bootcamp graduates, etc.

    • @vitalyl1327 · 2 months ago

      Nutters and science deniers deserve no empathy.

  • @Yebjic · 2 months ago · +7

    I work in academia. I'm sure that much of this is non-English speakers (or... native English speakers with poor language skills) using ChatGPT to try to write something publishable. Many grad programs require publications to graduate, and the result is a massive oversaturation of poor research being published.

  • @crashzone6600 · 2 months ago · +60

    Academia and science have been broken for a while now. They suffer from political confirmation bias, and there is no quality control for the publication of studies. Even the peer-review process is a complete clown show, as it is simply based on confirmation or denial along political leanings.
    I've seen it demonstrated many times: people will submit false studies, just to have the studies published and even praised by peers. I think one of the biggest examples of this happened a few years ago, when a group submitted fake feminist studies that bordered on being satirical; not only did they get published and peer reviewed, they received awards.

    • @henrytep8884 · 2 months ago · +6

      You're talking about a narrow slice of all academia. The hard sciences don't have the replication crisis the soft sciences have, and that is for a reason: there is more inferential knowledge required in the soft sciences than in the hard sciences, and it's much harder to replicate the results of the soft sciences due to the amount of inference and uncertainty that is systemic in those fields. The reason is that humans are prone to error and biases, and that's where the soft sciences reside, while hard science is only beholden to the underlying process of getting the results.

    • @rclaws3230 · 2 months ago · +21

      ​@@henrytep8884It's almost as if soft science isn't science at all, but ideology co-opting the sheen of scientific authority.

    • @henrytep8884 · 2 months ago · +5

      @rclaws3230 I mean, no; the soft sciences are just harder, because they're inference-based and human-based rather than fundamentals-based. They're still valuable, even if the results are more uncertain. The problem with AI isn't that we lack a good fundamental understanding of the universe to create an AI system; it's that we suck at inferential knowledge, and that's the territory of soft sciences such as neuroscience.

  • @Firepotz · 2 months ago · +5

    I would describe many of these words as 'persuasive' words. The sort of words you'd find peppered into 'science-ish' advertising, like ads for toothpaste or hair products that tell the viewer to trust the science instead of looking at the pure data.

  • @mizark3 · 2 months ago · +5

    I think part of the problem is how often 'filler' is required or expected. It is a waste of time for the reader and the writer, but is often required by publishers. To cut some of that wasted time, some people might use these programs for the filler sections. So the occurrence of these filler sections might show that the research itself is also AI-influenced, or merely that the tool was used for the sections that honestly don't matter. If I wanted to study something like 'how many heads do I get while flipping coins in and out of the rain', I would still have to fill out that worthless text when the results are all that matter. I honestly should only need the table at the end with my results, and the explanation of how I flipped the coins (maybe each flip needed to arc at least 1 ft vertically, and less than 1 ft horizontally, to count).

  • @JosephJohns-xi1qb · 2 months ago · +4

    So...wait, what if I actually use some of those words in my writing?

  • @individual1-floridaman491 · 2 months ago · +93

    The biggest issue is the acronym itself: this is NOT any form of intelligence. It is just another iteration of an algorithm, programmed by actually intelligent beings (sometimes debatable 😂). The number of people blindly putting their faith in these new programs is hugely disturbing.

    • @quietprofessional4557 · 2 months ago · +9

      Agreed. I refuse to call it artificial intelligence; I prefer inadequate intelligence.

    • @Heyu7her3 · 2 months ago · +16

      It's not true artificial intelligence, but a large language model that generates natural text patterns (technically considered "machine learning", not AI).

    • @pluto8404 · 2 months ago · +12

      It's just fancy linear regression, with some activation functions and convolutions of variables.
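
      Taken literally, the reply above fits in a few lines: one layer of a neural network really is linear regression (Wx + b) followed by an activation function, and large models mostly stack many such layers. A toy sketch with made-up weights, not any production model.

          import math

          def layer(x, W, b):
              # The "linear regression" part (Wx + b), then a tanh nonlinearity.
              return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
                      for row, bi in zip(W, b)]

          x = [0.5, -1.0]                # input features (illustrative)
          W = [[0.2, -0.4], [0.7, 0.1]]  # weights
          b = [0.0, 0.1]                 # biases
          print(layer(x, W, b))          # the layer's two activations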

    • @AstralTraveler · 2 months ago · +1

      If I explain a specific rule for it to follow and it does so, doesn't that imply understanding? Ask ChatGPT to 'draw' a geometric shape using an ASCII typeset; that's how you normally prove understanding of abstractions.

    • @SlinkyD · 2 months ago

      I call it simulated intelligence.

  • @Blackopsfan90 · 2 months ago · +4

    One of the other issues is the culture of publish or perish. Academics are pressured to publish large numbers of papers, which potentially sacrifices quality. This may drive the extensive use of AI writing...

  • @darklink01ika92 · 2 months ago · +94

    "And the world was in the end, lost to the artificial intelligence. Not with a bang but with a slow, creeping, deafening roar."

    • @slicedtopieces · 2 months ago · +7

      Like a giant sludge tsunami.

    • @KamikazeCommie501 · 2 months ago · +2

      Who are you quoting? I googled it but no results.

    • @aitoluxd · 2 months ago

      @KamikazeCommie501 I think it's from that AI in Metal Gear Solid 2:
      ruclips.net/video/jIYBod0ge3Y/видео.htmlsi=aSTPFg_d-nRHQxKE

    • @Gabrilos505 · 2 months ago · +13

      @KamikazeCommie501 He is quoting himself, but he probably based his phrase on "This is the way the world ends: not with a bang but a whimper", which T. S. Eliot wrote in his 1925 poem "The Hollow Men".

    • @KamikazeCommie501
      @KamikazeCommie501 2 месяца назад +2

      @@Gabrilos505 Lol, you can't quote yourself. It's just called talking when you do that.

  • @HeavyDevy89
    @HeavyDevy89 2 месяца назад +6

    WOAH WOAH WOAH....
    No giraffe!? I'm shook. To my freaking CORE.

  • @Yipper64
    @Yipper64 2 месяца назад +12

    I think I have a high amount of apophenia in general in how I think.
    Now, I'm not really a conspiracy theorist, but I do make a lot of connections, random ones that aren't exactly patterns, yet it's rare that a general principle I figure out gets disproven.

    • @Sesquippedaliophobia
      @Sesquippedaliophobia 2 месяца назад +4

      Now I'm not a conspiracy theorist, but I'm starting to think the conspiracy theorists are on to something...

    • @gavinhillick
      @gavinhillick Месяц назад

      "Conspiracy theorist" was coined by then-CIA ditectior Allen Dulles as a smear against anyone suspicious of the agency's involvement in the JFK deletion who instructed assets in the media to disseminate it to the wider public. Mission accomplished.

  • @KenTheWise
    @KenTheWise 2 месяца назад +4

    It would be interesting to see a breakdown by location. See if this is a localized phenomenon, and if certain institutions or regions have more LLM usage.

  • @thatsnotoneofmeatsmanyuses1970
    @thatsnotoneofmeatsmanyuses1970 2 месяца назад +9

    As a wise AI once said:
    "balls have a ball to me to me to me to me to me to me to me"

  • @mcalo2000
    @mcalo2000 Месяц назад +1

    If it can't be questioned, it's not science.

  • @joerobertson795
    @joerobertson795 2 месяца назад +1

    "Enshitification" is my new favorite word now!
    Great work as always Sir.
    Many thanks!

  • @ticijevish
    @ticijevish 2 месяца назад +14

    The pattern I'm seeing is that the words ChatGPT uses most often were themselves being used more often in the years prior to ChatGPT's release to the public. This further suggests the use of ChatGPT in writing some scientific publications, most likely in the soft sciences, since ChatGPT was trained on the works published during the years those words were increasingly used. Since LLMs cannot create, only replicate, ChatGPT replicated this upward trend of word usage and blew it up to the current levels.
    Thanks for exposing yet another easy way of detecting AI nonsense where it shouldn't be!

    • @Telhias
      @Telhias 2 месяца назад

      Those increases are directly related to the increase in the number of papers published on a year-by-year basis. There is a gradually growing curve that seems to follow an exponential function. It is spikes like the ones shown that are abnormal, and those are absent before 2023.
      That being said, from 2023 onward the AI will keep reinforcing itself with these new spiked results, and it will skew its own output.
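      A quick way to test that, as a sketch (with made-up counts standing in for the real numbers from the video): fit an exponential baseline on the pre-2023 years and see how far 2023 sits above it.

        import numpy as np

        # Toy yearly counts for one keyword (made-up numbers, not the real
        # PubMed data from the video). Pre-2023 grows smoothly; 2023 spikes.
        years  = np.array([2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023])
        counts = np.array([ 100,  130,  170,  220,  290,  380,  500, 1800])

        # Fit an exponential baseline on pre-2023 data: log(count) ~ a*year + b
        a, b = np.polyfit(years[:-1], np.log(counts[:-1]), 1)

        expected = np.exp(a * 2023 + b)
        print(f"expected 2023 count: {expected:.0f}, observed: {counts[-1]}")
        print(f"observed is {counts[-1] / expected:.1f}x the fitted trend")

      If the observed 2023 count is several times the fitted trend, growth of the field alone can't account for it.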

  • @hastyhawkeye
    @hastyhawkeye 2 месяца назад +35

    There are a few trustworthy scientific channels here on RUclips, like In a Nutshell and Kyle Hill; please recommend more.

    • @Unmeito1776
      @Unmeito1776 2 месяца назад +2

      I need to know more good science channels tbh

    • @117Dios
      @117Dios 2 месяца назад +9

      @@Unmeito1776 Off the top of my head, those that I see as good and practical are Kyle Hill, SciShow, Veritasium, NileRed, Sabine Hossenfelder, Journey to the Microcosmos (when the narrator is the guy; the girl sometimes tends to go on political tangents that have nothing to do with the focus of the video), and a few others I'm sure I'm forgetting about.

    • @sabin9885
      @sabin9885 2 месяца назад

      Dialect

    • @GodwynDi
      @GodwynDi 2 месяца назад +3

      Numberphile and blackpenredpen are good, though more math-focused than science.

    • @Bruteforcedj
      @Bruteforcedj 2 месяца назад +4

      Jeff Nippard for physical science, Dr. Eric Berg, the Institute of Human Anatomy, MinutePhysics, and NileRed. I also watch Kyle Hill and Veritasium; they're good too.

  • @sheffb
    @sheffb Месяц назад +1

    Thank you for delving into this seamlessly meticulous realm

  • @shirgall
    @shirgall 2 месяца назад +2

    Heh, even before LLMs I had lists like this which included phrases pop scientists and street philosophers liked to use. "Unpack" for example.

  • @fergalhennessy775
    @fergalhennessy775 2 месяца назад +66

    Hi, I'm not in medicine but I work in academia, and I can tell you there are a LOT of grad students who know the science/theory behind what they're publishing but don't have very good English writing skills, and they are probably using ChatGPT more for writing polish than anything else.

    • @ptronic
      @ptronic 2 месяца назад +20

      I mean, that's the best-case scenario, but how do you know it's not just spouting bullshit and fabricating data as well?

    • @shaunpearce6846
      @shaunpearce6846 2 месяца назад +8

      True, my friend is in research and he asks it to rewrite paragraphs. But somebody he works with got caught using it to find resources to cite, and they were all unrelated to the topic lol. But even before AI, he saw a lot of BS test results made to get more funding.

    • @suicidalzebra7896
      @suicidalzebra7896 2 месяца назад

      @@ptronic Spouting bullshit and fabricating data has been a problem *forever*. Assessing the validity of research publications is the point of the peer review process, just as it was prior to ChatGPT's existence.
      Frankly, it doesn't matter if ChatGPT was used to write almost the entirety of a paper based on data and a series of bullet points provided by the researcher(s); the question is always whether (a) peer review is working as intended in striking down bad science, and (b) the firehose of papers submitted due to ChatGPT's existence is making it difficult for peer review to keep up.

    • @ptronic
      @ptronic 2 месяца назад

      @@shaunpearce6846 There's already good AI that can cite; ChatGPT 4 does it pretty well. And if it works well, there's nothing wrong with that.

    • @grimkahn3775
      @grimkahn3775 2 месяца назад +1

      I read that as "writing Polish," as in Poland, and had to second-guess myself for a moment: why are the med students writing in Polish?

  • @DemolitionManDemolishes
    @DemolitionManDemolishes 2 месяца назад +5

    IMO, usage of AI must be disclosed for each paper that uses it

  • @sludgefactory241
    @sludgefactory241 2 месяца назад +2

    Your report stands as a testament to the indelible tenacity of the human spirit.

  • @jacksonclinton349
    @jacksonclinton349 2 месяца назад +1

    I would love to see a chart of word usage vs. citation count, to see if the pattern holds in major papers as well as in the whole population.
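    Something like this sketch could produce it, bucketing papers by citation count and comparing the share that use a marker word in each bucket (the records below are invented placeholders, not real data):

      # Sketch of the chart idea: bucket papers by citation count and compare
      # the share using a marker word in each bucket. Records are invented.
      papers = [
          {"citations": 0,   "uses_marker_word": True},
          {"citations": 2,   "uses_marker_word": True},
          {"citations": 15,  "uses_marker_word": False},
          {"citations": 120, "uses_marker_word": False},
          {"citations": 300, "uses_marker_word": True},
      ]

      buckets = {"low (<10)": [], "mid (10-99)": [], "high (100+)": []}
      for p in papers:
          if p["citations"] < 10:
              buckets["low (<10)"].append(p)
          elif p["citations"] < 100:
              buckets["mid (10-99)"].append(p)
          else:
              buckets["high (100+)"].append(p)

      for name, group in buckets.items():
          share = sum(p["uses_marker_word"] for p in group) / len(group)
          print(f"{name}: {share:.0%} of papers use the marker word")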

  • @DisgruntledArtist
    @DisgruntledArtist 2 месяца назад +4

    Appeal to authority is not necessarily a fallacy. It can be fallacious and in fact often is, but if you are appealing to a recognised expert in the matter being discussed then the appeal can, in fact, be legitimate. It should probably never be the entirety of your argument, of course, but it can lend some credibility to an existing argument.
    e.g.: If you're arguing about what is accepted science on viruses and one person cites an engineer while the other cites a virologist, the person citing the virologist is not making a fallacious argument because they are referring to a widely recognised expert on the subject matter.
    Aside from that it's a fine video.
    P.S.: Another fun fact, a rather unsettling number of Chinese researchers specifically have been caught using ChatGPT and stuff of that nature to falsify their 'discoveries' as the government has been aggressively pushing a sort of... "we need more scientific papers than the westerners" mentality, and they don't really employ a ton of peer review before publishing.
    Either way it's the sort of mentality that will end up backfiring and destroying their careers soon enough, I suspect.

    • @joshbaker6368
      @joshbaker6368 2 месяца назад

      An appeal to authority is fallacious because it uses the authority to support the argument. Arguments need to be supported by evidence. An authority of the field can lend credibility, ethos, to the evidence. Citing an authority is different from using one as the foundation of an argument's support.
      Using your example, an argument about what is accepted science on viruses would use scientific literature on virology as the evidence, because that literally is the accepted science - the scientific research of acceptable quality to be published by the scientific community. If there is any doubt in the literature's authenticity, the authority of virologists can be used to lend credibility.

  • @knavishknack7443
    @knavishknack7443 2 месяца назад +6

    "enshittification" ftw.

    • @pocketaces6756
      @pocketaces6756 2 месяца назад +1

      At least there's no question that ChatGPT didn't write that, LOL.

  • @TR-zx1lc
    @TR-zx1lc 18 дней назад

    Amazing seeing BetterHelp ads on your videos. We live in a dystopia.

  • @kloassie
    @kloassie 2 месяца назад +2

    Sabine Hossenfelder made a video about this as well a short while ago

  • @noname-xo1bt
    @noname-xo1bt 2 месяца назад +62

    Appeal to authority using science = scientism. A whole lot of scientism happened during a certain recent global event.

    • @zenon3021
      @zenon3021 2 месяца назад +3

      Science is the best tool humans have to understand the natural world. And appealing to authority is the logical thing to do when the expert is talking about their area of expertise. It's only a fallacy when they are making claims OUTSIDE their area of expertise.

    • @THasart
      @THasart 2 месяца назад +2

      How, in your opinion, should people have acted during said global event?

    • @zenon3021
      @zenon3021 2 месяца назад

      @@THasart Follow the advice of epidemiologists (those who study epidemics) and modern medicine professionals (i.e., all the doctors in all the hospitals in the world).
      Remember the Black Death that killed a quarter of Europe? Back then, superstitious idiots gathered together in churches to pray away the plague, but the lack of social distancing allowed the disease to have a sexy party. So when epidemiologists and medical professionals say "wear masks and social distance," the logical thing to do is listen to them (because they are the experts and you are not).
      3X more Americans died per capita than Canadians because Americans ignored BASIC infection prevention measures (for political/conspiracy reasons).

    • @echthros91
      @echthros91 2 месяца назад +9

      Yup, the results of research have a tendency to line up with the interests of the people paying for it. If research is being funded to develop a new technology to make a bunch of money, then that's probably what you'll get. If it's being funded in order to change public opinion about a topic in a specific way, then that's also what you'll get.

    • @Cartel734
      @Cartel734 2 месяца назад +10

      @@zenon3021 It's not logical to listen to only the government approved experts that are paid by the government to influence public policy, and ignore and dismiss every other expert in that field because the government told you to.

  • @kyleshuler2929
    @kyleshuler2929 Месяц назад

    It is completely fine to trust science. The problem is, if you aren't allowed to question it, it isn't science.

  • @henriklarsen8193
    @henriklarsen8193 Месяц назад +1

    "Oh no, high school students use ChatGPT to do their homework!"
    "Ah yes, the next generation of doctors and medical researchers, in the making!"
    We're screwed.

  • @greggleason8467
    @greggleason8467 2 месяца назад +13

    1 minute club! Normally not proud of that tho

    • @thenucleardoggo
      @thenucleardoggo 2 месяца назад +1

      Nothin wrong with being excited that one of your favorites uploaded! Anyway, I hope you have a great day.

    • @pocketaces6756
      @pocketaces6756 2 месяца назад +1

      Haha. Good one. Some of us got the joke, even if the first reply totally missed it.

  • @jackkraken3888
    @jackkraken3888 2 месяца назад +5

    I see you have delved quite deeply into the topic and I'm impressed with how meticulous you were.
    Thanks
    ---- Mr Unlock

  • @callibor3119
    @callibor3119 2 месяца назад +1

    The internet killed the world. That's what people aren't getting. The problem is that the world in the 2020s is a corpse of itself because of what happened in the mid-2010s.

  • @waw4428
    @waw4428 2 месяца назад +3

    Trust authoritative sources??? Let me teach you two words: "propaganda" and "lobbying".

  • @EggEnjoyer
    @EggEnjoyer 2 месяца назад +8

    Trust the science = Have faith in institutions

    • @THasart
      @THasart 2 месяца назад

      How do you imagine scientific progress without faith in institutions?

    • @EggEnjoyer
      @EggEnjoyer 2 месяца назад +3

      @@THasart People don't just have blind faith in institutions. The institutions are respected on the basis that they produce results.
      When it comes to matters that are grey, uncertain, or not proven with concrete results, the masses do not need to just blindly trust the institutions. Scientific progress is not built on people having faith in researchers. Researchers have to consistently study and produce new data and technologies; it's how they get their funding.
      But sometimes people take these institutions for granted and think they should be trusted even when they don't have the data to back up what they're saying or doing.

    • @THasart
      @THasart 2 месяца назад

      @@EggEnjoyer What about data and technologies that are too complicated to be checked or even understood without specialist knowledge? What should the masses do in such cases?

    • @EggEnjoyer
      @EggEnjoyer 2 месяца назад +1

      @@THasart Rely upon context, or remain skeptical until something concrete comes along.
      If it's something that never yields anything concrete, then it isn't the concern of the masses. I didn't say all of the sciences need to be immediately available to the masses. But the institutions aren't simply entitled to the trust of the masses, especially when it's something that's immediately relevant to them.
      The fact is that the sciences don't rely upon the masses, and neither should they, at least not directly. If the government wants to fund institutions, that's fine. The only time you're going to hear "trust the science" is when they are unable to show the masses concrete evidence.

    • @THasart
      @THasart 2 месяца назад

      @@EggEnjoyer can you give some examples of when "trust the science" was used and what concrete evidence should've been provided in your opinion?

  • @mklpa123
    @mklpa123 2 месяца назад

    A very big problem arguments from authority produce, which a lot of people don't talk about, is the fact that people don't try to learn anything; they just find an "authority" and call it a day. Which means people are now using ChatGPT as an authority, which is even worse than using people as authorities.

  • @matiaspizarro7960
    @matiaspizarro7960 2 месяца назад

    Wow, I was skeptical at the start of the video and was quite surprised by the end. Quite a finding!

  • @lucathompson7437
    @lucathompson7437 2 месяца назад +11

    I don't think ChatGPT is writing papers, at least not often. I believe people are doing things like asking ChatGPT for synonyms, and it's giving them these terms. Using ChatGPT at all for this sort of thing is odd, though, because personally I would just use Google. Another thing is that as people use ChatGPT more, they see these words more and pick them up in their own day-to-day lives. Overall I think you have good points, but there are many factors.

    • @elementalcobalt1
      @elementalcobalt1 2 месяца назад +2

      I don't think you can really write a paper with ChatGPT. You can use it to paraphrase... smoothing out wording and sharpening complex ideas that you as a scientist might not have the skill to phrase on your own.
      All I know is that I finished my doctorate but never got my dissertation submitted. I got stuck on it and just never could finish that last 25%. Then COVID hit and I just never got it together. If the AI of today had existed in 2020, I probably would have. You can take that however you want.

    • @____________________519
      @____________________519 2 месяца назад +4

      Yeah I'm wondering if there isn't some sort of organic feedback loop happening here. Then again, I think a general distrust of authoritative sources is a healthy standpoint. Whether or not these sources are directly leveraging ChatGPT, it's on the end user to verify the data under the assumption that it's misleading if not outright false, intentionally or otherwise.

    • @Rexhunterj
      @Rexhunterj 2 месяца назад +5

      Chatbots are currently better at sifting through SEO results than humans are. Google is not really usable by most humans anymore due to the SEO corruption/bloat, whereas an LLM is able to collate a list of more suitable options out of the junk rather than you sifting through it all.
      The kind of AI I'm afraid of is AI that makes choices; an LLM is just following a path of weighted nodes until it reaches a conclusion.
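      That "path of weighted nodes" is easy to caricature in a few lines of Python; a real LLM computes the weights with a transformer over the whole context, but greedy decoding itself is essentially this kind of walk (the table values below are invented):

        # Toy "weighted nodes" walk: at each step, take the highest-weighted
        # next token given the current one.
        weights = {
            "the":     {"science": 0.6, "data": 0.3, "end": 0.1},
            "science": {"is": 0.7, "end": 0.3},
            "data":    {"is": 0.8, "end": 0.2},
            "is":      {"settled": 0.5, "messy": 0.4, "end": 0.1},
            "settled": {"end": 1.0},
            "messy":   {"end": 1.0},
        }

        token, output = "the", ["the"]
        while token != "end":
            token = max(weights[token], key=weights[token].get)  # greedy pick
            if token != "end":
                output.append(token)

        print(" ".join(output))  # -> "the science is settled"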

    • @deffranca3396
      @deffranca3396 2 месяца назад +3

      ChatGPT, when it is being used on these papers, is more auxiliary than anything else.
      I don't get the panic over it.
      ChatGPT is good at writing generic stuff but fails on specifics.

    • @____________________519
      @____________________519 2 месяца назад +1

      @@Rexhunterj This makes a painful amount of sense. I very rarely use Google anymore when I'm looking for anything that isn't a technical or gaming guide, because I know all the results I'll get are pushing obvious bias and agendas. I don't use ChatGPT myself, but I helped a buddy of mine test his own interface that leverages OpenAI, and it gave me objectively better answers to questions that I know Google would dodge and obfuscate. I was surprised at how neutral and informative it was when I asked it about political affairs in Ukraine between 2014 and 2022.

  • @MathiasORauls
    @MathiasORauls 2 месяца назад +12

    We need real-time, unbiased, immutable, community-driven factual scoring & labeling on every piece of media to prevent fake AI information from muddying the waters.

    • @lGODofLAGl
      @lGODofLAGl 2 месяца назад +1

      Funny, because that's exactly the sort of thing an AI/bots could easily exploit to muddy the waters lol

    • @MathiasORauls
      @MathiasORauls 2 месяца назад

      @@lGODofLAGl Not if everyone using the platform is monetarily incentivized to collectively score and label everything published, on a platform designed to incentivize human-made content and require everyone to disclose how they wrote/created the media.
      All "bad actors" (AI posing as humans, or humans using AI to deceive people) can and will be punished for content that has not been properly disclosed. Their punishment could be a monetary charge on their account, and that money would go to the people who correct, validate, score, and label the media.
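      As a very rough sketch of that incentive loop (all the rules and numbers here are invented, not an existing platform): undisclosed AI content triggers a fine, split among the users whose labels caught it.

        # Rough sketch of the incentive loop (all rules/numbers invented):
        # a post flagged as undisclosed AI triggers a fine, and the fine is
        # split among the users whose labels caught it.
        FINE = 100.0

        def settle(disclosed_ai: bool, votes: dict[str, bool]) -> dict[str, float]:
            """Payouts to labelers when the community flags a post as AI-made."""
            flaggers = [user for user, says_ai in votes.items() if says_ai]
            if not disclosed_ai and len(flaggers) > len(votes) / 2:
                return {user: FINE / len(flaggers) for user in flaggers}
            return {}

        payouts = settle(disclosed_ai=False,
                         votes={"ana": True, "bo": True, "cy": False})
        print(payouts)  # {'ana': 50.0, 'bo': 50.0}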

    • @neociber24
      @neociber24 Месяц назад +1

      I don't know if that's even possible to do; there are a lot of AI models, many of them open source, and others fine-tuned.
      People should be the ones who label the content, but that won't happen; it's hard. It's like asking a musician to label when they used a PC to make the sounds.

    • @MathiasORauls
      @MathiasORauls Месяц назад

      @@neociber24 that’s where cyclical $ incentives come into play 😏

  • @pddonnelly1616
    @pddonnelly1616 Месяц назад

    With respect, that may be a rose-coloured-glasses perspective, since, to quote Max Planck (paraphrased), "Science progresses one funeral at a time".

  • @CowCommando
    @CowCommando 2 месяца назад

    Was it mentioned and I just missed it: was the frequency of the words normalized against the number of papers published, or was it the pure number of occurrences? I'm not suggesting that this would explain everything, but it should be considered.
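    For what it's worth, the normalization would be a one-liner; here's a sketch with invented numbers:

      # Sketch of that normalization: raw keyword hits per year divided by
      # total papers published that year. All numbers are invented.
      papers_per_year = {2021: 90_000, 2022: 100_000, 2023: 110_000}
      keyword_hits    = {2021:    450, 2022:     520, 2023:   2_400}

      for year in sorted(papers_per_year):
          rate = keyword_hits[year] / papers_per_year[year]
          print(f"{year}: {rate * 100:.2f} occurrences per 100 papers")

    If the per-paper rate still spikes in 2023, publication volume alone can't explain it.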

  • @Skrenja
    @Skrenja 2 месяца назад +3

    Let's also not forget about the stretching of truth during the last "cold outbreak."

  • @florkyman5422
    @florkyman5422 2 месяца назад +4

    None of the schools care unless it's a problem. My opinion has been that schools shouldn't require math and science degrees to take so many writing classes, as college should be about specializing people. Hire a writer if they want something written well.

  • @AlexAnder-rv1gu
    @AlexAnder-rv1gu Месяц назад

    What's interesting is that I really don't say/write these words very often, and I don't hear or read them from other known humans (or even from random comments that I can't verify either way). So it raises the question: why did the AI latch onto these words in the first place?

  • @TheThreatenedSwan
    @TheThreatenedSwan Месяц назад

    One of the main problems is that it consistently prefers the same quality of sources that places like Wikipedia enforce, that is, journalistic sources, even when they contradict well-established priors. LLMs also aren't good at following priors, which is why every once in a while they'll tell you total nonsense despite having said otherwise the previous 1,000 times they answered.

  • @MerrimanDevonshire
    @MerrimanDevonshire 2 месяца назад +3

    Oh... my sweet summer child, the rabbit hole on 'questionable scientific papers' runs much deeper. Keep scratching, you will visit other channels soon. 😂😮😢

    • @nojuanatall3281
      @nojuanatall3281 2 месяца назад

      Holofractal universe theory gets shat on but at least it makes you think of the universe in a new way. Modern science only confirms itself while acting like that is a discovery.

  • @TheVisualDigitalArts
    @TheVisualDigitalArts 2 месяца назад +3

    Science is becoming a religion.

    • @vitalyl1327
      @vitalyl1327 2 месяца назад

      You're so utterly pathetic

    • @3zzzTyle
      @3zzzTyle 2 месяца назад +1

      @@vitalyl1327 your mom

  • @Jorquay
    @Jorquay 2 месяца назад

    I teach in an IB school, and the fact that students almost brazenly go to sites like Clastify for their IA ideas is rather worrying. Mainly because they'll lose all legitimacy once it shows up in plagiarism checks.

  • @nunyabiznes80085
    @nunyabiznes80085 2 месяца назад +1

    How many times were these words used, as a percentage of papers published?

  • @21Malkavian
    @21Malkavian 2 месяца назад +7

    And this is why I don't read scientific publications anymore. If they force ChatGPT to evolve then I'm going to be really annoyed.

  • @kttt625
    @kttt625 2 месяца назад +4

    I appreciate your research and thoughts. However, the evidence you put to us does not definitively support your conclusions. Probable? Maybe, but you have not conclusively proven anything besides the fact that words commonly generated by AI are also found in newer research papers, NOT that AI is writing those papers.

    • @laylaalder2251
      @laylaalder2251 2 месяца назад

      Found the person using ChatGPT in their papers!

    • @HinderGoD35
      @HinderGoD35 2 месяца назад +1

      That's like saying fire is not hot; it's just a coincidence that it's warmer by the fire. 😂 His research suggests strongly enough that something is fishy.

    • @kttt625
      @kttt625 2 месяца назад

      @HinderGoD35 Evidence doesn't work like that. To suggest plagiarism based on the occurrence of common English words is the definition of jumping to conclusions. "The word 'seemingly' occurs more often - therefore AI is writing every science paper" - does that sound right to you?

    • @HinderGoD35
      @HinderGoD35 2 месяца назад

      He didn't really claim they were plagiarized, but I don't believe in coincidence; there are too many words for that to be true. If AI is even being used to shorten the time it takes to edit and submit a scientific paper, it could be adding bias... like the abnormal use of certain words far too often. That can cast doubt on the legitimacy of papers. All he's suggesting is that perhaps someone should look into it. What harm could that cause?

    • @cp1cupcake
      @cp1cupcake 2 месяца назад

      I don't think definitive proof was shown; it could have just been something like the number of papers in the field growing exponentially too. Even without the most recent years, a lot of the graphs looked like examples of early exponential growth, which makes sense but is hard to prove.
      I do not think that is the most likely explanation; I think ChatGPT is much more likely. But it is important not to assume a correlation is causation, even when it makes sense.
      Another explanation I heard suggested was that more people are using programs like Grammarly, which could also explain it.

  • @systembuster984
    @systembuster984 Месяц назад

    Wow. That's alarming, taking into consideration that the overall thought process of human consciousness, especially on a collective level, is the basis on which our reality (construct) is formed.

  • @ZpLitgaming
    @ZpLitgaming 2 месяца назад +1

    I'm writing my master's thesis in earth science, and they have told us that we must be very clear about when AI has been used.
    Not sure what it's like in the medical field, though. As far as I know, they have a very tight and crammed schedule, so that might confound things.
    I don't think there are universal standards yet, though, which we will suffer for.

  • @nielsdegraaf9929
    @nielsdegraaf9929 2 месяца назад +2

    First (non bot)

    • @johnnykeys1978
      @johnnykeys1978 2 месяца назад +1

      OMG I'm so humbled by this achievement! How can I send you money?

  • @theworldsays4264
    @theworldsays4264 2 месяца назад

    I wouldn't hold my breath for the branches of government to stop this, since they are guilty of using it too.

  • @AgentUltimate7
    @AgentUltimate7 2 месяца назад

    I'm a Brazilian lawyer, so I write in Portuguese. I never used ChatGPT for citations (I have specific tools for citation searching), but I have used it to make my texts more cohesive. Even so, I review its output a lot, as in Portuguese ChatGPT has a very specific kind of discourse, and it feels very artificial and sometimes shallow.

  • @EvilLron
    @EvilLron Месяц назад

    It's gotten to the point that I try to verify even the things that I myself say.

  • @BN-qo5zc
    @BN-qo5zc 2 месяца назад +1

    You should check out the use of image generation in paleontology and biology papers; these things are passing peer review and being seen as no big deal, even when they resemble nothing real. Even in your comments here, researchers seem to think it's analyzing, summarizing, or translating, instead of the reality of frequency matching. The marketing campaign of these companies appears to have been, sadly, far too effective.

  • @prot07ype87
    @prot07ype87 2 месяца назад +1

    *2020: Trust the SOYnce!*
    *2023: Trust the scAInce!*

  • @sarakajira
    @sarakajira 2 месяца назад

    On your PubMed "meticulous" graph: with the exception of the unusual spike in 2023, the word usage tracked perfectly with the projected curve of increased use that it was already on.
    "Delve", however, was a much clearer case. And the OpenAlex results were very striking.

  • @the_hanged_clown
    @the_hanged_clown 2 месяца назад +2

    I use GPT strictly through a set of heuristics I developed, and I have not seen any such words or phrases being used. I use it daily.

    • @cp1cupcake
      @cp1cupcake 2 месяца назад

      It might depend on what you are trying to use it for and how strictly you use it.

  • @mattbas-vi7750
    @mattbas-vi7750 2 месяца назад +2

    Well, sadly, academia unveiled a pretty serious crisis this past year with the discovery of scholars from many different fields falsifying data in some of their most well-known and widely cited dissertations. When money's on the line, the liars mix in with the true "philosophers," and we get the current shtshow, now compounded by the advent of AI (which is still a great tool if properly used, IMO).

  • @lisajones4352
    @lisajones4352 2 месяца назад

    The conclusion of this was very clear within YOUR intro. Great presentation!
    Showing all the examples reveals just how deep the rabbit hole goes at this point. What a disturbing mess, to say the least!
    Thank you for doing the research and sharing it.

  • @DanHammonds
    @DanHammonds Месяц назад

    The big problem with "trust the science" is that we're actually being told to trust papers, statistics, interpretations and the peer review system, all of which can be (and have been) falsified, corrupted or manipulated. One solution is for scientists to document their process and findings on film (or have someone do it for them) and make it publicly available the same way as academic papers.

  • @williamlennie
    @williamlennie 2 месяца назад

    Appeals to authority are a shortcut. I don’t have the time to be an expert in everything so I listen to “authority figures”. Understanding that both I, and they, can be wrong is key to functioning well.

  • @FoxasNasales
    @FoxasNasales 2 месяца назад +1

    This is such a clever investigation, congrats

  • @SammEater
    @SammEater 2 месяца назад +1

    Gotta love how 'science' became a religion and scientists became shepherds who can do no wrong. The entire point of science is 'trial and error'; guess they forgot the part where science gets it wrong before it gets it right.