THIS Got Through Peer Review?!

  • Published Nov 28, 2024

Comments •

  • @PeteJudo1 · 8 months ago · +160

    Go to ground.news/pete to stay fully informed. Subscribe through my link to get 30% off unlimited access this month only.

    • @esecallum · 8 months ago

      You can buy peer review for $10,000 to $50,000. That is why so many fake or crap drugs get through... remember Vioxx

    • @rebeccachambers4701 · 8 months ago · +2

      Have you tried ChatGPT to find out what all those things are?

    • @Eduardo_Espinoza · 8 months ago · +1

      The algorithm has blessed me with this vid, and I thought it was an April Fools' joke! XD

    • @dckatyx9577 · 7 months ago · +1

      Ground News is biased.

    • @MetrioGaming_ · 7 months ago

      I didn't even realize the ad was an ad; well done!

  • @tayzonday · 8 months ago · +4599

    I so vividly wish we could know the peers who reviewed this 😆😂

    • @Enhancedlies · 8 months ago · +127

      the real OG

    • @elperronimo · 8 months ago · +281

      The peers were other AIs

    • @desmond-hawkins · 8 months ago · +251

      VICE wrote about it and talked to one of the two reviewers (named in the article), who said it was not his responsibility to vet the obviously incorrect images. It doesn't look like they were able to get in touch with the other reviewer, though.

    • @EmeraldMara85 · 8 months ago · +42

      We only know of one reviewer, based in the US; the second, based in India, is unknown (as stated in the Vice article).

    • @ASapientBeing · 8 months ago · +61

      @desmond-hawkins that's so stupid, isn't it

  • @nyx6903 · 8 months ago · +2499

    I saw a paper that had the words: “as I am an AI language model” in its conclusion.
    Nice going Elsevier.

    • @seanvogel8067 · 7 months ago · +145

      It's cool… as long as they listed ChatGPT as a co-author

    • @wladynoszhighlights5989 · 7 months ago

      @seanvogel8067 As main author

    • @kellapeer · 7 months ago · +66

      Did you try reporting it? Or cross-referencing the findings?

    • @AnaICarnaval · 7 months ago

      Scientist identifies himself as AI language model, you bigot. Live and let live!

    • @batcactus6046 · 7 months ago · +4

      its

  • @seanrrr · 8 months ago · +1309

    I took a look at the "apology" from Frontiers. They noted that one of the reviewers raised concerns about the images being AI generated, yet the authors never responded. Then Frontiers "failed to act on the lack of author compliance with the reviewers' requirements", and they're looking into how this happened. I think we all know how this happened.

    • @Krilium · 8 months ago · +327

      So the peer reviewer didn’t approve the paper, and the ‘reliable source’ (Frontiers) accepted the paper anyway? Their reputation is ruined. Why even accept peer reviews if you’re going to cut them out anyways?

    • @Ebani · 8 months ago · +110

      @Krilium This is why publishing in a scientific journal today doesn't really mean anything anymore.

    • @Joe_Yacketori · 8 months ago · +32

      "I think we all know how this happened." No, I don't know. What are you insinuating?

    • @seanrrr · 8 months ago · +145

      @Joe_Yacketori They intentionally ignored the reviewers to push another publication out for profit. Frontiers charges anywhere from $700 to $9,000+ in publishing fees (depending on the journal; they have hundreds). They'd rather publish a subpar article and take the cash than be stricter and aim for higher prestige.
      Now, I wouldn't call Frontiers a "predatory" publisher, but they're known to be less reliable than others for this exact reason.

    • @barbaravenuh · 8 months ago · +16

      Yes, we all know how it happened. Money money money 🎶

  • @ivacheung792 · 8 months ago · +2040

    I've been trying and failing to get a legitimate paper on legitimate research with legitimate text published since 2020. The fact that some people can just skate through peer review with zero scrutiny really grinds my gears.

    • @FlipTheBard · 8 months ago · +181

      It's always like that. With _your_ stuff it's as if they're very rigid and "by the book", but with someone else's "work" it's as if they're given a lot of leeway.

    • @aniksamiurrahman6365 · 8 months ago · +230

      As long as the "publish or perish" culture continues and the academic world revolves around papers, this will keep happening. Academia should find something else to evaluate researcher performance. Like whether what they claim can be reproduced, etc.

    • @argfasdfgadfgasdfgsdfgsdfg6351 · 8 months ago · +24

      Has anyone actually heard of "Frontiers" before? I hadn't.

    • @jackognibene2387 · 8 months ago

      @argfasdfgadfgasdfgsdfgsdfg6351 I have, and not positively; they've been caught up in a number of scandals over the years

    • @itsgonnabeanaurfromme · 8 months ago

      @argfasdfgadfgasdfgsdfgsdfg6351 it's a very well-known journal

  • @seanh0123 · 8 months ago · +290

    The horngus of a dongfish is attached by a scungle to a kind of dillsack (the nutte sac), 77)

  • @scepticalchymist · 8 months ago · +753

    The untold problem is that all these people doing bad science are often rewarded for it by the system, because they can produce more sh** papers, and that is what counts nowadays. Doing slow but good science does not pay off.

    • @WobblesandBean · 8 months ago · +47

      That's what happens when the only ones getting grants are the ones who can spit out the most output.

    • 8 months ago · +48

      Rewarded, and with no serious punishment when caught. You can't give me 100 every time I lie, take 10 when I'm caught, and expect me not to lie. Even if it were equal, I could obviously spend and come out ahead, or live better until then. There have to be real costs.

    • @karlwalton4953 · 8 months ago · +35

      Agreed. Novelty takes priority and accuracy is secondary. The last paper I wrote was a direct debunk of another engineering paper. I was dismayed to find that the journal which published the original nonsense refused to publish my response on the grounds that it lacked novelty!!

    • @t.m.2415 · 8 months ago · +6

      Capital reigns

    • @thewhitefalcon8539 · 8 months ago · +7

      It's just like products. Companies making bad products that break easily sell more products and get more money.

  • @krazoe6258 · 8 months ago · +912

    Speaking as someone who has published in Frontiers: it's a garbage publisher. You can't take it seriously; at the very least, be very sceptical.

    • @arnoldvezbon6131 · 8 months ago · +32

      Can you name one that can be taken seriously?

    • @Yura135 · 8 months ago

      @arnoldvezbon6131 Cell, Nature, Reviews of Modern Physics, New England Journal of Medicine, The Lancet...
      There are plenty. Look for the top influential journals in each field. Look for ones older than 50 years yet still in print. Go to a top university, look at their top researchers/Nobel prize winners, and look at the citations they show off on their website: those are great journals.

    • @WobblesandBean · 8 months ago · +10

      May I ask what you published? I'd like to read it.

    • @hugosiu656 · 8 months ago

      @arnoldvezbon6131 Nature and Cell are pretty good. And he is right, Frontiers is kind of a joke

    • @BlackCat69909 · 8 months ago · +61

      Came here to say that. Frontiers is, at least in my field, considered borderline predatory. Comparable to MDPI or Hindawi.

  • @brootalbap · 8 months ago · +1240

    What we learned: when your paper gets rejected from 10 good journals, just send that garbage to Frontiers. Frontiers prints all the garbage, apparently without any of the peer review.

    • @arodvaz1528 · 8 months ago · +64

      I think someone missed the notorious Elsevier peer-reviewed article....

    • @lurifaks92 · 7 months ago · +14

      @arodvaz1528 Elsevier is not a journal; they are a publisher.

    • @arodvaz1528 · 7 months ago · +7

      @lurifaks92 Who said Elsevier is a journal? Have you read the article I mentioned? Btw, if services like Elsevier charge libraries so much for their subscriptions, their content had better be good, don't you think?

    • @lurifaks92 · 7 months ago · +14

      @arodvaz1528 You are attributing things at the journal level to the publisher level, so you are saying Elsevier is a journal; alternatively, you are just not familiar with the process.
      Link the article for me, please.

    • @arodvaz1528 · 7 months ago · +1

      @lurifaks92 I'm referring to the service that links to the article, which is also responsible for its DOI; that is not an incidental thing. The journal is not the most prominent name on the article, unfortunately.

  • @Draconisrex1 · 8 months ago · +247

    My wife is a scientist who does peer review. She's very dedicated to it, and it's given her ideas. Mostly along the lines of "this paper is BS, and now I can do a study that shows this study, which I won't be able to stop, is wrong, and I will get a good paper out of it." That's happened three times now. Though I have noticed nobody has asked her to peer review in about 5 years...

    • @mikedavison4313 · 7 months ago · +40

      And... that's the whole problem with the academic system. Dedicated peer reviewers don't get invited for more reviews, and honest researchers struggle to get research grants.

  • @jerrycaughman6324 · 8 months ago · +3833

    Science takes yet another massive L. As a scientist I am once again hanging my head. What a joke peer review has become.

    • @ChuckWatson · 8 months ago

      I too weep for the loss of integrity in science. Stuff like this rat thing is super obvious, but how much is out there that is never discovered? What is going on in the scientific community, and for how long? Faking data etc. is totally undermining what science is supposed to be and do. A real shame.

    • @Imdan92 · 8 months ago · +131

      How do you know it hasn't always been this bad?

    • @OzixiThrill · 8 months ago

      Look up the holy texts of anti-vaxxers, if you want a bit of a reality check.
      That garbage made it through peer review, got published and did untold amounts of damage to medical science, just because some greedy little shit wanted in on the vaccines market without the actual skills to compete in it.
      And that shit was going on 50 years ago.

    • @ThatGuyz82 · 8 months ago

      @Imdan92 It may have been bad. But before about 30 years ago, it was not treated as a "get rich quick" scheme by universities.
      My grandfather and uncle were accounting professors. They remember the days when you could become a professor to teach, rather than purely as a publication machine to generate that federal journal cash.

    • @20chocsaday · 8 months ago · +101

      These are your peers. According to the magazine.

  • @JapanPop · 8 months ago · +255

    This is exactly why I take all peer review requests and WORK HARD on those projects, just because I believe in my discipline and want good scholarship to continue. The last paper I reviewed, I couldn't keep your channel out of my mind as I checked everything as carefully as possible.

    • @kitefan1 · 8 months ago · +9

      Thanks for your integrity and effort.

    • @Valgween · 8 months ago · +7

      thanks for being one of the good guys.

    • @ChappalMarungi · 7 months ago · +2

      Thank you for your integrity and principles

    • @ColdHawk · 7 months ago · +11

      It's hard to overestimate the value of a good cautionary tale, no? Having been fooled early in my career by ghostwritten papers and marketing strategies, then seeing some terrible outcomes, I became extremely hard-nosed. There is good reason to cultivate skepticism.

    • @JapanPop · 7 months ago · +2

      Indeed. And for the more senior scholars who fudge their work: how can they justify the risk? The risk to credibility and career alone should make them shudder, to say nothing of the ethics of the act!

  • @robertlewis6915 · 8 months ago · +1565

    A lot of people don't understand how ChatGPT style language models work; they think it actually knows and comprehends what it's talking about.

    • @roobs4245 · 8 months ago · +185

      They have the same mistaken assumption about humans.

    • @omp199 · 8 months ago · +103

      @roobs4245 Tell me about it. Whenever people point to evidence of ChatGPT not being an AGI because it has no real understanding of what it is saying, I just think, "Well, have you tried holding humans to that standard?"

    • @wilburdemitel8468 · 8 months ago

      @omp199 you're a clear example of it!

    • @exosproudmamabear558 · 8 months ago

      LLMs have a certain capacity for reason. They are not randomly generated shit like image-generation machine learning models; those have no resemblance of intelligence, unlike LLMs, which actually understand most of what they say but have limited capabilities. This is why it is one step toward AGI, but it is not AGI.
      ChatGPT has no image-generation capabilities of its own; it just knows how to use image generators, like it can use a lot of other tools, just like humans. So I can do what ChatGPT does with DALL-E 3 myself.

    • @lukeherbst7931 · 8 months ago · +120

      @omp199 The better argument would be "which one has the capability and/or potential to understand its requested subject matter"

  • @SoiBoi_Kelda1059 · 8 months ago · +204

    $100K to publish in Nature, and not one single dollar goes to the peer reviewers

    • @julius4858 · 8 months ago · +28

      It costs money?! I thought they made money by charging for access

    • @SoiBoi_Kelda1059 · 8 months ago

      @julius4858 Money from both ends. At least the high-end journals

    • @djdjdjshhsuss3941 · 8 months ago

      @julius4858 That's why they're geniuses: they charge both ways.

    • @VeranikaKananovich · 8 months ago

      @julius4858 Unfortunately, not only do authors need to pay to publish their articles, they also don't get any of the money that readers pay for access

    • @oiytd5wugho · 8 months ago

      @julius4858 It costs money to publish. You pay to access, and if it's open-access, the authors had to pay an APC. I think a lot of publishers still charge for images and graphs too. No authors, reviewers, or universities receive any money in this process; the publisher pockets everything

  • @jaykay9122 · 8 months ago · +325

    It's not peer review that is the problem, it's the editors of the journals. It happens so often that the reviewers raise concerns but the editors of certain journals just do not care. For them it's the number of papers at the end of the year, not their quality

    • @arnoldvezbon6131 · 8 months ago

      Scientism is dying a slow slow death.

    • @itsgonnabeanaurfromme · 8 months ago · +7

      Not really. If the peer reviewer rejects the paper, it won't push through.

    • @arnoldvezbon6131 · 8 months ago

      @itsgonnabeanaurfromme You mean peer selector. There is no peer review, only selection.

    • @nathangamble125 · 8 months ago · +7

      @arnoldvezbon6131 There is peer review in some parts of academia, it's just not consistent.

    • @fzigunov · 8 months ago

      Funnily enough, most associate editors are also volunteers...

  • @thomcarr7021 · 8 months ago · +191

    The whole integrity issue of "studies" has been questionable from the start. A "study" of the effects of Olubolu extract shows mice live much longer than the control group given a placebo. They don't mention that the control-group mice were twice as old at the start of the study.

    • @WobblesandBean · 8 months ago · +46

      That's just dirty. But I bet whoever sells olubolu extract paid them handsomely for those results.

    • @lukeherbst7931 · 8 months ago · +21

      It's really easy to doctor results to push clients in a certain direction when your target audience doesn't understand the content of your study (realistically speaking, product-based studies are unlikely to be read beyond the conclusion).

    • @agnidas5816 · 7 months ago

      You cannot feed a placebo to rats.
      They don't understand human speech, etc.
      You don't even know what a placebo is...

    • @darkzeroprojects4245 · 5 months ago

      And it's why I hardly listen to people anymore.
      How am I to trust people's word, or these "experts'" research, when this crap has been a thing?
      And yet I'm told off for "rejecting science" or "progress" or whatever people spew.
      I am so done with everyone's BS.
      No wonder some people stay in a bubble; this crap is ridiculous. You can't have an open mind without potentially taking in absolutely false studies and research.

    • @peasant502 · 5 months ago · +1

      This is still an issue of peer review. Any good peer reviewer would make sure all methodological details are included, which would make this fakery obvious.
      The idea of a study isn't necessarily flawed; really, this all falls to peer review and issues in methodology

  • @Sky-bx9mn · 8 months ago · +87

    9:35 Not only does ChatGPT cite nonexistent papers; it also cites real papers that don't contain what they're being cited for. Combine this with the prolific practice of high paywalls and you have a disinformation nightmare. (It also doesn't help that scientific fields, unlike law, generally don't make pinpoint citation a practice.)

    • @micronerd · 5 months ago · +1

      I have used AI tools in my research to try to fetch scarce and difficult-to-find information in parallel to my own searches. One thing I noticed is that AI cannot handle compound sentences very well. After a specific question, the tool told me something like "Condition X triggers response A and B", while the original cited source wrote something like "Condition X triggered A, while B was triggered by Y". These incorrect fusions of statements can be really tricky as well, and are more proof of why AI-fetched info should never be accepted as conclusive.

  • @NightmareCourtPictures · 8 months ago · +37

    As a Leregasaur, I remember the resprouization process. It was terrifying.

  • @akam9919 · 8 months ago · +226

    An AI-generated Squirrel With Massive Balls was not what I was expecting on YouTube.

  • @DarisT-qc1fw · 8 months ago · +523

    A resident used ChatGPT for the lit review chapter in their thesis. The examiners took a look at the reference list and found out that they were entirely hallucinated. Don't need to be an oracle to know what happened next 😂

    • @interstellarsurfer · 8 months ago · +90

      He got an attaboy and a medal.

    • @KristianKumpula · 8 months ago · +115

      I once tried to get ChatGPT to recommend academic literature about various aspects of a certain topic, and once I started going through the list, I realised that nearly all the titles were made up, even though ChatGPT gave rather specific information about what those made-up texts contained.

    • @mrosskne · 8 months ago · +61

      Obviously he got published in Frontiers

    • @Sage-ig9hk · 7 months ago · +13

      I like to think there's some ChatGPT cinematic universe where all of these made-up publications exist, and it's pulling them from this imaginary wealth of scientific knowledge unknown to us XD

    • @brahmdorst5154 · 7 months ago · +18

      Hired as new president of Harvard?

  • @justhecuke · 8 months ago · +45

    I share your concerns with the peer review process. On its face, it's an obviously ineffective process that seems designed to give bad results. It has no real defense against malicious actors, side-channel coordination, or cartels/gatekeepers.
    The fact that this is how things are done is one of the reasons I ended up not doing post-grad work. I edited some papers for a lab as an undergrad, and they were nearly all a waste of time and effort, with me catching some math errors in the data and analysis. That experience also opened my eyes to how much of a pedestal we put academics on, and how unearned that respect is.

    • @xponen · 8 months ago · +5

      during undergrad I saw my lab partner simply record fake data with no hesitation whatsoever.

    • @EvanPilb · 4 months ago

      @@xponen bruh

  • @jgmor6 · 8 months ago · +22

    Frontiers is not a respected journal... but your point, and the problem, still stand.

  • @doctorlolchicken7478 · 8 months ago · +105

    As an AI model developer, the biggest issue is the circular reference of AI using AI sources. Have you noticed your streaming or YouTube recommendations get progressively narrower over time, to the point where you are sick of watching the types of things being recommended? Essentially that's what's happening with production AI models at the moment. It's very possible that many AI systems will become practically useless, at least for a period until this issue is resolved. That's IF the issue can be resolved.

    • @wilburdemitel8468 · 8 months ago · +3

      Yeah. People becoming more and more sheepish, and therefore easier to lead down this silly hype bubble, only exacerbates the issue.

    • @hefoxed · 7 months ago · +14

      YouTube only remembers about ~2 months of watch history for me (I watch YouTube a lot, so my guess is it's a set number rather than time-based, and ~2 months corresponds to how many videos I watch before hitting that limit), so it keeps recommending the same stuff I watched 2+ months ago... come on YouTube, so much content and you can't find me much good new content?!

    • @darkzeroprojects4245 · 5 months ago · +3

      I hate to say this, but you and the others should not have messed around with this stuff so rapidly.
      Imo people these days aren't able to handle AI properly; we struggle to handle social media right, with the cesspool of issues it's brought on.
      We already have a competency crisis in the West, for example. We didn't need these AI advancements these past 2-3 years.
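Editor's note: the "circular reference" worry in this thread (models training on their own output) has a simple statistical core: resampling from your own samples can lose variants but never regain them. A toy Python sketch, purely illustrative (the corpus of 100 distinct items and the ten generations are made-up numbers, not a claim about any real system):

```python
import random

def next_generation(samples, k=None, seed=None):
    """One round of 'training on your own output', modeled as a bootstrap:
    the new corpus is drawn with replacement from the previous one, so
    existing variants can disappear but new ones can never appear."""
    rng = random.Random(seed)
    k = k or len(samples)
    return [rng.choice(samples) for _ in range(k)]

def diversity(samples):
    """Number of distinct items still present in the corpus."""
    return len(set(samples))

# Start from a 'human' corpus with 100 distinct items, then let ten
# generations of models learn only from the previous model's output.
corpus = list(range(100))
history = [diversity(corpus)]
for gen in range(10):
    corpus = next_generation(corpus, seed=gen)
    history.append(diversity(corpus))

# Diversity is monotonically non-increasing: once a variant is gone, it's gone.
assert all(a >= b for a, b in zip(history, history[1:]))
```

Real training pipelines are far more complicated, but this one-way loss of diversity is the mechanism behind the narrowing the commenter describes.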

  • @nessunolinux · 7 months ago · +30

    I love how I was telling people ten years ago that peer review is flawed and should not be looked to as the authority on reliable information. I was told I was wrong by just about everyone, and now this view is rapidly becoming more widely accepted.

  • @SourRazberry · 8 months ago · +201

    The garbled text on the images has me dying 😂

    • @trucid2 · 8 months ago · +9

      I hope you got better.

    • @bluesteno64 · 6 months ago

      SAME

    • @Flesh_Wizard · 6 months ago · +8

      I like how it just says "rat" on the thumbnail 😂

    • @PunishedKrab · 5 months ago

      They sound like Star Wars character names ngl

  • @DaLiJeIOvoImeZauzeto · 8 months ago · +148

    There was a physicist who added his cat as a co-author; the community liked it very much, and there were plenty of letters sent to the cat asking for consultation. A world-famous biochemist also added her dog as a co-author, but it caused her a lot of trouble with the journal editor. Maybe the authors who added ChatGPT as a co-author are making the same joke, or at least they're being honest about prompting the AI to co-write. Personally, I think the use of AI in academic writing should be outright banned, but people will always seek the path of least resistance.
    Looking at the thumbnail, I assumed the rat image was used as a graphical abstract. It's very catchy; it would make for a great one. That this actually got published as a figure in the main body of a paper is unsettling, to say the least.

    • @WobblesandBean · 8 months ago · +37

      They're not "making a joke", ffs. The woman who added her dog wasn't either, she actually tried to pass the damn thing off as a researcher to lend herself credibility. And it kinda worked, because after the journal editor who denied her paper passed on, the others were all "dawwww, pupper" and brought her right back in. 🙄

    • @itsfinnickbitch63 · 8 months ago

      I completely disagree that AI should be banned. It's a very helpful tool for correcting grammatical and linguistic errors, especially if English isn't your native language. If you just copy-paste whatever it says, then it's your fault for not reading the AI's output correctly.

    • @cobusvanderlinde6871 · 6 months ago · +5

      I think there certainly is a place for AI language models in writing academic work.
      Writing is chiefly about communication: having an idea and embedding that idea into language in such a way that the recipient of the language will be able to unpack your idea, and not some other idea from the given language. However, language is a chaotic system with such annoying things as homonyms and ambiguous grammatical structures. (consider a sentence like: He sold him vegetables that were grown in his own garden. Clearly we have two men transacting... but were the vegetables grown in the buyer's or the seller's garden? Did you even notice this ambiguity before I pointed it out? Perhaps the context for this sentence would clarify, but it's also perfectly possible that it simply doesn't. How easily could a sentence containing such an ambiguity sneak into your paper and mislead an expected half of your readers?)
      But it's even worse, some prose is exceedingly boring and impenetrable, you might not even be able to sustain the attention to unpack any ideas from it at all, nevermind ones the author didn't intend to embed in it. Thus, the most successful researchers will invariably end up being those with sufficient communication skills to produce prose that is easy to read AND conveys the intended ideas, and this means that to be a good researcher requires not specialising fully in your field, but partially in your field and partially in communication.
      Thus it seems that the more successful authors are going to end up being not the most visionary researchers, but the ones who are visionary enough AND have greater skill in communication.
      BUT, what if a tool existed that could enhance the researcher's prose... a system that could catch and address those sneaky ambiguities, improve diction for greater clarity, and make the prose more readable. Suddenly, the researcher can devote more time and effort into the actual research, instead of workshopping their prose.
      This would also result in a quality of life improvement for the review process, now the reviewer is less likely to run into dense impenetrable prose, can more easily unpack the author's ideas, which means that it would be easier to catch if the author made some unjustified leap in logic or overlooked some important factor, or if the author is absolutely right in a massively revolutionary way, the reviewer will now not need to agonise for minutes on sentences, hours on pages, just to untangle the bad prose and discover this revolutionary revelation, or more likely, give up on untangling and judge the epiphany put to paper to be absolute nonsense.
      What kind of tool would be able to fulfil this role? Obviously, an AI language model.
      Either that or we need to formalise a standard of pairing up communication specialists with geniuses into researcher/writer teams.

    • @methatis3013 · 5 months ago · +1

      @cobusvanderlinde6871 The job of scientists is not communication. A paper doesn't need to be written such that a layperson can understand it

    • @cobusvanderlinde6871 · 5 months ago · +1

      @methatis3013 I wasn't talking about making the writing digestible for the layperson.
      Obviously it would be good to have it all be readable for the average joe, but bad writing can also make it impossible for a subject matter expert to make sense of what the scientist said he did, and thus, even a subject matter expert would be unable to judge the validity of the research.
      Did you even read my whole comment? I know it's on the long end, but nowhere in it do I express a concern for the ability of peasants to digest academic writing.

  • @ChemDamned · 8 months ago · +28

    I swear those people just pretend to do their job 😂. A few days ago I downloaded a paper about carbon nanotubes from the "Journal of Materials Science" by Springer, and there was, completely out of the blue, the following sentence:
    《...the effect of chirality on the stress of CNTs increases with the increase inthe United States as the strain. 》
    😂😂

  • @adamryan977 · 8 months ago · +30

    The biggest problem isn't the peer reviewers not looking at the paper... it's the authors not looking at their OWN paper. Even when using AI to create images, you should at least look at them before using them in your own paper. If they show so little care in presenting their own research, how little care did they take in the research itself? I would die of shame if I were them.

    • @kody.wiremane · 6 months ago · +2

      Maybe they did _no_ research. ChatGPT did, and cared to its best

  • @DavidJBurbridge · 8 months ago · +15

    I've had reservations about Frontiers for a while but this event is where I put them in the "predatory" box. It's clear that they are just trying to grab as many APCs as possible without regard for academic standards.

  • @pjvandijk3987 · 8 months ago · +19

    Pay the reviewers decently for their work, publish their names, and keep a review index, like a citation index. It is all part of the academic environment.

    • @stephenallen4635 · 7 months ago

      Then you have the same issue that peer review was designed to address: who is paying, and who benefits from this paper saying what it says? Once money becomes involved, the two usually end up being the same party.
      It's easy to hear something online from someone you think has authority and just roll with what they say, but the issue really isn't the peer reviewers; they actually tend to do a good job. The editor (the journal) has the final say on what gets published

  • @davea136
    @davea136 8 หลายเดือนก่อน +107

Shouldn't the peers who "reviewed" these papers be subject to professional criticism for their failure to properly review them? They are endorsing fraud, and that's a crime in most countries.

    • @samsonsoturian6013
      @samsonsoturian6013 8 หลายเดือนก่อน +7

      Can be. Depends on university and whether they were too trusting or just lazy

    • @realGBx64
      @realGBx64 8 หลายเดือนก่อน +40

Turns out the peer reviewers raised concerns, the editor passed the questions to the authors, the authors ignored half of them, and the editor decided to publish anyway.

    • @gcewing
      @gcewing 8 หลายเดือนก่อน +5

      Seems to me they should have done more than just "raise concerns" -- more like say loudly and clearly "This is complete and utter rubbish, throw it straight in the bin."

    • @realGBx64
      @realGBx64 8 หลายเดือนก่อน +27

      @@gcewing the peer reviewer can write comments and questions for the author, and signal whether they suggest the paper for publication, want a revision, or reject. But the decision is always the discretion of the editor. The peer reviewer can’t force the editor’s hand to throw anything into the bin.

    • @zama422
      @zama422 7 หลายเดือนก่อน +3

      @@gcewing “raise concerns” is a very broad statement. They aren’t going to actually describe exactly what the reviewers said.

  • @languagepool-germanusingli9902
    @languagepool-germanusingli9902 8 หลายเดือนก่อน +108

This is more than a little serious. Peer review must be an entirely transparent process. The reputation of the peer reviewer must be on the line. Peer review needs to be paid for. Even that is problematic where the academic publishing process has been gamed.

    • @gorkyd7912
      @gorkyd7912 8 หลายเดือนก่อน +14

      Who employs the scientists? Universities. The university should have their funding on the line when their scientist publishes junk. The university should review the papers for obvious issues like this before it gets submitted. That's where the money comes from and that's why all these "scientists" keep pushing out junk papers so they should be held accountable.

    • @rp7390
      @rp7390 8 หลายเดือนก่อน +15

      It is relatively transparent, you can look up who reviewed this paper on the webpage. The problem is the publishing process in journals that have a high incentive to apply predatory practices. But in this case, the authors mentioned in the paper that the figures were AI-generated (which is allowed according to Frontiers' author guidelines), and one reviewer complained about them.

  • @WorkWise-2024
    @WorkWise-2024 8 หลายเดือนก่อน +36

    Could you please cover the recent paper published in the journal of Surfaces and Interfaces, which was authored by ChatGPT and passed through the 'review' process without being caught?

  • @strbrry0276
    @strbrry0276 8 หลายเดือนก่อน +45

the label for plain "rat" is even funnier than the made-up words imo

  • @14zrobot
    @14zrobot 8 หลายเดือนก่อน +15

This reminds me of a case where a paper written by iPhone autocomplete was accepted in 2016. Even with no modern GPT fluff, entirely incoherent nonsense language can get through those checks. And with modern plagiarism machines, scientific journals look more and more like the yellow press

    • @KnakuanaRka
      @KnakuanaRka 8 หลายเดือนก่อน

      Where did you hear about that?

    • @johannbauer2863
      @johannbauer2863 8 หลายเดือนก่อน

@@KnakuanaRka Was also wondering that, so I looked it up: Christoph Bartneck submitted a nonsense paper under a fake identity to the International Conference on Atomic and Nuclear Physics, and it was accepted for oral presentation after 3 hours. He made a blog post about it and there are some articles about it.
There was probably no peer review involved, and it may not have been published (not sure, couldn't find info about that quickly)

    • @not_ever
      @not_ever 8 หลายเดือนก่อน +5

@@KnakuanaRka A professor called Christoph Bartneck received an email asking him to submit a paper to a conference on nuclear physics. Since it was not his area of expertise, he used iPhone autocomplete, added some photos, and submitted under a fake name. Within two hours he had an email saying the paper was accepted, please pay US$1k to speak at the conference. The conference was The International Conference on Atomic and Nuclear Physics.

    • @not_ever
      @not_ever 8 หลายเดือนก่อน +4

      There was also the "Get me off Your F**king Mailing List” paper, accepted by the International Journal of Advanced Computer Technology which is a really fun read.

  • @barahng
    @barahng 8 หลายเดือนก่อน +61

    Was the peer review done with AI as well? Holy crap. 😂

  • @asdfghyter
    @asdfghyter 8 หลายเดือนก่อน +16

    there are also peer reviewed papers where the introductions start with: “Certainly, here is a possible introduction for your topic:”
    i don’t get why they don’t even bother to take a glance at the text before submitting it!?

    • @justinsayin3979
      @justinsayin3979 4 หลายเดือนก่อน +1

      Many authors are not native speakers and have a shaky command of English, so they might not notice as easily.

  • @left_ventricle
    @left_ventricle 8 หลายเดือนก่อน +55

This is actually a problem I am noticing in the art/design world too.
A lot of people are far too willing to let ChatGPT write the descriptions of their artworks. If you are the creator of something, I find it almost a necessity that you speak of your creation 'IN YOUR OWN WORDS'. Yet, when confronting professors with these issues, I have been met with either 'well, it's their choice, and if that leads to their downfall, so be it' or 'I mean, it's a new world, adjust to it or be left behind'.
Having said the above, is it only my impression, or does ChatGPT always try to sum up an essay on a hopeful note? Even when the bullet points I give it are pretty grim, it's one thing I cannot help but notice straight away.
    Thanks for the insightful video.

    • @EmeraldMara85
      @EmeraldMara85 8 หลายเดือนก่อน +16

      So true, so true.
      If as a creator, you cannot articulate your own art...
      It means you never really thought through about the themes, process or any subject in you life.
      Just an empty head.

    • @bbrainstormer2036
      @bbrainstormer2036 8 หลายเดือนก่อน +1

      Can chatgpt even scan images?

    • @xybersurfer
      @xybersurfer 8 หลายเดือนก่อน

@@bbrainstormer2036 yeah. it can analyze images. it's a relatively new feature

    • @left_ventricle
      @left_ventricle 8 หลายเดือนก่อน +7

@@bbrainstormer2036 Nah, you write a summary, then put it into ChatGPT so you get a full article. The problem there is that the description of an artwork of any sort needs greater attention to nuance. ChatGPT has a tendency to muddle things into a vaguely optimistic positivity, even when the summary prompt suggests otherwise. That is the problem I am referring to.

    • @absolutezippo7542
      @absolutezippo7542 4 หลายเดือนก่อน

      I think someone can give ai a lot of information and then it can make fluent descriptions that are accurate to the creator’s intentions. Sure you could do a less specific job, or a more specific one. It’s just how one does it.

  • @ETBrooD
    @ETBrooD 8 หลายเดือนก่อน +15

Kinda reminiscent of security theater. The TSA fails far too often to promise security, and there's no evidence that the pretense of security accomplishes anything objective. But it's quite effective at creating a feeling of security for the passengers, so that's the main point of it.
If peer review is equally shoddy, then the only basis for the reputation of journals is blind faith.
I've seen highly reputable journals publish outrageously wrong papers. So if there is bad information out there, the only way to combat it is by publishing contradictory information in other papers. Otherwise the spread of misinformation under the guise of science would become a huge epidemic.

    • @ColdHawk
      @ColdHawk 7 หลายเดือนก่อน +1

      Perhaps you mean _”has become_ a huge epidemic” confronting scientists today? Consider, the trouble with misinformation - or simply bad information - may be that even a little calls everything into question. It’s pernicious. Science in most fields today builds upon what has been published. Entire areas of study are theories stacked upon theories, the sturdiness of the entire structure depending on each building block being solid. Some papers are more foundational than others, but all still contribute. It seems germane that one of the first steps for a new scientist in even formulating a research question is to run an online search to see what is in the literature already. Has anyone asked the question already? Is the theory you might pursue already disproven or proven? Those initial searches will only review titles and abstracts, not deep dive to parse the data in papers for discrepancies. How often have fraudulent papers distorted our efforts, steering the entire thrust of a discipline in one direction or another, do you suppose? I think most would agree it’s impossible to know once more than a few cases of fraud in frequently cited articles have been exposed.
      Some areas will be more vulnerable to distortions resulting from fraud, mistakes, or poor work. One of the most fragile I can think of is climate science. Papers are produced that use theoretical models which take as their inputs variables produced by other models, that in turn take as their inputs data sets that have been adjusted by applying other models. Given that these global models are attempting to predict outcomes from dynamic interactions within chaotic systems, [here I mean chaotic as in a profound dependence upon initial conditions] what is the likelihood for a fatal flaw to arise. This is in what may be one of the most important areas of scientific endeavor our species has undertaken. The stakes are extremely high. I would submit that even when not as dramatically apparent, the stakes are extremely high and the consequences very difficult to predict. I think we already have a huge issue at hand.

    • @stephenallen4635
      @stephenallen4635 7 หลายเดือนก่อน

That could actually be a great analogy, but from someone in the industry: it's more that the journals are the TSA and the peer reviewers are the x-ray machines. This problem starts making a lot more sense when you look at it that way

  • @souljaboy2384
    @souljaboy2384 8 หลายเดือนก่อน +9

I sent this picture to a friend in a Discord server we primarily use for VC, and since it doesn't get many actual messages, I've been staring at this picture while playing Counter-Strike for about 2 months now

  • @ColdHawk
    @ColdHawk 7 หลายเดือนก่อน +3

    I would think there is an obvious role for a paid editorial review board that does _BASIC_ things for _every_ paper such as check spelling and grammar, review images for duplicates and manipulation, check that labels make sense, check graphs for accuracy, review data and math. That review board should have access to paid consultation by specialists as needed, such as statisticians or mathematicians. That should probably occur BEFORE the paper goes out to peers to be reviewed…. After all a journal is a publisher not a platform and has responsibility for what they publish.

  • @Valgween
    @Valgween 8 หลายเดือนก่อน +18

the real joke is my university is grading me on whether or not I use peer-reviewed papers as sources.

  • @MusangKing-b3o
    @MusangKing-b3o 8 หลายเดือนก่อน +23

The analogy goes like this. Imagine that you are a student about to take your official A-level mathematics examination in two weeks.
You see a new mathematics textbook from a different publisher in a bookshop, buy it, and return home. One day around 5:30 pm, you attempt 5 random questions from these 5 chapters: Matrices, Complex Numbers, Vectors, Differentiation and Integration. You spend about 25 minutes on the five questions, scribbling your working on 5 different sheets of paper. Around 6 pm you stop and go out to play basketball with your friends. You return home around 7:30 pm, take a bath, and have dinner with your family. Back at your study table, you check your 5 scribbled answers against the book's answer section. To your horror, you find you got them all wrong. But you say, "Hey, I have published 5 papers, right? See, I am holding 5 sheets of paper."
Another student out there did the same thing, but he attempted 2 questions from 2 chapters, Differential Equations and Numerical Methods, and got them both right: 2 out of 2 to your 0 out of 5. But you say, "Hey, I published more papers than him, 5 to his 2. I deserve to be an assistant professor, right? If I keep on publishing more, I get promoted to full tenured university professor, then onward to university president or chancellor, or maybe a future director of a research institute, right?"
You find out that your friend Thomas did the same thing and also got them all wrong, 0 out of 5 from the same 5 chapters. But this is okay, since you are going to put a reference at the end of your 5 papers citing Thomas's work, and Thomas returns the favor, citing yours.
Now you have it all: published papers, citations, H-index, impact factors, research grants, etc...

  • @schneewante99
    @schneewante99 8 หลายเดือนก่อน +13

Not declaring text as generated by ChatGPT in a scientific publication is plagiarism, in my opinion. Scientific publications are supposed to cite all their sources. ChatGPT draws from a large number of sources it was trained on. Using someone else's text without clearly citing its source is plagiarism, and this includes text generated by LLMs.

    • @lyokianhitchhiker
      @lyokianhitchhiker 5 หลายเดือนก่อน

      Technically, when the ideas contained within are those of the user…

  • @tiotsopkamouolivier3031
    @tiotsopkamouolivier3031 8 หลายเดือนก่อน +8

    I mean this is really crazy! How could the reviewers not see that it's obviously AI-generated? I spotted it without even knowing the story behind it!

    • @johannbauer2863
      @johannbauer2863 8 หลายเดือนก่อน +3

      This is frontiers...

    • @ColdHawk
      @ColdHawk 7 หลายเดือนก่อน +1

      The reviewers may be old?

  • @Jackllewellynn
    @Jackllewellynn 8 หลายเดือนก่อน +11

    The fact it took so long for this stuff to start getting exposed is the worst part

  • @bobman929
    @bobman929 8 หลายเดือนก่อน +8

This is the problem with not paying researchers and reviewers. Getting paid means you can hold reviewers accountable.

    • @stephenallen4635
      @stephenallen4635 7 หลายเดือนก่อน +2

      It also means the papers that get favourably reviewed are more likely to align with the views of the person paying

    • @TheAechBomb
      @TheAechBomb 5 หลายเดือนก่อน

​@@stephenallen4635 that's why you make the person publishing pay a fee that goes toward impartial review; the money may come from the author, but it's paid out by the journal

  • @GoTFCanada1230
    @GoTFCanada1230 8 หลายเดือนก่อน +15

    See, I was wondering about that mouse image! I saw it everywhere on LinkedIn and was wondering if it was AI-generated

  • @Pedro-dn3sg
    @Pedro-dn3sg 8 หลายเดือนก่อน +6

    Academia needs an overhaul in how researchers' performances are assessed. It makes absolutely no sense to look at bibliometrics anymore. AI has only sped things up, the current publishing model has been broken for quite some time. I reckon some sort of system where only a limited amount of self-appointed best yearly efforts are published and considered for evaluation would be much more productive for Science.

    • @samsonsoturian6013
      @samsonsoturian6013 8 หลายเดือนก่อน +2

      Historian here: AI would have no idea what I was talking about and spit out critiques that often have nothing to do with what I'm writing about.
      Just like the average peer reviewer

  • @AwestrikeFearofGods
    @AwestrikeFearofGods 8 หลายเดือนก่อน +35

    Already, the younger generation has no expectation of privacy.
    Their children will have no expectation of accuracy, no expectation of truth.

    • @ColdHawk
      @ColdHawk 7 หลายเดือนก่อน

It’s a perilous convergence, to be sure. The Information Age has withered into the Manipulation Age, where there seems to be a con under everything you see. I am worried for our children.

  • @AJRestoration
    @AJRestoration 8 หลายเดือนก่อน +4

Those words are hilarious.

  • @bettasbetta
    @bettasbetta 8 หลายเดือนก่อน +7

I disagree with the premise that AI blunders making it to publication are the fault of the peer reviewers. Academia is full of pay-for-play publishers (Frontiers being one of them). I have plenty of stories of referees recommending major revisions or rejection for these papers, and editors accepting them instead.

    • @stephenallen4635
      @stephenallen4635 7 หลายเดือนก่อน +1

That's what he missed, and I think it's really the most important point

  • @debasishraychawdhuri
    @debasishraychawdhuri 8 หลายเดือนก่อน +12

    Academia needs to be trolled like this. Peer review is a joke.

  • @briant7265
    @briant7265 8 หลายเดือนก่อน +4

Technically, ChatGPT doesn't "make stuff up." It just doesn't understand references. It knows what a citation should look like, and how it should be used, but not what it is.
Some lawyers submitted a court brief that was generated by ChatGPT, with bogus citations. The judge was NOT amused; they got sanctioned.

  • @L00ww
    @L00ww 6 หลายเดือนก่อน +4

    i've seen an article that started with "sure i can. "

  • @stephenmcinerney9457
    @stephenmcinerney9457 8 หลายเดือนก่อน +7

    Frontiers journals' practices have a very bad reputation, and bizarrely in 2023 their chief executive editor published an open letter decrying "[critics] sloppily promulgating “the p-word” [p-hacking]; unfortunately, this unethical behavior is being noticed, creating concern and bewilderment. The p-word is a blanket derogatory term that is so easy to use that it blocks scientific, critical, and common-sense thought processes." Preregister your outrage!

    • @jackpijjin4088
      @jackpijjin4088 7 หลายเดือนก่อน +1

      Did it literally straight-up say "p-word"??

  • @shApYT
    @shApYT 5 หลายเดือนก่อน +2

    Also the explosion in frequency of the word "delve" in papers is very telling.
    You shouldn't copy paste chatgippity.

  • @DandoPorsaco-ho1zs
    @DandoPorsaco-ho1zs 8 หลายเดือนก่อน +7

This is not news: every now and then, someone pushes through a deliberately nonsensical article that gets magically peer-reviewed, which is strange, because everyone knows that "peers" are infallible, incorruptible, all-knowing people with infinite spare time and budget to carefully check everything that gets published.

    • @faxd3448
      @faxd3448 8 หลายเดือนก่อน +5

it's not new, but it's getting worse... generative AI will only make it way more accessible and convincing

  • @japa50
    @japa50 3 หลายเดือนก่อน

    Thank you for your videos, and I find it awesome that you put your own progress-bar to indicate the bit that corresponds to a product-placement. I don't mind watching it and I think this one was very relevant, but on top of that, I appreciate that it shows the level of respect you have for your viewers. A lot to learn from you :D

  • @TimPeterson
    @TimPeterson 7 หลายเดือนก่อน +5

    the faster these tools are developed the better. it might finally force the community to actually publish the raw data and subject it to actual scrutiny

  • @larryhuffine2814
    @larryhuffine2814 8 หลายเดือนก่อน +1

I would just like to point out that it was really smart of you to mention how your recent videos have done so well, because as soon as you said that, I wanted to see what they were about. Smart man

  • @WilhelmDrake
    @WilhelmDrake 7 หลายเดือนก่อน +11

The problem with sites like Ground News is that it's all establishment news outlets with very minor areas of disagreement.

    • @Wanhope2
      @Wanhope2 7 หลายเดือนก่อน +5

      And the summaries are worse than the average Reddit comment news bot 😢

  • @sojourner4726
    @sojourner4726 7 หลายเดือนก่อน +2

    International bodies need to not only fund research but also the peer review process.
Peer review is not the problem. The obsession with competition in academia is the problem. Competition can drive innovation in very narrow circumstances; typically it runs interference against progress.
We have observed this time and time again in academics, economics, and sociology. We need to be scientific about our issues and critiques. The profit-competition fetish model needs to be eliminated.

  • @caseyleedom6771
    @caseyleedom6771 8 หลายเดือนก่อน +3

Nice use of integrating your content with your sponsor message for Ground News. I wish other "creators" would do the same thing, and I think sponsors should want this, since it offers a more compelling story for their product.

  • @swipersniper7471
    @swipersniper7471 8 หลายเดือนก่อน +7

    Dead internet theory has joined the chat

  • @NonExistentSpace
    @NonExistentSpace 5 หลายเดือนก่อน +2

    As an AI language model, I don't have personal opinions or the ability to critique a video I can not access. However, I can provide a summary of highlighted strengths to approximate a comment.
    It is commendable how the video meticulously delves into the subject of the increasing prevalence of AI-generated content in peer-reviewed papers, providing varied examples throughout to highlight the points while also having notable humor to keep the viewer engaged through its runtime. The video makes it clear how pivotal it is that the audience is aware of the impact recklessly implemented AI-generation can have on a paper's validity and
    Regenerate Response

    • @NonExistentSpace
      @NonExistentSpace 5 หลายเดือนก่อน

      As a human who has never tried pretending to be an AI before: That was fun to write. (Edit: Just put this in a bunch of AI detectors and the majority flagged it as AI! (I deleted the header and "regenerate response" when checking))

  • @ASapientBeing
    @ASapientBeing 8 หลายเดือนก่อน +3

It's becoming quite concerning: many bad actors already fake their research without employing AI, and with AI it will become much harder to vet the proper papers. Peer review has also degraded too much and must be improved.

  • @oklu_
    @oklu_ 8 หลายเดือนก่อน

    Finally!!! Thank you for dealing with this issue

  • @psyboyo
    @psyboyo 8 หลายเดือนก่อน +7

    To all future generations: "Yeah, we did this for lols. Sorry. "

  • @forethoughtx2846
    @forethoughtx2846 5 หลายเดือนก่อน

    we need more like this.... awareness. Also, people who read, then read multiple sources.

  • @axeman2638
    @axeman2638 8 หลายเดือนก่อน +5

    Believe it or not, it's going to get a lot worse yet.

  • @ambertunstall3093
    @ambertunstall3093 8 หลายเดือนก่อน

    Thank you for addressing how dangerous this is becoming. So many people simply believe things being written by these models when they just hallucinate so much info. It's horrifying that it's becoming established in academic spaces. We really need more exposure and education for regular people on what's going on here.

  • @boowiebear
    @boowiebear 8 หลายเดือนก่อน +3

    If this passed peer review then there is no peer review.

    • @stephenallen4635
      @stephenallen4635 7 หลายเดือนก่อน

the reviewers said there were major issues with the paper; the journal said jk, don't care, who asked

  • @MemberRoach
    @MemberRoach 6 หลายเดือนก่อน +1

    I haven't laughed this much since yesterday (actually a compliment). The longer you look at the image, the more funny it becomes. They labeled a rat as Rat. Why are the steps in the order of 2 5 4? What's a Retat and what are Stenm cells doing there? Is that a spoon?

  • @livelongandtroll9108
    @livelongandtroll9108 6 หลายเดือนก่อน +5

    As an AI language model, I really don't see the problem.
    Regenerate response

  • @sirati9770
    @sirati9770 8 หลายเดือนก่อน

i think language models can be helpful in taking out the work of refining linguistic presentation; you can start with bullet points and get the style you want. that cuts effort that doesn't contribute to quality, as long as you actually check the result to a) be representative of what you wanted to express and b) do its job, e.g. presenting and explaining
and as someone who uses chatgpt regularly i know that you can never ever skip that job. it is so common for regressions or hallucinations to sneak in at completely unrelated prompts where you expect them the least. let's not forget that it reproduces language statistics, and sometimes what you want is a statistical outlier that it thus cannot reproduce

  • @eesev2017
    @eesev2017 8 หลายเดือนก่อน +17

    Ground news is biased. Because reality is not equally distributed to “both sides” of the political aisle. It’s not linear, first of all, so doing that balancing is automatically giving attention to BS that should not be treated similarly.

    • @djentlover
      @djentlover 5 หลายเดือนก่อน

All news is biased.
      The only way to know the reality is to painstakingly fact check all the details.
      But viewing news from all different perspectives is the next best thing.

    • @fuckYTIDontWantToUseMyRealName
      @fuckYTIDontWantToUseMyRealName 5 หลายเดือนก่อน

      I can't tell you how good it is to see someone else who has noticed this.

  • @oaklandgargoyle
    @oaklandgargoyle 8 หลายเดือนก่อน

    Great work Pete! This is incredibly important work!

  • @endgamefond
    @endgamefond 8 หลายเดือนก่อน +3

    Journals are jokes now.

  • @Giovansbilly
    @Giovansbilly 8 หลายเดือนก่อน +2

3:13 Funnily enough, Willy Wonka would eventually get its own AI incident after this paper, with that Glasgow event

  • @kategnidenko4651
    @kategnidenko4651 8 หลายเดือนก่อน +4

Artists already said that ChatGPT is not a tool, it's a substitute. So maybe in that paper it's a substitute for an author, maybe the leading one or the one who did the bulk of the work. Who knows, exactly?

  • @oceanbytez847
    @oceanbytez847 7 หลายเดือนก่อน +1

    I've already noticed it getting more realistic. A lot of the most obvious tells are starting to blend a lot more, and it's beginning to get harder to just immediately determine that an image is AI generated.

  • @GospodinJean
    @GospodinJean 8 หลายเดือนก่อน +16

    "Trust the science...."

    • @ColdHawk
      @ColdHawk 7 หลายเดือนก่อน +3

      God save us from sloganism.

    • @cubonefan3
      @cubonefan3 6 หลายเดือนก่อน +1

You can understand the potential pitfalls in a methodology without using conspiracy-theorist dogwhistles.
Science has consistently been the BEST way to improve life for the MOST people (whether through medicine, increasing crop yields, travel & telecommunications, etc). You existing in the modern western world is “trusting the science” in thousands of ways every day. Oh, and also, your home country would probably only have 10% of its current GDP (and a worse quality of life) if it didn’t have access to science’s inventions. 😂

  • @Ivytheherbert
    @Ivytheherbert 6 หลายเดือนก่อน

    This is exactly why discussions around topics like "academic freedom" piss me off so much. The entire framing of the debate ignores how academia has become a deeply corrupt and sometimes outright fraudulent institution. It's not a question of free expression, which is already protected in most of the countries having this debate. It's not a question of being able to freely choose research topics, because that is simply not a thing for the vast majority of researchers. It's a question of whether funding bodies should be allowed to freely buy the few information generating resources we actually have with no restrictions, whether journals should be allowed to continue their monetised stranglehold on new information that is supposedly in the interests of humanity, and whether the few professors at the top who aren't reliant on grant money are allowed to do whatever they want regardless of ethics.

  • @soyitiel
    @soyitiel 8 หลายเดือนก่อน +3

haven't lost all respect for science, but seeing how often stuff like this happens makes one wonder how much else we should doubt or reconsider

  • @FreedomTaleTrio
    @FreedomTaleTrio 6 หลายเดือนก่อน +1

Putting ChatGPT as your co-author actually seems like a good idea to me: it tells the reviewers that ChatGPT wrote some of the paper, allowing them to check for the inaccuracies ChatGPT often produces, and it isn’t concealing the use of AI in the writing of the paper.

  • @4.0.4
    @4.0.4 8 หลายเดือนก่อน +3

Reminds me of when someone published a rewrite of M€in K4mpf as a feminist manifesto and it got through peer review too.

  • @MetrioGaming_
    @MetrioGaming_ 7 หลายเดือนก่อน

    I didn't even realize the ad was an ad; well done!

  • @SendyTheEndless
    @SendyTheEndless 8 หลายเดือนก่อน +7

    AI Pros: People with no skills can make cool shit
    AI Cons: The complete obfuscation of reality, floods of misinformation and AI slop, not being able to tell if a video/image/text/quote/citation is real.

  • @AngryPug76
    @AngryPug76 7 หลายเดือนก่อน +1

    I’ve seen many cases of people testing the peer review process and finding it meaningless. My favorite was a peer reviewed study about how to easily cheat peer review by using certain key words and phrases even when it’s gibberish in relation to the rest of the text.

  • @labbit35
    @labbit35 8 หลายเดือนก่อน +6

    You know, we should start arresting people who use AI for academic misconduct and such

  • @NotSoAnonymousSoldier
    @NotSoAnonymousSoldier 8 หลายเดือนก่อน +1

dude ngl that ad for the news site was crazy, i actually watched all the way through it lol

  • @MavHunter20XX
    @MavHunter20XX 8 หลายเดือนก่อน +12

    "TRUST THE SCIENCE! OMG!" I trust science. I don't trust people. AI is trained by people.

  • @SiminaCindy
    @SiminaCindy 7 หลายเดือนก่อน +2

    Two of my most favorite things, art and science, are both getting screwed over by AI 😔

  • @argfasdfgadfgasdfgsdfgsdfg6351
    @argfasdfgadfgasdfgsdfgsdfg6351 8 หลายเดือนก่อน +3

    Never heard of "frontiers".

    • @pjvandijk3987
      @pjvandijk3987 8 หลายเดือนก่อน +4

      They publish almost 200 different scientific journals, hence a big player

    • @samsonsoturian6013
      @samsonsoturian6013 8 หลายเดือนก่อน

      The science channels/magazine writers that you watch/read are getting their stuff from Frontiers and other big journals

    • @stephenallen4635
      @stephenallen4635 7 หลายเดือนก่อน

You're not in the industry then, are you?

    • @cubonefan3
      @cubonefan3 6 หลายเดือนก่อน

      Frontiers is not a trustworthy peer review publisher.

  • @ReneOlivaresJ
    @ReneOlivaresJ 7 หลายเดือนก่อน +1

    Really good channel. Thanks for your content! Very interesting.

  • @kream_8127
    @kream_8127 8 หลายเดือนก่อน +3

    2rd

  • @hotcoffee5542
    @hotcoffee5542 8 หลายเดือนก่อน +1

    10:44 To your point at the end, if you look at the title, it makes perfect sense that the author lists ChatGPT. The article is an editorial about the use of ChatGPT in generating papers.

    • @xponen
      @xponen 8 หลายเดือนก่อน +1

I think of ChatGPT as a librarian. When I can't get results from keywords in a search engine, e.g. acronyms like VsPs, Cs, Ngs, I present the same keywords to ChatGPT and it starts talking about the graphics pipeline, so now I know the correct keyword that works with the search engine.

    • @hotcoffee5542
      @hotcoffee5542 8 หลายเดือนก่อน +1

      @@xponen It is a very powerful research aid!

  • @lorekeeper685
    @lorekeeper685 8 หลายเดือนก่อน +3

    Hoooow

  • @dimentio1030
    @dimentio1030 4 หลายเดือนก่อน +1

    To quote Dan Olson of Folding Ideas: “Cringe. There’s no other word for it. This makes me cringe. It’s embarrassing.” To think I’ve spent almost 4 years in grad school trying to get quality data and this is allowed to be published is spectacularly disheartening. This smacks of laziness and lack of passion. Why even go into science if you don’t want to do science?