VICE wrote about it and talked to one of the two reviewers (named in the article), who said it was not his responsibility to vet the obviously incorrect images. It doesn't look like they were able to get in touch with the other reviewer, though.
I took a look at the "apology" from Frontiers. They noted that one of the reviewers raised concerns about the images being AI generated, yet the authors never responded. Then Frontiers "failed to act on the lack of author compliance with the reviewers' requirements", and they're looking into how this happened. I think we all know how this happened.
So the peer reviewer didn’t approve the paper, and the ‘reliable source’ (Frontiers) accepted the paper anyway? Their reputation is ruined. Why even accept peer reviews if you’re going to cut them out anyways?
@@Joe_Yacketori They intentionally ignored the reviewers to push another publication out for profit. Frontiers charges anywhere from $700 - $9000k + in publishing fees (depending on which journal; they have hundreds). They'd rather publish a subpar article and take the cash than to be stricter and aim for higher prestige. Now, I wouldn't call Frontiers a "predatory" publisher, but they're known to be less reliable than others for this exact reason.
I've been trying and failing to get a legitimate paper on legitimate research with legitimate text published since 2020. The fact that some people can just skate through peer review with zero scrutiny really grinds my gears.
It's always like that.With _your_ stuff it's as if they're very rigid and "by the book" but with someone else's "work" it's as if they're given a lot of leeway.
As long as the "publish or perish" culture continues and the academic world revolves around papers, this will keep happening. Academia should find something else to evaluate researcher performance. Like what they claim can be reprodueced or not, etc.
The untold problem is, that all these people doing bad science are often rewarded for it by the system, because they can produce more sh** papers and that is what counts nowadays. Doing slow but good science does not pay off.
That's what happens when the only ones getting grants are the ones who can spit out the most output.
8 หลายเดือนก่อน +48
Rewarded and not a serious punishment compared to when caught. You can't give me 100 every time I lie and take 10 when I'm caught and expect me not to lie. Even if it were equal I could obviously spend and come out ahead or live better until. Have to have real costs.
Agreed, novelty takes a priority, accuracy is secondary, the last paper I wrote was a direct debunk of another engineering paper. I was dismayed to find the journal who published the original nonsense refused to publish my response on the grounds that it lacked novelty!!
@@arnoldvezbon6131 Cell, Nature, Reviews of Modern Physics, New Englang Journal of Medicine, The Lancet... There are plenty. Look for the top influential journals in each field. Look for ones older than 50 years yet still in print. Go to a top university, look for their top researchers/nobel prize winners, look at the citations they show off on their website: those are great journals.
What we learned: When your paper gets rejected from 10 good journals, just send that garbage to Frontier. Frontier prints all the garbage without any of the peer review as it looks like.
@@lurifaks92 Who said Elsevier is a journal? Have you read the article I mentioned? Btw if services like Elsevier charge libraries so much for their subscriptions, their content better be good, don't you think?
@@arodvaz1528 You are attributing things on a journal level to the publisher level, so you are saying Elsevier is a journal, alternately you are just not familiar with the process. Link the article for me please.
@@lurifaks92 I'm referring to the service that links to the article, who's also responsible for its doi, which is not an incidental thing. The journal is not the most prominent name on the article, unfortunately.
My wife is a scientist who does peer review. She's very dedicated about it and it's given her ideas. Mostly that 'this paper is bs and now I can do a study that shows this study, which I won't be able to stop, is wrong and I will get a good paper out of it.' That's 3x now. Though I have noticed nobody has asked to peer review in about 5 years...
And... that's the whole problem with the academic system. Dedicated peer reviewers don't get invited for more reviews, and honest researchers struggle to get research grants.
I too weep for the loss of integrity in science. Stuff like this rat thing is super obvious but how much is out there that never is discovered? What is going on in the scientific community and for how long? Faking data etc is totally undermining what science is supposed to be and do. A real shame.
Look up the holy texts of anti-vaxxers, if you want a bit of a reality check. That garbage made it through peer review, got published and did untold amounts of damage to medical science, just because some greedy little shit wanted in on the vaccines market without the actual skills to compete in it. And that shit was going on 50 years ago.
@@Imdan92it may have been bad. But before about 30 years ago, it was not treated as a "get rich quick" scheme by universities. My grandfather and uncle were accounting professors. They remember the days when you could become a professor to teach, rather than purely as a publication machine to generate that federal journal cash.
This is exactly why I take all peer review requests and WORK HARD on those projects, just because I believe in my discipline and want good scholarship to continue. The last paper I reviewed, I couldn't keep your channel out of my mind as I checked everything as carefully as possible.
It’s hard to over estimate the value of a good cautionary tale, no? Having been fooled early in my career by ghostwritten papers and marketing strategies, then seeing some terrible outcomes, I became extremely hard-nosed. There is good reason to cultivate skepticism.
Indeed. And for the more senior scholars who fudge their work-how can they justify the risk? The risk to credibility and career alone should make them shudder-to say nothing of the ethics of the act!
@@roobs4245 Tell me about it. Whenever people point to evidence of ChatGPT not being an AGI because it has no real understanding of what it is saying, I just think, "Well, have you tried holding humans to that standard?"
Llms has certain reason. They are not random generated shit like image generation machine learnings. They actually have no resembles of intelligence unlike llms who actually understand most of the thing it says but has limited brain capabilities. This is why it is a one step forward to agi but it is not agi. Cha gpt has no image generation capabilities it just know how to use image generators like it can use a lot of other tools just like humans. So I can do what chat gpt can do with dall e 3 myself.
@@julius4858unfortunately not only authors need to pay for publishing their articles, they also don't get any money that readers pay to get the access
@@julius4858 it costs money to publish. You pay to access, and if it's open-access, the authors had to pay APC. I think a lot of publishers still charge for images and graphs too. No authors, reviewers or universities receive any money in this process, publisher pockets everything
It's not peer-review which is the problem, it's the editors of the journals. It happens so often that the reviewers raise concerns but editors of certain journals just do not care. For them it's the number of papers at the end of the year, not their quality
The whole integrity issue of "studies" has been questionable from the start. A "Study" of the effects of Olubolu extract shows mice live much longer than the control group given a placebo. They don't mention that the control group of mice where twice as old at the start of the study.
It's really easy to doctor results to send clients a certain direction when your target audience doesn't understand the content of your study (because realistically speaking, product based studies are unlikely to be read past or before the conclusion segment)
And it's why I don't hardly listen to people as much anymore. How am I to trust peoples word or these "experts" research when this crap is been a thing. And yet Im told off for "rejecting science" or "Progress" or whatever people spew. I am so done with everyone Bs. No wonder some people stay in a bubble, this crap is ridiculous you can't have a open mind without it potentially you gaining absolutely false studies and research.
This is still an issue of peer review. Any good peer reviewer would make sure all methodological details are included, which would make this fakery obvious. The idea of a study isn't necessarily flawed, really this all falls to peer review and issues in methodology
9:35 Not just that ChatGPT does citations to nonexistent papers, but also that it does citations to real papers that don't contain what they're being cited for. Combine this with the prolific practice of high paywalls and you have a disinformation nightmare. (It also doesn't help that scientific fields, unlike law, generally don't make pinpoint citation a practice.)
I have used AI tools in my research to try and fetch scarce and difficult to find information parallel to my own searches. One thing I noticed is that AI cannot handle compound sentences very well. So the tool told me something like "Condition X triggers response A and B" after asking a specific question, while the original cited source wrote something like: "Condition X triggered A, while B was triggered by Y". These incorrect fusions/compilations of statements can be really tricky as well and are another proof of why AI-fetched info should never be accepted as conclusive.
A resident used ChatGPT for the lit review chapter in their thesis. The examiners took a look at the reference list and found out that they were entirely hallucinated. Don't need to be an oracle to know what happened next 😂
I once tried to make chatgpt recommend academic literature about various aspects of a certain topic, and once I started going through the list, I realised that nearly all titles were made up even though chatgpt gave rather specific information about what those made up texts contain.
I like to think there’s some cgbt cinematic universe where all of these made up publications exist and it’s pulling them from this imaginary wealth of scientific knowledge unknown to us XD
I share your concerns with the peer review process. On its face, it's an obviously ineffective process that seems designed to give bad results. It has no real defense against malicious actors, side-channel coordination, or cartels/gatekeepers. The fact that this is how things is done is one of the reasons I ended up not doing post-grad work. I edited some papers for a lab as an undergrad and they were nearly all a waste of time and effort with me catching some math errors in the data and analysis. That experience also opened my eyes to how much of a pedestal we put academics on, and how unearned this respect is.
As an AI model developer the biggest issue is the circular reference of AI using AI sources. Have you noticed your streaming or TH-cam recommendations get progressively more narrow over time, to the point where you are sick of watching the types of things being recommended? Essentially that’s what’s happening with production AI models at the moment. It’s very possible that many AI systems will become practically useless, at least for a period until this issue is resolved. That’s IF the issue can be resolved.
TH-cam history only remembers about ~2 months of watch history for me (I watch youtube a lot, so my guess it's a set number and not time based, 2 months likely corresponds to how many videos I watch in month reaching that limit), and so keeps recommending me the same stuff I watched 2 months + ago... like youtube, so much content and can't find me much good new content?!
I hate to say this ,but you and the others should of not messed around with this stuff so rapidly. Imo People these days aren't able to handle AI properly, we struggle to handle Social media right without what cesspool of issues it's brought on. We already got in the west a Competency crisis for example, we didn't need this AI advancements these past 2-3 years.
I love how I was telling people ten years ago that peer review is flawed and should not be looked at as the authority of reliable information, was told I was wrong by just about everyone, and now this view is becoming more widely accepted rapidly right now.
There was a phisicist who added his cat as a coauthor, the community liked it very much and there were plenty of letters sent to the cat asking for consult. A world-famous biochemist also added her dog as a coauthor, but it caused her a lot of trouble with the journal editor. Maybe the authors who added chatgpt as coauthor are making the same joke, or at least they're being honest about prompting the AI to corwite. Pesronally, I think the practice of use of AI in academic writing should be outright banned, but people will always seek the path of the least resistance. Looking at the thumbnail, I assumed the rat image was used as a graphical abstract. It's very catchy, would make for a great one. That this actually got published as a fogure in the main body of a paper is unsettling, to say the least.
They're not "making a joke", ffs. The woman who added her dog wasn't either, she actually tried to pass the damn thing off as a researcher to lend herself credibility. And it kinda worked, because after the journal editor who denied her paper passed on, the others were all "dawwww, pupper" and brought her right back in. 🙄
i completly disagree that ai should be banned. its a very helpful tool for correcting grammatical and linguistical errors especially if english isn't your native languege. if you just copy paste whatever it says then its your fault for not reading the ai's output correctly.
I think there certainly is a place for AI language models in writing academic work. Writing is chiefly about communication: having an idea and embedding that idea into language in such a way that the recipient of the language will be able to unpack your idea, and not some other idea from the given language. However, language is a chaotic system with such annoying things as homonyms and ambiguous grammatical structures. (consider a sentence like: He sold him vegetables that were grown in his own garden. Clearly we have two men transacting... but were the vegetables grown in the buyer's or the seller's garden? Did you even notice this ambiguity before I pointed it out? Perhaps the context for this sentence would clarify, but it's also perfectly possible that it simply doesn't. How easily could a sentence containing such an ambiguity sneak into your paper and mislead an expected half of your readers?) But it's even worse, some prose is exceedingly boring and impenetrable, you might not even be able to sustain the attention to unpack any ideas from it at all, nevermind ones the author didn't intend to embed in it. Thus, the most successful researchers will invariably end up being those with sufficient communication skills to produce prose that is easy to read AND conveys the intended ideas, and this means that to be a good researcher requires not specialising fully in your field, but partially in your field and partially in communication. Thus it seems that the more successful authors are going to end up not being the most visionary researchers, but the one's who are visionary enough AND have greater skill in communication. BUT, what if a tool existed that could enhance the researcher's prose... a system that could catch and address those sneaky ambiguities, improve diction for greater clarity, and make the prose more readable. Suddenly, the researcher can devote more time and effort into the actual research, instead of workshopping their prose. This would also result in a quality of life improvement for the review process, now the reviewer is less likely to run into dense impenetrable prose, can more easily unpack the author's ideas, which means that it would be easier to catch if the author made some unjustified leap in logic or overlooked some important factor, or if the author is absolutely right in a massively revolutionary way, the reviewer will now not need to agonise for minutes on sentences, hours on pages, just to untangle the bad prose and discover this revolutionary revelation, or more likely, give up on untangling and judge the epiphany put to paper to be absolute nonsense. What kind of tool would be able to fulfil this role? Obviously, an AI language model. Either that or we need to formalise a standard of pairing up communication specialists with geniuses into researcher/writer teams.
@@methatis3013 I wasn't talking about making the writing digestible for the lay person. Obviously it would be good to have it all be readable for the average joe, but bad writing can also make it impossible for a subject matter expert to make sense of what the scientist said he did, and thus, even a subject matter expert would be unable to judge the validity of the research. Did you even read my whole comment? I know it's on the long end, but nowhere in it do I express a concern for the ability of peasants to digest academic writing.
I swear those people just pretend to their job 😂. Few days ago I've downloaded a paper about carbon nanotubes from "Journal of Materials Science" by Springer and there was, completely out of the blue, the following sentence: 《...the effect of chirality on the stress of CNTs increases with the increase inthe United States as the strain. 》 😂😂
The biggest problem aren't the peer-reviewers don't looking at the paper....it's the authors don't looking at their OWN paper. Even when using AI to create images, you should at least look at it before using it in your own paper. If they show so little care for presenting their own research, how little care did they take to research. I would die of shame if i were them.
I've had reservations about Frontiers for a while but this event is where I put them in the "predatory" box. It's clear that they are just trying to grab as many APCs as possible without regard for academic standards.
Pay the reviewers decently for their work, publish their names, and keep a review index, like a citation index. It is all part of the academic environment.
Then you have the same issue that peer review was designed to address, who is paying and who benifits from this paper saying what it says. Once money becomes involved the two usually end up being the same party. It's easy to hear something online from someone you think has authority and just roll with what they say but the issue really isnt the peer reviewers, they actually tend to do a good job but the editor (the journal) has the final say on what gets published
Shouldn't the peers who "reviewed" these papaers be subject to professional criticism for their failure to properly review? They are endorsing fraud. That's a crime in most countries.
Turns out the peer reviewers raised concerns, the editor posted the questions to the authors, the authors ignored half the questions, and the editor decided to publish anyways.
Seems to me they should have done more than just "raise concerns" -- more like say loudly and clearly "This is complete and utter rubbish, throw it straight in the bin."
@@gcewing the peer reviewer can write comments and questions for the author, and signal whether they suggest the paper for publication, want a revision, or reject. But the decision is always the discretion of the editor. The peer reviewer can’t force the editor’s hand to throw anything into the bin.
This is much more than a quite serious. Peer review must be an entirely transparent process. The reputation of the peer reviewer must be on the line. Peer review needs to be paid for. Even that is problematic where the academic publishing process has been gamed.
Who employs the scientists? Universities. The university should have their funding on the line when their scientist publishes junk. The university should review the papers for obvious issues like this before it gets submitted. That's where the money comes from and that's why all these "scientists" keep pushing out junk papers so they should be held accountable.
It is relatively transparent, you can look up who reviewed this paper on the webpage. The problem is the publishing process in journals that have a high incentive to apply predatory practices. But in this case, the authors mentioned in the paper that the figures were AI-generated (which is allowed according to Frontiers' author guidelines), and one reviewer complained about them.
Could you please cover the recent paper published in the journal of Surfaces and Interfaces, which was authored by ChatGPT and passed through the 'review' process without being caught?
This reminds me of a case where a paper from iPhone autocomplete was accepted in 2016. And with no modern GPT fluff, entirely nonsense language that is not even coherent can go through those checks. And with modern plagiarism machines, scientific journals looks more and more like a yellow press
@@KnakuanaRkaWas also wondering that, so I looked it up: Christoph Bartneck submitted a nonsense paper with a fake identity at International Conference on Atomic and Nuclear Physics, which was accepted for oral presentation after 3 hours. He made a blog post about it and there are some articles about it. There probably was no peer review involved and it may not have been published (not sure, couldn't find info about that quickly)
@@KnakuanaRkaA professor called Christoph Bartneck received an email asking him to submit a paper to a conference on nuclear physics. Since it was not his area of expertise he used iphone autocomplete, added some photos, submitted under a fake name and within two hours had an email saying the paper was expected please pay us US$1k to speak at the conference. The conference was The International Conference on Atomic and Nuclear Physics.
There was also the "Get me off Your F**king Mailing List” paper, accepted by the International Journal of Advanced Computer Technology which is a really fun read.
there are also peer reviewed papers where the introductions start with: “Certainly, here is a possible introduction for your topic:” i don’t get why they don’t even bother to take a glance at the text before submitting it!?
This is actually a problem I am noticing in art/design world too. A lot of people are not hesitant enough to not let ChatGPT decide the descriptions of their artworks. If you are a creator of something, I find it almost a necessity that you speak of your creation 'IN YOUR OWN WORDS'. Yet, when confronting the professors with these issues, I have been either met with 'well, it's their choice, and if that leads to the downfall, so be it' or 'I mean, it's a new world, adjust to it or you are left behind'. Having said the above, is it only my impressions, or does ChatGPT always try to sum up the essay in a hopeful note? Even when the bullet points I am giving are pretty grim, it's one thing I cannot help but notice straight-away. Thanks for the insightful video.
So true, so true. If as a creator, you cannot articulate your own art... It means you never really thought through about the themes, process or any subject in you life. Just an empty head.
@@bbrainstormer2036 Nah, you write some summary, then put them onto ChatGPT so you have a full article. Problem there being, is that the description of an artwork of any sort, needs greater attention to nuances. ChatGPT has a tendency to muddle things in a vaguely optimistic positivity, even when the summary prompt suggests otherwise. That is a problem that I am referring to.
I think someone can give ai a lot of information and then it can make fluent descriptions that are accurate to the creator’s intentions. Sure you could do a less specific job, or a more specific one. It’s just how one does it.
Kinda reminiscient of security theater. TSA fails far too often to promise security, and there's no evidence that the pretense of security accomplishes anything objectively. But it's quite effective at creating a feeling of security for the passengers, so that's the main point of it. If peer review is equally shoddy, then the only basis for the reputation of journals is blind faith. I've seen highly reputable journals publish outrageously wrong papers. So if there is bad information out there, the only way to combat that is by publishing contradictory information in other papers. Otherwise the spread of misinformation under the guise of science would become a huge epidemic.
Perhaps you mean _”has become_ a huge epidemic” confronting scientists today? Consider, the trouble with misinformation - or simply bad information - may be that even a little calls everything into question. It’s pernicious. Science in most fields today builds upon what has been published. Entire areas of study are theories stacked upon theories, the sturdiness of the entire structure depending on each building block being solid. Some papers are more foundational than others, but all still contribute. It seems germane that one of the first steps for a new scientist in even formulating a research question is to run an online search to see what is in the literature already. Has anyone asked the question already? Is the theory you might pursue already disproven or proven? Those initial searches will only review titles and abstracts, not deep dive to parse the data in papers for discrepancies. How often have fraudulent papers distorted our efforts, steering the entire thrust of a discipline in one direction or another, do you suppose? I think most would agree it’s impossible to know once more than a few cases of fraud in frequently cited articles have been exposed. Some areas will be more vulnerable to distortions resulting from fraud, mistakes, or poor work. One of the most fragile I can think of is climate science. Papers are produced that use theoretical models which take as their inputs variables produced by other models, that in turn take as their inputs data sets that have been adjusted by applying other models. Given that these global models are attempting to predict outcomes from dynamic interactions within chaotic systems, [here I mean chaotic as in a profound dependence upon initial conditions] what is the likelihood for a fatal flaw to arise. This is in what may be one of the most important areas of scientific endeavor our species has undertaken. The stakes are extremely high. I would submit that even when not as dramatically apparent, the stakes are extremely high and the consequences very difficult to predict. I think we already have a huge issue at hand.
That could actually be a great analogy but from someone in the industry, its more that the journals are TSA and peer reviewers are the x-ray machines. This problem starts making a lot more sense when you look at it that way
I sent this picture to my friend in a discord server we primarily use for VC and as such doesn't get much actual messages so I've been staring at this picture while playing counter strike for about 2 months now
I would think there is an obvious role for a paid editorial review board that does _BASIC_ things for _every_ paper such as check spelling and grammar, review images for duplicates and manipulation, check that labels make sense, check graphs for accuracy, review data and math. That review board should have access to paid consultation by specialists as needed, such as statisticians or mathematicians. That should probably occur BEFORE the paper goes out to peers to be reviewed…. After all a journal is a publisher not a platform and has responsibility for what they publish.
The analogy goes like this. Imagine that you are a student about to take your A-level mathematics official examination in next 2 weeks. You saw a new mathematics textbook from a different publisher in a bookshop. You bought the book and returned home. One day around 5.30 pm, you attempt to do 5 questions from the textbook, you selected 5 random questions from these 5 chapters: Matrices, Complex Numbers, Vectors, Differentiation and Integration. You spend about 25 minutes doing the five mathematics questions, scribbling down your working on 5 different sheet of papers. The time is about 6 pm, you stopped and went out to play basketball with your friends. You returned home around 7.30 pm. You took a bath and had a dinner with your family. You returned back to your study table and checked your scribbled down 5 answers against the book's answers section. To your horror, you found out that you have got it all wrongs. But you said "Hey, I have published 5 papers, right? See, I am holding 5 sheet of papers". Another student somewhere out there, did the same feat, but he attempted 2 questions from 2 chapters: Differential Equations and Numerical Methods and he got it all rights, a 2 out of 2 and you got 0 out of 5. But you said, "Hey I published more papers than him, 5, he only got 2. I deserved to be an assistant professor, right? If I keep on publishing more, I get to be promoted to be a full tenured university professor, keep on doing this, then onward to become a university president or chancellor or maybe a future director of a research institute, right?" You found out that your friend Thomas, did the same feat, he also got it all wrongs, 0 out of 5 from the same 5 chapters. But this is okay, since you are going to put a reference at the end of your 5 papers, citing Thomas's work and Thomas returned the favor and did the same, citing your work. Now you have them all: published papers, citations, H-index, impact factors, research grants, etc...
Not declaring text as generated by ChatGPT in a scientific publication is plagiarism in my opinion. Scientific publications are supposed to cite all their sources. ChatGPT draws from a large amount of sources it trained on. Using text of someone else without clearly citing it's source is plagiarism. This includes text generated from LLMs.
@@stephenallen4635that's why you make the person publishing pay a fee that goes toward impartial review, the money may come from the author but it's paid out by the journal
Academia needs an overhaul in how researchers' performances are assessed. It makes absolutely no sense to look at bibliometrics anymore. AI has only sped things up, the current publishing model has been broken for quite some time. I reckon some sort of system where only a limited amount of self-appointed best yearly efforts are published and considered for evaluation would be much more productive for Science.
Historian here: AI would have no idea what I was talking about and spit out critiques that often have nothing to do with what I'm writing about. Just like the average peer reviewer
It’s a perilous convergence to be sure. The Information Age has withered into the Manipulation Age where there seems to a con under everything you see. I am worried for our children.
I disagree with the premise that AI blunders making it to publication is thr fault of thr peer reviewers. Academia is full of pay-for-play publishers (Frontiers being one of them). I have plenty of stories of referees recommending major revisions or reject for these papers and editors accepting them instead.
Technically, ChatGPT doesn't "make stuff up." It just doesn't understand references. It knows what a citation should look like, and how it should be used, but not what it is. Some lawyers submitted a court brief that was generated by ChatGPT, with bogus citations. They judge was NOT amused. They got sanctioned.
Frontiers journals' practices have a very bad reputation, and bizarrely in 2023 their chief executive editor published an open letter decrying "[critics] sloppily promulgating “the p-word” [p-hacking]; unfortunately, this unethical behavior is being noticed, creating concern and bewilderment. The p-word is a blanket derogatory term that is so easy to use that it blocks scientific, critical, and common-sense thought processes." Preregister your outrage!
This is not news: every now and then, someone pushes through a deliberately non-sensical article that gets magically peer-reviewed, which is strange, because everyone knows that "peers" are infallible, uncorruptible all-knowing people with infinite spare time and budget to carefully check everything that gets published.
Thank you for your videos, and I find it awesome that you put your own progress-bar to indicate the bit that corresponds to a product-placement. I don't mind watching it and I think this one was very relevant, but on top of that, I appreciate that it shows the level of respect you have for your viewers. A lot to learn from you :D
the faster these tools are developed the better. it might finally force the community to actually publish the raw data and subject it to actual scrutiny
I would just like to point out that that was really smart of you to mention how your recent videos have done so well because as soon as you said that I wanted to see what they were about smart man
International bodies need to not only fund research but also the peer review process. Peer review is not the problem. The obsession with competition in academics is the problem. Competition can drive innovation in very narrow circumstances, typically it runs interference against progress. We have observed this time and time again and academics and economics and sociology. We need to be scientific about our issues and critiques. These the profit competition fetish model need to be eliminated.
Nice use of integrating your Content with your Sponsor Message for Ground News. I wish other "Creators" would do the same thing, and I think that Sponsors should want this since I think that this offers a more compelling story for their product.
As an AI language model, I don't have personal opinions or the ability to critique a video I can not access. However, I can provide a summary of highlighted strengths to approximate a comment. It is commendable how the video meticulously delves into the subject of the increasing prevalence of AI-generated content in peer-reviewed papers, providing varied examples throughout to highlight the points while also having notable humor to keep the viewer engaged through its runtime. The video makes it clear how pivotal it is that the audience is aware of the impact recklessly implemented AI-generation can have on a paper's validity and Regenerate Response
As a human who has never tried pretending to be an AI before: That was fun to write. (Edit: Just put this in a bunch of AI detectors and the majority flagged it as AI! (I deleted the header and "regenerate response" when checking))
It's becoming quite concerning as many bad players fake their research without employing Ai but with the use of Ai it would become much more difficult to vet the correct/proper ones. Also peer review has also degraded too much and must be improved.
Thank you for addressing how dangerous this is becoming. So many people simply believe things being written by these models when they just hallucinate so much info. It's horrifying that it's becoming established in academic spaces. We really need more exposure and education for regular people on what's going on here.
I haven't laughed this much since yesterday (actually a compliment). The longer you look at the image, the more funny it becomes. They labeled a rat as Rat. Why are the steps in the order of 2 5 4? What's a Retat and what are Stenm cells doing there? Is that a spoon?
i think language models can be helpful in taking out the work of refining linguistic presentation, you can start with bullet points and get the style you want. that cuts effort that doesnt constitute to quality, as long as you actually check the result to a) be representative of what you wanted to express and b) does it's job e.g. presenting and explaining and as someone who uses chatgpt regularly i know that you can never ever skip that job. it is so common for regressions or hallucination to sneak in at completely unrelated prompts were you expected it the least. let's not forget that it reproduces language statistics, and sometimes what you want is a statistical outlier that it thus cannot even reproduce
Ground news is biased. Because reality is not equally distributed to “both sides” of the political aisle. It’s not linear, first of all, so doing that balancing is automatically giving attention to BS that should not be treated similarly.
All news are biased. The only way to know the reality is to painstakingly fact check all the details. But viewing news from all different perspectives is the next best thing.
Artists already said that ChatGPT is not a tool, it's a substitude. so may be in that paper it's a substitude for an author, maybe the leading one or the one who did the whole bunch of job, who knows for exactly?
I've already noticed it getting more realistic. A lot of the most obvious tells are starting to blend a lot more, and it's beginning to get harder to just immediately determine that an image is AI generated.
You can understand the potential pitfalls in a methodology without using conspiracy theorist dogwhistles. Science has consistently been the BEST way to improve life for the MOST amount of people. (Whether though medicine, increasing crop yields, travel & telecommunications, etc) You existing in a modern western world is “trusting the science” in thousands of ways every day. Oh, and also, your home country would probably only have 10% of its current gdp (and worse quality of life) if it didn’t have access to science’s inventions. 😂
This is exactly why discussions around topics like "academic freedom" piss me off so much. The entire framing of the debate ignores how academia has become a deeply corrupt and sometimes outright fraudulent institution. It's not a question of free expression, which is already protected in most of the countries having this debate. It's not a question of being able to freely choose research topics, because that is simply not a thing for the vast majority of researchers. It's a question of whether funding bodies should be allowed to freely buy the few information generating resources we actually have with no restrictions, whether journals should be allowed to continue their monetised stranglehold on new information that is supposedly in the interests of humanity, and whether the few professors at the top who aren't reliant on grant money are allowed to do whatever they want regardless of ethics.
Putting ChatGPT as your co-author actually seems like a good idea to me: it tells the reviewers that ChatGPT wrote some of the paper, thus allowing them to check for inaccuracies within it which ChatGPT often fails at, and isn’t concealing the use of ai in the writing of their paper.
AI Pros: People with no skills can make cool shit AI Cons: The complete obfuscation of reality, floods of misinformation and AI slop, not being able to tell if a video/image/text/quote/citation is real.
I’ve seen many cases of people testing the peer review process and finding it meaningless. My favorite was a peer reviewed study about how to easily cheat peer review by using certain key words and phrases even when it’s gibberish in relation to the rest of the text.
10:44 To your point at the end, if you look at the title, it makes perfect sense that the author lists ChatGPT. The article is an editorial about the use of ChatGPT in generating papers.
I would think of ChatGPT as a librarian, when I can't get result from keywords in search engine, eg: acronym like VsPs, Cs, Ngs then I present same keyword to ChatGPT and it started talking about Graphic Pipeline, so now I know the correct keyword that work with search engine.
To quote Dan Olson of Folding Ideas: “Cringe. There’s no other word for it. This makes me cringe. It’s embarrassing.” To think I’ve spent almost 4 years in grad school trying to get quality data and this is allowed to be published is spectacularly disheartening. This smacks of laziness and lack of passion. Why even go into science if you don’t want to do science?
Go to ground.news/pete to stay fully informed. Subscribe through my link to get 30% off unlimited access this month only.
you can buy peer review from 10000 to 50000 $. that is why so many fake or crap drugs get thru...remember vioxx
Have you tried chat GBT to find out what all those things are
The algorithm has blessed me with this vid, & I thot it was an April fools joke ! XD
Ground News is biased.
I didn't even realize the ad was an ad; well done!
I so vividly wish we could know the peers who reviewed this 😆😂
the real OG
The peers were other ai
VICE wrote about it and talked to one of the two reviewers (named in the article), who said it was not his responsibility to vet the obviously incorrect images. It doesn't look like they were able to get in touch with the other reviewer, though.
We only know of 1 reviewer based in the US, while the 2nd based in India is unknown (as stated in the Vice article).
@@desmond-hawkinsthat's so stupid isn't it
I saw a paper that had the words: “as I am an AI language model” in its conclusion.
Nice going Elsevier.
‘s cool … as long as they listed ChatGPT as a co-author
@@seanvogel8067 As main Author
Did you try reporting it? Or cross reference findings?
Scientist identifes himself as AI language model, you bigot. Live and let live!
its
I took a look at the "apology" from Frontiers. They noted that one of the reviewers raised concerns about the images being AI generated, yet the authors never responded. Then Frontiers "failed to act on the lack of author compliance with the reviewers' requirements", and they're looking into how this happened. I think we all know how this happened.
So the peer reviewer didn’t approve the paper, and the ‘reliable source’ (Frontiers) accepted the paper anyway? Their reputation is ruined. Why even accept peer reviews if you’re going to cut them out anyways?
@@Krilium This is why publishing in a scientific magazine today doesn't really mean anything anymore.
"I think we all know how this happened." No, I don't know. What are you insinuating?
@@Joe_Yacketori They intentionally ignored the reviewers to push another publication out for profit. Frontiers charges anywhere from $700 - $9000k + in publishing fees (depending on which journal; they have hundreds). They'd rather publish a subpar article and take the cash than to be stricter and aim for higher prestige.
Now, I wouldn't call Frontiers a "predatory" publisher, but they're known to be less reliable than others for this exact reason.
Yes, we all know how it happened. Money money money 🎶
I've been trying and failing to get a legitimate paper on legitimate research with legitimate text published since 2020. The fact that some people can just skate through peer review with zero scrutiny really grinds my gears.
It's always like that.With _your_ stuff it's as if they're very rigid and "by the book" but with someone else's "work" it's as if they're given a lot of leeway.
As long as the "publish or perish" culture continues and the academic world revolves around papers, this will keep happening. Academia should find something else to evaluate researcher performance. Like what they claim can be reprodueced or not, etc.
Has anyone actually heard of "frontiers" before? I didn't.
@@argfasdfgadfgasdfgsdfgsdfg6351I have- and not positively, they’ve been caught up in a number of scandals over the years
@@argfasdfgadfgasdfgsdfgsdfg6351it's a very well known journal
The horngus of a dongfish is attached by a scungle to a kind of dillsack (the nutte sac), 77)
The untold problem is, that all these people doing bad science are often rewarded for it by the system, because they can produce more sh** papers and that is what counts nowadays. Doing slow but good science does not pay off.
That's what happens when the only ones getting grants are the ones who can spit out the most output.
Rewarded and not a serious punishment compared to when caught. You can't give me 100 every time I lie and take 10 when I'm caught and expect me not to lie. Even if it were equal I could obviously spend and come out ahead or live better until. Have to have real costs.
Agreed, novelty takes a priority, accuracy is secondary, the last paper I wrote was a direct debunk of another engineering paper. I was dismayed to find the journal who published the original nonsense refused to publish my response on the grounds that it lacked novelty!!
Capital reigns
It's just like products. Companies making bad products that break easily sell more products adn get more money.
Speaking as someone that has published in Frontiers: It's a garbage publisher. You can't take it seriously, or at the very least, be very sceptical.
Can you name one that can be taken seriously?
@@arnoldvezbon6131 Cell, Nature, Reviews of Modern Physics, New Englang Journal of Medicine, The Lancet...
There are plenty. Look for the top influential journals in each field. Look for ones older than 50 years yet still in print. Go to a top university, look for their top researchers/nobel prize winners, look at the citations they show off on their website: those are great journals.
May I ask what you published? I'd like to read it.
@@arnoldvezbon6131 Nature and Cell are pretty good. And he is right, Frontiers is kind of a joke
Came here to say that. Frontiers is, at least in my field, considered borderline predatory. Comparable to MDPI or Hindawi.
What we learned: When your paper gets rejected from 10 good journals, just send that garbage to Frontier. Frontier prints all the garbage without any of the peer review as it looks like.
I think someone missed the notorious Elsevier peer reviewed article....
@@arodvaz1528 Elsevier is not a journal, they are a publisher.
@@lurifaks92 Who said Elsevier is a journal? Have you read the article I mentioned? Btw if services like Elsevier charge libraries so much for their subscriptions, their content better be good, don't you think?
@@arodvaz1528 You are attributing things on a journal level to the publisher level, so you are saying Elsevier is a journal, alternately you are just not familiar with the process.
Link the article for me please.
@@lurifaks92 I'm referring to the service that links to the article, who's also responsible for its doi, which is not an incidental thing. The journal is not the most prominent name on the article, unfortunately.
My wife is a scientist who does peer review. She's very dedicated about it and it's given her ideas. Mostly that 'this paper is bs and now I can do a study that shows this study, which I won't be able to stop, is wrong and I will get a good paper out of it.' That's 3x now. Though I have noticed nobody has asked to peer review in about 5 years...
And... that's the whole problem with the academic system. Dedicated peer reviewers don't get invited for more reviews, and honest researchers struggle to get research grants.
Science takes yet another massive L. As a scientist I am once again hanging my head. What a joke peer review has become.
I too weep for the loss of integrity in science. Stuff like this rat thing is super obvious but how much is out there that never is discovered? What is going on in the scientific community and for how long? Faking data etc is totally undermining what science is supposed to be and do. A real shame.
How do you know it hasn't always been this bad?
Look up the holy texts of anti-vaxxers, if you want a bit of a reality check.
That garbage made it through peer review, got published and did untold amounts of damage to medical science, just because some greedy little shit wanted in on the vaccines market without the actual skills to compete in it.
And that shit was going on 50 years ago.
@@Imdan92it may have been bad. But before about 30 years ago, it was not treated as a "get rich quick" scheme by universities.
My grandfather and uncle were accounting professors. They remember the days when you could become a professor to teach, rather than purely as a publication machine to generate that federal journal cash.
These are your peers. According to the magazine.
This is exactly why I take all peer review requests and WORK HARD on those projects, just because I believe in my discipline and want good scholarship to continue. The last paper I reviewed, I couldn't keep your channel out of my mind as I checked everything as carefully as possible.
Thanks for your integrity and effort.
thanks for being one of the good guys.
Thank you for your integrity and principles
It’s hard to over estimate the value of a good cautionary tale, no? Having been fooled early in my career by ghostwritten papers and marketing strategies, then seeing some terrible outcomes, I became extremely hard-nosed. There is good reason to cultivate skepticism.
Indeed. And for the more senior scholars who fudge their work-how can they justify the risk? The risk to credibility and career alone should make them shudder-to say nothing of the ethics of the act!
A lot of people don't understand how ChatGPT style language models work; they think it actually knows and comprehends what it's talking about.
They have the same mistaken assumption about humans.
@@roobs4245 Tell me about it. Whenever people point to evidence of ChatGPT not being an AGI because it has no real understanding of what it is saying, I just think, "Well, have you tried holding humans to that standard?"
@@omp199 you're a clear example of it!
Llms has certain reason. They are not random generated shit like image generation machine learnings. They actually have no resembles of intelligence unlike llms who actually understand most of the thing it says but has limited brain capabilities. This is why it is a one step forward to agi but it is not agi.
Cha gpt has no image generation capabilities it just know how to use image generators like it can use a lot of other tools just like humans. So I can do what chat gpt can do with dall e 3 myself.
@@omp199the better argument would be "which one has the capability and or potential to understand its requested subject matter"
100K to publish in nature, and not 1 single $ goes to the peer reviewers
It costs money?! I thought they make money by charging for access
@@julius4858 Money from both ends. At least the high end journals
@@julius4858thats why theyre genius, they charge both ways.
@@julius4858unfortunately not only authors need to pay for publishing their articles, they also don't get any money that readers pay to get the access
@@julius4858 it costs money to publish. You pay to access, and if it's open-access, the authors had to pay APC. I think a lot of publishers still charge for images and graphs too. No authors, reviewers or universities receive any money in this process, publisher pockets everything
It's not peer-review which is the problem, it's the editors of the journals. It happens so often that the reviewers raise concerns but editors of certain journals just do not care. For them it's the number of papers at the end of the year, not their quality
Scientism is dying a slow slow death.
Not really. If the peer reviewer rejects the journal, it won't push through.
@@itsgonnabeanaurfrommeYou mean peer selector. There is no peer review only selection.
@@arnoldvezbon6131 There is peer review in some parts of academia, it's just not consistent.
Funnily enough, most associate editors are also volunteers...
The whole integrity issue of "studies" has been questionable from the start. A "Study" of the effects of Olubolu extract shows mice live much longer than the control group given a placebo. They don't mention that the control group of mice where twice as old at the start of the study.
That's just dirty. But I bet whoever sells olubolu extract paid them handsomely for those results.
It's really easy to doctor results to send clients a certain direction when your target audience doesn't understand the content of your study (because realistically speaking, product based studies are unlikely to be read past or before the conclusion segment)
you cannot feed placebo to rats.
they don't understand human speech etc
you don't even know what placebo is...
And it's why I don't hardly listen to people as much anymore.
How am I to trust peoples word or these "experts" research when this crap is been a thing.
And yet Im told off for "rejecting science" or "Progress" or whatever people spew.
I am so done with everyone Bs.
No wonder some people stay in a bubble, this crap is ridiculous you can't have a open mind without it potentially you gaining absolutely false studies and research.
This is still an issue of peer review. Any good peer reviewer would make sure all methodological details are included, which would make this fakery obvious.
The idea of a study isn't necessarily flawed, really this all falls to peer review and issues in methodology
9:35 Not just that ChatGPT does citations to nonexistent papers, but also that it does citations to real papers that don't contain what they're being cited for. Combine this with the prolific practice of high paywalls and you have a disinformation nightmare. (It also doesn't help that scientific fields, unlike law, generally don't make pinpoint citation a practice.)
I have used AI tools in my research to try and fetch scarce and difficult to find information parallel to my own searches. One thing I noticed is that AI cannot handle compound sentences very well. So the tool told me something like "Condition X triggers response A and B" after asking a specific question, while the original cited source wrote something like: "Condition X triggered A, while B was triggered by Y". These incorrect fusions/compilations of statements can be really tricky as well and are another proof of why AI-fetched info should never be accepted as conclusive.
As a Leregasaur, I remember the resprouization process. It was terrifying.
AI generated Squirrel With Massive Balls was not what I was expecting on youtube.
It's a rat
Drawn together?
It was a rat,not a squirrel.
A resident used ChatGPT for the lit review chapter in their thesis. The examiners took a look at the reference list and found out that they were entirely hallucinated. Don't need to be an oracle to know what happened next 😂
He got an attaboy and a medal.
I once tried to make chatgpt recommend academic literature about various aspects of a certain topic, and once I started going through the list, I realised that nearly all titles were made up even though chatgpt gave rather specific information about what those made up texts contain.
obviously he got published in frontiers
I like to think there’s some cgbt cinematic universe where all of these made up publications exist and it’s pulling them from this imaginary wealth of scientific knowledge unknown to us XD
Hired as new president of Harvard?
I share your concerns with the peer review process. On its face, it's an obviously ineffective process that seems designed to give bad results. It has no real defense against malicious actors, side-channel coordination, or cartels/gatekeepers.
The fact that this is how things is done is one of the reasons I ended up not doing post-grad work. I edited some papers for a lab as an undergrad and they were nearly all a waste of time and effort with me catching some math errors in the data and analysis. That experience also opened my eyes to how much of a pedestal we put academics on, and how unearned this respect is.
during undergrad I saw my lab partner simply record fake data with no hesitation whatsoever.
@@xponen bruh
Frontiers is not a respected journal....but your point and the problem still stands.
As an AI model developer the biggest issue is the circular reference of AI using AI sources. Have you noticed your streaming or TH-cam recommendations get progressively more narrow over time, to the point where you are sick of watching the types of things being recommended? Essentially that’s what’s happening with production AI models at the moment. It’s very possible that many AI systems will become practically useless, at least for a period until this issue is resolved. That’s IF the issue can be resolved.
Yeah. People becoming more and more sheepish and therefore easier to lead down this silly hype bubble only exacerbates the issue.
TH-cam history only remembers about ~2 months of watch history for me (I watch youtube a lot, so my guess it's a set number and not time based, 2 months likely corresponds to how many videos I watch in month reaching that limit), and so keeps recommending me the same stuff I watched 2 months + ago... like youtube, so much content and can't find me much good new content?!
I hate to say this ,but you and the others should of not messed around with this stuff so rapidly.
Imo People these days aren't able to handle AI properly, we struggle to handle Social media right without what cesspool of issues it's brought on.
We already got in the west a Competency crisis for example, we didn't need this AI advancements these past 2-3 years.
I love how I was telling people ten years ago that peer review is flawed and should not be looked at as the authority of reliable information, was told I was wrong by just about everyone, and now this view is becoming more widely accepted rapidly right now.
The garbled text on the images has me dying 😂
I hope you got better.
SAME
I like how it just says "rat" on the thumbnail 😂
They sound like Star Wars character names ngl
There was a phisicist who added his cat as a coauthor, the community liked it very much and there were plenty of letters sent to the cat asking for consult. A world-famous biochemist also added her dog as a coauthor, but it caused her a lot of trouble with the journal editor. Maybe the authors who added chatgpt as coauthor are making the same joke, or at least they're being honest about prompting the AI to corwite. Pesronally, I think the practice of use of AI in academic writing should be outright banned, but people will always seek the path of the least resistance.
Looking at the thumbnail, I assumed the rat image was used as a graphical abstract. It's very catchy, would make for a great one. That this actually got published as a fogure in the main body of a paper is unsettling, to say the least.
They're not "making a joke", ffs. The woman who added her dog wasn't either, she actually tried to pass the damn thing off as a researcher to lend herself credibility. And it kinda worked, because after the journal editor who denied her paper passed on, the others were all "dawwww, pupper" and brought her right back in. 🙄
i completly disagree that ai should be banned. its a very helpful tool for correcting grammatical and linguistical errors especially if english isn't your native languege. if you just copy paste whatever it says then its your fault for not reading the ai's output correctly.
I think there certainly is a place for AI language models in writing academic work.
Writing is chiefly about communication: having an idea and embedding that idea into language in such a way that the recipient of the language will be able to unpack your idea, and not some other idea from the given language. However, language is a chaotic system with such annoying things as homonyms and ambiguous grammatical structures. (consider a sentence like: He sold him vegetables that were grown in his own garden. Clearly we have two men transacting... but were the vegetables grown in the buyer's or the seller's garden? Did you even notice this ambiguity before I pointed it out? Perhaps the context for this sentence would clarify, but it's also perfectly possible that it simply doesn't. How easily could a sentence containing such an ambiguity sneak into your paper and mislead an expected half of your readers?)
But it's even worse, some prose is exceedingly boring and impenetrable, you might not even be able to sustain the attention to unpack any ideas from it at all, nevermind ones the author didn't intend to embed in it. Thus, the most successful researchers will invariably end up being those with sufficient communication skills to produce prose that is easy to read AND conveys the intended ideas, and this means that to be a good researcher requires not specialising fully in your field, but partially in your field and partially in communication.
Thus it seems that the more successful authors are going to end up not being the most visionary researchers, but the one's who are visionary enough AND have greater skill in communication.
BUT, what if a tool existed that could enhance the researcher's prose... a system that could catch and address those sneaky ambiguities, improve diction for greater clarity, and make the prose more readable. Suddenly, the researcher can devote more time and effort into the actual research, instead of workshopping their prose.
This would also result in a quality of life improvement for the review process, now the reviewer is less likely to run into dense impenetrable prose, can more easily unpack the author's ideas, which means that it would be easier to catch if the author made some unjustified leap in logic or overlooked some important factor, or if the author is absolutely right in a massively revolutionary way, the reviewer will now not need to agonise for minutes on sentences, hours on pages, just to untangle the bad prose and discover this revolutionary revelation, or more likely, give up on untangling and judge the epiphany put to paper to be absolute nonsense.
What kind of tool would be able to fulfil this role? Obviously, an AI language model.
Either that or we need to formalise a standard of pairing up communication specialists with geniuses into researcher/writer teams.
@@cobusvanderlinde6871 the job of scientists is not communication. A paper doesn't need to be written such that a lay-person can understand it
@@methatis3013 I wasn't talking about making the writing digestible for the lay person.
Obviously it would be good to have it all be readable for the average joe, but bad writing can also make it impossible for a subject matter expert to make sense of what the scientist said he did, and thus, even a subject matter expert would be unable to judge the validity of the research.
Did you even read my whole comment? I know it's on the long end, but nowhere in it do I express a concern for the ability of peasants to digest academic writing.
I swear those people just pretend to their job 😂. Few days ago I've downloaded a paper about carbon nanotubes from "Journal of Materials Science" by Springer and there was, completely out of the blue, the following sentence:
《...the effect of chirality on the stress of CNTs increases with the increase inthe United States as the strain. 》
😂😂
The biggest problem aren't the peer-reviewers don't looking at the paper....it's the authors don't looking at their OWN paper. Even when using AI to create images, you should at least look at it before using it in your own paper. If they show so little care for presenting their own research, how little care did they take to research. I would die of shame if i were them.
Maybe they did _no_ research. ChatGPT did, and cared to its best
I've had reservations about Frontiers for a while but this event is where I put them in the "predatory" box. It's clear that they are just trying to grab as many APCs as possible without regard for academic standards.
Pay the reviewers decently for their work, publish their names, and keep a review index, like a citation index. It is all part of the academic environment.
Then you have the same issue that peer review was designed to address, who is paying and who benifits from this paper saying what it says. Once money becomes involved the two usually end up being the same party.
It's easy to hear something online from someone you think has authority and just roll with what they say but the issue really isnt the peer reviewers, they actually tend to do a good job but the editor (the journal) has the final say on what gets published
Shouldn't the peers who "reviewed" these papaers be subject to professional criticism for their failure to properly review? They are endorsing fraud. That's a crime in most countries.
Can be. Depends on university and whether they were too trusting or just lazy
Turns out the peer reviewers raised concerns, the editor posted the questions to the authors, the authors ignored half the questions, and the editor decided to publish anyways.
Seems to me they should have done more than just "raise concerns" -- more like say loudly and clearly "This is complete and utter rubbish, throw it straight in the bin."
@@gcewing the peer reviewer can write comments and questions for the author, and signal whether they suggest the paper for publication, want a revision, or reject. But the decision is always the discretion of the editor. The peer reviewer can’t force the editor’s hand to throw anything into the bin.
@@gcewing “raise concerns” is a very broad statement. They aren’t going to actually describe exactly what the reviewers said.
This is much more than a quite serious. Peer review must be an entirely transparent process. The reputation of the peer reviewer must be on the line. Peer review needs to be paid for. Even that is problematic where the academic publishing process has been gamed.
Who employs the scientists? Universities. The university should have their funding on the line when their scientist publishes junk. The university should review the papers for obvious issues like this before it gets submitted. That's where the money comes from and that's why all these "scientists" keep pushing out junk papers so they should be held accountable.
It is relatively transparent, you can look up who reviewed this paper on the webpage. The problem is the publishing process in journals that have a high incentive to apply predatory practices. But in this case, the authors mentioned in the paper that the figures were AI-generated (which is allowed according to Frontiers' author guidelines), and one reviewer complained about them.
Could you please cover the recent paper published in the journal of Surfaces and Interfaces, which was authored by ChatGPT and passed through the 'review' process without being caught?
the label for plain "rat" is even funnier then the made up words imo
This reminds me of a case where a paper from iPhone autocomplete was accepted in 2016. And with no modern GPT fluff, entirely nonsense language that is not even coherent can go through those checks. And with modern plagiarism machines, scientific journals looks more and more like a yellow press
Where did you hear about that?
@@KnakuanaRkaWas also wondering that, so I looked it up: Christoph Bartneck submitted a nonsense paper with a fake identity at International Conference on Atomic and Nuclear Physics, which was accepted for oral presentation after 3 hours. He made a blog post about it and there are some articles about it.
There probably was no peer review involved and it may not have been published (not sure, couldn't find info about that quickly)
@@KnakuanaRkaA professor called Christoph Bartneck received an email asking him to submit a paper to a conference on nuclear physics. Since it was not his area of expertise he used iphone autocomplete, added some photos, submitted under a fake name and within two hours had an email saying the paper was expected please pay us US$1k to speak at the conference. The conference was The International Conference on Atomic and Nuclear Physics.
There was also the "Get me off Your F**king Mailing List” paper, accepted by the International Journal of Advanced Computer Technology which is a really fun read.
Was the peer review done with AI as well? Holy crap. 😂
Shit, that's probable 💀
there are also peer reviewed papers where the introductions start with: “Certainly, here is a possible introduction for your topic:”
i don’t get why they don’t even bother to take a glance at the text before submitting it!?
Many authors are not native speakers and have a shaky command of English, so they might not notice as easily.
This is actually a problem I am noticing in art/design world too.
A lot of people are not hesitant enough to not let ChatGPT decide the descriptions of their artworks. If you are a creator of something, I find it almost a necessity that you speak of your creation 'IN YOUR OWN WORDS'. Yet, when confronting the professors with these issues, I have been either met with 'well, it's their choice, and if that leads to the downfall, so be it' or 'I mean, it's a new world, adjust to it or you are left behind'.
Having said the above, is it only my impressions, or does ChatGPT always try to sum up the essay in a hopeful note? Even when the bullet points I am giving are pretty grim, it's one thing I cannot help but notice straight-away.
Thanks for the insightful video.
So true, so true.
If as a creator, you cannot articulate your own art...
It means you never really thought through about the themes, process or any subject in you life.
Just an empty head.
Can chatgpt even scan images?
@@bbrainstormer2036 yeah. it can analyze images. it's a relatively new features
@@bbrainstormer2036 Nah, you write some summary, then put them onto ChatGPT so you have a full article. Problem there being, is that the description of an artwork of any sort, needs greater attention to nuances. ChatGPT has a tendency to muddle things in a vaguely optimistic positivity, even when the summary prompt suggests otherwise. That is a problem that I am referring to.
I think someone can give ai a lot of information and then it can make fluent descriptions that are accurate to the creator’s intentions. Sure you could do a less specific job, or a more specific one. It’s just how one does it.
Kinda reminiscient of security theater. TSA fails far too often to promise security, and there's no evidence that the pretense of security accomplishes anything objectively. But it's quite effective at creating a feeling of security for the passengers, so that's the main point of it.
If peer review is equally shoddy, then the only basis for the reputation of journals is blind faith.
I've seen highly reputable journals publish outrageously wrong papers. So if there is bad information out there, the only way to combat that is by publishing contradictory information in other papers. Otherwise the spread of misinformation under the guise of science would become a huge epidemic.
Perhaps you mean _”has become_ a huge epidemic” confronting scientists today? Consider, the trouble with misinformation - or simply bad information - may be that even a little calls everything into question. It’s pernicious. Science in most fields today builds upon what has been published. Entire areas of study are theories stacked upon theories, the sturdiness of the entire structure depending on each building block being solid. Some papers are more foundational than others, but all still contribute. It seems germane that one of the first steps for a new scientist in even formulating a research question is to run an online search to see what is in the literature already. Has anyone asked the question already? Is the theory you might pursue already disproven or proven? Those initial searches will only review titles and abstracts, not deep dive to parse the data in papers for discrepancies. How often have fraudulent papers distorted our efforts, steering the entire thrust of a discipline in one direction or another, do you suppose? I think most would agree it’s impossible to know once more than a few cases of fraud in frequently cited articles have been exposed.
Some areas will be more vulnerable to distortions resulting from fraud, mistakes, or poor work. One of the most fragile I can think of is climate science. Papers are produced that use theoretical models which take as their inputs variables produced by other models, that in turn take as their inputs data sets that have been adjusted by applying other models. Given that these global models are attempting to predict outcomes from dynamic interactions within chaotic systems, [here I mean chaotic as in a profound dependence upon initial conditions] what is the likelihood for a fatal flaw to arise. This is in what may be one of the most important areas of scientific endeavor our species has undertaken. The stakes are extremely high. I would submit that even when not as dramatically apparent, the stakes are extremely high and the consequences very difficult to predict. I think we already have a huge issue at hand.
That could actually be a great analogy but from someone in the industry, its more that the journals are TSA and peer reviewers are the x-ray machines. This problem starts making a lot more sense when you look at it that way
I sent this picture to my friend in a discord server we primarily use for VC and as such doesn't get much actual messages so I've been staring at this picture while playing counter strike for about 2 months now
I would think there is an obvious role for a paid editorial review board that does _BASIC_ things for _every_ paper such as check spelling and grammar, review images for duplicates and manipulation, check that labels make sense, check graphs for accuracy, review data and math. That review board should have access to paid consultation by specialists as needed, such as statisticians or mathematicians. That should probably occur BEFORE the paper goes out to peers to be reviewed…. After all a journal is a publisher not a platform and has responsibility for what they publish.
the real joke is my university is grading me on whether or not I use peer-reviewed papers as sources or not.
The analogy goes like this. Imagine that you are a student about to take your A-level mathematics official examination in next 2 weeks.
You saw a new mathematics textbook from a different publisher in a bookshop. You bought the book and returned home. One
day around 5.30 pm, you attempt to do 5 questions from the textbook, you selected 5 random questions from these 5 chapters: Matrices, Complex Numbers,
Vectors, Differentiation and Integration. You spend about 25 minutes doing the five mathematics questions, scribbling down your working
on 5 different sheet of papers. The time is about 6 pm, you stopped and went out to play basketball with your friends. You returned
home around 7.30 pm. You took a bath and had a dinner with your family. You returned back to your study table and checked your scribbled down 5 answers
against the book's answers section. To your horror, you found out that you have got it all wrongs. But you said "Hey, I have published 5 papers, right?
See, I am holding 5 sheet of papers".
Another student somewhere out there, did the same feat, but he attempted 2 questions from 2 chapters: Differential Equations and Numerical Methods and he
got it all rights, a 2 out of 2 and you got 0 out of 5. But you said, "Hey I published more papers than him, 5, he only got 2. I deserved
to be an assistant professor, right? If I keep on publishing more, I get to be promoted to be a full tenured university professor,
keep on doing this, then onward to become a university president or chancellor or maybe a future director of a research institute, right?"
You found out that your friend Thomas, did the same feat, he also got it all wrongs, 0 out of 5 from the same 5 chapters. But this is okay, since you are going to
put a reference at the end of your 5 papers, citing Thomas's work and Thomas returned the favor and did the same, citing your work.
Now you have them all: published papers, citations, H-index, impact factors, research grants, etc...
Not declaring text as generated by ChatGPT in a scientific publication is plagiarism in my opinion. Scientific publications are supposed to cite all their sources. ChatGPT draws from a large amount of sources it trained on. Using text of someone else without clearly citing it's source is plagiarism. This includes text generated from LLMs.
Technically, when the ideas contained within are those of the user…
I mean this is really crazy! How could the reviewers not see that it's obviously AI-generated? I spotted it without even knowing the story behind it!
This is frontiers...
The reviewers may be old?
The fact it took so long for this stuff to start getting exposed is the worst part
This is the problem with not paying researchers and reviewers. Getting payed means you can hold reviewers accountable.
It also means the papers that get favourably reviewed are more likely to align with the views of the person paying
@@stephenallen4635that's why you make the person publishing pay a fee that goes toward impartial review, the money may come from the author but it's paid out by the journal
See, I was wondering about that mouse image! I saw it everywhere on LinkedIn and was wondering if it was AI-generated
Academia needs an overhaul in how researchers' performances are assessed. It makes absolutely no sense to look at bibliometrics anymore. AI has only sped things up, the current publishing model has been broken for quite some time. I reckon some sort of system where only a limited amount of self-appointed best yearly efforts are published and considered for evaluation would be much more productive for Science.
Historian here: AI would have no idea what I was talking about and spit out critiques that often have nothing to do with what I'm writing about.
Just like the average peer reviewer
Already, the younger generation has no expectation of privacy.
Their children will have no expectation of accuracy, no expectation of truth.
It’s a perilous convergence to be sure. The Information Age has withered into the Manipulation Age where there seems to a con under everything you see. I am worried for our children.
Those words are hillarious.
I disagree with the premise that AI blunders making it to publication is thr fault of thr peer reviewers. Academia is full of pay-for-play publishers (Frontiers being one of them). I have plenty of stories of referees recommending major revisions or reject for these papers and editors accepting them instead.
Thats what he missed and I think it's really the most important point
Academia needs to be trolled like this. Peer review is a joke.
Technically, ChatGPT doesn't "make stuff up." It just doesn't understand references. It knows what a citation should look like, and how it should be used, but not what it is.
Some lawyers submitted a court brief that was generated by ChatGPT, with bogus citations. They judge was NOT amused. They got sanctioned.
i've seen an article that started with "sure i can. "
Frontiers journals' practices have a very bad reputation, and bizarrely in 2023 their chief executive editor published an open letter decrying "[critics] sloppily promulgating “the p-word” [p-hacking]; unfortunately, this unethical behavior is being noticed, creating concern and bewilderment. The p-word is a blanket derogatory term that is so easy to use that it blocks scientific, critical, and common-sense thought processes." Preregister your outrage!
Did it literally straight-up say "p-word"??
Also the explosion in frequency of the word "delve" in papers is very telling.
You shouldn't copy paste chatgippity.
This is not news: every now and then, someone pushes through a deliberately non-sensical article that gets magically peer-reviewed, which is strange, because everyone knows that "peers" are infallible, uncorruptible all-knowing people with infinite spare time and budget to carefully check everything that gets published.
it's not new, but its getting worse... generative AI will only make it way more accessible and convincing
Thank you for your videos, and I find it awesome that you put your own progress-bar to indicate the bit that corresponds to a product-placement. I don't mind watching it and I think this one was very relevant, but on top of that, I appreciate that it shows the level of respect you have for your viewers. A lot to learn from you :D
the faster these tools are developed the better. it might finally force the community to actually publish the raw data and subject it to actual scrutiny
I would just like to point out that that was really smart of you to mention how your recent videos have done so well because as soon as you said that I wanted to see what they were about smart man
The problem with sites like ground news is that its all establishment news outlets with very minor areas of disagreement.
And the summaries are worse than the average Reddit comment news bot 😢
International bodies need to not only fund research but also the peer review process.
Peer review is not the problem. The obsession with competition in academics is the problem. Competition can drive innovation in very narrow circumstances, typically it runs interference against progress.
We have observed this time and time again and academics and economics and sociology. We need to be scientific about our issues and critiques. These the profit competition fetish model need to be eliminated.
Nice use of integrating your Content with your Sponsor Message for Ground News. I wish other "Creators" would do the same thing, and I think that Sponsors should want this since I think that this offers a more compelling story for their product.
Dead internet theory has joined the chat
As an AI language model, I don't have personal opinions or the ability to critique a video I can not access. However, I can provide a summary of highlighted strengths to approximate a comment.
It is commendable how the video meticulously delves into the subject of the increasing prevalence of AI-generated content in peer-reviewed papers, providing varied examples throughout to highlight the points while also having notable humor to keep the viewer engaged through its runtime. The video makes it clear how pivotal it is that the audience is aware of the impact recklessly implemented AI-generation can have on a paper's validity and
Regenerate Response
As a human who has never tried pretending to be an AI before: That was fun to write. (Edit: Just put this in a bunch of AI detectors and the majority flagged it as AI! (I deleted the header and "regenerate response" when checking))
It's becoming quite concerning as many bad players fake their research without employing Ai but with the use of Ai it would become much more difficult to vet the correct/proper ones. Also peer review has also degraded too much and must be improved.
Finally!!! Thank you for dealing with this issue
To all future generations: "Yeah, we did this for lols. Sorry. "
we need more like this.... awareness. Also, people who read, then read multiple sources.
Believe it or not, it's going to get a lot worse yet.
Thank you for addressing how dangerous this is becoming. So many people simply believe things being written by these models when they just hallucinate so much info. It's horrifying that it's becoming established in academic spaces. We really need more exposure and education for regular people on what's going on here.
If this passed peer review then there is no peer review.
the reviewers said there were major issues with the paper the journal said jk dont care who asked
I haven't laughed this much since yesterday (actually a compliment). The longer you look at the image, the more funny it becomes. They labeled a rat as Rat. Why are the steps in the order of 2 5 4? What's a Retat and what are Stenm cells doing there? Is that a spoon?
As an AI language model, I really don't see the problem.
Regenerate response
i think language models can be helpful in taking out the work of refining linguistic presentation, you can start with bullet points and get the style you want. that cuts effort that doesnt constitute to quality, as long as you actually check the result to a) be representative of what you wanted to express and b) does it's job e.g. presenting and explaining
and as someone who uses chatgpt regularly i know that you can never ever skip that job. it is so common for regressions or hallucination to sneak in at completely unrelated prompts were you expected it the least. let's not forget that it reproduces language statistics, and sometimes what you want is a statistical outlier that it thus cannot even reproduce
Ground news is biased. Because reality is not equally distributed to “both sides” of the political aisle. It’s not linear, first of all, so doing that balancing is automatically giving attention to BS that should not be treated similarly.
All news are biased.
The only way to know the reality is to painstakingly fact check all the details.
But viewing news from all different perspectives is the next best thing.
I can't tell you how good it is to see someone else who has noticed this.
Great work Pete! This is incredibly important work!
Journals are jokes now.
3:13 Funny enough, Willy Wonka would eventually get its own AI incident after the paper with that Glasgow event
Artists already said that ChatGPT is not a tool, it's a substitude. so may be in that paper it's a substitude for an author, maybe the leading one or the one who did the whole bunch of job, who knows for exactly?
I've already noticed it getting more realistic. A lot of the most obvious tells are starting to blend a lot more, and it's beginning to get harder to just immediately determine that an image is AI generated.
"Trust the science...."
God save us from sloganism.
You can understand the potential pitfalls in a methodology without using conspiracy theorist dogwhistles.
Science has consistently been the BEST way to improve life for the MOST amount of people. (Whether though medicine, increasing crop yields, travel & telecommunications, etc) You existing in a modern western world is “trusting the science” in thousands of ways every day. Oh, and also, your home country would probably only have 10% of its current gdp (and worse quality of life) if it didn’t have access to science’s inventions. 😂
This is exactly why discussions around topics like "academic freedom" piss me off so much. The entire framing of the debate ignores how academia has become a deeply corrupt and sometimes outright fraudulent institution. It's not a question of free expression, which is already protected in most of the countries having this debate. It's not a question of being able to freely choose research topics, because that is simply not a thing for the vast majority of researchers. It's a question of whether funding bodies should be allowed to freely buy the few information generating resources we actually have with no restrictions, whether journals should be allowed to continue their monetised stranglehold on new information that is supposedly in the interests of humanity, and whether the few professors at the top who aren't reliant on grant money are allowed to do whatever they want regardless of ethics.
haven't lost all respect for science but seeing how often stuff like this happens makes one wonder how much else should we doubt or reconsider
Putting ChatGPT as your co-author actually seems like a good idea to me: it tells the reviewers that ChatGPT wrote some of the paper, thus allowing them to check for inaccuracies within it which ChatGPT often fails at, and isn’t concealing the use of ai in the writing of their paper.
Reminds me of when someone had published re-write of M€in K4mpf as a feminist manifesto and it got through peer review too.
I didn't even realize the ad was an ad; well done!
AI Pros: People with no skills can make cool shit
AI Cons: The complete obfuscation of reality, floods of misinformation and AI slop, not being able to tell if a video/image/text/quote/citation is real.
I’ve seen many cases of people testing the peer review process and finding it meaningless. My favorite was a peer reviewed study about how to easily cheat peer review by using certain key words and phrases even when it’s gibberish in relation to the rest of the text.
You know, we should start arresting people who use AI for academic misconduct and such
dude ngl that add for the news site was crazy i actually watched all the way through it lol
"TRUST THE SCIENCE! OMG!" I trust science. I don't trust people. AI is trained by people.
Two of my most favorite things, art and science, are both getting screwed over by AI 😔
Never heard of "frontiers".
They publish almost 200 different scientific journals, hence a big player
The science channels/magazine writers that you watch/read are getting their stuff from Frontiers and other big journals
Youre not in the industry then are you
Frontiers is not a trustworthy peer review publisher.
Really good channel. Thanks for your content! Very interesting.
2rd
10:44 To your point at the end, if you look at the title, it makes perfect sense that the author lists ChatGPT. The article is an editorial about the use of ChatGPT in generating papers.
I would think of ChatGPT as a librarian, when I can't get result from keywords in search engine, eg: acronym like VsPs, Cs, Ngs then I present same keyword to ChatGPT and it started talking about Graphic Pipeline, so now I know the correct keyword that work with search engine.
@@xponen It is a very powerful research aid!
Hoooow
To quote Dan Olson of Folding Ideas: “Cringe. There’s no other word for it. This makes me cringe. It’s embarrassing.” To think I’ve spent almost 4 years in grad school trying to get quality data and this is allowed to be published is spectacularly disheartening. This smacks of laziness and lack of passion. Why even go into science if you don’t want to do science?