6:47 Wait, what!? Okay, that's amazing. Does Elisabeth Bik have some incredible neurodivergence that allows her to spot this by eye!? She's like a detective born at just the right time in the right era to catch these types of. . . anomalies.
Well, would these photographic inconsistencies alter the general results and findings of the paper? From what I can see, they would not. So where is the scandal? These alterations look more like some idiot assistant scribbled with a pen on the original chromatogram, forgetting that pen ink will also be chromatographed, and somebody then replaced the ink scribblings with typed letters in the column on the right. Basically bringing the chromatogram back to its original state before somebody spoiled it. This all looks like clarification, not forgery. If a scientist doubts the results of a paper, the scientist repeats the experiment and sees whether the results match. They don't go looking for digital inconsistencies.
"those are compression artifacts" lmao what a joke. This dude's supposed to be smart but doesn't even have a grasp for how unlikely it'd be for random noise in two areas of an image to match up perfectly? 😅😂😂
The bigger problem is that negative results don’t get published. So everyone is tempted to fudge their data and conclusions and say there is something significant.
True. I think negative results may often be more interesting than positive. I read an article in a newspaper 35 years ago, or so. Researchers found that, "College women who got pregnant were more likely to finish successfully if they got an abortion." Besides the, "Well duh," factor, I marveled at the assumption on the direction of causality. Like maybe it's, "Women who prioritize college over motherhood are more likely to get an abortion." I would find more usefulness from, "I tried this promising medical treatment and didn't get the result I hoped for." Others might try variations and find success, or the medical community learns and tries something else.
A while ago I thought the same, but now I think that if we add a whole new pile of papers about negative results, it would be impossible (it may already be) to stay up to date on even the smallest, least significant micro-area of research. So many journals, so many papers, so much systemic pressure… publish or perish… a big business with many journals that will support your work for a reasonable fee. Are positive results more easily faked than negative results? That’s another question. Do a lazy experiment with a shaky hypothesis, nothing comes out, publish it in The Journal of Irreproducible Results. Done. Next.
@@joajoajoaquin It is exactly this "publish or perish" nonsense that pretty much forces people to forge. You *need* to publish, and *only* positive results get published. Guess what people will produce.
Pretty wild when you think about it like that, but yeah... As a PhD student, I was put through some very harsh and frankly cynical questioning before I managed to publish my first paper. A lot of rejection, even though I am confident in saying it was all honest work that I was able to repeat many times. How come established scientists are not put through the same level of thorough scrutiny?
Surprisingly, the Joe Rogan interview with Terrance Howard went over how peer review is a scam created by Maxwell, the guy who had something to do with Epstein.
Problems with peer review would disappear if we valued experimental replication more. "I'm first!" isn't science; science is different people working on the same problem and coming to the exact same conclusion.
Jan Hendrik Schön... that's the only name I'm going to say. Anything that comes after him labelled "peer-reviewed" isn't worth a minuscule amount of consideration on its own. Peer review has been broken since that guy, and until the scientific community comes up with something that FINALLY fixes the hole Schön shot in it, I'm going to consider cases like this an intended feature, not a bug.
@@federicolopezbervejillo7995 I think it’s because the fraud is often not premeditated. Imagine grad students and postdocs working in a pressure-cooker lab where an overbearing supervisor demands you crank out amazing results that fit their preconceived notions and must make it into a top journal by some arbitrary deadline. Honest scientists too often get stymied by negative results, or are sidelined into the annals of mediocrity.
@@federicolopezbervejillo7995 Much of it is laziness and arrogance. They're in a field so used to publishing reams upon reams of nonsense that no one will ever read, other than a few lazy reviewers, that I'm sure they never actually expected anyone to give it more than a cursory glance.
@jacobmumme You pointed it out right: ALL OVER ACADEMIA. 99.99% of scientists do their research for state or private-company grant money and not out of their own interest. And by chance, they always reach results that support the interests of their clients.
According to Wikipedia, in 2021, she was awarded the John Maddox Prize for "outstanding work exposing widespread threats to research integrity in scientific papers".
Well, if a proper scientist doubts the results of a paper, the scientist repeats the experiment and sees whether the results match. They don't go looking for digital inconsistencies; that's not scientific. These alterations look more like some idiot assistant scribbled with a pen on the original chromatogram, forgetting that pen ink will also be chromatographed, and somebody then replaced the ink scribblings with typed letters in the column on the right. Basically bringing the chromatogram back to its original state before somebody spoiled it. This all looks like clarification, not forgery.
@@winstonchurchill8300 "If a proper scientist doubts the results of a paper, the scientist repeats the experiment." There are no proper scientist then, i guess. Only broke loosers in a dire need of funding.
I think: 1. The peer review system needs an overhaul because it’s completely failing at its task. 2. It’s too easy for a “supervisor” to put their name on a paper without doing the legwork and then expect the blame to lie elsewhere when they fail at supervising. 3. We need some kind of reward for the person/people who find the scam artists/lazy work in the system.
That's the way to do it... incentivize replication studies. These irregularities were only found because of the authors' sloppiness, and because someone was looking for duplicate data in the results. Other than that sort of double-checking, nobody has done a study to find out whether the work is repeatable with similar results... because who will pay for that secondary research? It happens, but not for most studies. If the person who stitched the western blot image had used different "empty" patches for each cover-up, it wouldn't have been caught. Going forward, most embellishers won't be so sloppy.
@@ksgraham3477 That's not quite it. Working in academia is a full-time job. Would you like to spend several dozen hours a month working FOR FREE just to earn brownie points from journals you rely on to publish your own work? Probably not. So conscientious scientists actually read the manuscript, look at the figures and provide comments, but essentially nobody will go to the trouble of doing a thorough check for fraud. We have daytime jobs! Pure rubber-stamping is a small, if related, problem. When you know that rejected manuscripts will be published eventually, even if not in the same journal, you're incentivized to provide helpful comments on semi-trash research (and even outright trash) rather than rejecting it. But I wouldn't accept something I knew was fraudulent.
Number 2, the supervisor putting his name on a paper without doing the leg work, is how you get ahead in Academia. It is proving that the academic system is broken and we need a different way to choose and evaluate research.
Right? Supervisors expect to like, privatise all the credit and fame and socialise all the blame and mistakes. Fuck them. If you're the lead author and your paper is fraudulent, THAT IS ON YOU! None of this bollocks 'oh it must have been a research assistant sneaking in bad data not my fault!'
There should be a Nobel Prize for proving fraud or outright disproving the truthfulness in important papers. That might actually create a healthy tension between the scientists doing research and the people keeping them from using tricks and/or deception to gain status and wealth.
When I worked in academia, I was there long enough to hear rumors about specific professors. Some of these rumors even came from their own students. Only a few profs names came up again and again, but it became predictable after a time. There were certain names you came to mistrust, even if you had no direct proof or contact, because the stories never stopped coming. Whether it was an academic integrity problem, or a problem with interpersonal conduct, some names became associated with this stigma. But the truly discouraging part were the recurring counterpart tales about students who supposedly HAD direct involvement in these issues, and took them to the administration, and their concerns were buried or ignored. These stories ALSO came up again and again. These problems don't exist JUST because there are bad actors in academia. They also exist because *administrators* would rather sweep the concerns under the rug, and avoid a scandal that might hurt donations or grants, rather than maintain a rigid standard of integrity. I'm convinced that a lot of the fraudsters are given cover, intentionally or not, by their departments and institutions who are desperate for the gravy train to continue at any cost.
JF, you have summarized it well. It takes a lot of different people, working in unspoken agreement, to enable this type of fraud. I have seen it from the inside.
Sorry to burst your bubble, but good old Pete here is guarding his backside by never calling a spade a spade. He always skirts the edges and gives the liars weasel room.
@@biggseye No bubble to burst; I have a PhD myself, spent 6 years in academia and am still active. Humans are flawed, but the scientific principles are sound, seeing as I am typing up this response on incredibly advanced phone technology and transmitting this text to a site the whole globe can see at any time of day or night.
@@biggseye I imagine unless you have rock solid evidence that someone has done something themselves, an accusation of that calibre is very dangerous. Even the fact that this kind of stuff is being brought to attention is commendable I think. You don't *need* to point fingers and call people explicitly liars. The scientific community is smart and can draw conclusions.
Elisabeth Bik is the real MVP. When these institutions fire some of these fraudsters they should send the discoverer that employee's would-be bonus or 6 months salary upon termination. It would show that they actually care about integrity and encourage academic honesty rather than just acting aghast and brushing things under the rug. Encouraging honesty and sending a message that academia can have a future when trust in institutions is at an all-time low.
That would create another whole problem of people being shady just to collect money. People should do good for the sake of it, once people get rewarded for something they do the bare minimum in order to obtain that reward.
They don't care, they probably get more cash committing the fraud. People without integrity will act in their own best interest, so you can suspect even rewarding "honesty" would be filled with fraud as well.
Trust in institutions should be at an all time low. The neoliberals that dominate all of our institutions have disdain for the public and choose to implement their agenda through social engineering, manipulation, and collusion with various central gov agencies.
As a supervisor, it is also your role to verify the data and ask uncomfortable questions. At least that is the case in Germany... Maybe you can mess it up once or twice, but not 30 times... You get so much money BECAUSE you have to do this tedious task; you can't just rest on your past distinctions...
@@crnojaje9288 Googling the highest-ranking universities, my first hit says the best German ones are in 28th and 59th place. It's not like it was 100 years ago, is it? There are even universities in that prison colony called Australia ahead of them, so do shave and put your cups in the cupboard in order! ;-)
A med student friend of mine asked her adviser if she should go into research, or medical practice. He asked her how important it was to her to be able to look at herself in a mirror. Abby asked him to clarify, and he said, "They won't order you to commit fraud, but they'll press you to find a way to get their products approved - whether they help patients or not. Could you still look at yourself in the mirror after doing that?" That was 20 years ago, almost. I never found out which way she went.
""They won't order you to commit fraud, but they'll press you to find a way to get their products approved" -- Sounds a lot like the computer software industry.
@@JakeStine But computer software rarely kills tens of thousands of people, like Vioxx, Fen-Phen, and Fentanyl. Makes software design sound almost benign... except they still cheat customers out of their money. Just not their lives. 💩🤡🤯😵💫
@@JakeStine Sounds like every job I have ever had in every industry I have worked. Construction, contracting, software, food and sales. If management is not requiring deception, the customers or culture are.
I love the fact that you took note of your audience's reaction to your first "academia is broken" videos and took action. That action resulted in this channel becoming one of the most unique channels covering these subjects.
@@mrosskne What I mean is, I think his channel was originally about marketing, if I'm not mistaken, but his first "academia is broken" video got a lot of views, so he took note and started to dig into cases like these, which I believe had a good return in terms of audience.
"As the final author on the paper & as a scientist of his caliber" he should NOT just be there in an advisory role: he should be reviewing & confirming the data collected & the work of his co-authors. Let's face it, as a Nobel Prize winner, the paper is flying under his authority & will attract attention in the marketplace because of his name on it. If he's just surfing fame, resting on his laurels & gaining continued fame by co-opting the work of his co-authors, he deserves to go down in flames if THEY fabricated evidence.
For real. If he had so little to do with the paper that he couldn't even determine the veracity of the output, then he should be stripped of his nobel prize for not actually contributing anything.
My ex was a research assistant at various economic-environmental agencies in DC, huge multi-billion-dollar facilities; by now she is probably a lead researcher. Her papers were basically on the effect of various environmental factors on the economies of countries: for example, how rainfall affects agricultural growth in Tibet and how that affects the economy, etc. She would sometimes actually visit the country in question to write the research (although she would never visit any actual sites). But anyway, the point is, I would know the conclusion of EVERY one of her research assignments before any research was done. I was shocked that she did not see it this way. Nothing she researched had a negative conclusion. Absolutely everything she did concluded that X impacts Y, Y impacts society and the economy, and so by giving money to fix X we will fix both Y and the economy. This is the conclusion of every single research paper on the economy/environment of every third-world country. I still don't understand why she could never see this: that she is paid to write what THEY want, rather than to conduct actual research. If she goes to Tibet and writes that everything is just fine, the research center has just lost money for nothing... And obviously, all these research centers are heavily politically inclined. I would guess that 95% or more of the employees, and especially the leadership, all vote for and heavily support one party and one political view on economics/environment/psychology.
Shocking. How many papers pointing out the idiocy of catastrophic anthropogenic CO2 “climate change” have you seen in Nature recently? Money always dictates the conclusions; the studies are a mere formality.
Nothing new sadly. Even 10 years ago when I was studying biochem it was an open secret that if you don't get the results they want they'll find someone else who will to give a grant. It's all crooked.
> need for funding
lol, hard to sympathize with scientists over this supposed desperate lack of funding when one of the best-funded areas of research --- cancer research --- is filled with fraud and mismanagement of resources all the same. Not to mention, fundamentally this (lack of funding) is a problem that will never go away, due to the nature of resource allocation and what we choose to study in the first place.
@@TeamSprocket So what does that mean for integrity in the sciences? I am tired of businesses being demonized for seeking profit (even when they are acting ethically by anyone’s standards) while the sciences are considered noble for seeking funding, even as they use the most unethical means to get it.
As someone who has owned quite a few dogs, I find them eating my homework more believable than these being "compression artifacts", as would supposedly happen with low-resolution rendering.
The second one would be the closest, but it just doesn't look like JPEG. Not to mention it wouldn't be copied around neatly like that. It would also need to be a very, very high JPEG quality setting, because that content looks really difficult to compress - it's pretty close to random, and truly random data cannot meaningfully be compressed. Fun maths fact though: take an infinite string of random data. The chance that you can compress it by 90% is exactly zero, but it can still happen! It's the difference between "surely" and "almost surely": an event can have probability 1 or 0 and still fail to happen, or happen.
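On the "random data cannot meaningfully be compressed" point, here is a quick sketch in Python (my own illustration, nothing to do with the figures in the video) showing that high-entropy noise barely shrinks while a flat background collapses to almost nothing:

```python
# Quick check of the claim above: high-entropy noise barely compresses,
# while a flat "background" collapses to almost nothing.
import os
import zlib

noise = os.urandom(100_000)   # near-random bytes, like dense sensor noise
flat = bytes(100_000)         # a uniform background of zero bytes

print("noise:", len(zlib.compress(noise, 9)), "bytes from 100000")
print("flat :", len(zlib.compress(flat, 9)), "bytes from 100000")
```

In practice the "noise" output comes out a few dozen bytes larger than the input, which is the point: if the blank regions of a blot scan really contain noise, a codec cannot cheaply reproduce them, so identical noisy patches are suspicious.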
@PeteJudo1, the cloned rectangles shown around 5:30 in the video can actually be artifacts of an image upscale. If you upscale an image that has a single-coloured area, you can end up with repeated upscaling artifacts, because the algorithm will make the same "decisions" about what to do there: fed the same nearly blank input, it will repeat the same output until it fills the blank area. I am just a software developer and have no idea about the subject of the photos and their composition, but I have enough experience to say "well... I have seen this before" ;)
When your success in academia depends largely on the quality and relevance of your research, it’s no surprise that people fabricate data. There is essentially an infinite number of directions you can research, and the majority are dead ends. Imagine working your whole life in a merit-based system, climbing to the very top, then getting passed over because you happened to pick a research direction that was a dead end, which you couldn’t have known ahead of time. You can try to console yourself with “at least others now know not to do it this way”, but the temptation to fabricate some results is very real.
Ironically, the negative result could, as you say, be beneficial to research as a whole. But it would hardly be a satisfying outcome for the researcher.
@@joannleichliter4308 Absolutely. Knowing what doesn’t work is extremely valuable and necessary for scientific development. It’s just not recognized as especially valuable as it’s a far more likely outcome.
@@tomblaise Moreover, in testing various compounds unsuccessfully for one purpose, researchers can inadvertently find that it is efficacious for something else. I don't think that is particularly unusual.
@@joannleichliter4308 University (at least Canadian university) is not entirely merit-based. I would go as far as to say that merit is not the main predictor of success. I have experienced, and seen others experience, being passed over for a multitude of reasons that have nothing to do with merit. If you want to include things like politicking and one's ability to blindly follow orders in "merit", then sure, but let's not confuse merit (marks and significant findings) with other things. It's sad to see bright minds overlooked because they refuse to toe the line. People with power in these institutions would rather lie and manipulate people and data to remain "successful" than accept that they are perhaps wrong.
Merit should lie in well-executed research work and not in the results of that work. Researchers (should) have no influence on the results of their research. They shouldn't be blamed for results which are unfavorable to society; reality being a certain way is not their fault.
I know academics who publish hundreds of papers per year and are promoted, and yet they don't seem to do any work. Yet their work gets published in good peer-reviewed journals. In addition, they don't even have access to lab equipment. Who can explain this? I feel even the good peer-reviewed journals are fraudulent and the editors just OK their friends' work. Thanks for highlighting this issue, Pete. As someone who has worked in academia, I can confirm that there are a lot of fraudulent papers.
I have run a lot of Western blots, have published a number of them, and have seen my colleagues publish quite a lot of them. I also had the experience (as a graduate student) of seeing a fellow grad student forced to falsify a Western blot, pressured by her supervisor. When it came to publication, the lab director thought the blot looked really dodgy (nobody dared tell him it was doctored, but he sensed something was not right) and explicitly told the PI that he should repeat all the experiments before thinking of publishing. The PI did not do that, and published without the lab director's name and permission. He got his paper, but the lab director refused to give him a salary raise at his next evaluation because of low scientific standards. In return, the PI volunteered to spearhead a witch-hunt against the lab director, started by a scumbag high on the academic ladder (with similar standards to the falsifying PI)...

In short, in my experience there are two very distinct kinds of people in academia, and the ONLY commonality between them is that they are extremely smart. One type is extremely smart at coming up with new ideas and ways to demonstrate them. The other is the pathological evil genius, who has no original ideas but is an expert at leeching off colleagues, scheming to ruin others' reputations, and stealing their research and especially their funding. I have seen the craziest funding thefts, and the most incredible allegations used to ruin careers. It is super sad that from the outside these extremely different career paths look the same... both are in academia... and when the pathological cases are caught and shown to the public, the public thinks that this is how every scientist is. Which is the furthest thing from the truth. Yet, as always, hard work does not get a fraction of the attention that outrage gets... so the general impression is that academia and science are all about fraud and misconduct, and nobody thinks twice about the fact that all the technology around us (from cell phones to liver transplants) came through science.

The truth is that the majority of scientists are extremely altruistic, sacrifice much more of their lives than people in general do, work long hours thanklessly, are abused and discarded by a system with a lot of unfortunate influences at the top, and stand at a big disadvantage compared to a carpenter or a construction worker in building a solid financial foundation for their personal lives. As always, where money is, trouble follows. As funding for science becomes more scarce every year, even honest people are forced into desperate measures just to stay afloat.

There is a very big difference between totally faking a Western blot and touching up part of a blot to make it look super clean, but the latter is unfortunate because it gets placed in the same category as outright forgery. When you have run Western blots, you know that sometimes they come out looking not so clean: the lanes might not be perfectly parallel, a small uneven gel pocket might skew a lane, the membrane picked up some dust in front of the camera, and so many other little hiccups. The way to take care of those is to run the experiment again until you get the perfect-looking one, neat enough to get published in PNAS or Cell. However, since 2010 or so, we barely have enough funding to run a single blot for a given experiment.
Nobody has the luxury of multiple repeats (which likely require purchasing an additional set of antibodies, and maybe months of experiments to get the protein for the blots). As a postdoc, you have job security that does not extend beyond one year, or in lucky cases 2 or 3 years when you have a major lab backing you up (mainly through nepotism, though it occasionally happens by sheer luck). Most often, if you do not get the blots right the first time, it's the end of your career. You can start looking for a job as an adult with no practical training in any field whatsoever (other than your specialized field, which just chewed you up) and no life savings at all. And it gets more cut-throat every year as the NIH keeps cutting funding on R01s and other grants while reagents go through staggering price increases. The funding system is forcing academia to break, as thriving requires either uncanny luck, with experiments and ideas working right away (they almost never do in biology), or resorting to stealing and faking.
I think hardly anyone has a low opinion of scientists, especially as a profession. They’re just saying what you are-that the institutions are failing and something needs to be done.
I have a hard time believing that a lab at Stanford with a Nobel Prize winning Primary Investigator has such a lack of funding that repeat Western blots cannot be run. 🧐
Being the last-named and corresponding author on the article means that he is the most responsible. The first author is usually a PhD student or postdoc. The Nobel Prize is now just for propaganda purposes.
???? What the hell are you talking about? He's the last author with about 8 people. There's a good chance he hasn't even read the paper. The last author is typically a courtesy. I'm guessing he lent a piece of equipment for the experiment. Being last means your cat probably had more involvement in the paper. Kinda disappointed in this channel. It's pretty much click bait with the Nobel Prize spin.
@@johnsmithers8913 That may be true for some fields, but the comment above is correct that in many cases, the first author will be the PhD student responsible for the project, and the last (and more importantly, corresponding) author will be their supervisor, whose direct involvement depends a bit on the dynamics of a given research group but takes much of the long-term responsibility for the paper. The least involved are usually the group in the middle, which will be students (+supervisors) who just did maybe a small supporting measurement or calculation.
@@michaelbartram9944 ?? I have my Ph.D. The first author, assuming a doctoral student, will publish papers based on his thesis work. The supervisor will work with the student at the beginning to set up the structure of the thesis. From then on, the student is largely on his own but will meet with his supervisor on a weekly or monthly basis. In my personal experience, I would go months without meeting my supervisor, and honestly his input was minimal until the end, when he became a reviewer of the thesis and papers. Yes, the supervisor is generally second author; the third is usually a second researcher who participates by providing something key, such as the samples or the specialized technique/equipment, and is likely to review the thesis and publications before publishing. You could break down the contribution to the research as roughly 80, 15 and 5% for the first three authors, respectively. All other authors after that are "courtesy" authors: generally lab technicians, professors who donated their moth-balled equipment to the thesis, or possibly people outside academia who provided help beyond what an acknowledgement alone would cover. Whether these authors actually read the paper before publication (although I'm sure they received a copy) or after publication is questionable. In summary, one could argue that the first three authors are the only authors who actually put work into the paper and had enough exposure to the data and procedure to determine whether the data was good or bad. If you look at the papers shown in this video, Südhof was dead last in the list of authors. He must have contributed almost nothing to the work, and anyone in academia would know this.
@@johnsmithers8913 He is the last and corresponding author on the article Jude speaks about in the video, which means that he is the most responsible. Do you think that being an author on papers he didn't even read somehow makes it better?
It would be more understandable if the concerns were minor, but these seem to indicate serious data manipulation. He just looks guilty, even if in reality he knew nothing about the data manipulation.
@@adamrak7560 This is a naive question: is duplicating a pattern of white noise, which does not appear to be the crux of the illustration, serious data manipulation? While we are all speculating here, given that the cloned regions in the two questionable protein blots all start at a certain horizontal position, it seems to me that those two images were not as wide as the others, and whoever put the illustration together haphazardly cloned the white-noise, non-data section to give these two images the same width as the others. Not saying it was the best solution, but this is what appears to have happened here. My question: is the white-noise area of that protein blot scan actually important?
5:24 I'd guess that the duplicated regions in the noise are due to some lossy image compression used at some step along the way. However, I'd then expect them in all the images, not just one or two of them.
The idea of AI going through old publications to find duplications like this is fascinating (even if that's not what happened this time)! We already have a reproducibility crisis as it stands, so being able to weed out at least the most obvious fraud would be immensely beneficial for science as a whole. Of course, people would learn to "cheat better", but there's no reason why future algorithms couldn't be built to catch that as well. As sad as it may seem for science now, this could be a major step on the path to "fixing" it.
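As a rough sketch of how the simplest version of such a screening tool could work (my own illustration, not anything Bik or the journals actually use), you can hash fixed-size tiles of a figure and flag tiles that occur more than once; the file name "figure.png" is just a placeholder:

```python
# Minimal sketch: flag exact duplicate 16x16 pixel blocks inside one figure
# by hashing each tile. Only catches verbatim copy-paste; real screening
# would need perceptual hashing or keypoint matching to survive rescaling,
# rotation, and recompression.
import hashlib
from collections import defaultdict

import numpy as np
from PIL import Image

def find_duplicate_blocks(path, block=16):
    img = np.array(Image.open(path).convert("L"))   # grayscale pixel array
    h, w = img.shape
    seen = defaultdict(list)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block]
            if tile.std() < 1.0:                    # skip nearly flat background tiles
                continue
            key = hashlib.md5(tile.tobytes()).hexdigest()
            seen[key].append((x, y))
    return {k: v for k, v in seen.items() if len(v) > 1}

if __name__ == "__main__":
    dupes = find_duplicate_blocks("figure.png")     # hypothetical input file
    for positions in dupes.values():
        print("identical non-flat block at", positions)
```

The flat-tile filter matters: genuinely blank background will always repeat, so only textured regions that match exactly are worth flagging.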
Even if people learn to cheat better, it would increase the amount of effort necessary to do so (and the tools to catch them would also improve, as long as we reward the people catching the cheating).
"Cheating better" actually wouldn't be a problem at all; AI generally greatly benefits from having an increasingly adversarial dataset, particularly when the data is genuine competition and not the arbitrary adversaries researchers are fabricating. The more effective way to beat this would be to intentionally create so many false positives that the model loses credibility permanently, and continuing to train the model to beat them becomes impossible to fund, but that would require the cheaters to be honest, which would never happen. This AI will also never happen, though, since after a single false accusation, the project would probably get axed and ruin the creator's career, with the people who cheated their way to the top throwing around weight to blacklist them. It's too risky of a product to make for anyone inside academia.
@@cephelos1098 Which is how we can have tens of thousands of car fatalities a year from human error, but if we get 1 from a self-driving car, everyone loses their mind, even though it would still be objectively safer for everyone to have the AI do the driving. The pushback against common sense AI solutions can only last for so long before the benefits present themselves and people no longer desire what life was like before it. Keep in mind that electricity had doom ads against it and people were protesting to keep electricity out of their towns. Either way, the passing of time itself is all that is needed to overcome that particular qualm. Especially if the leading experts keep losing credibility and their "weight" means less and less as time passes. Ironically, this is one of the few ways for them to maintain that weight long-term.
You could take the offensive aspect out by using it like a plagiarism checker and making it a standard procedure when accepting a paper for peer review. If there are any anomalies, just ask for clarification.
Which of these two headlines do you find more informative? "Some uniformed Germans in 41,140 armored vehicles seen traveling on French border road" or "Hitler invades France!" ?
Thank you for publicising these matters. Thanks to Dr. Bik. for doing her work. Nobel prize or not this is science and all should have their work scrutinized. This is the only way to make progress. I am loving this disobedience to "authority".
This has happened before, many, many times. Nobel Prizes seem to bring out the worst in institutions and people. Examples include Jerome Friedman owing his physics Nobel Prize to a graduate student, who never got credit for anything. Or Samuel C. C. Ting/Burton Richter, who owe their prizes to a third party. The initial discovery was at Brookhaven National Laboratory on Long Island. Ting doubted the results and dithered; in desperation, a phone call was made to Stanford telling Richter where to look. Stanford had the better PR team, with book authors etc., and they won the war of words. The guy who did the work first got tenure at MIT but is pretty much only known to insiders; this guy does not even have a Wikipedia page! Also, Katalin Kariko/Drew Weissman would have stayed unknown too, considering their low status in the academic community, if the pandemic hadn't happened.
Video idea: what happens to professors after they're caught in scandals? I'm in research, and here are two interesting examples known in our field: 1. David Baker, who is often regarded by professors in my field as the next Nobel Prize winner, recently created an extremely well-funded startup. He also hired Tessier-Lavigne as CEO; that's the Stanford/Genentech professor with potentially fraudulent papers in Alzheimer's. 2. David Sabatini, the famous ex-Harvard and HHMI professor fired for sexual misconduct, recently got big funding for his own lab from two wealthy investors.
The Sabatini "scandal" is absurd. A 50 year old man and a 29 year old woman entering a sexual relationship, the latter NOT an employee of the former, is not a crime. A violation of prudish and litigation-fearful university bylaws, yes, but there was no reason it should hamper his research.
Having identical blocks in an image is actually not as unlikely as the video suggests. It really depends on the encoding method used; digital artifacts really can produce these effects with non-trivial probability. For example, JPEG has low-quality settings at which entire regions of the image end up with the exact same pixel values, i.e. the pattern repeating across the image is just a constant value. These things happen a lot in audio codecs too, and measurement devices have quantization errors: two random floating-point values are very unlikely to be the same, but two measurements of a very low-amplitude signal are very likely to be the same. Much like photographing what is basically a white background with little real-life variation.

I'm not sure which codec was used and I can't rule out wrongdoing just from this video, but I think people throw out these "it's as likely as 1 in 10^900" figures way too readily without them being correct. The actual digital processes generating these numbers and images are complex, NOT purely random, and it's usually really hard to tell the actual probability of something like this happening. On top of that, for someone like Sudhof, who has hundreds of published papers with thousands of images in total, it might not be as unlikely as you might think to have 35 papers with random artifacts. That's before accounting for human error, like the paste error in the video (which really is an honest mistake that can happen), and before accounting for the hundreds of thousands of scientists out there making mistakes all the time. I'd wager that finding a high-profile scientist with a lot of problematic papers due to no wrongdoing but pure chance is not an unlikely event.
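To make the low-quality JPEG point concrete, here is a small experiment (my own toy numbers, nothing from the paper): low-amplitude noise on a near-white background is pushed through an aggressive JPEG round-trip, and you can count how many 8x8 blocks come out byte-identical.

```python
# Rough sketch of the point above: low-amplitude noise + aggressive JPEG
# quantization can yield byte-identical 8x8 blocks. The quality value and
# noise level are arbitrary choices for the demo.
import io
from collections import Counter

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
noisy = (245 + rng.normal(0, 2, (256, 256))).clip(0, 255).astype(np.uint8)

buf = io.BytesIO()
Image.fromarray(noisy).save(buf, format="JPEG", quality=20)
buf.seek(0)
decoded = np.array(Image.open(buf))

def duplicated_blocks(img, b=8):
    tiles = Counter(img[y:y + b, x:x + b].tobytes()
                    for y in range(0, img.shape[0], b)
                    for x in range(0, img.shape[1], b))
    return sum(c for c in tiles.values() if c > 1)

print("identical 8x8 blocks before JPEG:", duplicated_blocks(noisy))
print("identical 8x8 blocks after  JPEG:", duplicated_blocks(decoded))
```

The caveat is that what JPEG produces are small flat tiles locked to the 8x8 block grid, which is not quite the same thing as large, irregular duplicated patches of texture.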
These artifacts may be way more common than you think - th-cam.com/video/7FeqF1-Z1g0/w-d-xo.htmlsi=hXJ-J96cgwGXdM6r (or, if you prefer English, search for "Xerox glitch changes documents"). In short, a memory-saving algorithm looked for similar blocks of pixels in scanned documents and replaced them. The block size was similar to the letter size in common documents. The effect: multiple documents where significant numbers were replaced with wrong ones (6 -> 8, 8 -> 9). And since, by definition, the algorithm tried to find matching blocks, the changes weren't obvious when reading. I don't think matching blocks of, basically, noise are proof of wrongdoing. Sure, Sudhof could have written a better answer, but when someone throws a career-ending accusation at you and you don't know what's going on...
If people faked data by random sampling from a Gaussian with the desired mean/variance, it would be much more difficult to prove anything. It is scary to think about how many experiments can be manipulated in ways we would never notice.
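To illustrate how little there would be to detect, here is a toy fabrication (all numbers invented for the example, and this is obviously not anyone's actual method): two "groups" sampled from Gaussians with a chosen effect, fed straight into a t-test.

```python
# Toy illustration of the point above: "data" drawn from Gaussians with a
# chosen effect size is statistically indistinguishable from a real
# measurement of that effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control   = rng.normal(loc=1.00, scale=0.15, size=12)   # fabricated control values
treatment = rng.normal(loc=1.35, scale=0.15, size=12)   # fabricated treatment values

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4g}")   # looks like a clean, significant result

# Image screening only catches crude fraud (duplicated pixels, reused panels);
# numbers sampled like this leave no such fingerprint, which is why
# independent replication matters so much.
```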
What's shocking to me about this is how crude the manipulations are. I am a researcher myself, and if I ever had the intention to make fake data for a paper, I can think of so many ways of doing it in a much more sophisticated manner so that it would be much harder to find indications for the manipulation than looking for duplicates. And there are MANY people with the technical skills to pull this off -- therefore it stands to reason that people probably are cheating in more sophisticated ways while evading detection.
I agree, this must be the tip of the iceberg, just the sloppiest manipulations can be detected. I mean, with those blots, considering that these researchers probably collect hundreds of such blot pictures they never end up publishing, how lazy do they have to be to just copy-paste selected areas from the same picture to paper over the blots they want to erase?
PNAS is not a peer-reviewed journal. It's the ultimate old-boy's club. NAS member submits the paper on a non-member's behalf. It was meant to be a high profile rapid publication journal of exciting new results. There was a time when there were not any other options (or few options). That time is kind of past. So a retracted paper is egg on some NAS member's face. But I always treated anything in PNAS with a very healthy dose of skepticism.
9:29 I think you should mention at the beginning of the video that he most likely didn't make these errors himself. That kind of changes the outlook on him; in my opinion it still looks bad, but it doesn't look like he knew it was happening.
Well, all authors are responsible for the paper. No exceptions. If you do not trust what's written, or worse, you do not know how things were done, don't have your name put on it. As simple as that. There are scientists who publish a couple of articles a year and know what's in their articles; others publish 100 papers a year and do not even read them.
most notably to push shitty right-wing socioeconomic policy that places society behind individual fortunes /see: pro-smoking, anti-climate, anti-vaccine, anti-PoC, i could go on but those are the major lowlights //but you gotta hand it to those rich people - they know how easy it is to clown their base ///and take them for all they're worth. see: magats
TBH, why couldn't the similar regions in an image be an artefact of an image compression algorithm? That's basically what image compression algorithms do: look for "unimportant" regions (perhaps where all pixels are very similar colours), throw away detail, and replace it with a general approximation of the colour/texture. Run-length encoding does it most simplistically, with stretches of the same colour being replaced by a single value plus the length of the stretch; more complex algorithms might do it with small 2D "patches" that are representative of larger areas. So I wouldn't rule out some unusual lossy compression (JPEG etc.) creating artefacts like those described here. Also, changing resolution, or attempts to brighten, despeckle or apply other image-processing algorithms, especially if done at too low a resolution, might leave awkward repeated patterns.
As someone who went through the submission process before, JPEG is not an accepted submission format for images, rather ALL images have to be TIFF for that particular reason of avoiding compression artifacts. So, I'm calling bs on that part. But more concerning is that there's definitely a pattern of copy-paste on 'several' occasions.
This is journal dependent. Plenty of journals accept jpeg format for images, but if you are submitting images where compression artifacts might affect how your images are interpreted then you probably shouldn’t submit compressed images.
@@thepapschmearmd I'm not sure which type of image you are referring to, but for me it was a western blot. But even if jpeg, there is no way it had anything to do with compression.
Those duplicated boxes in the images reminded me of something... In 2013 there was a scandal where Xerox photocopiers would sometimes change numbers on copied papers, 6 became 8 most commonly. The cause of this was some algorithm that detected repeated structures in the images and copied the same data onto all these places to save memory, but clearly it wasn't accurate enough to distinguish a small 6 from an 8. Perhaps this or a similar algorithm could cause such weird 'duplications'? There is plenty of information about this online if anyone wants to look more into it.
No, that makes zero sense that you would get entire sections of artifacting that were somehow identical in a blank section, multiple times, all within the same image. Why are you trying to excuse what is OBVIOUSLY fraud by imagining some hypothetical scenario that you have zero evidence for, or understanding of? This attitude is how this fraudster managed to con his way to a nobel prize
No, that literally makes zero sense and could not possibly apply to the doctored images presented here. There are MULTIPLE perfectly rectangular, identical sections that appear "randomly" across the same data in otherwise "completely blank" space. The odds of these being true artefacts created by compression are so infinitesimally small that you would be more likely to get multiple perfect bridge hands in a row than for it to occur the multiple times it did in this paper.
@@kezia8027 I don't follow your reasoning regarding them being perfectly rectangular and appearing in practically blank spaces. How would this make it less likely? Since all pixels are near-white, wouldn't that make it more likely to consider the areas 'similar enough'? And the areas being rectangular I'd almost take as a given for said type of compression artifacts. Much easier and more efficient to have an algorithm look for rectangular areas than any other shape.
Looking at the video again, there are other things that point to this not being the cause, however, such as the fact that they only seem to appear in specific 'lanes' of the results, and only in some of the results (assuming the highlighted squares are from an exhaustive search). If these specific spots are of high importance to the results, that would increase suspicion further. The rotated images mentioned earlier are much harder to explain away as well.
@@Jutastre Yeah, you're right, the rectangular aspect was an asinine point that is irrelevant and doesn't help my case at all. But the point is that all that "white" is really just noise. One pixel is more cream, one is more grey, one is more beige; to the naked eye there are no perceptible differences. For these sections to end up with exactly the same characteristics, seemingly ONLY in areas where we would expect to see relevant data and not in random, completely irrelevant sections, suggests a level of intent that could otherwise only be explained by chance or by an algorithm in an absurdly unlikely scenario. And as you say, there are many other indications that this is not genuine. That is the issue: you cannot look at a case like this out of context. Yes, specific data is being accused of being inaccurate/forged, but to determine the likely root cause we have to look at comparable information and the surrounding context. Given these various red flags (which is admittedly all they are), there are more than enough of them to seriously doubt this work and those who took part in it. These red flags warrant MORE scrutiny, not people making excuses or attempting to paint a narrative of innocence based entirely on their own personal beliefs and understanding (or lack thereof). What benefit is there in making excuses for this behaviour? At best it was negligent scientific method and reporting; at worst it's explicit fraud. The scientific community should be able to hold up to any and all scrutiny, otherwise what is the point of science?
This is just sad. Sudhof would have reacted with interest and joined in reviewing all the work submitted if he had not known of the false data. Science. You seek the truth, accolades are supposed to be the perks. Outstanding job, Dr. Elisabeth Bik. We need many more of you out there.
Dude, been loving your vids for the past year or so since I found you. It's so interesting how many things we believe are one discovery of duplicated data away from being destroyed.
It is soooooo common for scientists to photoshop their images to make them look better, even if the general results would not change. And this has real implications. Investors make decisions on how good a discovery appears, and perfect images can convince them to invest.
Science is a search for truth. If such discrepancies appeared in my work, rather than being offended I would want to learn why they are there, and what they mean. The off-hand, poorly thought out dismissals are concerning. Whenever someone seems to be saying “don’t look over there”, I have the overwhelming desire to look.
The first lab I was in, they never told us to fudge the data, but they didn't have the instruments to read a key parameter and essentially told us to make something up, and if the data didn't support the hypothesis the lab manager would be mad for the next week. I tried to build on the work of a couple previous students and was told that was probably a bad idea. It starts from the professor.
Sorry to have to tell you this, but you are very naive. Peer review has become a charade, and indeed a significant proportion of it is faked. Also, even if one reviewer says a paper is rubbish but another says it is quite interesting, the editor will probably publish it anyway.
Having been a peer reviewer myself, I know it can be difficult. Elisabeth Bik has an amazing visual cortex; she sees things I would miss. We need more like her. Data manipulation can be difficult to spot, at least initially. I have just finished a battery of tests on a published data set. This set actually looks good, despite being almost impossible to recreate. Science isn't yet totally broken 😂. Peer review is very poorly rewarded. You have a few days to review material that can be very technical. You don't get paid for this. You do not get thanked or acknowledged for the time and effort. Some put a lot of effort in, others less so. One set of papers in Nature had an overall error rate of approximately 30%; the final figure, when I had it sorted out, was 32% if I remember correctly. I found these problems the day the papers came out. I wrote a note to Nature giving examples of the problems. Nature decided that my letter was not of sufficient interest. Huh? I tried to get it published elsewhere. The authors of the original papers did everything they could to stop it being published. I eventually got it published in the middle of a different paper. It was an unreal experience. The authors later admitted the presence of errors and several years later published a revised paper; the residual error rate was only about 10%. Peer review is not easy. Journal editors don't like admitting mess-ups. Authors can make life difficult for you. The whole process is messy.
@@binaryguru I should add that this is not speculation, it's a fact: I review papers for journals as a world expert in the particular field, and I have seen several papers I said were rubbish get published anyway. Needless to say, no communication from the editor justifying his decisions.
No, peer reviewers have no access to the raw data, they assess the logic of the conclusions and the relevance of the findings. They have no way or incentive to actually check the validity of the data itself.
@@psychotropicalresearch5653 Has become???! It was always this way. Friends help each other even in deceit; just try questioning some inventor-lord or serial discoverer-searcher. It's all bullshit. Most of it, anyway.
Sounds to me like a large part of the issue is that the system is built such that professors who are in an advisory role (and a loose enough role that they can't spot faked data in the study) get the paper published under their own name. If they didn't do the bulk of the work, their name should not be first on the paper. Instead, established professors' names are being put on papers that in reality they had very little to do with, and there is little incentive for those actually doing the work to do it right, because it is not their reputation on the line. For my thesis, I brought the idea to my advisor, we spoke for 30 minutes once a week, and she never even saw the experimental apparatus or the code. For my cousin's thesis, the project was his advisor's initial idea, but the only time he got with his advisor was a 20-minute slot once a week, within which 15 students took turns updating the advisor on their different projects. If all of those are published, that's 15 papers with the professor's name at the front, none of which the professor actually knows all that much about.
Profs' names should never be at the front; that spot is reserved for the trainees who did the work. Senior authors almost always have their names listed last, not first, on published manuscripts.
Issues like this are partially why public trust in academia has fallen drastically, as government policy decisions and products such as medical treatments are developed based on academic findings. I'm extremely happy to see public peer review occurring to start weeding out the snakes in the grass that have been an issue, imo, for far too long. Time to bring credibility back.
Still, the repeating patterns in these western blot images could easily be explained by image compression in the editorial process (which would also make it the journal's fault and not TC Südhof's). Some image compression algorithms (like JBIG2) use pattern matching to reduce image size, as in the Xerox scandal of 2013. That could also be a likely explanation for the exact same noise pattern appearing in the images shown.
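For anyone curious what that substitution failure mode looks like, here is a toy sketch in Python (a caricature of the pattern-matching step, not the real JBIG2 codec): tiles that differ by only a few pixels get replaced by one stored representative, so regions that were merely similar come out byte-identical. The tile size and the deliberately loose threshold are arbitrary choices for the demo.

```python
# Toy sketch of "pattern matching & substitution": each tile is compared to
# previously stored representatives and silently replaced if it is close
# enough, which is how the Xerox-style number swaps happened.
import numpy as np

def jbig2ish(binary_img, tile=8, max_diff=12):
    img = binary_img.copy()
    library = []                                   # representative tiles seen so far
    h, w = img.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = img[y:y + tile, x:x + tile]
            for rep in library:
                if np.count_nonzero(patch != rep) <= max_diff:
                    img[y:y + tile, x:x + tile] = rep   # substitute the stored tile
                    break
            else:
                library.append(patch.copy())
    return img

rng = np.random.default_rng(1)
page = (rng.random((64, 64)) > 0.9).astype(np.uint8)   # sparse "ink" on white
out = jbig2ish(page)
print("pixels silently changed by substitution:", int(np.count_nonzero(page != out)))
```

This only shows that the mechanism exists; it says nothing about whether any such codec was actually applied to these particular figures.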
The correct response would be to be concerned about the anomaly and to follow it up with the first author. The actual response leads me to think that there was either direct complicity or an indirect culture that makes it acceptable to manipulate results. Although I would add that the blot images may have been manipulated for aesthetic reasons... like someone put their coffee mug down on them and someone made the call not to repeat the experiment. edit* Also to add, some supervisors can be really supportive of their teams and PhD students, and will defend them really strongly.
To be fair, someone who blindly supports their team in spite of evidence that they should be inquiring into their accuracy/efficacy is not someone who SHOULD be supervising.
@@kezia8027 Not really; defending robustly at first is a perfectly reasonable strategy, as there is a presumption of innocence. Also, you don't know what happened behind the scenes: the author who made the images could have been asked, come up with the BS excuse, and that excuse was just parroted out. The point is that it might not be as clear-cut as the video makes out, and I personally need more evidence.
Just a comment on the first fluorescent images and nothing else: if you turn photo A89P clockwise 90°, you can overlay the yellow box in it on the yellow box in photo 9-89. So what is to the left of the yellow box in A89P is 'above' the yellow box area, and what is below the yellow box in 9-89 is 'below' the yellow box area. Because the photo plate is arranged as a bunch of square photos, not rectangular photos, these two photos show overlapping fields of view. The areas outside the yellow box in both photos are as important to show as the yellow box area, so to fit a nice square photo arrangement on a plate of photos, two different, overlapping photos were provided. The same with the purple rectangles. I do not see artificial manipulation of the photos, just some overlapping regions in a couple of them; for photography under a microscope this is not unusual. They could have created two larger rectangular photos to replace the two sets of two overlapping square ones, but the layout of the plate would be disrupted. For those who care about nice visual arrangements of photo plates (I am a microscopist), they have done nothing wrong. For those who don't care how their plates (or graphs, plots, tables) look, there are plenty of incredibly awful examples in the literature. As for the rest of the stuff, I will let the medical community review his papers.
As a quality engineer, I know that it is “easier” to point out the flaws in everyone’s work than it is to be the OG and not generate them. In my experience this is an issue of other people doing the work while the manager doesn’t have a flipping clue about it. It happens all the time because the manager needs the project result and doesn’t know what real risks they are taking, because the engineer is trying to minimize concerns. It is absurd that “managers” get their names on others’ work. It’s similar to when managers steal patent credit. These scientists need to start doing their own work again so they actually have a real connection and insight.
There should be a database website for scientific papers that had no significant results, so that those papers actually go somewhere and people can look them up.
These might be artifacts from a dithering algorithm such as Floyd-Steinberg, which is used when reducing an image's bit depth (for example, rendering a grayscale scan in pure black and white). It works by diffusing the quantization error of each pixel onto its neighbouring pixels, and on flat areas this tends to produce regular, repeating textures.
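For reference, this is roughly what Floyd-Steinberg error diffusion does (a minimal 1-bit sketch of my own, unrelated to the actual figures): each pixel's quantization error is pushed onto its neighbours, and a featureless mid-gray patch comes out as a structured, repeating on/off texture.

```python
# Minimal Floyd-Steinberg error diffusion to 1-bit output, shown on a flat
# mid-gray patch to illustrate the repeating texture it produces.
import numpy as np

def floyd_steinberg(gray):
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new                      # push the error onto neighbours
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

flat = np.full((16, 16), 100, dtype=np.uint8)    # a featureless mid-gray patch
print(floyd_steinberg(flat) // 255)              # prints a structured on/off pattern
```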
Having done quite a lot of fluorescence microscopy myself, in the instance of the rotated images I would wholeheartedly believe it could be an honest mistake. You end up with hundreds of images that you have to resize/rotate and do whatnot with. The other "mistakes", though, are quite weird. My guess would be that the pressure of having to get data with results got to someone on the team, whoever that might be.
This happens rather commonly in academia. It won't stop until the honors are stripped away in a public ceremony or a similar exemplary action is meted out. People who have studied the structure of DNA would know that the British scientist Rosalind Franklin's X-ray crystallography was crucial to establishing the double helix, but the honors and the Nobel Prize went to Watson and Crick. Similarly, the Indian Nobel laureate Amartya Sen probably received the honor because of his influential wife Emma Rothschild (of the richest and most powerful European family). He currently has allegations of grabbing a huge plot of university land and some fraud charges levelled against him. Dr. Südhof's case might be classic PhD slave labour. All in all, this undermines the sanctity of the Nobel Prize and disregards the vision and scientific temperament of its patrons.
To be fair, quite a few of the duplications/errors noted on PubPeer got responses from the other authors acknowledging their mistakes and showing the original data or offering to correct them. The majority seem to be genuine mistakes. The others don't seem to affect results anyway. But Südhof seems to get irrationally angry at these posts. Sure, it's annoying to have mistakes constantly pointed out, but it's free scrutiny. Also, if the rest really were unintentional, he should get to the bottom of why they happened. Personally I think there were probably irrelevant visual issues with a few images and someone along the way decided (wrongly) to edit them.
Yeah, the big boss himself is unlikely to be the one who manipulates the data (or even runs the analyses or makes the figures). However, when manipulation is commonplace in a group then it eventually boils down to the head of the group telling people to cover up the ugly stripes in their Western blots and to make some data up when nothing is available (or when the real data doesn't support the hypothesis). In short, it is a culture in his group, and who else to blame for the culture than the head.
I'd be most concerned about the pharmaceutical derived "conclusions", which may well be harming patients and/or making lots of money for some pharma-mafiosi. Has anyone investigated which drug and drug-making company benefits from these "errors"?
AI engineer here: modern AI image-generating algorithms cannot produce exact duplication of the kind observed in these plots. They can produce very similar regions, but because of the way they create the image, the exact pixel values will never be identical in different regions of the image. (It's about as unlikely as random noise being identical in two regions of a natural image.) This looks more like manual photoshopping.
The paper in question was published in 2010; there is no way that any form of generative AI was used in creating these plots. ML models were much less common and way, way less user-friendly back then.
This is as it should be. Scrutiny of papers is essential, and it is the only corrective measure for poor results, whether by malice or simple incompetence. The more suspicious people are, the better. So these scandals are a sign not that things are broken, but that problems are not getting caught as early as they should be.
Considering that he is about to become unemployable, have his tenure and pension taken away from him, along with possibly having to pay damages it makes a lot of sense!
Could these duplications have something to do with the compression algorithms in photocopiers/scanners and similar devices? There is a big problem like that known to occur with Xerox devices, where similar-looking areas in an image were replaced with clones of just one area, leading to numbers being changed in millions of documents, some of which were destroyed after digitizing and now cannot be corrected. Something similar could be at play here and would have to be ruled out. Nobody expects their scanners to betray them, yet it can happen.
@@AbsentMinded619 Sure, you are emotionally attached to the subject, so I'm going to explain. It's bad, but it isn't as bad as plagiarism, like what happened at Stanford.
I’m a photographer and have edited many photos with the goal of getting them to “look right”. Sometimes the unmanipulated result looks misleading, so you tweak it. That doesn’t excuse tweaking scientific images, but if an image in 2010 was ok in the meaningful area and had some artefact in the area of the coloured boxes, then I would reach for a band-aid-type tool to erase the artefact and clone some background area in. No one will ever know! Wrong: this kind of edit is pretty easy to detect if you know how it is done. I can’t tell if this is corrupt science or just the unwise application of photo-editing software in the publication process. I would say there are two conflicting ethical standards here, and the higher scientific standard rather than the image-preparation standard should apply.
So many people who are attempting to claim it could have been caused by a XEROX. Clearly these people haven't seen what artefacts look like, or what copying and pasting "blank" areas over them looks like either. When you know, it is painfully obvious what has occurred. WHY it occurred is still technically up for debate (though I have my suspicions) but without a doubt, these images have been deliberately doctored for SOME reason.
Why put in more effort? This amount got them a nobel prize. Elizabeth Holmes was worth BILLIONS. More effort wouldn't get them any better results, so why bother?
Both my 1st cousin and nephew have Ph.Ds from Cambridge. Academia is not dead in Europe or Australasia. Asking for a fresh replication should be part of the process.
I wouldn’t call academia broken just because of one scientist's misconduct. And academia being able to flag and retract works that might be a product of misconduct and dishonesty, from a Nobel prize winner at that, sounds like a good system to me. It’s not perfect by a long shot, but hey, it’s there.
The point is that there are MANY more instances of research fraud occurring, and the penalties for so caught academics usually being minimal if at all. I recommend Coffee Break's video on the replication crisis, as he also covers two other major academic fraud cases (University of Florida criminal justice and Duke University cancer research).
The fluorescence image I can believe is an honest mistake. Sometimes when we capture images of the same sample, we accidentally capture part of a previous region. We have hundreds of images to go through and sometimes human error occurs. But the WB artifacts and copy-paste data are laughably obvious... You would think these people would cover their tracks a bit better. Maybe in the 2010s it wasn't as important, but in 2023 you are over 90% likely to get caught with copy-paste data (the 10% reserved for low-novelty papers published in low-impact journals that not many people will read and review in detail).
Some Xerox machines would alter parts of copied documents to look like other similar parts. The machines used a compression algorithm that found similar parts of the page so it would only have to store one of them. But it was too greedy and would sometimes consider e.g. a 6 to be a duplicate of an 8. If these images were copied on one of those machines, it could have done the same thing with the images. More info in this talk: th-cam.com/video/c0O6UXrOZJo/w-d-xo.html The quote "Now and even 25 years ago computers were not routinely garbling images by cloning regions" is false. Lossy image compression can do exactly this, if the regions are similar enough to begin with or the algorithm is faulty (as in the Xerox case, which was discovered in 2013, which is between 25 years ago and now). And the probability of either of these cases is much more likely than 1 in 10^986.
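To make that mechanism concrete, here is a toy sketch of the kind of lossy pattern matching that bit Xerox. It is my own simplification, not the actual JBIG2 codec, and the block size and max_diff threshold are made-up parameters: tiles that merely look similar get replaced by an earlier tile, producing exact clones.

import numpy as np

def block_substitute(img, block=8, max_diff=4):
    # Tile a binary image into block x block tiles; whenever a tile differs
    # from an earlier tile by at most max_diff pixels, reuse the earlier tile.
    # This saves memory but silently clones "similar enough" regions.
    h, w = img.shape
    seen = []                              # representative tiles stored so far
    out = img.copy()
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block]
            match = next((s for s in seen if np.sum(s != tile) <= max_diff), None)
            if match is not None:
                out[y:y + block, x:x + block] = match   # exact clone of an earlier block
            else:
                seen.append(tile.copy())
    return out

Whether any such scheme was in the pipeline for these particular figures is exactly the kind of thing an investigation would need to rule out.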
This is asinine. Artefacts and compression could never clone MULTIPLE smaller sections, that are in the middle of BLANK SPACE, MULTIPLE times within the same image. This is ignorant and spreading harmful rhetoric that only excuses poor practices and obviously doctored data. Stop trying to imagine reasons someone isn't a fraud. Let con men speak for themselves. Attitudes like this are how con men like this are able to fester unmolested.
As a visual digital artist of 25 years, I can say that is what happens when you use a rectangular selection to copy and paste over a region. It looks like they used several small samples, put them together, then copied that again to move it to another area. Yes, this is used to cover blemishes.
Maybe if the whole system of Academia wasn’t broken we wouldn’t be relying on people to actually do the review part of peer review after the fact. But when 4-5 top minds in their field get popped, where there is smoke there is fire
Please subscribe!!! Also, PubPeer comments can be found here: pubpeer.com/search?q=Thomas+Südhof
Ask people to "like" more. How did a video this good get only 3K likes on a 60K view count? Seems low to me.
You should interview Elizabeth Bik
Which image generating AI was predominantly used in 2010 for immunoblots in BioLabs? 6:55
Professor at a top university, at the forefront of medical research:
"lemme just turn my pictures 90 degrees, that'll fool them"
"those are compression artifacts" lmao what a joke. This dude's supposed to be smart but doesn't even have a grasp for how unlikely it'd be for random noise in two areas of an image to match up perfectly? 😅😂😂
I mean... it's kind of worked for a while. If it wasn't so easy to slip through, prominent people would at least put in more effort.
Surprisingly lazy at cheating!
@@pablovirus Not unlikely. Modern compression algorithms do that deliberately.
@@joshuahudson2170 Like what? They encode an area as "clone-brush from that other area that has the same noise spectrum"?
As long as the people doing the experiments have a dog in the race there always will be these kind of problems.
If your grad research gets negative results your academic career dies.
Science is now doing what peer review should have been doing the last 2 decades.
Perverse incentives.
This is why I always photograph extra unpublished blots to create unique forgeries.
I'm amazed that these PhDs use such rudimentary methods. I may be a low-ranking engineer, but I'm pretty sure I could do significantly better forgeries.
@@federicolopezbervejillo7995 I think it’s because the fraud is often not premeditated. Imagine grad students and postdocs working in a pressure-cooker lab where an overbearing supervisor demands you crank out amazing results that fit their preconceived notions and must make it into a top journal by some arbitrary deadline. Honest scientists too often get stymied by negative results, or are sidelined into the annals of mediocrity.
@@federicolopezbervejillo7995 Much of it is laziness and arrogance. They're in a field so used to publishing reams upon reams of nonsense that no one will ever read, other than a few lazy reviewers, that I'm sure they never actually expected anyone to give it more than a cursory glance.
They’re sure to try harder in the future. They’re smart, science is hard.
@@federicolopezbervejillo7995 the concerning thing is, if undetectable forgeries are that simple, then they are out there undetected
another month another fraud
Careful, that's a really big accusation. Until there's a formal investigation it's best not to assume that
@@rickyc46 it's a youtub comment for fuck sake
Always run a few blur tools in Photoshop to cover your tracks!
Idiocracy
ALL past Nobel laureates' works should be reexamined for potential fraud. It is highly unlikely that this human behavior only began in recent years
This is happening ALL OVER ACADEMIA. Thank god it’s finally getting some attention. The social sciences are the absolute worst with this.
@jacobmumme You point it out right: ALL OVER ACADEMIA. 99.99% of scientists do their research for state or private-company grant money and not out of their own interest. And by chance, they always reach results that support the interests of their clients.
But medicine is catching up.
If your discipline requires the word "science" in its name, it can’t be very scientific.
@@lv4077 Or worse, studies 👍😂😂😂
If there is ever going to be a prize for showing fraud, this should be called the Elizabeth Bik prize.
[Laughs in Hindenburg Research]
According to Wikipedia, in 2021, she was awarded the John Maddox Prize for "outstanding work exposing widespread threats to research integrity in scientific papers".
@@sophigenitor - The woman is on a mission! It can be lonely and thankless, though. We should support her efforts.
Well, if a proper scientist doubts the results of a paper, the scientist repeats the experiment and sees whether the results match. They don't look for digital inconsistencies.
That's not scientific.
@@winstonchurchill8300 "If a proper scientist doubts the results of a paper, the scientist repeats the experiment." There are no proper scientists then, I guess. Only broke losers in dire need of funding.
I think
1. The peer review system needs an overhaul because it’s completely failing at its task.
2. It’s too easy for a “supervisor” to put their name on a paper without doing the legwork and then expect the blame to lie elsewhere when they fail at supervising.
3. We need some kind of reward for the person/people who find the scam artists/lazy work in the system.
That's the way to do it... incentivize replication studies. These irregularities were only found due to the authors' sloppiness, and because someone was looking for duplicate data in the results. Other than that sort of double-checking, nobody has done a replication study to find out whether the work is repeatable with similar results... because who will pay for that secondary research? It happens, but not on most studies. If the person who stitched the western blot image had used different "empty" patches for each coverup, then it wouldn't have been caught. Going forward, most embellishers won't be so sloppy.
They rubber stamp each other.
@@ksgraham3477 That's not quite it. Working in academia is a full-time job. Would you like to spend several dozen hours monthly working FOR FREE just to earn brownie points from journals you're reliant on to publish your own work? Probably not. So conscientious scientists actually read the manuscript, look at the figures and provide comments. But essentially nobody will go to the trouble of doing a thorough check for fraud. We have daytime jobs!
Proper rubber stamping is a small, if related, problem. When you know that rejected manuscripts will eventually be published anyway, even if not in the same journal, you're incentivized to provide helpful comments on semi-trash research (and even outright trash) rather than reject it. But I wouldn't accept something I knew was fraudulent.
Number 2, the supervisor putting his name on a paper without doing the legwork, is how you get ahead in academia. It proves that the academic system is broken and we need a different way to choose and evaluate research.
Right? Supervisors expect to like, privatise all the credit and fame and socialise all the blame and mistakes. Fuck them. If you're the lead author and your paper is fraudulent, THAT IS ON YOU! None of this bollocks 'oh it must have been a research assistant sneaking in bad data not my fault!'
When does Elisabeth Bik get a Nobel Prize?
There should be a Nobel Prize for proving fraud or outright disproving the truthfulness in important papers. That might actually create a healthy tension between the scientists doing research and the people keeping them from using tricks and/or deception to gain status and wealth.
@@hungrymusicwolf That is an excellent idea because that person clearly has a better understanding than the peers who reviewed those published papers.
Exactly
A Nobel prize on unmasking fraudsters
You get a Nobel! You get a Nobel! You get a Nobel!
When I worked in academia, I was there long enough to hear rumors about specific professors. Some of these rumors even came from their own students. Only a few profs names came up again and again, but it became predictable after a time. There were certain names you came to mistrust, even if you had no direct proof or contact, because the stories never stopped coming.
Whether it was an academic integrity problem or a problem with interpersonal conduct, some names became associated with this stigma. But the truly discouraging part was the recurring counterpart tales about students who supposedly HAD direct involvement in these issues, took them to the administration, and had their concerns buried or ignored. These stories ALSO came up again and again.
These problems don't exist JUST because there are bad actors in academia. They also exist because *administrators* would rather sweep the concerns under the rug, and avoid a scandal that might hurt donations or grants, rather than maintain a rigid standard of integrity. I'm convinced that a lot of the fraudsters are given cover, intentionally or not, by their departments and institutions who are desperate for the gravy train to continue at any cost.
JF you have summarized it well. It takes a lot of different people to enable this type of fraud, and working in unspoken agreement. I have seen it from the inside.
Oh yeah, science is science and people are flawed. That's why we need people like Pete and Elizabeth Bik to identify flaws and perform corrections.
Sorry to burst your bubble, but good old Pete here is guarding his backside by never calling a spade a spade. He always skirts the edges and gives the liars weasel room.
@@biggseye No bubble to burst; I have a PhD myself, spent 6 years in academia and am still active.
Humans are flawed, but the scientific principles are sound - seeing as I am typing up this response using incredibly advanced phone technology and transmitting this text to a site for the whole globe to see at any time of day or night.
@@biggseye I imagine unless you have rock solid evidence that someone has done something themselves, an accusation of that calibre is very dangerous. Even the fact that this kind of stuff is being brought to attention is commendable I think. You don't *need* to point fingers and call people explicitly liars. The scientific community is smart and can draw conclusions.
Yeah gonna have to side with the PhD guy here. Channels like this are misleading.
This isn't so much "Science" is flawed as it is "Human integrity is insufficient".
Elisabeth Bik is the real MVP. When these institutions fire some of these fraudsters they should send the discoverer that employee's would-be bonus or 6 months salary upon termination. It would show that they actually care about integrity and encourage academic honesty rather than just acting aghast and brushing things under the rug. Encouraging honesty and sending a message that academia can have a future when trust in institutions is at an all-time low.
That would create another whole problem of people being shady just to collect money. People should do good for the sake of it, once people get rewarded for something they do the bare minimum in order to obtain that reward.
They don't care, they probably get more cash committing the fraud. People without integrity will act in their own best interest, so you can suspect even rewarding "honesty" would be filled with fraud as well.
Trust in institutions should be at an all time low. The neoliberals that dominate all of our institutions have disdain for the public and choose to implement their agenda through social engineering, manipulation, and collusion with various central gov agencies.
As a supervisor it is also your role to verify the data and ask uncomfortable questions - at least that is the case in Germany... Maybe you can mess it up once or twice, but not 30 times... You get so much money BECAUSE you have to do this tedious task; you cannot just rest on your past distinctions...
!?!??! Was a nazi working on it !?
Maybe there are some working groups in Germany that still work carefully, but I am afraid they are a minority.
In Germany, my friend's mentor stole the PhD idea he was working on, gave it to another student, and helped that student finish before my friend did.
@@crnojaje9288 Googling highest ranking universities, my first hit says that the best German ones are on 28th and 59th place. It's not like it was 100 years ago, is it? There's even those in that prison colony called Australia ahead, so do shave and put your cups in the cupboard in order! ;-)
A med student friend of mine asked her adviser if she should go into research, or medical practice.
He asked her how important it was to her to be able to look at herself in a mirror. Abby asked him to clarify, and he said, "They won't order you to commit fraud, but they'll press you to find a way to get their products approved - whether they help patients or not. Could you still look at yourself in the mirror after doing that?"
That was 20 years ago, almost. I never found out which way she went.
Scientists can't do science if they can be pressed to be biased or commit outright fraud.
""They won't order you to commit fraud, but they'll press you to find a way to get their products approved" -- Sounds a lot like the computer software industry.
@@JakeStine But computer software rarely kills tens of thousands of people, like Vioxx, Fen-Phen, and Fentanyl.
Makes software design sound almost benign... except they still cheat customers out of their money. Just not their lives. 💩🤡🤯😵💫
@@JakeStine Sounds like every job I have ever had in every industry I have worked. Construction, contracting, software, food and sales. If management is not requiring deception, the customers or culture are.
@@luck484either you're too stupid/lazy to find a field where that isn't the case, or too amoral to care
I love the fact that you took note of your audience's reaction to your first academia is broken videos and took action. Your action resulted in this channel becoming one of the most unique channels covering these subjects.
what reaction? what action did he take?
@@mrosskne What I mean is, I think his channel was about marketing, if I'm not mistaken, but his first "academia is broken" video got a lot of views, so he took note and started to dig into cases like these, which I believe had a good return in terms of audience count.
"As the final author on the paper & as a scientist of his caliber" he should NOT just be there in an advisory role: he should be reviewing & confirming the data collected & the work of his co-authors. Let's face it, as a Nobel Prize winner, the paper is flying under his authority & will attract attention in the marketplace because of his name on it.
If he's just surfing fame, resting on his laurels & gaining continued fame by co-opting the work of his co-authors, he deserves to go down in flames if THEY fabricated evidence.
For real. If he had so little to do with the paper that he couldn't even determine the veracity of the output, then he should be stripped of his nobel prize for not actually contributing anything.
@@kezia8027 He should not be mentioned as paper author, at all.
My ex was a research assistant at various economic-environmental agencies in DC. Huge multi-billion dollar facilities. Probably by now she is some lead researcher.
Her papers were basically on the effects of various environmental factors on the economies of countries. Like, for example, how rainfall affects agricultural growth in Tibet and how this affects the economy, etc. She would sometimes actually visit said countries to write the research (although she would never visit any actual sites).
But anyways, the point is, I would know the conclusion of EVERY one of her research assignments before any research was done. I was shocked that she did not see it this way. Nothing she researched had a negative conclusion. Absolutely everything she did, the conclusion was that X impacts Y and Y impacts society and the economy, and so by giving money to fix X, we will fix both Y and the economy. This is the conclusion of every single economic research paper on every 3rd-world country's economy/environment/etc.
I just still don't understand why she could never see this: that she is paid to write what THEY want, rather than conduct actual research. If she goes to Tibet and writes that everything is just fine, the research center would have just lost money for nothing...
And obviously, all these research centers are heavily politically inclined. I would guess that 95% or more of the employees and especially the leadership, all vote and heavily support one party and one political view on economics/environment/psychology
Shocking. How many papers pointing out the idiocy of catastrophic anthropogenic CO2 “climate change” have you seen in Nature recently? Money always dictates the conclusions; the studies are a mere formality.
The desperate need for funding has corrupted ALL the sciences 😢
Did you mean "greed for funding"?
Nothing new sadly. Even 10 years ago when I was studying biochem it was an open secret that if you don't get the results they want they'll find someone else who will to give a grant. It's all crooked.
> need for funding
lol, hard to sympathize with scientists over this supposed desperate lack of funding when one of the best-funded areas of research (cancer research) is filled with fraud and mismanagement of resources all the same. Not to mention, fundamentally this (lack of funding) is a problem that will never go away, due to the nature of resource allocation and what we study in the first place.
@@thomasdykstra100 Considering not getting funding leads to people getting fired, and you need to work to live, it's need
@@TeamSprocket So what does that mean for integrity in the sciences? I am tired of businesses being demonized for seeking profit (even when they are acting ethically by anyone’s standards) while the sciences are considered noble for seeking funding even as they use the most unethical means to get it.
As someone who has owned quite a few dogs, I find them eating my homework more believable than these being "compression artifacts", as would happen with low-resolution rendering.
I've had my birds eat my homework once.
@@noelsteele My cat Grey tore up my finished assignment once. I had to redo the whole thing, internally crying all the while.
My dog chewed up my notes once. Fortunately never anything important. It was weird to find him chewing on paper
The second one would be the closest. But it just doesn't look like JPEG. Not to mention it wouldn't be copied around neatly like that. It would also need to be a very very high JPEG setting, as that content looks really difficult to compress - it's pretty close to random. And true random data cannot meaningfully be compressed.
Fun maths fact though: you can have an infinite string of random data. The chance that you can compress it by 90% is exactly zero, but it can still happen! That's the difference between surely and almost surely: an event can have probability 1 or 0 and still fail to happen, or happen.
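For the finite-length version of that point, a quick counting argument (my own illustration, not from the video) shows why compressible-by-90% strings are vanishingly rare: there simply aren't enough short descriptions to go around.

\[
\frac{\#\{\text{binary strings of length} \le 0.1n\}}{\#\{\text{binary strings of length } n\}}
\;\le\; \frac{2^{0.1n+1}}{2^{n}} \;=\; 2^{\,1-0.9n} \;\xrightarrow[n\to\infty]{}\; 0 .
\]

So for a uniformly random n-bit string, the probability of being 90% compressible is at most 2^(1-0.9n), which tends to 0; the infinite-string statement is the "almost surely" limit of that.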
@@lost4468yt there are compression algorithms that do exact duplicates
@PeteJudo1, the cloned rectangles shown around 5:30 in the video could actually be artifacts of an image upscale. If you try to upscale an image, any one-coloured area can end up with repeated upscaling artifacts, because the algorithm will make the same "decisions" about what to do there: seeded with the only non-blank content nearby, it will repeat the same output until it fills the blank area. I am just a software developer and have no idea about the subject of the photos and their composition, but I have enough experience to say "well... I have seen this before" ;)
When your success in academia depends largely on the quality and relevance of your research, it’s no surprise that people fabricate data. There are essentially an infinite number of directions you can research, and the majority are dead ends. Imagine working your whole life in a merit-based system, climbing to the very top, then getting passed over because you happened to pick a research direction that was a dead end, which you couldn’t have known ahead of time. You can try to chalk the last few years of your life up to “at least others know not to do it this way now”, but the temptation to fabricate some results is very real.
Ironically, the negative result could, as you say, be beneficial to research as a whole. But it would hardly be a satisfying outcome for the researcher.
@@joannleichliter4308 Absolutely. Knowing what doesn’t work is extremely valuable and necessary for scientific development. It’s just not recognized as especially valuable as it’s a far more likely outcome.
@@tomblaise Moreover, in testing various compounds unsuccessfully for one purpose, researchers can inadvertently find that it is efficacious for something else. I don't think that is particularly unusual.
@@joannleichliter4308 University (at least Canadian university) is not entirely merit-based. I would go as far as to say that merit is not the main predictor of success. I have experienced, and seen others experience, being looked over for a multitude of reasons that have nothing to do with merit. If you want to include things like "politicking" and one's ability to blindly follow orders in "merit", then sure, but let's not confuse merit (marks and significant findings) with other things. It's sad to see bright minds overlooked because they refuse to toe the line. People with power in these institutions would rather lie and manipulate people and data to remain "successful" than accept that they are perhaps wrong.
Merit should lie in well-executed research work and not in the results of that work. Researchers (should) have no influence on the results of their research. They shouldn't be blamed for results which are unfavorable to society; reality being a certain way is not their fault.
I know academics who publish hundreds of papers per year and are promoted, and yet they don't seem to do any work. Yet their work gets published in good peer-reviewed journals. In addition, they don't even have access to lab equipment. Who can explain this? I feel even the good peer-reviewed journals are fraudulent and the editors just OK their friends' work. Thanks for highlighting this issue, Pete. As someone who has worked in academia, I can confirm that there are a lot of fraudulent papers.
It's not as if the big journals haven't been caught with unreproducible papers...
I understand why the process from research to clinical applications is so slow: there's a bunch of contradictory BS data out there.
“Trust the science” has forever become a meme.
"Trust the science" doesn't mean you have to stay an idiot "blindly trusting your whole life!" 😌
Retracted PNAS!... It's just cold 😐
penis or peanuts
😂
i was looking for this in the comments
Finally! Why did I have to scroll so far down the comments for the "retracted PNAS" joke??? That's the real scandal here!
I have run a lot of Western blots, have published a number of them, and have seen my colleagues publish quite a lot of them. I also had the experience (as a graduate student) of seeing a fellow grad student forced to falsify a Western blot, pressured by her supervisor. When it came to publication, the lab director thought the blot looked really dodgy (nobody dared tell him it was doctored, but he felt it was not right) and explicitly told the PI that he should repeat all the experiments before thinking of publishing it. The PI did not do that, and published without the lab director's name and permission. He got his paper, but the lab director refused to give him a salary raise at his next evaluation because of low scientific standards. In return the PI volunteered to spearhead a witch-hunt against the lab director, which was started by a scumbag high up the academic ladder (with similar standards to the falsifying PI)...
In short, in my experience there are two very distinct kinds of people in academia, and the ONLY commonality between them is that they are extremely smart. One type is extremely smart at coming up with new ideas and ways to demonstrate them. The other type is the pathological evil genius, who has no original ideas but is an expert at leeching off colleagues, scheming to ruin others' reputations and stealing their research and especially their funding! I have seen the craziest funding thefts, and the most incredible allegations used to ruin careers.
It is super sad, though, that from the outside these extremely different career paths look the same... both are in academia... and when the pathological cases are caught and shown to the public, the public thinks that this is how every scientist is. Which is the furthest thing from the truth. Yet, as always, hard work does not get a fraction of the attention that outrage gets... so the general impression is that academia and science are all about fraud and misconduct, and nobody thinks twice about the fact that all the technology around us (from cell phones to liver transplants) came through science. The truth is that the majority of scientists are extremely altruistic, sacrifice much more of their lives than people in general do, work long hours thanklessly, are abused and discarded by a system with a lot of unfortunate influences at the top, and stand at a big disadvantage compared to a carpenter or a construction worker when it comes to building a solid financial foundation for their personal life.
As always, where money is, trouble follows. As funding for science becomes more scarce every year, even honest people are forced into desperate measures just to stay afloat. While there's a very big difference between totally faking a Western blot and touching up part of a blot to make it look super clean, the latter is unfortunate in that it gets placed in the same category as outright forgery.
When you have run Western blots, you know that sometimes they come out looking not so clean: the lanes might not be perfectly parallel, a small uneven gel pocket might skew a lane, it might pick up some dust from the camera, and so many other little hiccups. The way to take care of those is to run the experiment again until you get the perfect-looking one, neat enough to get published in PNAS or Cell. However, since 2010 or so, we barely have enough funding to run a single blot for a given experiment. Nobody has the luxury of multiple repeats (which likely require purchasing an additional set of antibodies, and maybe months of experiments to get the protein for the blots). As a postdoc, you have job security that does not extend beyond one year, in lucky cases 2 or 3 years when you have a major lab backing you up (mainly through nepotism, but it also happens rarely by sheer luck). Most often, if you do not get the blots right the first time, it's the end of your career. You can start looking for a job as an adult with no practical training in any field whatsoever (other than your specialized field, which just chewed you up), and no life savings at all. And it's getting more cut-throat every year as the NIH keeps cutting the funding on R01s and other grants, while reagents go through staggering price increases every year. The funding system is forcing academia to break, as thriving requires either uncanny luck to have experiments and ideas work right away (they almost never do in biology) or resorting to stealing and faking.
I think hardly anyone has a low opinion of scientists, especially as a profession. They’re just saying what you are-that the institutions are failing and something needs to be done.
It seems like rerunning a test to try and get the result you want is just as fraudulent though
I have a hard time believing that a lab at Stanford with a Nobel Prize winning Primary Investigator has such a lack of funding that repeat Western blots cannot be run. 🧐
Being the last and corresponding author on the article means that he is the most responsible. The first author is usually a PhD student or postdoc. The Nobel Prize is now just for propaganda purposes.
???? What the hell are you talking about? He's the last author with about 8 people. There's a good chance he hasn't even read the paper. The last author is typically a courtesy. I'm guessing he lent a piece of equipment for the experiment. Being last means your cat probably had more involvement in the paper.
Kinda disappointed in this channel. It's pretty much click bait with the Nobel Prize spin.
@@johnsmithers8913 found the p hacker
@@johnsmithers8913 That may be true for some fields, but the comment above is correct that in many cases, the first author will be the PhD student responsible for the project, and the last (and more importantly, corresponding) author will be their supervisor, whose direct involvement depends a bit on the dynamics of a given research group but takes much of the long-term responsibility for the paper. The least involved are usually the group in the middle, which will be students (+supervisors) who just did maybe a small supporting measurement or calculation.
@@michaelbartram9944
?? I have my Ph.D. The first author, assuming a doctoral student, will publish papers based on his thesis work. At the beginning of the thesis, the supervisor will work with the student to set up its structure. From then on, the student is largely on his own but will periodically meet with his supervisor on a weekly or monthly basis. In my personal experience, I would go months without meeting my supervisor, and honestly his input was minimal until the end of the thesis, when he acted as a reviewer of the thesis and papers.
Yes, the supervisor is generally second author; the third is usually a second researcher who participates by providing something key, such as the samples or a specialized technique/piece of equipment. He is likely to review the thesis and publications before publishing. You could break down the contribution to the research as 80, 15 and 5%, respectively, for the first 3 authors. All other authors after that are "courtesy" authors: generally lab technicians, professors who donated their moth-balled equipment for the thesis, or possibly people outside academia who provided help in excess of what an acknowledgement would cover. Whether these authors actually read the paper before publication (although I'm sure they received a copy) or only after publication is questionable.
In summary, one could argue that the first 3 authors are the only authors who actually put work into the paper and had enough exposure to the data and procedure to determine whether the data was good or bad.
If you look at the papers shown in this video, Südhof was dead last in the list of authors. He must have contributed almost nothing to the work, and anyone in academia would know this.
@@johnsmithers8913 He is the last and corresponding author on the article discussed in the video, which means he is the most responsible. Do you think that being an author on papers he didn't even read somehow makes it better?
The senior author is 100% responsible for the content of the paper.
LOL. I'll bet he hadn't even read the papers until now. Maybe still hasn't.
The passive aggression in his response. 😂
It would be more understandable if the concerns were minor, but these seem to indicate serious data manipulation. He just looks guilty, even if in reality he knew nothing about the data manipulation.
"Yeah but I have a ***NOBEL*** !!! Who are YOU to question ME?!!"
***sigh***
Typical academic.
bad strategy
@@adamrak7560 This is a naive question: is duplicating a patch of white noise, which does not appear to be the crux of the illustration, serious data manipulation?
While we are all speculating here, given that the cloned regions in the two questionable protein blots all span a certain horizontal area, it seems that those two images were not as wide as the others, and whoever put this illustration together haphazardly cloned the white-noise, non-data section to give these two images the same horizontal width as the others. Not saying it was the best solution, but this is what appears to have happened here.
My question is: is the white-noise area of that protein blot scan actually important?
5:24 I'd guess that the duplicated regions in the noise are due to some lossy image compression used at some step along the way. However, I'd expect them in all the images, not just one or two of them.
I agree. If it was a compression artifact it would be more pervasive.
The idea of AI going through old publications to find duplications like this is actually a fascinating idea (even if that's not what happened this time)! We already have a reproducibility crisis as it stands, so being able to weed out at least the most obvious fraud would be immensely beneficial for science as a whole.
Of course, people would learn to "cheat better", but there's no reason why algorithms in the future couldn't be produced to catch that later as well.
As sad as it may seem for science now, this could be a major step on the path to "fixing" it.
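As a rough sketch of what such a screening tool could start from (my own toy code, not any existing product, and the patch size is an arbitrary parameter), exact-duplicate regions of the kind flagged here can already be found by hashing image patches and reporting repeats:

import numpy as np

def find_duplicate_patches(img, patch=16):
    # Report groups of non-overlapping patch x patch tiles whose pixel values
    # are exactly identical, skipping perfectly flat tiles (plain background
    # duplicates trivially and is not suspicious).
    h, w = img.shape
    seen = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = img[y:y + patch, x:x + patch]
            if tile.min() == tile.max():
                continue                    # ignore uniform background
            seen.setdefault(tile.tobytes(), []).append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

A real tool would also have to handle rotation, rescaling and recompression, which is exactly where machine learning could add value over naive hashing.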
Even if people learn to cheat better, it would increase the amount of effort necessary to do so (and the tools to catch them would also improve, as long as we reward people for catching cheating).
There is no interest in fixing this problem, as it is a systemic issue created from the top.
"Cheating better" actually wouldn't be a problem at all; AI generally greatly benefits from having an increasingly adversarial dataset, particularly when the data is genuine competition and not the arbitrary adversaries researchers are fabricating. The more effective way to beat this would be to intentionally create so many false positives that the model loses credibility permanently, and continuing to train the model to beat them becomes impossible to fund, but that would require the cheaters to be honest, which would never happen.
This AI will also never happen, though, since after a single false accusation, the project would probably get axed and ruin the creator's career, with the people who cheated their way to the top throwing around weight to blacklist them. It's too risky of a product to make for anyone inside academia.
@@cephelos1098 Which is how we can have tens of thousands of car fatalities a year from human error, but if we get 1 from a self-driving car, everyone loses their mind, even though it would still be objectively safer for everyone to have the AI do the driving.
The pushback against common sense AI solutions can only last for so long before the benefits present themselves and people no longer desire what life was like before it. Keep in mind that electricity had doom ads against it and people were protesting to keep electricity out of their towns. Either way, the passing of time itself is all that is needed to overcome that particular qualm. Especially if the leading experts keep losing credibility and their "weight" means less and less as time passes. Ironically, this is one of the few ways for them to maintain that weight long-term.
You could take the offensive aspect out by using it like a plagiarism checker and making it a standard procedure when accepting a paper for peer review.
If there are any anomalies, just ask for clarification.
Alternative title, Stanford professor has 35 papers scrutinized, penas retracts
Which of these two headlines do you find more informative? "Some uniformed Germans in 41,140 armored vehicles seen traveling on French border road" or "Hitler invades France!" ?
@@jorgechavesfilhofirst one obviously
*PNAS
@@BankruptGreek You've just lost the war.
Whoosh.
"Also, I found these regions by eye." What an epic line!!!
Thank you for publicising these matters. Thanks to Dr. Bik. for doing her work. Nobel prize or not this is science and all should have their work scrutinized. This is the only way to make progress.
I am loving this disobedience to "authority".
'Hotshot Bot Sought, Caught Fraught Blot Plot'
I love it
Princess Carolyn is that you?
@@maina.wambui who?
Amazing 😄
brilliant 😂 though it only works if you have the cot-caught merger
"This cannot have happened by chance. As in: much, much less than 10^986"
Brutal.
His response is not good!
This has happened before, many, many times. Nobel Prizes seem to bring out the worst in institutions and people. Examples include Jerome Friedman owing his physics Nobel Prize to a graduate student who never got credit for anything, or Samuel C.C. Ting and Burton Richter, who owe their prizes to a third party. The initial discovery was at Brookhaven National Laboratory on Long Island. Ting doubted the results and dithered. In desperation, a phone call was made to Stanford telling Richter where to look. Stanford had the better PR team, with book authors etc., and they won the war of words. The guy who did the work first got tenure at MIT but is pretty much only known to insiders; this guy does not even have a Wikipedia page! Also, Katalin Karikó and Drew Weissman would have stayed unknown too, considering their low status in the academic community, if the pandemic hadn't happened.
Video idea: what happens to the professors after being caught in scandals.
I'm in research, and here are 2 interesting examples known in our field:
1. David Baker, who is often regarded by professors in my field as the next Nobel prize winner, recently created an extremely well-funded startup. He also hired Tessier-Lavigne as CEO. That's the Stanford/Genentech professor with potentially fraudulent papers in Alzheimer's.
2. David Sabatini, famous ex-Harvard and HHMI professor fired for sexual misconduct, recently got big funds for his own lab by 2 wealthy investors.
1. www.science.org/content/blog-post/another-new-ai-biopharma-company
2. www.science.org/content/article/sabatini-biologist-fired-sexual-misconduct-lands-millions-private-donors-start-new-lab
@@cpunykurde
You're doing the Lord's work.
The board members/investors can type. I’m sure they googled their recruits. They found what they were looking for.
The Sabatini "scandal" is absurd. A 50 year old man and a 29 year old woman entering a sexual relationship, the latter NOT an employee of the former, is not a crime. A violation of prudish and litigation-fearful university bylaws, yes, but there was no reason it should hamper his research.
Having identical blocks in an image is actually not as unlikely as the video suggests. It really depends on the encoding method used; digital artifacts really can produce these effects with non-trivial probability. For example, JPEG has a low-quality setting in which entire regions of the image will have the exact same pixel values; i.e., the pattern that's repeating across the image is just a constant value. In audio codecs these things happen a lot too, and in measurement devices you have quantization errors... two random floating-point values are very unlikely to be the same, but two measurements of very low-amplitude signals are very likely to be the same. Much like an image of a basically white background with low real-life variation being photographed.
I'm not sure which codec was used and I can't rule out wrongdoing just from this video, but I think people throw out these "it's as likely as 1 in 10^900" claims way too often without them being correct. The actual digital processes generating these numbers and images are complex and NOT purely random, and it's usually really hard to tell the actual probability of something like this happening.
On top of that, for someone like Südhof who has hundreds of published papers, with thousands of images in total, it might not be as unlikely as you might think to have 35 papers with random artifacts. That's before accounting for human error, like the paste error in the video (which really is an honest mistake that can happen), and before accounting for the hundreds of thousands of scientists out there making mistakes all the time. I'd wager that finding a high-profile scientist with a lot of problematic papers due to no wrongdoing but simple pure chance is not as unlikely an event as you might think.
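A tiny illustration of the quantization point (my own toy numbers, nothing to do with these particular blots): two different patches of faint, near-white noise can collapse to bit-identical blocks once they are coarsely quantized.

import numpy as np

rng = np.random.default_rng(0)
# Two independent patches of faint noise on a near-white background (values 250-255).
a = 250 + rng.integers(0, 6, size=(8, 8))
b = 250 + rng.integers(0, 6, size=(8, 8))

step = 16                                   # coarse quantization step
qa = (a // step) * step
qb = (b // step) * step
print(np.array_equal(a, b))                 # False: the raw patches differ
print(np.array_equal(qa, qb))               # True: both quantize to an all-240 block

Whether that mechanism can also explain structured, non-flat patterns repeating in only a few lanes is a different question, which is where the duplications "found by eye" remain suspicious.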
These artifacts may be way more common than you think - th-cam.com/video/7FeqF1-Z1g0/w-d-xo.htmlsi=hXJ-J96cgwGXdM6r (or, if you prefer English, search for "Xerox glitch changes documents"). In short: a memory-saving algorithm looked for similar blocks of pixels in scanned documents and replaced them. The block size was similar to the letter size in common documents. The effect: multiple documents where significant numbers were replaced with wrong ones (6 -> 8, 8 -> 9). And since, by definition, the algorithm tried to find matching blocks, the changes weren't obvious when reading. I don't think matching blocks of, basically, noise are proof of wrongdoing. Sure, Südhof could have written a better answer, but when someone throws a "career-ending" accusation at you and you don't know what's going on...
If people faked data by random sampling from a gaussian with the desired mean/variance, it would be much more difficult to prove anything. It is scary to think about how many experiments can be manipulated that we can never notice
What's shocking to me about this is how crude the manipulations are. I am a researcher myself, and if I ever had the intention to make fake data for a paper, I can think of so many ways of doing it in a much more sophisticated manner so that it would be much harder to find indications for the manipulation than looking for duplicates. And there are MANY people with the technical skills to pull this off -- therefore it stands to reason that people probably are cheating in more sophisticated ways while evading detection.
I agree, this must be the tip of the iceberg, just the sloppiest manipulations can be detected. I mean, with those blots, considering that these researchers probably collect hundreds of such blot pictures they never end up publishing, how lazy do they have to be to just copy-paste selected areas from the same picture to paper over the blots they want to erase?
PNAS is not a peer-reviewed journal. It's the ultimate old-boy's club. NAS member submits the paper on a non-member's behalf. It was meant to be a high profile rapid publication journal of exciting new results. There was a time when there were not any other options (or few options). That time is kind of past. So a retracted paper is egg on some NAS member's face. But I always treated anything in PNAS with a very healthy dose of skepticism.
9:29 I think you should mention at the beginning of the video that he most likely didn't make these errors himself. That kind of changes the outlook on him; in my opinion it still looks bad, but it doesn't look like he knew it was happening.
Well, all authors are responsible for the paper. No differences. If you do not trust what's written, or worse, you do not know how things were done, don't have your name put on it. As simple as that. There are scientists who publish a couple of articles a year and know what's in their articles; others publish 100 papers a year and do not even read them.
Lying for money? Who would do that? Certainly all academics have waaay too much integrity to do that...
Many do have that integrity and also don't have a job.
Academics did used to have integrity.
most notably to push shitty right-wing socioeconomic policy that places society behind individual fortunes
/see: pro-smoking, anti-climate, anti-vaccine, anti-PoC, i could go on but those are the major lowlights
//but you gotta hand it to those rich people - they know how easy it is to clown their base
///and take them for all they're worth. see: magats
@@mensrea1251 I don’t think so. I remember being aware of rampant academic corruption in the 1970s.
TBH, why couldn't the similar regions in an image be an artefact of an image compression algorithm? That's basically what image compression algorithms do: look for "unimportant" regions (perhaps where all pixels are very similar colours), throw away details, and replace them with a general approximation of the colour/texture.
Run-length encoding does it most simply, with stretches of the same colour being replaced by a single pixel value plus the length of the stretch (LZW and friends are dictionary coders that reuse previously seen sequences). I'd guess more complex algorithms might do it with smaller 2D "patches" that are representative of larger areas.
So I wouldn't rule out some unusual lossy compression (JPEG etc.) creating artefacts like those described here. Also, changing resolution, or attempts to brighten, despeckle or apply other visual processing algorithms, especially if done at too low a resolution, might leave awkward repeated patterns.
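For reference, here is what run-length encoding actually does (a minimal sketch, my own toy code). Note that it is lossless, so by itself it cannot clone non-identical noise; only a lossy, block-matching scheme could do that.

def rle_encode(pixels):
    # Collapse a row of pixel values into (value, run_length) pairs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

def rle_decode(runs):
    # Expand the (value, run_length) pairs back into pixels.
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [255, 255, 255, 255, 17, 255, 255]
assert rle_decode(rle_encode(row)) == row    # lossless round trip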
As someone who has been through the submission process, I can say JPEG is not an accepted submission format for images; rather, ALL images have to be TIFF, precisely to avoid compression artifacts. So I'm calling BS on that part. But more concerning is that there's definitely a pattern of copy-paste on several occasions.
This is journal dependent. Plenty of journals accept jpeg format for images, but if you are submitting images where compression artifacts might affect how your images are interpreted then you probably shouldn’t submit compressed images.
@@thepapschmearmd I'm not sure which type of image you are referring to, but for me it was a western blot. But even if jpeg, there is no way it had anything to do with compression.
Those duplicated boxes in the images reminded me of something... In 2013 there was a scandal where Xerox photocopiers would sometimes change numbers on copied papers, 6 became 8 most commonly. The cause of this was some algorithm that detected repeated structures in the images and copied the same data onto all these places to save memory, but clearly it wasn't accurate enough to distinguish a small 6 from an 8. Perhaps this or a similar algorithm could cause such weird 'duplications'? There is plenty of information about this online if anyone wants to look more into it.
No, it makes zero sense that you would get entire sections of artifacting that are somehow identical, in a blank section, multiple times, all within the same image.
Why are you trying to excuse what is OBVIOUSLY fraud by imagining some hypothetical scenario that you have zero evidence for, or understanding of? This attitude is how this fraudster managed to con his way to a Nobel Prize.
No, that literally makes zero sense and could not possibly apply to the doctored images presented here. There are MULTIPLE perfectly rectangular, identical sections that appear "randomly" across the same data in otherwise "completely blank" space. The odds of these being true artefacts created by compression are so infinitesimally small that you would be more likely to be dealt multiple perfect bridge hands in a row than to see it happen the multiple times it did in this paper.
@@kezia8027 I don't follow your reasoning regarding them being perfectly rectangular and appearing in practically blank spaces. How would this make it less likely? Since all pixels are near-white, wouldn't that make it more likely to consider the areas 'similar enough'? And the areas being rectangular I'd almost take as a given for said type of compression artifacts. Much easier and more efficient to have an algorithm look for rectangular areas than any other shape.
Looking at the video again, there are other things that point away from this being the cause, however, such as the fact that they only seem to appear in specific 'lanes' of the results, and only in some of the results (assuming the highlighting squares come from an exhaustive search). If these specific spots are of high importance to the results, that would increase suspicion further. The rotated images mentioned earlier are much more difficult to explain away as well.
@@Jutastre Yeah, you're right, the rectangular aspect was an asinine point that is irrelevant and doesn't help my case at all. But the point is that all that 'white' is really just noise: one pixel is more cream, one is more grey, one is more beige, and to the naked eye there are no perceptible differences. For these sections to end up with exactly the same pixel values, and seemingly only in areas where we would expect to see relevant data rather than in random, completely irrelevant sections, shows a level of intent that chance or a stray algorithm could only explain in an absurdly unlikely scenario (see the rough numbers sketched after this comment).
And as you say, there are many other indications that this data is not genuine. That is the issue: you cannot look at a case like this out of context. Yes, specific data is being accused of being inaccurate or forged, but to determine the likely root cause we have to look at comparable information and the surrounding context. Given these various red flags (which is admittedly all they are), there are more than enough of them to seriously doubt this work and those who took part in it.
These red flags warrant MORE scrutiny, not people making excuses or attempting to paint a narrative of innocence based entirely on their own personal beliefs and understanding (or lack thereof). What benefit is there in making excuses for this behaviour? At best it was negligent scientific method and reporting; at worst it's explicit fraud.
The scientific community should be able to hold up to any and all scrutiny, otherwise what is the point of science?
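As an aside, here is a back-of-the-envelope version of that noise argument in Python. The numbers are deliberately invented for illustration (4 possible noise values per pixel, a 50x50 patch); real sensor noise statistics will differ, but the scale of the result is the point.

```python
import math

# Back-of-the-envelope: suppose each "blank" pixel still carries sensor
# noise that lands on one of only 4 equally likely values, independently.
noise_levels = 4
region_pixels = 50 * 50   # a modest 50x50-pixel patch

# Probability that a second, independent patch matches the first exactly,
# kept in log10 form because the number underflows an ordinary float.
log10_p = -region_pixels * math.log10(noise_levels)
print(f"P(exact match) ~ 10^{log10_p:.0f}")   # roughly 10^-1505
```

Even under assumptions this generous, an exact repeat is effectively impossible by chance, which is why byte-identical regions attract so much suspicion.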
This is just sad. Südhof would have reacted with interest and joined in reviewing the work in question if he had not already known about the false data. Science: you seek the truth; accolades are supposed to be the perks. Outstanding job, Dr. Elisabeth Bik. We need many more of you out there.
Dude, been loving your vids for the past year or so since I found you. It's so interesting how so many things we believe are one discovery of duplicated data away from being destroyed.
It is soooooo common for scientists to photoshop their images to make them look better, even if the general results would not change. And this has real implications: investors make decisions based on how good a discovery appears, and perfect images can convince them to invest.
Thank you for sharing this. No matter what, it's good to know the truth.
Science is a search for truth. If such discrepancies appeared in my work, rather than being offended I would want to learn why they are there, and what they mean. The off-hand, poorly thought out dismissals are concerning. Whenever someone seems to be saying “don’t look over there”, I have the overwhelming desire to look.
Sounds like you want to get turned into a pillar of salt!
@@inthefade Sounds like the humans who invented that myth wanted to give their "don't look at it" a veneer of divine validation!
@@inthefade Thanks for the laugh. 👍
The first lab I was in, they never told us to fudge the data, but they didn't have the instruments to read a key parameter and essentially told us to make something up, and if the data didn't support the hypothesis the lab manager would be mad for the next week. I tried to build on the work of a couple previous students and was told that was probably a bad idea. It starts from the professor.
I thought peer review was supposed to prevent this from happening.
Sorry to have to tell you this, but you are very naive. Peer review has become a charade, and indeed a significant proportion of it is faked. Also, even if one reviewer says a paper is rubbish but another says it is quite interesting, the editor will probably publish it anyway.
Having been a peer reviewer myself, I know it can be difficult. Elisabeth Bik has an amazing visual cortex; she sees things I would miss. We need more like her.
Data manipulation can be difficult to spot, at least initially. I have just finished a battery of tests on a published data set. This set actually looks good despite being almost impossible to recreate. Science isn't yet totally broken 😂.
Peer review is very poorly rewarded. You have a few days to review material that can be very technical. You don't get paid for this. You don't get thanked or acknowledged for the time and effort. Some put a lot of effort in; others less so. One set of papers in Nature had an overall error rate of approximately 30%. The final figure, once I had it sorted out, was 32% if I remember correctly. I found these problems the day the papers came out and wrote a note to Nature giving examples. Nature decided that my letter was not of sufficient interest. Huh?
I tried to get this published elsewhere. The authors of the original papers did everything they could to stop it being published. I eventually got it published in the middle of a different paper.
It was an unreal experience.
The authors later admitted the presence of errors and, several years later, published a revised paper. The residual error rate was only about 10%.
Peer review is not easy. Journal editors don't like admitting mess-ups. Authors can make life difficult for you.
The whole process is messy
@@binaryguru I should add that this is not speculation, it's a fact: I review papers for journals as a world expert in the particular field, and I have seen several papers that I said were rubbish get published anyway. Needless to say, there was no communication from the editor justifying his decisions.
No, peer reviewers have no access to the raw data, they assess the logic of the conclusions and the relevance of the findings. They have no way or incentive to actually check the validity of the data itself.
@@psychotropicalresearch5653 Has become???! It was always this way. Friends help each other, even in deceit; just try questioning some inventoLord or serial discoveroSearcher. It's all bullshit. Most of it, anyway.
Sounds to me like a large part of the issue is that the system is built such that professors that are in an advisory role (and a loose enough role that they can't spot faked data in the study) are getting the paper published under their own name.
If they didn't do the bulk of the work, their name should not be first on the paper.
Instead, established professors' names are being put on papers that in reality they had very little to do with, and there is little incentive for those actually doing the work to do it right, because it is not their reputation on the line.
For my thesis, I brought the idea to my advisor, and we spoke for 30 minutes once a week, and she never even saw the experimental apparatus or the code.
For my cousin's thesis, the project was his advisor's initial idea, but the only time he got with his advisor was a 20-minute slot once a week, within which 15 students took turns updating the advisor on their own different projects. If all of those get published, that's 15 papers with the professor's name at the front, none of which the professor actually knows all that much about.
Profs' names should never be at the front; that spot is reserved for the trainees who did the work. Senior authors almost always have their names listed last, not first, on published manuscripts.
Take the fame... take the blame...
"And as the result his paper was retracted from PNAS"
I think the correct term is “pull out” 😂😂
@@realGBx64 in this case it would be, "PNAS made decision to pull out"
@@14zrobot pull your pnas out of my articles
Boosting for the algorithm 🙌 Love your work, keep it up! 🌻🐝
Issues like this are partly why public trust in academia has fallen drastically, since government policy decisions and products such as medical treatments are developed based on academic findings. I'm extremely happy to see public peer review occurring to start weeding out the snakes in the grass that have been an issue, IMO, for far too long. Time to bring credibility back.
Still, the problem that there are repeating patterns in these western blot images could easily be explained by image compression in the editorial process (which would also make it the journal's fault and not TC Südhof's). Some image compression algorithms (like JBIG2) use pattern matching to reduce image size (as in the Xerox scandal of 2013). That could also be a plausible explanation for the exact same noise pattern appearing in the images shown.
The correct response would be to be concerned about the anomaly and to follow it up with the first author. The actual response leads me to think that there was either direct complicity or an indirect culture that makes it acceptable to manipulate results. Although I would add that the blot images may have been manipulated for aesthetic reasons ... like someone put their coffee mug down on them and someone made the call not to repeat the experiment.
edit* Also to add: some supervisors can be really supportive of their teams and PhD students, and will defend them really strongly.
To be fair, someone who blindly supports their team in spite of evidence that they should be inquiring into their accuracy/efficacy is not someone who SHOULD be supervising.
@@kezia8027 Not really; defending robustly first is a perfectly reasonable strategy, as there is a presumption of innocence. Also, you don't know what happened behind the scenes: the author who did the images could have been asked, come up with the BS excuse, and that was just parroted out. The point is that it might not be as clear-cut as the vid makes out, and I personally need more evidence.
Just a comment on the first fluorescent images and nothing else: if you turn photo A89P clockwise 90°, you can overlay the yellow box in it on the yellow box in photo 9-89. So what is to the left of the yellow box in A89P is 'above' the yellow box area, and what is below the yellow box in 9-89 is 'below' the yellow box area. Because the photo plate is arranged as a bunch of square photos, not rectangular photos, these two photos show overlapping fields of view. The areas outside of the yellow box in both photos are as important to show as the yellow box area, so to fit a nice square photo arrangement on a plate of photos, two different, overlapping photos were provided. The same goes for the purple rectangles.
I do not see artificial manipulation of the photos, just some overlapping regions in a couple of them. For photography under a microscope this is not unusual. They could have created two larger rectangular photos to replace the two sets of two overlapping square ones, but the layout of the plate would be disrupted. For those who care about nice visual arrangements of photo plates (I am a microscopist), they have done nothing wrong. For those who don't care about how their plates (or graphs, plots, tables) look, there are plenty of incredibly awful examples in the literature. As for the rest of the stuff, I will let the medical community review his papers.
I’m afraid that I lost a great deal of respect for the Nobel Prize after Barack Obama was apparently awarded the prize for being black.
As a quality engineer, I know that it is "easier" to point out the flaws in everyone else's work than it is to be the OG and not generate them. In my experience this is an issue of other people doing the work and the manager not having a flipping clue about it. It happens all the time because the manager needs the project result and doesn't know what real risks they are taking, because the engineer is trying to minimize concerns. It is absurd that "managers" get their names on others' work. It's similar to when managers steal patent credit. These scientists need to start doing their own work again so they actually have a real connection and insight.
Could not happen to a better university.
There should be a website database created for scientific papers that had no significant results. So that those papers actually go somewhere, and people can look them up.
These might be artifacts from the Floyd-Steinberg dithering algorithm, which is used when reducing an image's bit depth (for example, rendering a grayscale image in pure black and white). It works by 'error diffusion': each pixel's quantization error is pushed onto neighbouring pixels with fixed weights, which can create regular-looking textures in flat areas.
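For reference, here is a minimal Python sketch of Floyd-Steinberg error diffusion. The function `floyd_steinberg` and the test gradient are mine, purely to show what the algorithm does to an image; whether any figure in the paper ever passed through such a step is speculation that this sketch cannot settle.

```python
import numpy as np

# Minimal Floyd-Steinberg error-diffusion dither: quantize each grayscale
# pixel to black or white and push the rounding error onto unprocessed
# neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
def floyd_steinberg(gray):
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return np.clip(img, 0, 255).astype(np.uint8)

ramp = np.tile(np.linspace(0, 255, 64), (16, 1))  # a smooth left-to-right gradient
dithered = floyd_steinberg(ramp)
print(np.unique(dithered))  # only 0 and 255 remain; the texture encodes the shades
```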
Then why wouldn’t it be noticeable in all areas of all the images?
I’m very pleased to see this very detailed, accurate and honest reporting.
Been waiting for this
Having done quite a lot of fluorescence microscopy myself, I would wholeheartedly believe that the instance of the rotated images could be an honest mistake. You end up with hundreds of images that you have to resize, rotate and do whatnot to. The other "mistakes", though, are quite weird. My guess would be that the pressure of having to get data with results got to someone in the team, whoever that might be.
Elisabeth is like the Lone Ranger kicking ass. Respect.
This happens rather commonly in academia. It won't stop until the honors are stripped away in a public ceremony or similar exemplary action is meted out.
People who have studied the semi-conservative model of DNA replication would know how much the underlying work rested on the X-ray crystallography of the British scientist Rosalind Franklin, but the honours and the Nobel Prize went to Watson and Crick.
Similarly, the Indian Nobel laureate Amartya Sen probably received the honour because of his influential wife, Emma Rothschild (of the richest and most powerful European family). He currently has allegations of grabbing a huge plot of university land and some fraud charges levelled against him.
Dr. Südhof's case might be classic PhD slave labour.
All in all, this undermines the sanctity of the Nobel Prize and disregards the vision and scientific temperament of its patrons.
To be fair, quite a few of the duplications/errors noted on PubPeer got responses from the other authors acknowledging their mistakes and showing the original data or offering corrections. The majority seem to be genuine mistakes, and the others don't seem to affect the results anyway. But Südhof seems to get irrationally angry at these posts. Sure, it's annoying to have mistakes constantly pointed out, but it's free error-checking. Also, if the rest really were unintentional, he should get to the bottom of why they happened. Personally I think there were probably irrelevant visual issues with a few images and someone along the way decided (wrongly) to edit them.
I'm currently drowning in data for my thesis (biomed), yet I'm still on YT watching scientific beef. Totally love it.
Yeah, the big boss himself is unlikely to be the one who manipulates the data (or even runs the analyses or makes the figures). However, when manipulation is commonplace in a group then it eventually boils down to the head of the group telling people to cover up the ugly stripes in their Western blots and to make some data up when nothing is available (or when the real data doesn't support the hypothesis). In short, it is a culture in his group, and who else to blame for the culture than the head.
I'd be most concerned about the pharmaceutical derived "conclusions", which may well be harming patients and/or making lots of money for some pharma-mafiosi. Has anyone investigated which drug and drug-making company benefits from these "errors"?
I'm always amazed at how mind-numbingly dumb they are in faking their data. They don't even add random "noise" to their Excel copypasta.
You are the Coffeezilla of science, and I love you for it.
AI engineer here: modern AI image-generation algorithms cannot produce exact duplication of the kind observed in these plots. They can produce very similar regions, but because of the way they create the image, the exact pixel values will never be identical in two different regions. (It's about as unlikely as random noise being identical in two regions of a natural image.) This looks more like manual photoshopping.
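On a related note, exact repeats are also easy to screen for automatically. Here is a rough Python sketch that hashes every fixed-size window and reports byte-identical collisions; the function `find_exact_duplicate_patches` and the window size are mine, a simplified stand-in for what image-forensics tools do, not a description of Elisabeth Bik's actual workflow.

```python
import numpy as np
from collections import defaultdict

# Hash every window of the image and report windows whose raw bytes collide.
def find_exact_duplicate_patches(image, window=16):
    h, w = image.shape
    seen = defaultdict(list)
    for y in range(0, h - window + 1):
        for x in range(0, w - window + 1):
            patch = image[y:y + window, x:x + window]
            seen[patch.tobytes()].append((y, x))
    # keep only patches that occur at more than one location
    return {k: v for k, v in seen.items() if len(v) > 1}

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]          # simulate a copy-pasted region
dupes = find_exact_duplicate_patches(img, window=16)
for positions in dupes.values():
    print("identical 16x16 patch at:", positions)
# In untouched sensor data, byte-identical patches essentially never occur,
# which is why exact repeats are such a strong red flag.
```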
The paper in question was published in 2010; there is no way any form of generative AI was used in producing these plots. ML models were much less common and way, way less user-friendly back then.
This is as it should be. Scrutiny of papers is essential, and it is the only corrective measure for poor results, whether they come from malice or simple incompetence. The more suspicious people are, the better. So these scandals are a sign not that things are broken, but that problems are not being caught as early as they should be.
This is disgustingly shameful and should be sanctioned harshly by the governing body.
They _are_ the governing body.
@@ferdinandkraft857 well who polices the police ?
@@jeskaaable No one, and that is why science has become a laughing stock. They are all afraid of getting defunded for whatever reason.
@@jeskaaable , I thought your responder succinctly mooted that point...
@@jeskaaable Unfortunately, no one.
Who among us has not dealt with an untimely PNAS retraction?
Take my like! Please!
Südhof seems to be quite aggressive in his responses, which in and of itself is absolutely terrible behavior from such a revered scientist. 😮😮😮😮
but absolutely unsurprising for anyone who has spent any time in academia
@@not_ever The guy even looks like a few I know who are as dodgy and arrogant as hell. Carbon copy.
Considering that he is about to become unemployable, have his tenure and pension taken away from him, and possibly have to pay damages, it makes a lot of sense!
Stanford should fire him
I remember when “the Harvard of the West Coast” was meant to be a compliment.
How did these not get noticed during peer review before publication?? That's sus too.
You are our youtube news source of academia drama and, for that, we thank you.
Could these duplications have something to do with the compression algorithms in photocopiers/scanners and similar devices?
There is a big known problem like that with Xerox devices, where similar-looking areas in an image were replaced with clones of just one area, leading to numbers being changed in millions of documents, some of which were destroyed after digitizing and now cannot be corrected.
Something similar could be at play here and would have to be ruled out.
Nobody expects their scanners to betray them, yet it can happen.
Nice clickbait, not gonna lie; it's just a bunch of unexplained things, which is a long way from a scandal.
Oh yes, mysterious unexplained obvious photoshopping in dozens of separate images. Nothing to see here. It’s beyond comprehension.
@@AbsentMinded619 Sure, you are emotionally attached to the subject, so I'm going to explain. It's bad, but it isn't as bad as plagiarism, like what happened at Stanford.
I'm a photographer and have edited many photos with the goal of getting them to "look right". Sometimes the unmanipulated result looks misleading, so you tweak it. That doesn't excuse tweaking scientific images, but if an image in 2010 was okay in the meaningful area yet had some artefact in the area of the coloured boxes, I would have reached for a band-aid-type tool to erase the artefact and clone some background in. No one will ever know! Wrong: this kind of edit is pretty easy to detect if you know how it is done. I can't tell if this is corrupt science or just the unwise application of photo-editing software in the publication process. I would say there are two conflicting ethical standards here, and the higher scientific standard, rather than the image-preparation standard, should apply.
So many people are attempting to claim it could have been caused by a Xerox. Clearly these people haven't seen what compression artefacts look like, or what copying and pasting "blank" areas over them looks like either. When you know, it is painfully obvious what has occurred.
WHY it occurred is still technically up for debate (though I have my suspicions) but without a doubt, these images have been deliberately doctored for SOME reason.
Kind of interesting that the cheaters seem pretty lazy about cheating, too.
*the cheaters who get caught
It's amazing how little effort these academia top brass put to forge a lie.
Why put in more effort? This amount got them a nobel prize. Elizabeth Holmes was worth BILLIONS.
More effort wouldn't get them any better results, so why bother?
Both my 1st cousin and nephew have Ph.Ds from Cambridge. Academia is not dead in Europe or Australasia. Asking for a fresh replication should be part of the process.
I wouldn't call academia broken just because of the misconduct of one scientist. And academia being able to flag and retract work that might be a product of misconduct and dishonesty, from a Nobel prize winner at that, sounds like a good system to me. It's not perfect by a long shot, but hey, it's there.
The point is that there are MANY more instances of research fraud occurring, and the penalties for so caught academics usually being minimal if at all. I recommend Coffee Break's video on the replication crisis, as he also covers two other major academic fraud cases (University of Florida criminal justice and Duke University cancer research).
The fluorescence image I can believe is an honest mistake. Sometimes when we capture images of the same sample, we accidentally capture part of a previous region. We have hundreds of images to go through and sometimes human error occurs.
But the WB artifacts and the copy-pasted data are laughably obvious... You would think these people would cover their tracks a bit better. Maybe in the 2010s it wasn't as important, but in 2023 you are over 90% likely to get caught with copy-pasted data (the remaining 10% reserved for low-novelty papers published in low-impact journals that not many people will read and review in detail).
Sadly, arrogance isn't a good defense in science.
Some Xerox machines would alter parts of copied documents to look like other similar parts. The machines used a compression algorithm that found similar parts of the page so it would only have to store one of them. But it was too greedy and would sometimes consider e.g. a 6 to be a duplicate of an 8. If these images were copied on one of those machines, it could have done the same thing with the images. More info in this talk: th-cam.com/video/c0O6UXrOZJo/w-d-xo.html
The quote "Now and even 25 years ago computers were not routinely garbling images by cloning regions" is false. Lossy image compression can do exactly this, if the regions are similar enough to begin with or the algorithm is faulty (as in the Xerox case, which was discovered in 2013, which is between 25 years ago and now). And the probability of either of these cases is much more likely than 1 in 10^986.
This is asinine. Artefacts and compression could never clone MULTIPLE smaller sections, that are in the middle of BLANK SPACE, MULTIPLE times within the same image. This is ignorant and spreading harmful rhetoric that only excuses poor practices and obviously doctored data. Stop trying to imagine reasons someone isn't a fraud. Let con men speak for themselves. Attitudes like this are how con men like this are able to fester unmolested.
As a digital visual artist of 25 years, I can tell you that is what happens when you use a rectangular selection to copy and paste over a region. It looks like they used several small samples, put them together, then copied that again to move it to another area. And yes, this is used to cover blemishes.
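A bare-bones sketch of that workflow in Python, with a made-up `cover_blemish` helper and invented coordinates, just to show why it leaves byte-identical patches behind for tools (or sharp eyes) to find:

```python
import numpy as np

# Grab a rectangular patch of "clean" background and stamp it over a blemish.
def cover_blemish(image, src_xy, dst_xy, size):
    sy, sx = src_xy
    dy, dx = dst_xy
    h, w = size
    patched = image.copy()
    patched[dy:dy + h, dx:dx + w] = image[sy:sy + h, sx:sx + w]
    return patched

rng = np.random.default_rng(2)
blot = rng.integers(240, 256, size=(50, 80), dtype=np.uint8)  # noisy background
blot[20:28, 30:42] = 60                                       # a dark "blemish"
clean = cover_blemish(blot, src_xy=(0, 0), dst_xy=(20, 30), size=(8, 12))

# The blemish is gone, but the source and destination patches are now
# byte-identical, exactly the kind of repeat that Bik-style checks pick up.
print(np.array_equal(clean[20:28, 30:42], clean[0:8, 0:12]))   # True
```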
Maybe if the whole system of academia weren't broken, we wouldn't be relying on people to actually do the review part of peer review after the fact. But when 4-5 top minds in their field get popped, where there is smoke there is fire.
PNAS retractions are tight!