For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication strategies that incentive scientific behavior.
Adamast I did not for once stated my believe. I stated the point of the video, which many seem to have missed. Ironically, you now missed the point of my comment.
+Adamast no we dont need to change the method, the problem (which was explained VERY clearly in the video) is that people arent USING THE METHOD PROPERLY. like when someone crashes a car cause they were drunk, the car isnt broken its just being used incorrectly
You do state your believes. You think you know the point of the video, where you see other people just freak out. Personally I read: _Mounting evidence suggests a lot of published research is false._ Nothing more. There is I admit a short faith message at the end of the video.
The scientific method- you go out and observe things, and develop a hypothesis, and test the hypothesis. If you run a bunch of tests and come out with the wrong deductions that is called flawed research methodology. Flawed research doesn't imply the concept of going out and observing coupling with experimentation is flawed, it just means you suck at being a scientist.
I really think undergrads should be replicating constantly. They dont need to publish or perish, step-by-step replication is great for learning, and any disproving by an undergrad can be rewarded (honors, graduate school admissions, etc) more easily than publication incentives can change
When I was an undergrad in the physics department, we were (and still are) required to reproduce quite a few experiments that led to Nobel prizes. It's a fairly common practice, and you do this while working in a research group that is producing new research.
@@sYnSilentStorm How common are undergrads in research groups / replicating newer works? Well trodden ground breaking-now-foundational Nobel prize works make sense for learning, but we would benefit from a more institutionalized path of replication of newer claims. Especially if we can before graduate school, which is who populates research groups in my mind.
this happens because of "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle. I love the clinic work and reading those papers, not writing them. in this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. if you force everyone to write manuscripts, flood of trash is inevitable. only certain people who are motivated should do these kind of work, it should not be forced upon everyone.
Indeed. You are totally right. I also feel that this over-publishing phenomenon has led these researchers into manipulating the data because it feels like a competition. In my opinion, researching is hard and sometimes frustrating, but you should always stay loyal to the fact. Otherwise, you are cheating for profit. Totally unethical. There are people who devoted their lives for this but still, they don't get as much recognition as these 'cheaters' do.
I'll tell you exactly why. The academia is controlled by rabid looney slavemasons and rabid looney jesus. Since they are both mentally and intellectually retarded rendering them incapable of any intellectual manifestation, they force grad students to do it in the publish or perish way. They actually steal and collect it for their own records because of the following reasons: 1. They think that these are valuable intellectual property which should only belong to them. Hoarding basically 2. They will pass it off as their own intellectual property subsequently in a post apocalyptic era or a different realm 3. Being intellectually retarded, they prefer quantity over quality. More the better 4. They are basically mining your intellectual property by exploiting you for their use 5. Commercial reasons just in case your papers somehow fetch money This is actually good. You should publish more crap for the slavemasons to prevent then from getting hold of some actual research
The Economist publishes blindly. The entire EVidence Based Medicine does not yield dosage for medicine tested - that includes your painkillers, your blood pressure pills, your mood stabilisers and your anesthesia. But this is not science's fault, nor is it a conspiracy of the medical system. The scientific method is limited in its ability to give us data. Yet, practically we need to understand the world. The only way out is innovative methods that people can actually understand their pros and cons and get published without trying to immitate the scientific methods to be listened to. Oh, and developing the right hemisphere science. Which is what I experiment with.
@@spiritofmatter1881 No, the scientific method can be skewed to provide misleading data; and if you don't believe there's a conspiracy in the medical/pharmaceutical system, there's a bridge in NY I can sell you for a good price. The pharmaceutical industry is mostly shady -- and medical doctors are their extended hand of national snake oil salesmen.
Journals are just newspaper for scientists. Publication house want to publish things that people want to read. And people want to read interesting and new things, not replication studies. Even though in modern times there are journals that accept and publish replication studies. These journals usually incur a much larger publication fee. Nothing is perfect. But at least, everyone is trying
Science is a battlefield of ideas, the darwinism of theories, if you will. Only the best ideas will survive. That's why the scientific method is so powerful.
Unfortunately the "best" ideas today are those that will result in profitable gadgets and not exactly those that would best propel human knowledge forward.
When I was in grad school for applied psychology , my supervising professor wrote the discussion section of a paper before the data was all gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.
Why are you the minority? How can people who have the power to create Utopia choose self interest. It's like I'm in the Twilight Zone. Everybody knows and nothing is done. I wish I was never born and I hope I never am again.
As a former grad student, the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure track position. Now I have my Masters and will be published twice. No one would consider giving you a post doc position without being published 5-10 times, and you are unlikely to get a tenure track position without being published 30 or so times. And speaking as a grad student who worked on a couple major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of pouring over massive data sets. To be published thirty times before you get your first tenure track position means your name is on somewhere between 25-28 meaningless papers. You'll be lucky to have one significant one.
Damn. I really want to be a researcher in the natural sciences one day with hopefully a Master's or PhD, but I must say seeing this is a little unnerving. Would you happen to have any advice for aspiring researchers?
I studied in Japan, now Japanese has changed their view in research, not to get to top of the world, but how the reseach could be applied and contributed to society. If you look at the ranking today, most japanese univs are not at the top ranking as they used to be. Now, univs from Korea, HK, China, singapore are climbing for top ranking. But every year these univ have suicide cases.
Not really accurate though. None of my professors have been published 30 times and they've taught at Yale, Texas A&M, Brown, that's not really true at all.
@@ShapedByMusic they might not have, but it is beginning to become a standard for new hires. 30 might be a bit of an exaggeration, but there is no way you are getting hired with under 15-20 for a tenure track position.
It’s almost impossible to publish negative results. This majorly screws with the top tier level of evidence, the meta analysis. Meta analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1. Didn’t add anything new to the field, and 2. disagreed with previous research in the area. These two things cannot simultaneously be true.
Don't get me started about meta analyses. I've never heard of one being undertaken except where the backers have some kind of agenda. And I've never heard of one the results of which didn't support that agenda. The entire concept is deeply flawed.
This has influenced my thinking more than any other video I have ever seen, literally it's #1. I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies. Now I understand.
Weird how popular this video is recently (most top comments are from ~1yr ago)... None of these problems apply to the fields of virology or immunology... right?
@@thewolfin I have no idea how much or little different areas of study are affected. I assume the very worst ones are when that ask people what they eat, and ask how healthy they are. Beyond that, no clue.
Yeah, I remember how eggs were bad for your health, then they were good, then they were bad again. Not even sure where the consensus on that is at this point.
My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that they didn't account for the confounding variable that old people eat oatmeal more often and also tend to have higher incidences of cancer (nodding and slapping my head as I type this).
Perhaps Oatmeal isn't the problem. The problem could be with a change in the way it is grown and produced. Contamination, soil depletion, pesticides, etc. There are always variables that are un-accounted for in scientific research that invalidates the conclusions.
My favorite is probably the story of how it was proven that stomach ulcers are caused by bacteria and not by stress and spicy food. Big arguments for decades between the established science and its supporters vs the scientists discovering the truth. It illustrates how poorly established science treats scientists with new ideas no matter how valid.
"The problem with quotes on the Internet is that it is hard to verify their authenticity." Abraham Lincoln (source: the Internet) (quote is not from Hawking; probably from Daniel J Boorstin)
I remember seeing one like this attributed to Benjamin Franklin. Long story short, quoting individuals on the internet doesn't really seem too productive if you ask me. The idea should stand on its own and not hinge on someone seen as intelligent having said it. Then again, what do I know? I'm just Elon Musk.
warwolf6 Anyone who believes in the importance of replication studies? It’s not like everyone who reads it will have read every study. Far from it. It could be multi-disciplined also, which could be interesting to learn about other sciences
As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake statistical analysis for all their research, without really knowing what they're doing. Statistics is so important, but can also be extremely deceiving, so to the untrained eye a good p-value = correct hypothesis, when in reality it's important to scrutinise all results. Despite it being so pertinent, statistics education in higher education and research is obviously lacking, but making it a more fundamental part of the scientific method would make research much more reliable and accurate.
Observational studies in the social sciences and health sciences are mostly garbage. People who don't do experiments or RCTs need to study a hell lot of statistics to get things right. And only recently we got natural experiments to help us with good research design for those areas. Until my masters, I was relatively well trained in traditional statistics (I'm an economist) but unaware of natural experiments. I was completely disheartened about how awful my research was, given that different specifications in my observational studies were giving me different results. I only gained renewed enthusiasm in a much better quality PhD who taught me much better research designs.
J Thorsson you can't talk about the "lack of self critical thinking" after trying to say that you're smarter than someone else because your daddy is a manager at a science institute... you can't contract knowledge lol
+ nosferatu5 The scientific method is indeed unsurpassed. But what this video is about (though it doesn't say so) is a relative newcomer in academia, the NHST, and its use is based on anti-science. Does fitting evidence to theory sound like science to you? I hope not, it most certainly does not to me. But that and anecdotes is what a young academic told me was the norm in many of these fields. I'd order a retest or reinterpretation of every NHST study from 1940 onwards using the actual scientific method, complete with logic and falsifications, given these results. A failure rate of 64% is abysmal, yet predicted by Ioannides.
Don't forget something Derek left out though. Re-sampling fixes a lot of these issues. Run a sampling test including replications to get your standard deviations and get a p-value less than 0.05 (or even better, less than 0.01 or 0.001). Then rerun the sampling tests multiple times to see if you can repeat the p-value. THEN (most importantly) report ALL your experimental runs with p-values. If even one out of (at least) three to five separate independent runs has a non-significant p-value, consider the entire study with a huge pinch of salt. Most reputable journals though nowadays insist on this - the peer reviewers worth anything will at the very least.
This is why statistics should be a mandatory course for anyone studying science at university. Knowing how to properly interpret data can be just as important as the data itself.
It's generally not done at undergraduate, but a massive part of a PhD is understanding the statistical analysis that is used in research. It is extremely complicated and would be way too advanced for an undergraduate stats course for, say, a biology student.
David that‘s usually not taught in undergrad in the us? Wow, that surprises me - a biology student from Germany, where we have to take a class in statistics in our bachelors. It might be easier to understand it during your PhD if you heard about it before
As a graduate MS student in A.I., I found my research statistics course to be probably my most relevant in terms of learning to think properly as an individual with an advanced degree. I was very much taken by surprise by statistics, pleastantly so.
Mirabell97 I took an Engineering statistics class in undergrad in America. I've also taken graduate level research statistics as a Comp Sci student, which was taught at a much much higher and more relevant level. There are also high school statistics class, which are even more watered down. So as you say, many have indeed heard about if before.
Just started my PhD. This video has inspired me to call in consultants outside of my supervisory team to check my methods. I don't want to be wasting my time or anyone else's with nonsense research, and I'm honestly feeling a little nervous about it now.
@@angrydragonslayer The whole comment? It isn't phrased very well. Are you being sarcastic, or serious? If you're being sarcastic, it's either because you were trying to be funny, or because you're--for a lack of better words--salty that people aren't honest in science. If you're being serious, it's either a sincere question, or you genuinely think scientists are dishonest on purpose. Explain the mentally you had when you made the comment, I guess?
The problem is people are suppose to be able to replicate the results by doing the experiment over again. If I can’t find multiple experiments of a study, it’s hard for me to not be skeptical
The big problem with that is noted in this video, replicating work already done generally has no rewards. The time, money, and need to publish to advance their careers mean even the best intentioned researchers are likely to avoid redoing someone else's study.
Some experiments are very expensive in terms of time and money but We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
It's a complete misunderstanding of how science research works. "Eureka" moments are rare. Instead, the truth is eked out little by little: many rounds of test, falsify, retest, improve until the truth is arrived at.
@@4bidn1 If I gather your meaning correctly, no, I'm deadly serious. Think about it. If published research was the crap he said it is, where is all this successful bio-tech coming from?
So much research exists purely so someone can get their PhD, or bring funds into their University to keep themselves employed. When the pressure is on, no-one really cares whether the research is useful or even reliable - just got to fill the coffers and get your research published and referenced to drive up your University's rankings.
As a PhD student, I can fully agree with this. I have come to hate the word "novel". No matter how correct and in-depth an analysis is, anything that doesn't turn the world upside down is always gladly dismissed with "not novel (enough)" as a killer argument. By now I've decided for myself that I don't want to have anything more to do with the academic world after the PhD. I love research, but I HATE academic publishing.
I know right. I just did some academic research on the effects of eggs on cardiovascular risk and the findings were very confusing. It frustrated me when I found one new study contradicted many previous studies stated that more than 3 eggs a week caused many health problem, and out of no where it receives lots of media coverage. I am still not really sure which study is correct. Another time is when I recently study on wealth inequality and its relation with the pandemic. The research process is very interesting but I can see many bias in those papers that I read (sorry for my English)
Sabine Hossenfelder: "Most science websites just repeat press releases. The press releases are written by people who get paid to make their institution look good, and who for the most part don't understand the content of the paper. They're usually informed by the authors of the paper, but the authors have an interest in making their institution happy. The result is that almost all science headlines vastly exaggerate the novelty and relevance of the research they report on."
An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Having even been given the opportunity to interview for the job meant that each candidate was amply qualified, so the interviewers ask each the simple question, "what's one third plus two thirds?" The engineer quickly, and quite smugly calls out, "ONE! How did you people get assigned to interview me!?" The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one without saying a word. The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"
Yes you are right but the bottom line is that any new scientific theory is completely unreliable. Since there is no other way to do science today other than the peer review method.
I think the problem is actually quite deeply embedded in academic research. Right from the selection of which projects get grant funding and resources onwards there is bias to show the result that the department head wants to be true. Their career, prestige and income relies on this. The careers, prestige and income of every person in every academic research department relies on only ever finding 'convenient' results.
Publishing is about making money, so they have the exact same problem as scientists do. Go for money or go for truth. Since the publications wouldn't exist without money, they are making the only choice they can.
I had so much trouble to publish when I corrected the p-values to counteract "p-hacking" or alpha inflation. Since I tested for multiple variables, I adjusted the models to minimise false positives and low and behold, almost all hypotheses that would have shown p
@@aravindpallippara1577 Just imagine, it's not compulsory to adjust the p-values. It's not mandatory to counteract alpha inflation. How much of published research must be (intentionally or not) not significant, but published as such.
Imagine all the human time saved from being able to get information from someone elses research without having to do it yourself. Now imagine all the time lost from all the people who have to research what has already been done, but they cant learn it because negative results don't show up in books.
P values of 0.05 are a joke. Look, I'm going to sound biased, and that's because I am. This is a much bigger problem in fields like Psychology than in fields like Physics. The emphasis on constant publication and on positive results is still a massive problem. Researcher bias is still a massive problem (although still, not as much as in Psych/Sociology). The existence of tenure helps a little since researchers become able to research whatever they want rather than what the system wants. But we aren't claiming world-changing discoveries with P=.05. Derek brushed right past this like he was afraid of sounding biased but I'll repeat: 5 sigma is a 1 in 3 million chance of getting a false positive purely by chance. Every physicist "knew" the Higgs had been discovered years before we finally announced it and started celebrating. But we still waited for 5 sigma. I did some research with one of my Psych professors in my freshman year. She was actually quite careful outside of the fact that her sample sizes were pathetic. We went to a convention where we saw several dozen researchers presenting the results of their studies, and it was the most masturbatory display I could have imagined. There were some decent scientists there, no doubt, but the *majority* of them were making claims too grandiose for their P-values and sample sizes, confusing correlation with causation, and most of all *failing to isolate variables.* If a freshman is noticing glaring problems in your research method, your research method sucks. The next year I had a Physics prof. who had a friend of mine and his grad students run an experiment 40,000 times. There is no comparison. We need a lot more rigor in the soft sciences than we have right now. Mostly because science. (But also because they're making us all look bad...)
And there's also the problem that experiments might be difficult to perform in fields outside physics. It can expensive and it requires a lot of planning and logistics. Not to mention that ethical dilemmas might stand in the way, which happens a lot in medicine. In a way, the physics field is blessed by not depending on studying people, and overall physics experiments are cheap; expensive particle accelerators non-withstanding. One thing I think Derek missed is to emphasize that one shouldn't be looking at single studies anyway. You look multiple studies for trends and toss out the flawed ones and the outliers. Or even better, look for meta studies. I'm also unsure if changing your model / what you measure *after* you have looked at the data is p-hacking. Such a mistake seems way more serious to me, as you're basically making your model fit a specific data set. Give me any data set and I can make a polynomial fit all the points. Basically, reusing the data after changing the model should be a crime :)
On this note, simply because you mention my discipline (Psychology), I will point out that Psychology lacks any kind of unifying theory that organises the predictions it makes. It's a lot easier to be a physicist trying to confirm the predictions of Einstein and Newton than a psychologist guessing at what the underlying mechanics of the mind are.
Another issue is that Sociology, Psychology and Economics are all black boxes that we don't know nearly enough about. In Physics, we can lower the temperature to close to absolute zero, and do the experiment in a vacuum. It is currently impossible to have that level of rigour in Sociology, Psychology and Economics. We still have a while to go.
I don't see why you need to single out psychology, even this video gives examples of neuroscience and physiology research holding even lower rates of reproducibility. When you look at the success of psychotherapy for individuals, you will find most people find it an indispensable resource in their lives, unlike the health tips or the vague claims about tiny brain regions coming out of neurology and physiology.
I feel like everyone in the world needs to watch this video. There's so much crap out there an no one ever thinks past what they want to hear. This should help. This should be a Ted Ed
You're right, people often only hear what they want, so this video would likely make that even worse. It gives people ammunition to discredit others with an informed view. People are going to see that if this is the result from honest science, then what happens to paid and biased science. To a wider audience I think this video would likely do a lot more harm than good.
For me, If there was a video I'd like everyone to watch it'd be one purely on the benefits of science. The last thing we need to throw out to the general public is something that might look at first glace to highlight its flaws.
I wanted to thank you for speaking up on this issue. The state of science today is a travesty and I’m glad to finally hear someone acknowledge this as I have been along in the dark with these troubles for far too long. I know we are creating the foundation of something great but acknowledging that the current state of science is not something we can rely on is just simply not said or acknowledged. Im so happy and so grateful that you have spoken about his issue and brought it to the public’s attention. Thank you for you work and congratulations.
The model we have is great, the problem is anything can be hacked if that is your goal. If you build a better mousetrap, Nature will build a better mouse. The problem is the incentive. Not enough money to go around for your own research and tenure is disappearing, so to do the work you want to do you need either to be a. Well known / respected in your field OR take funds to do work you don't want to do so you can do the work you do want to.
We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
It should be one of the requirements for getting a Bachelors in a field of science to do a replication study. Even with small sample sizes. It is a useful experience and pattern of thinking to carry into adulthood. Furthermore a meta-analysis using dozens or hundreds of further experiments would shake out all incorrect P values
An honestly, meta-analysis should be just as important to publish as novel as there is so much data out there and it has never been easier to analyze quickly.
Reason 1: Some one worked so much only to add a number to the papers published but not quality. Some other person points out a mistake in those papers.
Why not? Maybe they dont know dont recommend video function , so they thought thumb down this video will result in similiar type of video became featured in their homepage.
9 times out of 10, the answer to this question is BOTS. They have to like/dislike videos at random to try and fool the algorithm. That's all it is. I'm so tired of seeing "how could anyone dislike this GREAT video??" IT. IS. BOTS.
The lack of incentives for replication studies is obviously the biggest problem. The fact that some of those "landmark" studies were only attempted again recently... Hopefully, as people become more aware of this (it's happening), all those journals will change their mind about replications. They should release a separate issue for them, even.
+JavierBacon I get what you're saying and agree. Even though your testing/conclusions don't have statistical significance, the findings are still significant. In most cases, it would still help increase our understanding of a subject if null results were published.
The best way to start is to get rid of journals telling us what is worth publishing and what isn't. Then kill the h-index/impact-factor that are genuine SHITS. Then put everything in open access, the universities have all the infrastructure necessary and could even save millions $ in subscription fees that are frankly incredibly stupid to begin with...
Agree. We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
It's definitely bad in medicine. John Ionnadis has conducted "meta-research" into the quality of medical research and concluded that most medical research is severely flawed---in fact, "80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials." Wow. There are also problems with dealing with complex systems, and with challenging scientific orthodoxy into which some scientists have invested their entire careers.
As an engineering student specializing in med tech have the strong impression that med publications are less elaborate, lower quality and contain less explanation than eng ones
I just *knew* John Ionnadis would come up here. If you want to see "mostly wrong", take a look at his record on COVID-19 predictions-off by two orders of magnitude. His sensationalized contrarian kick has gotten people killed. There are many better, more thoughtful critics of the state of research. I'm not saying he's always wrong. But he does go for the sensational, and is often sensationally wrong, and doggedly so. A lot of progress has been made, especially in medicine, with pre-registration of trials, data & code repositories, etc, and I'll give him credit for helping kick-start some of that. (Preprints seem to me to be a move simultaneously in the right and wrong directions!) But statements like "80% of non-randomized studies turn out to be wrong" isn't even well-defined enough to be falsifiable. It's a non-scientific statement. And meta-research, like meta-analysis, is itself extremely subject to selection bias. Each need to be approached with great care and skepticism. A lot of what he says is not controversial. I'm not here to demolish John Ionnadis, but to urge people to steer clear of his broad sensationalized generalizations, and look carefully at the arguments he makes. Apply the same critical standards that he urges to his own research. Sometimes the kettle calls the pot black-but black is still black.
There is also the problem that you can test one medication, to some degree, but if you start talking about interactions between different medications in different people, most of the bets are definitely off. People discount "anecdotal" data completely, but if that data comes from doctors reporting on those medications, it definitely has value, as well, IMHO.
The vast majority of medical research is a shell game, run by pharma. You can tell little by study conclusions: you can actually tell something by the set of study parameters. Where study parameters will produce a unwanted conclusion, the research doesn't happen or isn't published. Example: no clinical or epidemiological evidence for the safety of aluminium adjuvants in vaccines. Draw your own conclusion
Years ago, I questioned some chemistry methodologies. It was very frustrating, because nobody was listening. Then a publication came out discrediting the methods used and discrediting many journal articles. Somebody had listened, or came to the same conclusions I did. Corrections were made.
When you say you questioned, did you establish a line of communication/collaboration with any of the authors or users of the method, working to test its limits, improve it, or compare it to other methods?
What were the methodologies, and were any significant findings overturned / discredited as a result? Or did it only affect small findings, with larger findings still being correct (or considered correct) in spite of some methodological errors?
You should have used an approach developed in cybersecurity research long time ago for the same issue: notify the authors that in 3 months you are going to publish your findings about all their mistakes no matter what they do. Then the authors have 3 months to retract their papers on their own and/or correct them. This solution is called "responsible disclosure" of vulnerabilities. You see, in cybersecurity the problem of "nobody listens unless you publish" has been acknowledged a long time ago. You can do this anonymously as well: from my experience, scientists are not more ethical than average human, and when you threaten their career and self-image they quite often freak out and try to hurt you back by all imaginable means -- just as many normal humans would in such a situation.
Outstanding video. It wasn't until I really started getting into research at MSc level that I began to realise so much of the research I was appraising was deeply flawed. At undergrad, I assumed that it was ME who was flawed every time I saw a glaring error. At that level, you don't have the confidence to criticise the work of experienced researchers.
We had to write a literature review on a chosen subject for our B.Sc. I read through dozens of articles on my subject and to my horror I realized that the results weren't in line at all. It seemed that some scientists had worked with rats and some with mice and they got different results. Still, many sources quoted each other regardless. It was difficult to piece through that mess and know who to trust.
industry influence is everywhere unfortunately. Climate science in an example of that. It's sad because you grow up learning to trust others. Now it seems so confused that we are starting to rely in religion, faith, miths, and so on. In Italy the misinformation campaign is tragic 😷
An undergraduate whom I knew, spent months trying to replicate a chemical synthesis that had been published in a journal. He failed repeatedly. Finally he contacted the authors. They told him that there was a typographical error in the article: the concentration of one chemical was listed as being 10 times higher than it was supposed to be. With that correction, his synthesis worked on the first attempt.
@@ephemera... -- The errata often don't appear until months after the original article. And the errata are often buried. It would also be helpful if authors checked the galleys.
Thanks for the analytical look at this topic. It seems timely with the recent resignation at Stanford University. It reminds me of a former colleague who shared the quip "publish or perish." In today's political world, the phrase "follow the science" is frequently and ignorantly applied, I'm glad to see science influencers such as yourself shedding light on this topic.
"If you live forever you'll see everyone you know die and then everything you know die because the universe willend." If the universe ends, something will come along to replace it. I'd be quite excited to see that. Plus, it's not necessarily true that "the universe" will end, although that's a widely spread myth so I can't really blame you for assuming that.
In Indonesia, many of supervisors in medicine would reject replication studies, expecting new studies and publication and therefore, causing us to have nearly zero epidemiological data. We prefer "good-looking research" to actually researching anything. Better not research than not looking good
This 12 minutes should be mandatory viewing for every course that touches the slightest bit on any kind of science, engineering, statistics, political science, or journalism. Starting in junior high school.
Short answer: yes. That was a real wake-up call when I was doing my Masters degree literature review - how often university professors push publications using "academic standard" statistical analysis to come to a demonstrably wrong conclusion. It is scary, not only how often this was the case, but how often these studies would be cited and their misinformation spread through academic circles without question.
Most academics doing the research are young and inexperienced in the real world. The people managing the research departments have a vested interest in only promoting research that finds 'convenient' results that will enhance their chance of getting bigger budgets next year. Maybe we should take people with 30 years of industry experience and put them in charge of research in academic institutions.....
@@davidwebb2318 Unfortunately true. If, as a young scientist, you talk to the head of your lab or department about your work and what your ideals are or what your idea of good science is, you will quickly be taught. You don't know anything! No, you really don't know what is important in science. What you know even less about is what "good work" is and what is expected of you. The most important thing is neither "good science" nor a prestigious publication. At the very top of the hierarchy is an accepted proposal letter! No funding, no research. All other output must be directed towards this goal and are just means to an end. The larger the organisation (Pareto Principal), the greater the pressure to meet this requirement. Exceptions exist.
@@haraldtopfer5732 I agree. Academia has become a big industry with big careers to support. The priority of the people heading up departments is to build bigger empires, secure bigger budgets and increase their personal exposure/status. This secures their jobs and the jobs of their colleagues/friends. That trumps everything else in many cases. It is really obvious in the climate change industry where nobody ever proposes or approves any budget for spending on anything that doesn't support the pre-existing narrative. They carefully choose and support only work that adds weight to the doom stories because this expands the 'importance' of their industry. Their future careers and their salary depends on doing it so they embrace it and steer all the research in one direction. The system is really flawed and has created a monster where half the world are intent on economic suicide to cure a problem that is relatively minor and will only have any impact over generations.
@@davidwebb2318 Well the thing is virtually every study that disapproves climate change are very usually well funded themselves - There is a vested interest among the folks with resources to forward that narrative as well, and they have resources, profits they can lose. Not to mention these studies also have to exercise pretty big mental gymnastics as the mounting evidence grows. Money does make the world go around after all. Wouldn't you agree?
@@aravindpallippara1577 No, I wouldn't agree. The climate change industry is mostly based on an emotional sales pitch pushed by celebrities and political activists who haven't got the first clue about the actual data concerning the climate. This is obvious because the main activists are pushing the idea that humans will be extinct in under 10 years. Politicians who are too weak-minded to work out this is complete lunacy have simply demonstrated their lack of intellectual horsepower by going along with it. Money does not make the world go round. It is just a convenient method of exchange used to buy and sell goods and services. Of course, the political activists that are using the climate change narrative to promote their political agenda will try to persuade you that money is evil (or that only evil people have money so they should take it and give it to people they consider more worthy).
When I first came across this problem, I wanted to become a scientist who simply redoes the old experiments. I am still very far away from becoming a scientist but I hope this becomes a legitimate job. Having a subset of scientists who simply redo the experiments with a little or no tweaking.
We need this. Can someone start an organization that does this? Not me, I have another thing to start. :P Also, there are AI to analyze data of experiments regardless fo the human conclusion. I think those are pretty helpful in sorting out truth from falsehood.
there is almost ZERO funding for this important task . more money is spent each year to study the mating behaviour of saltwater mud worms. I'm not even kidding ....
"scientist" What does that even mean? You have to study a certain field and then you can get a job at a university where they'll pay you for your research.
As a researcher, I find those numbers very conservative, even when I'm 4 years late to the video. I also feel like there's a reason missing for the false-positive results category which is a deviation from the main objective. Some true positive results shouldn't be considered as such when you make an in detail analysis of their methods, statistics and final findings just for the pure reason that, mid-study, some parts of the objetive were changed to accomodate the findings. This is also an issue that pisses me off, especially in my research field where there's such a huge mix of different scientific areas that it's next to impossible to verify anything at all in detail because everyone just pulls the results their way. As some people already mentioned here, some authors do withold critical pieces of information for citation boosts. If people can't reproduce something from a study, they can neither be proved wrong by the paper's information alone (as long as it checks out in theory) nor can they be denied autorships and citations from other papers which effectively boosts their 'worth'. The fact that researchers are evaluated using citations/autorship numbers is also one of the leading problems as to which false-positives exists in such large numbers (I don't believe false-positives are only ~30% for a damn second, but this is my biased opinion) and why some papers, even though everything checks out in theory, can never be truly reviewed on a p2p manner on the practical results sides of things. Anyone who works in research knows there's a lot of... misbehaving on most published works, regardless of the results. Therefore I have to disagree with the fact that researchers are fixing some of the problems. It's not that we don't want to fix them, but because the system itself, as it stands, is essentially rigged. We can sift through p-hacked results. We can't, however, sift through p-hacked results if the objective is mismatched with the reported findings (if someone told me that was involuntary, I'd believe them because I know how easy it is to deviate from it) nor from a paper which withholds critical information. And the worst part about it is that this is further fueled by higher degrees thesis such as masters or PhD's where it's mandatory to cite other people for their work to be 'accepted' as 'valid'. You have to approach published works with a very high level of cynicism and with some time and patience on your hands if you're even dreaming of finding a published work that remotely fits your needs and actually shows a positive result on most scientific areas.
When I finished my undergrad, I worked compiling a database for a retired professor. One day he asked me to find an article that had been recommended by one of his peers during review. He already had the author and subject so it was pretty easy to find and got me a nod in the paper for my invaluable research assistance. The paper was on how long bones had been drawn incorrectly in every medical text forever. Someone had drawn it incorrectly once and everyone had copied the original mistake.
@@ejipuh I have a copy that he gave me when it was published but it's packed away somewhere and I frankly don't remember what journal it was published in. I worked for him summer and fall 2004 so that's when it was published.
@Luís Andrade doctors have been learning from those textbooks for over a century, the mistake in the drawing didn't have an impact or someone would have pointed it out sooner. It took a scientist studying bones to point out the error.
There is a pressure to publish significant results. As a research assistant, I know for a fact my professors engage in this. I was preparing the data I collected on a crop, and somehow the paper was published a week after I finished the data... didn't make sense
Definitely doesn't make sense, as the peer review process alone takes months. Could it be that you were reproducing some past experiments, or gathering the same data to be used in a future publication?
@@lixloon Why exactly did you assume he is talking about one point of datum? It's the less logical explanation. I'll just assume you're a moron who wanted to let the world know something that makes you feel smart.
I'd like to point out, as he hints at near the end, that the underlying reason for much of these "p-hacked" studies is due to human nature and not the scientific process itself. Stopping a sample size when you find convenient, not getting published to counter-study, people only interested in unique findings; these are all human fallacies. A manipulation of the scientific method.
There's no "the scientific method." That's a complete myth. You should read "Against Method" by Feyerabend. Even if he goes overboard in his argument (which I wouldn't necessarily agree he does), it's naive to think of a defined, precise method in which science *is* done, or *ought to be* done. It's really "anything goes," as long as you convince your peers. Hopefully truth is convincing.
@@d4n4nable That's wrong. There's no set methods, but a general guideline to seek the truth. As OP said, if it wasn't for bias in the publishing, then the system would work fine. The scientific method is more of a way of thinking and general guidelines in how truth can be determined.
@@neurofiedyamato8763 You act as if epistemology were solved. There's no consensus as to how to get to "truth." There are various methodologies implemented in various fields of research.
@@arhamshahid5015 Narcissm? Why? Because I'm pointing to a classic contribution to the philosophy of science? It's not that I wrote it. I just read it, like thousands of others. How in the world is that narcissistic?
The real problem here are the journals. They have established themselves as the primary way of publishing. There are other ways, but in the end, the journals get you recognition and jobs. That results in many studies being done with the intent of publishing. Scientists cant be blamed for that. After all, they not only do the research but also have to contantly beg for money. The actual goal of optaining information gets lost along the way.
Exactly. One high-impact publication can set up a career, and leads to 'light-touch' peer review at other good journals, soft authorships on colleague's papers and requests to be co-investigators on other people's grants. More publications leads to more funding. Even as a Co-I that doesn't actually get money from a grant, you have demonstrated 'grant funding' success. The incentives to join that group are high.
Like all other systems and institutions, scholarly research and the academy is a game, with numerous irrational inputs and agents in pursuit of self serving interests.
I've been a world-class AI researcher for almost three decades now. I have personally, during this time, witness much deliberate scientific fraud, including rigged demos, fake results, and outright lies. Additionally, numerous colleagues have admitted to committing scientific fraud, and I've even been ordered to do so myself. I have always refused. I will not, as a scientist, report results I know or suspect to be misleading. My family and I have been severely punished for this. So I recently returned to mathematics, where true and false still seem to reign. And lo and behold, instead of abusive rejection letters, written on non-scientific grounds, I get best-paper nominations. PS: don't believe any of the current hype around AI.
Dang man I think you got out of A.I. at the wrong time lol. People dont have to fudge their results anymore because the results are real and improving every day now.
Even taking a basic science lab course that requires you to write up papers based on your "experiment," you run into this constantly. And even knowing this myself while taking that course, I found it hard to not let my own biases affect how I conducted the experiments. In small ways, but those small ways add up significantly.
@@allenholloway5109 Experienced the same! The reinforcement of bias is so joyful. Funny how subjectivity creeps in unnoticed like this in a field which demands objectivity. Some scientists employ outright dishonest policies of manipulating images, it's unbelievable.
@@allenholloway5109 you putting quotes around experiment made me remember an anecdote from my basic Chemistry class: we were given a substance in vial, and we had to do an experiment to identify it. Things like weight to volume ratio and boiling point. Well, the school was at an elevation significantly different from sea level. When we measured boiling point, we got the exact temperature you would expect at sea level, and the teacher was shocked. We asked him if we needed to redo the experiment, but he said “no, it’s probably fine.” I know my table of chemistry idiots weren’t the bleeding edge of research, but I feel like it illustrates the point that there isn’t enough time/funding to actually conduct proper experiments in several cases.
Well, as a wise man once said "Some poeple use statistics like a drunk would use a streetlamp -- not for illumination but for support". That being said, the most frustrating bit is that the journals and financing agencies actively encourage p-hacking and discourage replicating dubious studies.
There is nothing wrong with using statistics for support, as long as they are accurate and honest, AND you don't cherry pick them. That last part is often the biggest problem. I don't think pharma changes numbers any more, but they most definitely fund several studies and pick and choose what they want from each. That is not research, that is advertising. It's also changed a bit for them now that they have to have conclusions that at least have SOMETHING to do with the data collected. There was no requirement for that before, as I understand it.
The point about discouraging replicating dubious-or any-studies is important. There just aren’t incentives to duplicate or refute someone else’s findings, but rather come up with something ”original”. On a similar note, as an engineer who frequently volunteers at elementary - high school science fair judging, I’m constantly dismayed at the emphasis that other judges-both somewhat “lay” and professional STEM judges-place on “originality”… at the elementary & middle school level, even, not to mention the high school level! (Ok: maybe a district or regional winner at HS needs to be decently original, but…) Many people place originality and presentation skills (not to be entirely discounted, of course, but still not #1) above scientific inquiry, larger data trials, strict controls, and even just a good, solid use of the basic fundamentals of an experience by as taught in elementary science class.
@@whatisahandle221 I believe that in experimental physics it is customary to publish independent replications of breakthrough studies in comparatively high-impact journals (as well as to cite replication studies along with the original ones in future papers). Sadly this is more of an exception that proves the rule. In life sciences on the other hand there are so many subfields and so much competition, that far too many "original" yet shoddy papers (methodologically speaking) get published. My subjective impression is that this problem is slightly smaller in niche and/or "old-fashioned" subfields, where the odds of getting a reviewer who knows all the ins and outs of the topic are relatively high.
This reminds me of in college trying to find trends in data by any means possible just to come to a conclusion that would result in a good research grade. I think when your motivation becomes solely about money or grades (or whatever other comparable unit u might think of), you lose sight of the actual purpose behind what you're doing. In my case, because of my fear of getting a bad grade, i twisted the research process to show results that would impress my teacher, but which ultimately were false and useless. This video made me realize how many systems (in education, business, science) are actually structured for their participants to waste their time pursuing arbitrary goals rather than the ones which are actually valuable. If we could make it so a thorough and honest process would be rewarded just as well as one that has a flashy result then we would have a lot more true value being generated via these systems. This has been on my mind in school recently so I'm really curious to hear what others think if anyone wants to reply. Great video!
Hey, can you tell me a little about this research process? And how the current systems are a waste of time in education, business, and science? I am very interested in hearing what you think could be more valueable :)
Um, if you have no motivation or imagination for your research, you probably shouldn't be doing it. I would recommend you not blame your professor or department, but rather look at yourself and ask yourself why you dont like what you are doing. I am currently in a research field I am passionate about and I dont have to bend over backward to get results because I come up with imaginative solutions every single day.
His point was simple, the rewards for being able to show the desired results are better than for getting less desirable results within these systems. In all these feilds if you can show the desired result (regardless of if the results are valid) then you get better rewarded, be it grades, promotions, bonuses or publication. While in most scientific feilds most errors would likely be accidental bias, in areas like testing diet supplements or doing studies funded by corporations these are well known deliberate issues. Unfortunately most people have a very poor grasp on statistics and for that matter the scientific process so it's all too easy to make a lot of people believe false data. We really do need to improve the systems at all levels. There are currently moves to make things better, but we will continue to have these problems for a very long time, especially with journals not publishing studies that show other studies to be wrong and publishing studies that didn't pre submit their methods of evaluation before commencing.
oh the endless loop - there have been a fair number of attempted replications recently that have found pretty dismal results. When you consider they are all in agreement, that biases exist, that incentives are skewed, that .05 is not all that low, that p-hacking occurs, it is fairly unambiguous that a sizeable fraction (if not a majority) of research is actually false.
+Veritasium Wouldn't the odds of the exception being wrong be higher, than the odds of the norm being wrong? There's a reason why there's such a thing as peer review, after all. The scientific model is there to make sure you can replicate the results and methods of published papers. If something doesn't stand up to peer review, it's bad science, as it means something didn't add up.
That's the point: when deciding which papers to publish, the scientific method isn't being respected. There's selection bias tending toward publishing mostly positive results and not the inconclusive ones, and there's a complete lack of respect for replication since those studies are often rejected outright.
This is something I am learning a lot on reading studies for food health science. So many variables are not put into account in the final findings. Reminds me of the phrase, "If you look for something, you will find it"
which means how much more skeptical we should be of everything else, "alternative news" sites, alternative medicine, health blogs, mom blogs, etc etc...
I have an hypothesis. I think getting in car accidents decreases your chances of dying from cancer ...but increases your chances of dying in a car accident.
False. Somebody just published a paper about that. You have 100% chance to die from cancer if you where in a car accident. It was a small sample size, about 1 man. He was a truck Driver in chernobyl and he has been in small accident once. He died from cancer.
thanks, now i don't know what to do with my life. i'm a senior in highschool wanting to study physics, but i have watched a ton of videos that explain the reaserch paper publication strategies, and the way academia works in general, and now i realise that the perfect knowledge making science world i wanted to be a part of is nothing like i though it was....
Too bad the studies you see on Doctor Oz (the studies most of the sheep enjoy listening to) are never fact-checked because that would cut into profits.
Sadly though, the same can be true to a greater extent for legitimate science. Replication studies weren't getting funded much back then by the government or other sources precisely because it's not bombastic or groundbreaking enough to advance the field, so basically only a trace number of replication studies ever gets funded and published. In short, landmark studies didn't get fact-checked and replicated a lot because it would cut into their grant money application and prevent them from conducting the studies in the first place. Good thing it's changing nowadays though.
Sorry cant afford to replicate this experiment, the client didn't give us enough of their product to do further testing beyond the results they requested we deliver. We are a private laboratory and need to be profitable.
one the big problems are the big media who search for those crappy headlines: 1 chocolate bar a day or a cup of whine a day. The media search for those because it make good clickbait and they will even distord the scientific research and sometimes use words as increase the chances of x things instead of saying increase the cances by 0.01% of x thing Just the way they word it make it sound bigger than they are
But then could you not say its the fault of the human psychology, that people are drawn to unusual things making the media jumping on this crap inevitable? I am not saying the media are not to blame or that publishing that stuff isn't irresponsible but I do think everyone should take everything the mainstream media publishes about science with a large pinch of salt.
I'm curious about the comment you made at the end that "as flawed as our science may be, it is far and away more reliable than any other way of knowing that we have." I'd love to see a video on: 1) What are the "other ways of knowing that we have?" 2) A critical evaluation on why science is better than those "other ways of knowing" ~ A loyal fan
Always excellent and insightful!! If you publish a concept that the "Optometrist majority", does "NOT LIKE", well you will never get published. Been there ... DONE THAT!
The problem Veritasium exposes in this video is the same thing Richard Feynman spoke about during a Cal Tech speech that was published in his book "Surely You're Joking, Mr. Feynman." Richard Feynman spoke about Cargo Cult Science; which comprises practices that have the semblance of being scientific, but do not in fact follow the scientific method. In his speech, Feynman said, "We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science." It pretty much sums up the problem within the science community. The lack of integrity as a scientist, largely influenced by the lack of freedom given to scientists at select institutions, is the downfall to most careers in science and scientific research. Feynman ends his speech by giving the students much needed advice on how to be a better scientist by saying, "So I wish to you-I have no more time, so I have just one wish for you-the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom." I could've have said it better!
Thank you for posting this! I'm often commenting on people's sycophantic acceptance of anything scientific. I try to point out that science is not about accepting wholesale the word of authority figures, but about being skeptical. Testing the world around you, and doing so in a way so as to limit your biases. The way some people rush to defend their favorite scientists or pet theories has prompted accusations of cult like behavior from me on a number of occasions. It's nice to hear such a man as Feynman speaking out about this. Of course I'm talking more about the lay persons blind acceptance of anything handed down on high from what they perceive as a high priest in the 'cargo cult'. The kind of person who parrots what they hear on NPR or PBS with _zero_ understanding of what they're talking about. You know this person, they wear a NASA t-shirt and are quick to comment on the fundamental nature of the universe despite receiving a 'D' in high school physics. They watch the Big Bang Theory and laugh along merrily. They have conjectures on the nature of black holes yet struggle to calculate a fifteen percent tip on their bill. They have a wealth of scientific and pseudo-scientific "fun facts" but no integrated understanding of any of it. You'd be hard pressed to find a single original thought floating around in their brain. These people are the tributaries of the 'cargo cult' of science. And unfortunately they represent the majority of people who are at least interested in science. It's their votes the government counts on when allocating money to scientific institutions. It's their views the little pop science videos all over youtube count on for their ad revenue. None of these institutions are interested in actually teaching their tributaries how the scientific method works, the value of skepticism, or a groundwork understanding of the subjects they claim to love so dearly. Long story short, the cult priests need the tributaries as much as the tributaries need the priests. And it's not going to end anytime soon. The only solution is to educate yourself. An undergraduate understanding of mathematics, and at least a conceptual understanding of some of the models we use in physics and engineering take hours of work but if you love these subjects put the work in and learn them. Most importantly, understand _what a model is_ and _what their limitations are_ . No real scientist claims to have a complete and true understanding of how the universe works! We do however have some _amazing models_ we can use to predict how the universe will behave. TLDR: Don't be a cult member. Don't take people's word for it. Put the work in and learn it for yourself!
That book is one of the most awesome books I have ever read so far. It perfectly describes what science is today and also shows how beautiful the pursuit of science is.
It's not that scientists are put in positions where they are forced to surrender their integrity, it is their _willingness_ to surrender their integrity. Integrity holds little value for many people. Notoriety, money, prestige, security - these things are esteemed higher than integrity. Integrity should be of infinite value but, alas, human nature is such that integrity is bought and sold and many times simply given away.
That kind of freedom can suck though. But basically, capitalism is the enemy of truth, since it is the way of corruption. It's all about existential fears and breeding bad character traits like greed.
This is an awesome explanation and it's going to be really fun reading everyone commenting with their confirmation bias while I read their comment with my own biases that I have about what biases they have Oh sweet brain, such complex things
This is gold. It is like playing 4D chess with your own brain. With every agreement and disagreement with your own intuitions, you fall into the trap of asking the perpetual questions of "what if my bias is the bias showing me others' bias, and what if that itself is some bias or error in judgement I fail to consider." This becomes too much and can throw people (myself) off, into a spiral of extrapolating truth. I suppose the remedy of bias is not only itself recognizing bias,but is perhaps the understanding of updating beliefs based on a consistent or more reliable framework such as Bayesian Thinking. So, I do think beginning with investigating biases and attempting nuance by finding multiple sides to research or a thought, is a starting point. It is tiring playing mental chess and questioning yourself, however, it does sometimes provide some insight is getting closer to truth. It also makes it easier to detach ideas when new information is presented. Well, that is my mental state. It might be a bias of its own :). If we are emotionally invested in an idea, consciously or subconsciously, we tend to be more inelastic to new and valid evidence that doesn't support our intuitions. These are just my observations and of course, given valid criticism, I shall update them. :)
And this is why the method exists. When in doubt, test it yourself, never rely on trust or opinion, even your own. This is the most important part of it all, as multiple independent tests reduce the bias. This is why when gauging any studies, you must look for indepenent confirmation: a single study can always go wrong in subtle ways. Even the methodology could be wrong, and then while it gets confirmed by others, different studies on the same subject cast a different light on it. The more studies, the more confident you can be. Evidence over opinion.
This is huge in my field, Structural Engineering, as people get way too lax about sample size. Thanks to testing things like full-sized bridge girders being incredibly expensive, samples sizes of 1-3 have become all too common, and no one does replication studies... Then that mentality bleeds over to things like anchor bolts that can be had for $5 a piece at any big box hardware store. It's getting dangerous out there!
@@LogicNotAssumed can you explain what you mean by rigging moving loads? Does this refer to loading up delivery vehicles and such or something else? Or is this 10% rule used for many different applications?
This is Crane operator stuff. There is a certain "strain" that is allowed for fatigue reasons. (Strain is material stretch vs applied stress). Exceeding that strain, while still below the breaking strength, will result in weakening of the material with repeated use causing failure below it's published minimum strength. E.g. steel might have a tensile strength of 110,000psi but a fatigue strength of only 63,000psi (63ksi/110ksi = only 57.27%). So, for conservative use, most industries require robust safety factors to account for fatigue, use, damage, etc. Commercial airliners are rated for +3.0g x 1.5 safety factor at maximum weight. Bridges vary, depending on seismic requirements, etc. But it's not a good idea to cross an old country road bridge rated for 6 tons, with a 12 ton vehicle. You might survive, but the bridge will be damaged.
Derek, I am convinced that the whole reason for this reproducibility problem is money. Unfortunately, science/academia is run like a business. Profit is key. Journals need to make money, institutions need to make money, funding agencies need to convince the rest of the world that the tax dollars spent on scientific research are well spent - and that means results, tangible results. So the scientific effort is reduced to the production of results - as much and as fast as possible. This creates a very destructive pressure on the researcher. I'm a graduate student currently working towards an academic career. I have been told by several profs in my field including my advisor that if I want to get a faculty job at a good institution after finishing my phd, I need to have at least 8-9 papers in my CV with at least 2-3 in a high impact journal. The reason is, of course, the sheer amount of competition; there are huge numbers of applicants and very few open positions. When hiring researchers, universities look at their previous research i.e., papers - and people can count much better than they can read. As a grad student, you can dedicate your 4-6 years to one fundamental problem, work on it rigorously, and -if things work out- end up publishing one or two papers with a huge impact on the field. But when you then go looking for jobs, you'll have trouble because people can count better than they can read. They'll say "Oh this guy only has 2 papers but this other guy has 15, let's hire the other guy." I know a lot of people in this situation - extremely bright grad students who cannot get faculty positions or even decent postdocs because they don't have enough papers in their CV. Many grad students who intend to stay in academia are aware of this, and there is no way you can publish at least 8 papers in 4-6 years without sacrificing rigor and/or reproducibility. Sorry for the long comment, but this is something that constantly bothers me and I felt a need to say something. Hope you get a chance to read this, I'd be very interested in what you think.
As an electrical design engineer I had the mantra "Everything works great until the current flows." You can design a circuit, model it in software, have your peers review it, take all kinds of steps to eliminate risk, but in the end you have to apply power and let mother nature decide if you were right or wrong. I have to say that a majority of the time there was a mistake or two in the first prototype.
Nicely summarized! I am a scientist (engineering) and reproducibility is a huge problem. I think there is a lack of throughout scientific method/experimental design teaching as well. I had to learn on my own about all the possible drawbacks (cognition bias etc) and I am still unsure I do everything correctly. Another important source of error can be listed for experimental science: it is literally impossible to control all variable in the environment (where the experiment is conducted), apart for very expensive facilities (ultra-clean rooms, cyclotrons...). Which means that a simple change of weather, some new vibrations (new road nearby the building), a new type of equipment (it is impossible to compare data from different groups that own the same machine, they are never the same - mostly after time pass and parts need to be replaced)... will differ the data set. All in all, it should be possible to by-pass it, if you had infinite resources and time. But since we don't (and as you show, it is hard to publish both negative and reproduced results), most researchers try to do the minimum amount of experiments. Sometimes not even reproducing their own data (because it will not be the same at all!). Well, all is not loss, as most of the time, a hypothesis is often quite robust to our errors, and that being aware of those errors can help reducing them.
You may be missing the difference between what are called pure sciences and what are called applied sciences. Applied sciences are not true science, i.e. they do not apply the scientific method to arrive at conclusions through data. Often they use trends, probabilities, criteria and statistics to allow for conclusions when the factors of experimentation cannot be controlled for. I think this video is really only trying to debunk these applied sciences as not producing scientifically supported facts. The experimental or 'hard' sciences should be exempt from this critique if I am not mistaken. You make a great point though, one I have always maintained, but on that note I would say don't forget that science never attempts to assert it has 'proved' something through the acquisition of its data but rather simply has 'found cause to support certain conclusions over others'. The conclusion that certain well-tested hypothesis are debunked due to a margin of error in the data, such as might be produced by a variance in the machines or proximal road construction etc etc, is far less tenable than it being explained, or even written off as they most likely are, as the consequence of such events. But things like scientific laws are so constantly observed under their expected conditions that we have never observed instances which could cause us to conclude they were not laws of the universe. To all intents and purposes laws are 'proven' but tomorrow could reveal observations which entirely destroy those conclusions based on today's observations, thus science can't 'prove' anything because at best science only produces conclusions appropriate for today's observations. To add to your critique though, one of the things I like to bring to the table is something I think which is missed by even most hard scientists today. That is, the current theories which account for current observations could actually be 'in-discernibly incorrect while entirely observably aligned with the measurable parts of the real universe'. That is, it is still entirely possible that our universal theories are actually only a model that can superimpose, without us noticing it doesn't do so entirely or actually, due to the possibility of our incapacity to measure or experience certain parts of the universe. We could be fooled into thinking our theories are more accurate than they are because there is no guarantee we can experience, measure or even comprehend the universe in its entirety but we would have to do so to think there is not a possibility of an an unnoticeable overlay.
Man, that is really surprising. That is something that is definitely taught in a chemistry degree track in Analytical Chemistry courses, the injection of personal bias, the bias towards measurements that end in even numbers or five, etc etc. the list goes on. Being as logic and mathematics based as engineering is I'm surprised to hear that. I'm sorry you had that experience, man.
Phuck Gewgle - No, of course we can't measure every possible variable in the universe. We don't know what they are. And we don't know what we don't know. There could be an infinite amount of unknown variables. All we can do is model the variables we do know about and give them the changing values as measured over time. A model is a simplified abstraction of one small part or aspect of the universe. Models are man made. We use models for the purpose of prediction and control. They are tools. They are not "truth" in any final, exclusive, complete sense. All models are provisional. As soon as a better model comes along, we will drop the old one or relegate it to certain approximations, parameters or purpose. For instance, the Apollo program brought men to the moon and back to earth using only Newtonian Mechanics, good enough, no need for Relativity or Quantum Mechanics. For different purposes, Newtonian Mechanics will not do as well as Relativity or Quantum Mechanics. And the beat goes on... Cheers!
Wow, i'm impressed actually. As a follower of empirical sciences (i study dielectric rotational magnetic fields, and the unified field), getting through to people that statistics and studies are frequently heavily flawed isn't always easy. The best method of delivering fact is delivering hard fact through retroduction. Reproducing and delivering impenetrable logic that confines a model to irrefutability. Abstract studies are pointless. If you wanna find out how to make people thin, you learn the physical chemical processes that increase body fats, and catch the problem at it's root. Experimenting blindly with random combinations of living habits is unimaginably inefficient.
Most simple predictable models were already proposed and studied, very little is left for any scientist in the world of "simplicity". The problem now is the increasing complexity of new models, and at this point you can't really "design" an experiment, you have to design general methods that run experiments stochastically in massive amounts. Rarely you can thoroughly test a complex multi-dimensional model or design an easy and encompassing experiment for it. Living habits is actually an example of a complex multi-dimensional model.
Aleksey Vaneev Well sure, but that's because the studies are impatient. If everything was learned retroductively and factually from the ground up, each process studied meticulously, there would be no mystery and no confusion. We would know each fundamental process and be able to compound them understandably into macro multi-dimensional models as such, because each dimension is understood in full with explanations.
Real-world systems can never be broken down into a graph of sub-systems with known relationships. Weather or human body, or economy are such systems which cannot be completely decomposed into elements. They are like systems of equations where variable A depends on B, C, D, and each variable depends on the others, in non-linear way. It works as a complete system, we see that it works, but if you start decomposing it into elementary things, they won't add up back, mainly because you just can't standardize (tie one fact to the other) nor detect all of system's elements.
Aleksey Vaneev Nono, they absolutely can, but not with Cartesian and Euclidean modelling. The real world is not made that way; it's an incommensurate system which behaves in a fractal sense. There are numerous elements at one scale that cohere to make a compound element have more presence at a larger scale. The standard mathematical systems require relative coordinate logic, which works on paper, but causes all of the apparent problems that we all face in the world of things not adding up. If you don't know, Cartesian mathematics is working with x,y,z graphing, and Euclidean math starts with a point (you just decide where it starts) and then a uni-directional line. Reality works with bi-directional curved lines (spiralling into a hypotrochoid), as described in Newton's Third Law: recoiling inertia. All things in reality behave this way. You can't divide something that does not have Cartesian dimensionality. The primest example is a magnet. If you chop a magnet in half anywhere, it will immediately form 2 fields that have identical geometry to the original, with N/S and inertial plane. You have to work from the bottom up, figuring out how the thing itself works before you can make any assumptions on how complex constructions using this thing works (even if you know how to use it, doesn't mean you know how it works fundamentally).
Well, what you are trying to say is an example attempt to generalize/standardize things. But they cannot. Some work one way, some work another way, in one dimensionality and another. That's why you can't build a model of a substantially complex system. You can do that in imagination, as a general point of view, but not in an actual model that can be computed and predicted. And without ability to predict there's no science.
I really hope people watch this video fully, because IMO the final message is extremely important. However flawed the scientific method, it's the best system in place ad hoc to get the closest approximatation to "objective truth". As a university student in the Netherlands, I can vouch that we really do learn about these flaws in the scientific method thoroughly. We are taught in statistics courses, research methods courses, and even general philosophical science courses to always be critical of data and papers we come across. We really go above and beyond the points you named in this video. I'm really glad about this aspect of my studies, I hope this is also the case in other universities world wide.
I remember the chocolate study. It may have been this one or a similar one being done about that time. I was a potential candidate for testing. However I turned it down when I was told they would be taking deep tissue samples... not by a nurse but by a 'Trained Technician', and not in a medical facility, but in a rented room at the local university. Those red flags were enough for me to tell them no, and to keep their $600 and the portioned food they were going to give me for my meals. I was poor as heck at the time and barely scrapping by. Still having a non-medical person digging into my body for samples was not better then ramen again for dinner.
The problem with the approach to science is that nobody likes negative results. They aren't sexy. We always have to see "differences". And people feel pressure to find them no matter how tenuous they are. Because of this it's very difficult to correct problems in the literature.
And they all want big crazy findings. I made a comment above about the new study saying women are more attracted to altruistic guys. Journals know that popular media will be all over that so they will do anything they can to connect themselves to the study so they can have their name out there. Peer reviewed scientific journals are all about brand recognition just like any other business out there.
Publication strategies of scientific findings are pretty unscientific. The dynamics of social prestige involved in publishing are clearly incompatible with the scientific method.
The xkcd "Jelly Beans" comic deserves a mention. I'm so glad it became popular because it illustrates the whole issue so well, and in just one frame. It should be required reading for the whole world!
@Fluffynator The scientists decide to test whether jelly beans cause acne. They do not get statistically significant results, so one of them suggests testing each of the 20 colours separately. When you break up the data into many categories, you increase your chances of a category showing statistically significant results by pure coincidence. This is essentially what happened with the chocolate study in the video. By monitoring many different categories of conditions (weight loss, sleep quality etc.), it was more likely that one of categories returned a false positive. The same thing happens in the comic. One of the 20 tested colours shows statistically significant results, which is not unexpected due to the number of categories they created. They publish the paper showing the (presumably) false positive with green jellybeans while the other 19 studies that correctly identified no relationship go unpublished and forgotten.
As a person that loves science but is not in the field, I’ve become quite disgusted by the lack of integrity shown by the university system. They have been corrupted to the core and need to be cleaned out. It’s become big business now and is not to be trusted if profit is the driving motivation, that’s not what universities are for. I have no issue with for profit companies doing research and development as long as everyone knows where it’s coming from and is driven solely by profit and is treated as such.
I definitely understand that feeling. As a scientist who has spent a disenchantingly long time in academia, I still have faith in individual scientists and the prevailing winds of science overall. Look how far the world has come in such a short span of time (for good and bad). That progress is built laregly on a basis if good science; the bad stuff ends up getting filtered out. Universities absolutely operate for profit, but not everything that makes a profit is without merit in my eyes.
About darker skin, it makes sense that players with darker skin have bigger chance to get red carder, because lighter skin makes the player more pleasant looking. Did I just committed the ultimate sin of mankind: telling the truth?
I'm so glad someone finally spoke out about something I have been worried about for so long. I thought maybe it was just my misunderstanding because everyone else seemed to be deeply engaging in this toxic/false process 🤔, but now I know it is indeed problematic. Thank you so much!
@@anchordrop1476 That's very interesting. Could you tell me the the other ways your professor would want you to draw conclusions? Also do you know which types of models statisticians are favouring?
This is such an important video. It's so easy for people to find "a study" that can prove or disprove whatever you want for this very reason. That's why one of the first things I look for when someone shows a study/finding is the sample size.
I just looked at this video again, and I realized something: When you say that they don't publish as many negative results, you still assumed that they ONLY would publish TRUE negative results, not false ones.
Some people make fun of greek philosophers because some of their ideas are nothing but thoughts and speculation. Wait for the future perhaps they will make fun of us in the future for being biased
@@MusaM8 Philosophy is like maths. The logic either adds up or does not after much scrutiny. Maths is pretty much the only field in research still maintaining integrity.
@@asumazilla Wrong maths does not obey the laws of nature. There are rules in maths that are based on the natural universe as we not it. If any of those are broken, then that maths is useless to us.
While getting a psych ba I wondered why journals are pretty much unregulated. The fact that a journal can publish findings then refuse to publish studies that disprove or refute them is troubling to say the least.
As the famous statistical saying goes, "If you torture data long enough, it will confess to anything"
Science is GAY!!!!!
@@jamesmonroe3043 get a life
@@alexanderabrashev1366 You must be Pro S0d0m33.
@@alexanderabrashev1366 you but hut
@@mayrokratt6195 did you have an aneurysm writing that or what?
For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication strategies that incentive scientific behavior.
You *believe* that we don't need to change the method
Adamast I did not for once stated my believe. I stated the point of the video, which many seem to have missed. Ironically, you now missed the point of my comment.
+Adamast no we dont need to change the method, the problem (which was explained VERY clearly in the video) is that people arent USING THE METHOD PROPERLY. like when someone crashes a car cause they were drunk, the car isnt broken its just being used incorrectly
You do state your believes. You think you know the point of the video, where you see other people just freak out. Personally I read: _Mounting evidence suggests a lot of published research is false._ Nothing more. There is I admit a short faith message at the end of the video.
The scientific method- you go out and observe things, and develop a hypothesis, and test the hypothesis. If you run a bunch of tests and come out with the wrong deductions that is called flawed research methodology. Flawed research doesn't imply the concept of going out and observing coupling with experimentation is flawed, it just means you suck at being a scientist.
I really think undergrads should be replicating constantly. They dont need to publish or perish, step-by-step replication is great for learning, and any disproving by an undergrad can be rewarded (honors, graduate school admissions, etc) more easily than publication incentives can change
Agreed. The peer review portion of the scientific method is it's weakest link IMO.
I agree. Undergrads replicating classic experiments can also help with their education.
Undergrads do preform classic experiments ^
When I was an undergrad in the physics department, we were (and still are) required to reproduce quite a few experiments that led to Nobel prizes. It's a fairly common practice, and you do this while working in a research group that is producing new research.
@@sYnSilentStorm How common are undergrads in research groups / replicating newer works? Well trodden ground breaking-now-foundational Nobel prize works make sense for learning, but we would benefit from a more institutionalized path of replication of newer claims. Especially if we can before graduate school, which is who populates research groups in my mind.
this happens because of "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle. I love the clinic work and reading those papers, not writing them. in this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. if you force everyone to write manuscripts, flood of trash is inevitable. only certain people who are motivated should do these kind of work, it should not be forced upon everyone.
Indeed. You are totally right. I also feel that this over-publishing phenomenon has led these researchers into manipulating the data because it feels like a competition. In my opinion, researching is hard and sometimes frustrating, but you should always stay loyal to the fact. Otherwise, you are cheating for profit. Totally unethical. There are people who devoted their lives for this but still, they don't get as much recognition as these 'cheaters' do.
And not only is it "publish or perish," but you also have to pay the journals to publish your work once it is approved for publication.
@@migenpeposhi6881 Certain people don't realise that "Research" doesn't mean "Prove something is true" but instead "See _if_ something is true"
@karlrovey I choose perish
I'll tell you exactly why. The academia is controlled by rabid looney slavemasons and rabid looney jesus. Since they are both mentally and intellectually retarded rendering them incapable of any intellectual manifestation, they force grad students to do it in the publish or perish way. They actually steal and collect it for their own records because of the following reasons: 1. They think that these are valuable intellectual property which should only belong to them. Hoarding basically 2. They will pass it off as their own intellectual property subsequently in a post apocalyptic era or a different realm 3. Being intellectually retarded, they prefer quantity over quality. More the better 4. They are basically mining your intellectual property by exploiting you for their use 5. Commercial reasons just in case your papers somehow fetch money
This is actually good. You should publish more crap for the slavemasons to prevent then from getting hold of some actual research
The most shocking thing to me in this video was the fact that some journals would blindly refuse replication studies.
Maybe they're system is flawed?
The Economist publishes blindly. The entire EVidence Based Medicine does not yield dosage for medicine tested - that includes your painkillers, your blood pressure pills, your mood stabilisers and your anesthesia.
But this is not science's fault, nor is it a conspiracy of the medical system.
The scientific method is limited in its ability to give us data.
Yet, practically we need to understand the world. The only way out is innovative methods that people can actually understand their pros and cons and get published without trying to immitate the scientific methods to be listened to.
Oh, and developing the right hemisphere science. Which is what I experiment with.
We should boycott that unscrupulous journal. Does anyone know the name?
@@spiritofmatter1881 No, the scientific method can be skewed to provide misleading data; and if you don't believe there's a conspiracy in the medical/pharmaceutical system, there's a bridge in NY I can sell you for a good price. The pharmaceutical industry is mostly shady -- and medical doctors are their extended hand of national snake oil salesmen.
Journals are just newspaper for scientists.
Publication house want to publish things that people want to read. And people want to read interesting and new things, not replication studies.
Even though in modern times there are journals that accept and publish replication studies. These journals usually incur a much larger publication fee.
Nothing is perfect. But at least, everyone is trying
Research shows lots of research is actually wrong
_spoopy_
Science can actually falsify science... makes more sense than you might think
Science is a battlefield of ideas, the darwinism of theories, if you will. Only the best ideas will survive. That's why the scientific method is so powerful.
Unfortunately the "best" ideas today are those that will result in profitable gadgets and not exactly those that would best propel human knowledge forward.
+Nal but that isn't science's fault, nor capitalism's. It is the fault of the consumers that value gadgets so highly.
Research shows that most statistics and published research are false, statistically speaking.
As a very wise man once stated, "It's not the figures lyin'. It's the liars figurin'". Very true.
Samuel Clemens
not alway it has to be intentional lie
Also true
I''m loving the quotes in these top comments!
😂😂😂
When I was in grad school for applied psychology , my supervising professor wrote the discussion section of a paper before the data was all gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.
What was the thesis? Asking to learn if topic was especially controversial or important to particular interests.
Why are you the minority? How can people who have the power to create Utopia choose self interest. It's like I'm in the Twilight Zone. Everybody knows and nothing is done. I wish I was never born and I hope I never am again.
Why not mention the subject of the paper then, brother? You lying…?
chances are high that the paper wasn't going to reveal anything new
@@ButtersCCookie ??? nihilist looking goober
As a former grad student, the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure track position. Now I have my Masters and will be published twice. No one would consider giving you a post doc position without being published 5-10 times, and you are unlikely to get a tenure track position without being published 30 or so times. And speaking as a grad student who worked on a couple major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of pouring over massive data sets. To be published thirty times before you get your first tenure track position means your name is on somewhere between 25-28 meaningless papers. You'll be lucky to have one significant one.
Damn. I really want to be a researcher in the natural sciences one day with hopefully a Master's or PhD, but I must say seeing this is a little unnerving. Would you happen to have any advice for aspiring researchers?
I studied in Japan, now Japanese has changed their view in research, not to get to top of the world, but how the reseach could be applied and contributed to society. If you look at the ranking today, most japanese univs are not at the top ranking as they used to be. Now, univs from Korea, HK, China, singapore are climbing for top ranking. But every year these univ have suicide cases.
@@mohdhazwan9578 friend I was totally with you until you mentioned suicide. How is that even relevant in how research can be applied to society?
Not really accurate though. None of my professors have been published 30 times and they've taught at Yale, Texas A&M, Brown, that's not really true at all.
@@ShapedByMusic they might not have, but it is beginning to become a standard for new hires. 30 might be a bit of an exaggeration, but there is no way you are getting hired with under 15-20 for a tenure track position.
It’s almost impossible to publish negative results. This majorly screws with the top tier level of evidence, the meta analysis. Meta analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1. Didn’t add anything new to the field, and 2. disagreed with previous research in the area. These two things cannot simultaneously be true.
This is very frustrating to hear.
damn
"These two things cannot simultaneously be true."
Yeah, I just wanted to say it.
If 1. is true, 2. can't be and vica versa.
Don't get me started about meta analyses. I've never heard of one being undertaken except where the backers have some kind of agenda. And I've never heard of one the results of which didn't support that agenda. The entire concept is deeply flawed.
BestAnimeFreak that’s what he said...
"There is no cost to getting things wrong, the cost is not getting them published"
It's a shame this also applies to news media as well.
books are media...
@@NaatClark maybe they meant news
@@fernando4959 i did, i fixed it
@@GiRR007 At this point I think it's indisputable that mainstream news is literally state propaganda.
@@NaatClark I don't know why it wouldn't apply to books
This has influenced my thinking more than any other video I have ever seen, literally it's #1. I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies. Now I understand.
Weird how popular this video is recently (most top comments are from ~1yr ago)...
None of these problems apply to the fields of virology or immunology... right?
@@thewolfin I have no idea how much or little different areas of study are affected. I assume the very worst ones are when that ask people what they eat, and ask how healthy they are. Beyond that, no clue.
Yeah, I remember how eggs were bad for your health, then they were good, then they were bad again. Not even sure where the consensus on that is at this point.
Meta analyses are difficult to conduct, but help weed out bad data and contradictory findings. Not enough of these are done.
@@bridaw8557 Some meta analyses weed out bad data, while others average it in. The trick is carefully reviewing how the original studies were done.
My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that they didn't account for the confounding variable that old people eat oatmeal more often and also tend to have higher incidences of cancer (nodding and slapping my head as I type this).
Maybe don't stand so close to the microwave oven when you cook it.
Perhaps Oatmeal isn't the problem. The problem could be with a change in the way it is grown and produced. Contamination, soil depletion, pesticides, etc. There are always variables that are un-accounted for in scientific research that invalidates the conclusions.
@@kdanagger6894 the true answer was a lot simpler than that.
Don't forget about "vaccines-autism" one
My favorite is probably the story of how it was proven that stomach ulcers are caused by bacteria and not by stress and spicy food. Big arguments for decades between the established science and its supporters vs the scientists discovering the truth. It illustrates how poorly established science treats scientists with new ideas no matter how valid.
"The greatest enemy of knowledge is not ignorance, but the illusion of truth"
- After reading replies I have no idea who the author is
"The problem with quotes on the Internet is that it is hard to verify their authenticity." Abraham Lincoln (source: the Internet)
(quote is not from Hawking; probably from Daniel J Boorstin)
I've seen a variant of this attributed to Voltaire.
as a counterpoint, this quote has the illusion of truth about it. I like it!
I remember seeing one like this attributed to Benjamin Franklin. Long story short, quoting individuals on the internet doesn't really seem too productive if you ask me. The idea should stand on its own and not hinge on someone seen as intelligent having said it. Then again, what do I know? I'm just Elon Musk.
@@danielsonski the authenticity doesn't matter, just the meaning behind it
we should open up a journal for replication studies only
With full time staff who never take any compromised funding.
Whose gonna pay for a journal no one reads?
Call it "Well yes, but also No"
darth tator definitely
warwolf6 Anyone who believes in the importance of replication studies? It’s not like everyone who reads it will have read every study. Far from it. It could be multi-disciplined also, which could be interesting to learn about other sciences
As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake statistical analysis for all their research, without really knowing what they're doing. Statistics is so important, but can also be extremely deceiving, so to the untrained eye a good p-value = correct hypothesis, when in reality it's important to scrutinise all results. Despite it being so pertinent, statistics education in higher education and research is obviously lacking, but making it a more fundamental part of the scientific method would make research much more reliable and accurate.
I barely passed my statistics class, and I'm in biology. Even now I feared that I might not gonna be able to interpret my data.
Observational studies in the social sciences and health sciences are mostly garbage. People who don't do experiments or RCTs need to study a hell lot of statistics to get things right. And only recently we got natural experiments to help us with good research design for those areas.
Until my masters, I was relatively well trained in traditional statistics (I'm an economist) but unaware of natural experiments. I was completely disheartened about how awful my research was, given that different specifications in my observational studies were giving me different results. I only gained renewed enthusiasm in a much better quality PhD who taught me much better research designs.
@@romanbucharist4708 dang. So how's it going now?
Also happy soon new year guys! 2023 Greetings from Florida
@@romanbucharist4708 Also happy soon new year guys! 2023 Greetings from Florida
Interesting. I am adding this video to my research courses. My students don't always understand why we need to be critical of research.
Modern science is science. Just because you're emotionally upset over something in the news doesn't invalidate the scientific method.
J Thorsson you can't talk about the "lack of self critical thinking" after trying to say that you're smarter than someone else because your daddy is a manager at a science institute... you can't contract knowledge lol
+ nosferatu5
The scientific method is indeed unsurpassed. But what this video is about (though it doesn't say so) is a relative newcomer in academia, the NHST, and its use is based on anti-science. Does fitting evidence to theory sound like science to you? I hope not, it most certainly does not to me. But that and anecdotes is what a young academic told me was the norm in many of these fields. I'd order a retest or reinterpretation of every NHST study from 1940 onwards using the actual scientific method, complete with logic and falsifications, given these results. A failure rate of 64% is abysmal, yet predicted by Ioannides.
Don't forget something Derek left out though. Re-sampling fixes a lot of these issues. Run a sampling test including replications to get your standard deviations and get a p-value less than 0.05 (or even better, less than 0.01 or 0.001). Then rerun the sampling tests multiple times to see if you can repeat the p-value. THEN (most importantly) report ALL your experimental runs with p-values. If even one out of (at least) three to five separate independent runs has a non-significant p-value, consider the entire study with a huge pinch of salt. Most reputable journals though nowadays insist on this - the peer reviewers worth anything will at the very least.
youre students have the same "publication incentives" as those publishing these "findings".
This is why statistics should be a mandatory course for anyone studying science at university.
Knowing how to properly interpret data can be just as important as the data itself.
It's generally not done at undergraduate, but a massive part of a PhD is understanding the statistical analysis that is used in research. It is extremely complicated and would be way too advanced for an undergraduate stats course for, say, a biology student.
David that‘s usually not taught in undergrad in the us? Wow, that surprises me - a biology student from Germany, where we have to take a class in statistics in our bachelors.
It might be easier to understand it during your PhD if you heard about it before
As a graduate MS student in A.I., I found my research statistics course to be probably my most relevant in terms of learning to think properly as an individual with an advanced degree. I was very much taken by surprise by statistics, pleastantly so.
Mirabell97 I took an Engineering statistics class in undergrad in America. I've also taken graduate level research statistics as a Comp Sci student, which was taught at a much much higher and more relevant level.
There are also high school statistics class, which are even more watered down. So as you say, many have indeed heard about if before.
forgotaboutbre glad to hear that :)
Just started my PhD. This video has inspired me to call in consultants outside of my supervisory team to check my methods. I don't want to be wasting my time or anyone else's with nonsense research, and I'm honestly feeling a little nervous about it now.
What is your research area?
Have you had a bad time with trying to be honest in science yet?
@@angrydragonslayer What does this even mean?
@@whyplaypiano2844 what part do you not get?
@@angrydragonslayer The whole comment? It isn't phrased very well. Are you being sarcastic, or serious? If you're being sarcastic, it's either because you were trying to be funny, or because you're--for a lack of better words--salty that people aren't honest in science. If you're being serious, it's either a sincere question, or you genuinely think scientists are dishonest on purpose. Explain the mentally you had when you made the comment, I guess?
The problem is people are suppose to be able to replicate the results by doing the experiment over again. If I can’t find multiple experiments of a study, it’s hard for me to not be skeptical
The big problem with that is noted in this video, replicating work already done generally has no rewards.
The time, money, and need to publish to advance their careers mean even the best intentioned researchers are likely to avoid redoing someone else's study.
Some experiments are very expensive in terms of time and money but We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
Gotta love when a published research article states that most published research findings are false
Research has found that 73.2% of all statistics are made up.
@@cinegraphics Rumor, not research, found that.
It's a complete misunderstanding of how science research works. "Eureka" moments are rare. Instead, the truth is eked out little by little: many rounds of test, falsify, retest, improve until the truth is arrived at.
@@genepozniak Can't tell if you're taking the piss or not........
@@4bidn1 If I gather your meaning correctly, no, I'm deadly serious. Think about it. If published research was the crap he said it is, where is all this successful bio-tech coming from?
Publish or perish ... and quality goes to the drains
So much research exists purely so someone can get their PhD, or bring funds into their University to keep themselves employed. When the pressure is on, no-one really cares whether the research is useful or even reliable - just got to fill the coffers and get your research published and referenced to drive up your University's rankings.
@@Thermalions Well, certainly, there are millions of theses out there, all required for the PhD. No way around that. Most of them are garbage.
@@UncleKennysPlace I would say around 95% of them are garbage
It has been estimated that it takes $5 million in funding to make a Ph.D in a STEM field. The research community has been corrupted from the base.
money is the root of all evil
As a PhD student, I can fully agree with this. I have come to hate the word "novel". No matter how correct and in-depth an analysis is, anything that doesn't turn the world upside down is always gladly dismissed with "not novel (enough)" as a killer argument. By now I've decided for myself that I don't want to have anything more to do with the academic world after the PhD. I love research, but I HATE academic publishing.
Consider starting your own publication journal?
Is there anyway to pursue a research or research-like career without the problematic issues of academia?
Academic publishing is a nepotistic and simultaneously cannibalistic _industry._
Nah Christian, just go work in industry or big company. Better money HA
I know right. I just did some academic research on the effects of eggs on cardiovascular risk and the findings were very confusing. It frustrated me when I found one new study contradicted many previous studies stated that more than 3 eggs a week caused many health problem, and out of no where it receives lots of media coverage. I am still not really sure which study is correct. Another time is when I recently study on wealth inequality and its relation with the pandemic. The research process is very interesting but I can see many bias in those papers that I read (sorry for my English)
Sabine Hossenfelder: "Most science websites just repeat press releases. The press releases are written by people who get paid to make their institution look good, and who for the most part don't understand the content of the paper. They're usually informed by the authors of the paper, but the authors have an interest in making their institution happy. The result is that almost all science headlines vastly exaggerate the novelty and relevance of the research they report on."
This is unrelated to what the video is talking about
@@fuelkslet me introduce you to the word "implication"
She’s a fraud, and a grifter, she says what she’s been paid to say
An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Having even been given the opportunity to interview for the job meant that each candidate was amply qualified, so the interviewers ask each the simple question, "what's one third plus two thirds?"
The engineer quickly, and quite smugly calls out, "ONE! How did you people get assigned to interview me!?"
The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one without saying a word.
The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"
That is a good one!
i thought the punchline was going to be 5/9
there is no bachelor in statistics -.-
@@ImperatorMo there is right??
@@ImperatorMo
It's a joke. I can't tell if you're making a joke as well or trying to insert asinine information into a his humorous comment.
This seems more like a problem with the publishing system over the scientific method.
Yes you are right but the bottom line is that any new scientific theory is completely unreliable. Since there is no other way to do science today other than the peer review method.
I think the problem is actually quite deeply embedded in academic research. Right from the selection of which projects get grant funding and resources onwards there is bias to show the result that the department head wants to be true. Their career, prestige and income relies on this. The careers, prestige and income of every person in every academic research department relies on only ever finding 'convenient' results.
It's human nature. That's what publishing is all about, exposing the study to other scientists, and seeing if it survives.
Publishing is about making money, so they have the exact same problem as scientists do. Go for money or go for truth. Since the publications wouldn't exist without money, they are making the only choice they can.
@@blitzofchaosgaming6737 Publishers earn (among other ways) money by selling subscriptions. They could publish anything
I had so much trouble to publish when I corrected the p-values to counteract "p-hacking" or alpha inflation. Since I tested for multiple variables, I adjusted the models to minimise false positives and low and behold, almost all hypotheses that would have shown p
Ugh, apparently negative results are so damn untouchables - the publication system really needs to chage
Stick all those negative findings in the supplementary figures of a somewhat related paper.
@@aravindpallippara1577 Just imagine, it's not compulsory to adjust the p-values. It's not mandatory to counteract alpha inflation. How much of published research must be (intentionally or not) not significant, but published as such.
But you kept your integrity.
Imagine all the human time saved from being able to get information from someone elses research without having to do it yourself.
Now imagine all the time lost from all the people who have to research what has already been done, but they cant learn it because negative results don't show up in books.
“p
P values of 0.05 are a joke.
Look, I'm going to sound biased, and that's because I am.
This is a much bigger problem in fields like Psychology than in fields like Physics. The emphasis on constant publication and on positive results is still a massive problem. Researcher bias is still a massive problem (although still, not as much as in Psych/Sociology). The existence of tenure helps a little since researchers become able to research whatever they want rather than what the system wants.
But we aren't claiming world-changing discoveries with P=.05. Derek brushed right past this like he was afraid of sounding biased but I'll repeat: 5 sigma is a 1 in 3 million chance of getting a false positive purely by chance. Every physicist "knew" the Higgs had been discovered years before we finally announced it and started celebrating. But we still waited for 5 sigma.
I did some research with one of my Psych professors in my freshman year. She was actually quite careful outside of the fact that her sample sizes were pathetic. We went to a convention where we saw several dozen researchers presenting the results of their studies, and it was the most masturbatory display I could have imagined. There were some decent scientists there, no doubt, but the *majority* of them were making claims too grandiose for their P-values and sample sizes, confusing correlation with causation, and most of all *failing to isolate variables.* If a freshman is noticing glaring problems in your research method, your research method sucks.
The next year I had a Physics prof. who had a friend of mine and his grad students run an experiment 40,000 times. There is no comparison. We need a lot more rigor in the soft sciences than we have right now. Mostly because science. (But also because they're making us all look bad...)
And there's also the problem that experiments might be difficult to perform in fields outside physics. It can expensive and it requires a lot of planning and logistics. Not to mention that ethical dilemmas might stand in the way, which happens a lot in medicine. In a way, the physics field is blessed by not depending on studying people, and overall physics experiments are cheap; expensive particle accelerators non-withstanding.
One thing I think Derek missed is to emphasize that one shouldn't be looking at single studies anyway. You look multiple studies for trends and toss out the flawed ones and the outliers. Or even better, look for meta studies.
I'm also unsure if changing your model / what you measure *after* you have looked at the data is p-hacking. Such a mistake seems way more serious to me, as you're basically making your model fit a specific data set. Give me any data set and I can make a polynomial fit all the points. Basically, reusing the data after changing the model should be a crime :)
On this note, simply because you mention my discipline (Psychology), I will point out that Psychology lacks any kind of unifying theory that organises the predictions it makes. It's a lot easier to be a physicist trying to confirm the predictions of Einstein and Newton than a psychologist guessing at what the underlying mechanics of the mind are.
We are all biased, but even the ones who admit it find it hard to fight it. I guess we can never win.
Another issue is that Sociology, Psychology and Economics are all black boxes that we don't know nearly enough about. In Physics, we can lower the temperature to close to absolute zero, and do the experiment in a vacuum. It is currently impossible to have that level of rigour in Sociology, Psychology and Economics. We still have a while to go.
I don't see why you need to single out psychology, even this video gives examples of neuroscience and physiology research holding even lower rates of reproducibility. When you look at the success of psychotherapy for individuals, you will find most people find it an indispensable resource in their lives, unlike the health tips or the vague claims about tiny brain regions coming out of neurology and physiology.
"Data doesn't speak for itself, it must be interpreted"~ and there we have it people the point of this thesis.
I feel like everyone in the world needs to watch this video. There's so much crap out there an no one ever thinks past what they want to hear. This should help.
This should be a Ted Ed
+
Do you love science and all it's complixity but wish it could be a little less complex, and a lot less scientific?
Introducing TODD Talks...
You're right, people often only hear what they want, so this video would likely make that even worse. It gives people ammunition to discredit others with an informed view. People are going to see that if this is the result from honest science, then what happens to paid and biased science.
To a wider audience I think this video would likely do a lot more harm than good.
For me, If there was a video I'd like everyone to watch it'd be one purely on the benefits of science. The last thing we need to throw out to the general public is something that might look at first glace to highlight its flaws.
Oh I "Heard" that if you eat butter, you'll be healthier than those who don't eat butter. Therefore it is correct. /sarcasm
I wanted to thank you for speaking up on this issue. The state of science today is a travesty and I’m glad to finally hear someone acknowledge this as I have been along in the dark with these troubles for far too long. I know we are creating the foundation of something great but acknowledging that the current state of science is not something we can rely on is just simply not said or acknowledged. Im so happy and so grateful that you have spoken about his issue and brought it to the public’s attention. Thank you for you work and congratulations.
The model we have is great, the problem is anything can be hacked if that is your goal. If you build a better mousetrap, Nature will build a better mouse. The problem is the incentive. Not enough money to go around for your own research and tenure is disappearing, so to do the work you want to do you need either to be a. Well known / respected in your field OR take funds to do work you don't want to do so you can do the work you do want to.
We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
It should be one of the requirements for getting a Bachelors in a field of science to do a replication study. Even with small sample sizes.
It is a useful experience and pattern of thinking to carry into adulthood.
Furthermore a meta-analysis using dozens or hundreds of further experiments would shake out all incorrect P values
An honestly, meta-analysis should be just as important to publish as novel as there is so much data out there and it has never been easier to analyze quickly.
That's essentially what they do in a lab class
Why would anyone give this a thumbs down?
Spent most of my life in research, painful yet true....
Ignore most of the thumbs down. 10-year-olds and trolls will down-vote a good video just to agitate people. It doesn't mean anything.
Life gave them a thumb down. Ignore 😂
Reason 1: Some one worked so much only to add a number to the papers published but not quality. Some other person points out a mistake in those papers.
Why not? Maybe they dont know dont recommend video function , so they thought thumb down this video will result in similiar type of video became featured in their homepage.
9 times out of 10, the answer to this question is BOTS. They have to like/dislike videos at random to try and fool the algorithm. That's all it is. I'm so tired of seeing "how could anyone dislike this GREAT video??" IT. IS. BOTS.
The lack of incentives for replication studies is obviously the biggest problem. The fact that some of those "landmark" studies were only attempted again recently...
Hopefully, as people become more aware of this (it's happening), all those journals will change their mind about replications. They should release a separate issue for them, even.
Agree. At some stage we will almost need to press "reset" and start again.
Yea, especially since almost every article in their conclusion implies that "further research in the area is needed" :p
***** Significant or not... It's always significant in some way
+JavierBacon I get what you're saying and agree. Even though your testing/conclusions don't have statistical significance, the findings are still significant. In most cases, it would still help increase our understanding of a subject if null results were published.
The best way to start is to get rid of journals telling us what is worth publishing and what isn't. Then kill the h-index/impact-factor that are genuine SHITS. Then put everything in open access, the universities have all the infrastructure necessary and could even save millions $ in subscription fees that are frankly incredibly stupid to begin with...
Science isn't the initial idea, it's the dozens of people who come along and test the idea afterwards
Agree. We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time
It's definitely bad in medicine. John Ionnadis has conducted "meta-research" into the quality of medical research and concluded that most medical research is severely flawed---in fact, "80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials." Wow. There are also problems with dealing with complex systems, and with challenging scientific orthodoxy into which some scientists have invested their entire careers.
As an engineering student specializing in med tech have the strong impression that med publications are less elaborate, lower quality and contain less explanation than eng ones
I'd say 25% wrong is really really good for something as complex as medcin
I just *knew* John Ionnadis would come up here. If you want to see "mostly wrong", take a look at his record on COVID-19 predictions-off by two orders of magnitude.
His sensationalized contrarian kick has gotten people killed. There are many better, more thoughtful critics of the state of research.
I'm not saying he's always wrong. But he does go for the sensational, and is often sensationally wrong, and doggedly so.
A lot of progress has been made, especially in medicine, with pre-registration of trials, data & code repositories, etc, and I'll give him credit for helping kick-start some of that. (Preprints seem to me to be a move simultaneously in the right and wrong directions!)
But statements like "80% of non-randomized studies turn out to be wrong" isn't even well-defined enough to be falsifiable. It's a non-scientific statement. And meta-research, like meta-analysis, is itself extremely subject to selection bias. Each need to be approached with great care and skepticism.
A lot of what he says is not controversial. I'm not here to demolish John Ionnadis, but to urge people to steer clear of his broad sensationalized generalizations, and look carefully at the arguments he makes. Apply the same critical standards that he urges to his own research.
Sometimes the kettle calls the pot black-but black is still black.
There is also the problem that you can test one medication, to some degree, but if you start talking about interactions between different medications in different people, most of the bets are definitely off. People discount "anecdotal" data completely, but if that data comes from doctors reporting on those medications, it definitely has value, as well, IMHO.
The vast majority of medical research is a shell game, run by pharma. You can tell little by study conclusions: you can actually tell something by the set of study parameters. Where study parameters will produce a unwanted conclusion, the research doesn't happen or isn't published. Example: no clinical or epidemiological evidence for the safety of aluminium adjuvants in vaccines. Draw your own conclusion
Years ago, I questioned some chemistry methodologies. It was very frustrating, because nobody was listening. Then a publication came out discrediting the methods used and discrediting many journal articles. Somebody had listened, or came to the same conclusions I did. Corrections were made.
TRUE., I HAVE THE SAME EXPERIENCE.
Vindicated!
When you say you questioned, did you establish a line of communication/collaboration with any of the authors or users of the method, working to test its limits, improve it, or compare it to other methods?
What were the methodologies, and were any significant findings overturned / discredited as a result? Or did it only affect small findings, with larger findings still being correct (or considered correct) in spite of some methodological errors?
You should have used an approach developed in cybersecurity research long time ago for the same issue: notify the authors that in 3 months you are going to publish your findings about all their mistakes no matter what they do. Then the authors have 3 months to retract their papers on their own and/or correct them. This solution is called "responsible disclosure" of vulnerabilities. You see, in cybersecurity the problem of "nobody listens unless you publish" has been acknowledged a long time ago. You can do this anonymously as well: from my experience, scientists are not more ethical than average human, and when you threaten their career and self-image they quite often freak out and try to hurt you back by all imaginable means -- just as many normal humans would in such a situation.
Outstanding video. It wasn't until I really started getting into research at MSc level that I began to realise so much of the research I was appraising was deeply flawed. At undergrad, I assumed that it was ME who was flawed every time I saw a glaring error. At that level, you don't have the confidence to criticise the work of experienced researchers.
We had to write a literature review on a chosen subject for our B.Sc. I read through dozens of articles on my subject and to my horror I realized that the results weren't in line at all. It seemed that some scientists had worked with rats and some with mice and they got different results. Still, many sources quoted each other regardless. It was difficult to piece through that mess and know who to trust.
industry influence is everywhere unfortunately. Climate science in an example of that. It's sad because you grow up learning to trust others. Now it seems so confused that we are starting to rely in religion, faith, miths, and so on. In Italy the misinformation campaign is tragic 😷
An undergraduate whom I knew, spent months trying to replicate a chemical synthesis that had been published in a journal. He failed repeatedly. Finally he contacted the authors. They told him that there was a typographical error in the article: the concentration of one chemical was listed as being 10 times higher than it was supposed to be. With that correction, his synthesis worked on the first attempt.
Kevin Byrne Does this mean that scientific journals don’t publish errata?
@@ephemera... -- The errata often don't appear until months after the original article. And the errata are often buried. It would also be helpful if authors checked the galleys.
Thanks for the analytical look at this topic. It seems timely with the recent resignation at Stanford University. It reminds me of a former colleague who shared the quip "publish or perish." In today's political world, the phrase "follow the science" is frequently and ignorantly applied, I'm glad to see science influencers such as yourself shedding light on this topic.
This is the kind of material TH-cam needs more of
It's something extremely ironic, but TH-cam encourages other types of content, just like journalism encourages certain kind of results in science
I intend to live forever. So far, so good.
i like that haha
Who wants to live forever?
Ifyou live forever you'll see everyone you know die and then everything you know die because the universe willend.
+RedEyes Cat dude no way! That must be a record
"If you live forever you'll see everyone you know die and then everything you know die because the universe willend."
If the universe ends, something will come along to replace it. I'd be quite excited to see that.
Plus, it's not necessarily true that "the universe" will end, although that's a widely spread myth so I can't really blame you for assuming that.
In Indonesia, many of supervisors in medicine would reject replication studies, expecting new studies and publication and therefore, causing us to have nearly zero epidemiological data. We prefer "good-looking research" to actually researching anything. Better not research than not looking good
Your supervisors are speaking the language of gods
This 12 minutes should be mandatory viewing for every course that touches the slightest bit on any kind of science, engineering, statistics, political science, or journalism. Starting in junior high school.
Short answer: yes. That was a real wake-up call when I was doing my Masters degree literature review - how often university professors push publications using "academic standard" statistical analysis to come to a demonstrably wrong conclusion. It is scary, not only how often this was the case, but how often these studies would be cited and their misinformation spread through academic circles without question.
Most academics doing the research are young and inexperienced in the real world. The people managing the research departments have a vested interest in only promoting research that finds 'convenient' results that will enhance their chance of getting bigger budgets next year.
Maybe we should take people with 30 years of industry experience and put them in charge of research in academic institutions.....
@@davidwebb2318 Unfortunately true. If, as a young scientist, you talk to the head of your lab or department about your work and what your ideals are or what your idea of good science is, you will quickly be taught. You don't know anything! No, you really don't know what is important in science. What you know even less about is what "good work" is and what is expected of you. The most important thing is neither "good science" nor a prestigious publication. At the very top of the hierarchy is an accepted proposal letter! No funding, no research. All other output must be directed towards this goal and are just means to an end. The larger the organisation (Pareto Principal), the greater the pressure to meet this requirement. Exceptions exist.
@@haraldtopfer5732 I agree. Academia has become a big industry with big careers to support. The priority of the people heading up departments is to build bigger empires, secure bigger budgets and increase their personal exposure/status. This secures their jobs and the jobs of their colleagues/friends. That trumps everything else in many cases.
It is really obvious in the climate change industry where nobody ever proposes or approves any budget for spending on anything that doesn't support the pre-existing narrative. They carefully choose and support only work that adds weight to the doom stories because this expands the 'importance' of their industry. Their future careers and their salary depends on doing it so they embrace it and steer all the research in one direction. The system is really flawed and has created a monster where half the world are intent on economic suicide to cure a problem that is relatively minor and will only have any impact over generations.
@@davidwebb2318 Well the thing is virtually every study that disapproves climate change are very usually well funded themselves - There is a vested interest among the folks with resources to forward that narrative as well, and they have resources, profits they can lose. Not to mention these studies also have to exercise pretty big mental gymnastics as the mounting evidence grows.
Money does make the world go around after all.
Wouldn't you agree?
@@aravindpallippara1577 No, I wouldn't agree. The climate change industry is mostly based on an emotional sales pitch pushed by celebrities and political activists who haven't got the first clue about the actual data concerning the climate.
This is obvious because the main activists are pushing the idea that humans will be extinct in under 10 years. Politicians who are too weak-minded to work out this is complete lunacy have simply demonstrated their lack of intellectual horsepower by going along with it.
Money does not make the world go round. It is just a convenient method of exchange used to buy and sell goods and services. Of course, the political activists that are using the climate change narrative to promote their political agenda will try to persuade you that money is evil (or that only evil people have money so they should take it and give it to people they consider more worthy).
When I first came across this problem, I wanted to become a scientist who simply redoes the old experiments. I am still very far away from becoming a scientist but I hope this becomes a legitimate job. Having a subset of scientists who simply redo the experiments with a little or no tweaking.
Problem is who will pay you for it.
We need this. Can someone start an organization that does this? Not me, I have another thing to start. :P
Also, there are AI to analyze data of experiments regardless fo the human conclusion. I think those are pretty helpful in sorting out truth from falsehood.
there is almost ZERO funding for this important task . more money is spent each year to study the mating behaviour of saltwater mud worms. I'm not even kidding ....
"scientist" What does that even mean?
You have to study a certain field and then you can get a job at a university where they'll pay you for your research.
@LazicStefan If you're talking about climate change, it's real and the effects are observable outside of papers
As a researcher, I find those numbers very conservative, even when I'm 4 years late to the video.
I also feel like there's a reason missing for the false-positive results category which is a deviation from the main objective. Some true positive results shouldn't be considered as such when you make an in detail analysis of their methods, statistics and final findings just for the pure reason that, mid-study, some parts of the objetive were changed to accomodate the findings. This is also an issue that pisses me off, especially in my research field where there's such a huge mix of different scientific areas that it's next to impossible to verify anything at all in detail because everyone just pulls the results their way.
As some people already mentioned here, some authors do withold critical pieces of information for citation boosts. If people can't reproduce something from a study, they can neither be proved wrong by the paper's information alone (as long as it checks out in theory) nor can they be denied autorships and citations from other papers which effectively boosts their 'worth'. The fact that researchers are evaluated using citations/autorship numbers is also one of the leading problems as to which false-positives exists in such large numbers (I don't believe false-positives are only ~30% for a damn second, but this is my biased opinion) and why some papers, even though everything checks out in theory, can never be truly reviewed on a p2p manner on the practical results sides of things.
Anyone who works in research knows there's a lot of... misbehaving on most published works, regardless of the results. Therefore I have to disagree with the fact that researchers are fixing some of the problems. It's not that we don't want to fix them, but because the system itself, as it stands, is essentially rigged.
We can sift through p-hacked results. We can't, however, sift through p-hacked results if the objective is mismatched with the reported findings (if someone told me that was involuntary, I'd believe them because I know how easy it is to deviate from it) nor from a paper which withholds critical information. And the worst part about it is that this is further fueled by higher degrees thesis such as masters or PhD's where it's mandatory to cite other people for their work to be 'accepted' as 'valid'.
You have to approach published works with a very high level of cynicism and with some time and patience on your hands if you're even dreaming of finding a published work that remotely fits your needs and actually shows a positive result on most scientific areas.
I hope someday a scientist gets very rich and decides to devote his/her money and time in creating a healthier scientific publishing environment.
When I finished my undergrad, I worked compiling a database for a retired professor. One day he asked me to find an article that had been recommended by one of his peers during review. He already had the author and subject so it was pretty easy to find and got me a nod in the paper for my invaluable research assistance. The paper was on how long bones had been drawn incorrectly in every medical text forever. Someone had drawn it incorrectly once and everyone had copied the original mistake.
@@pattygould8240 What happened with the paper? Is it available?
@@ejipuh I have a copy that he gave me when it was published but it's packed away somewhere and I frankly don't remember what journal it was published in. I worked for him summer and fall 2004 so that's when it was published.
@Luís Andrade doctors have been learning from those textbooks for over a century, the mistake in the drawing didn't have an impact or someone would have pointed it out sooner. It took a scientist studying bones to point out the error.
There is a pressure to publish significant results. As a research assistant, I know for a fact my professors engage in this. I was preparing the data I collected on a crop, and somehow the paper was published a week after I finished the data... didn't make sense
Definitely doesn't make sense, as the peer review process alone takes months. Could it be that you were reproducing some past experiments, or gathering the same data to be used in a future publication?
“Science is the interpretation of data. The data is usually crap.”
Liam Scheff, science journalist and author
Ever heard of data wranglers?
Science journalist and author and he doesn't know that "data" is a plural noun? FYI, "datum" is the singular.
@@lixloon Why exactly did you assume he is talking about one point of datum? It's the less logical explanation. I'll just assume you're a moron who wanted to let the world know something that makes you feel smart.
Sugasphere and the Lancet concur
@@lixloon data is gramatically correct. It's not possible to interpret a single datum.
Anyone who reads articles online about "new research" needs to watch this
or people who hear science quoted (sometimes incorrectly) by Today Show, Dr. Oz, even Time Magazine etc.
When the contestants found out one of the walls would contain an erotic image, they enabled their inner chakras to get it right
One of your best. I go back and watch this one every once in a while.
I'd like to point out, as he hints at near the end, that the underlying reason for much of these "p-hacked" studies is due to human nature and not the scientific process itself. Stopping a sample size when you find convenient, not getting published to counter-study, people only interested in unique findings; these are all human fallacies. A manipulation of the scientific method.
There's no "the scientific method." That's a complete myth. You should read "Against Method" by Feyerabend. Even if he goes overboard in his argument (which I wouldn't necessarily agree he does), it's naive to think of a defined, precise method in which science *is* done, or *ought to be* done. It's really "anything goes," as long as you convince your peers. Hopefully truth is convincing.
@@d4n4nable That's wrong. There's no set methods, but a general guideline to seek the truth. As OP said, if it wasn't for bias in the publishing, then the system would work fine. The scientific method is more of a way of thinking and general guidelines in how truth can be determined.
@@neurofiedyamato8763 You act as if epistemology were solved. There's no consensus as to how to get to "truth." There are various methodologies implemented in various fields of research.
@@d4n4nable I can just smell the narcissm from across the screen .
@@arhamshahid5015 Narcissm? Why? Because I'm pointing to a classic contribution to the philosophy of science? It's not that I wrote it. I just read it, like thousands of others. How in the world is that narcissistic?
The real problem here are the journals. They have established themselves as the primary way of publishing. There are other ways, but in the end, the journals get you recognition and jobs.
That results in many studies being done with the intent of publishing. Scientists cant be blamed for that. After all, they not only do the research but also have to contantly beg for money.
The actual goal of optaining information gets lost along the way.
Exactly. One high-impact publication can set up a career, and leads to 'light-touch' peer review at other good journals, soft authorships on colleague's papers and requests to be co-investigators on other people's grants. More publications leads to more funding. Even as a Co-I that doesn't actually get money from a grant, you have demonstrated 'grant funding' success. The incentives to join that group are high.
It seems even more absurd to still have these gatekeepers publishing a limited number of papers when we live in the era of long tail economics
Researchers also have to pay the journals to publish their work, who in turn often charge you to read them.
Like all other systems and institutions, scholarly research and the academy is a game, with numerous irrational inputs and agents in pursuit of self serving interests.
Indeed. I lost interest to pursue to phd bcoz of this reason.
I've been a world-class AI researcher for almost three decades now. I have personally, during this time, witness much deliberate scientific fraud, including rigged demos, fake results, and outright lies. Additionally, numerous colleagues have admitted to committing scientific fraud, and I've even been ordered to do so myself. I have always refused. I will not, as a scientist, report results I know or suspect to be misleading. My family and I have been severely punished for this. So I recently returned to mathematics, where true and false still seem to reign. And lo and behold, instead of abusive rejection letters, written on non-scientific grounds, I get best-paper nominations. PS: don't believe any of the current hype around AI.
That's terrible , there are many stories like this that keep popping up. Stay strong this crap will change soon .
Could we talk I'd love to hear your thoughts on this Christer
Christer Samuelsson why would I believe this?
Dang man I think you got out of A.I. at the wrong time lol. People dont have to fudge their results anymore because the results are real and improving every day now.
Let me guess- it was natural language processing wasn't it?
I love your background music. It's so early-2000s-techy in the best possible way
Frutiger aero type beat
I'm taking a science research class and this is literally what I was thinking about with like 90% of my peer's projects.
same.
Even taking a basic science lab course that requires you to write up papers based on your "experiment," you run into this constantly. And even knowing this myself while taking that course, I found it hard to not let my own biases affect how I conducted the experiments. In small ways, but those small ways add up significantly.
@@allenholloway5109 Experienced the same! The reinforcement of bias is so joyful. Funny how subjectivity creeps in unnoticed like this in a field which demands objectivity. Some scientists employ outright dishonest policies of manipulating images, it's unbelievable.
@@allenholloway5109 you putting quotes around experiment made me remember an anecdote from my basic Chemistry class: we were given a substance in vial, and we had to do an experiment to identify it. Things like weight to volume ratio and boiling point. Well, the school was at an elevation significantly different from sea level. When we measured boiling point, we got the exact temperature you would expect at sea level, and the teacher was shocked. We asked him if we needed to redo the experiment, but he said “no, it’s probably fine.”
I know my table of chemistry idiots weren’t the bleeding edge of research, but I feel like it illustrates the point that there isn’t enough time/funding to actually conduct proper experiments in several cases.
And was that 90% statistically relevant? Especially when measured by someone who uses apostrophes to form plurals?
Well, as a wise man once said "Some poeple use statistics like a drunk would use a streetlamp -- not for illumination but for support".
That being said, the most frustrating bit is that the journals and financing agencies actively encourage p-hacking and discourage replicating dubious studies.
I'm stealing this quote, it's amazing.
There is nothing wrong with using statistics for support, as long as they are accurate and honest, AND you don't cherry pick them. That last part is often the biggest problem. I don't think pharma changes numbers any more, but they most definitely fund several studies and pick and choose what they want from each. That is not research, that is advertising. It's also changed a bit for them now that they have to have conclusions that at least have SOMETHING to do with the data collected. There was no requirement for that before, as I understand it.
The point about discouraging replicating dubious-or any-studies is important. There just aren’t incentives to duplicate or refute someone else’s findings, but rather come up with something ”original”.
On a similar note, as an engineer who frequently volunteers at elementary - high school science fair judging, I’m constantly dismayed at the emphasis that other judges-both somewhat “lay” and professional STEM judges-place on “originality”… at the elementary & middle school level, even, not to mention the high school level! (Ok: maybe a district or regional winner at HS needs to be decently original, but…) Many people place originality and presentation skills (not to be entirely discounted, of course, but still not #1) above scientific inquiry, larger data trials, strict controls, and even just a good, solid use of the basic fundamentals of an experience by as taught in elementary science class.
@@whatisahandle221 I believe that in experimental physics it is customary to publish independent replications of breakthrough studies in comparatively high-impact journals (as well as to cite replication studies along with the original ones in future papers). Sadly this is more of an exception that proves the rule.
In life sciences on the other hand there are so many subfields and so much competition, that far too many "original" yet shoddy papers (methodologically speaking) get published. My subjective impression is that this problem is slightly smaller in niche and/or "old-fashioned" subfields, where the odds of getting a reviewer who knows all the ins and outs of the topic are relatively high.
@@MrJdsenior They still do-and still do so much more. It is what it is.
this discussion is so important! thanks for making this video
+
you're very welcome!
+
That was indeed one of the best veritasium videos so far. So glad Derek tackled this problem!
Thanks for making these videos they are such an eye opener for me. I never thought this would be an issue at all, now I understand.
This reminds me of in college trying to find trends in data by any means possible just to come to a conclusion that would result in a good research grade.
I think when your motivation becomes solely about money or grades (or whatever other comparable unit u might think of), you lose sight of the actual purpose behind what you're doing. In my case, because of my fear of getting a bad grade, i twisted the research process to show results that would impress my teacher, but which ultimately were false and useless. This video made me realize how many systems (in education, business, science) are actually structured for their participants to waste their time pursuing arbitrary goals rather than the ones which are actually valuable. If we could make it so a thorough and honest process would be rewarded just as well as one that has a flashy result then we would have a lot more true value being generated via these systems.
This has been on my mind in school recently so I'm really curious to hear what others think if anyone wants to reply. Great video!
Hey, can you tell me a little about this research process?
And how the current systems are a waste of time in education, business, and science?
I am very interested in hearing what you think could be more valueable :)
Um, if you have no motivation or imagination for your research, you probably shouldn't be doing it. I would recommend you not blame your professor or department, but rather look at yourself and ask yourself why you dont like what you are doing.
I am currently in a research field I am passionate about and I dont have to bend over backward to get results because I come up with imaginative solutions every single day.
His point was simple, the rewards for being able to show the desired results are better than for getting less desirable results within these systems. In all these feilds if you can show the desired result (regardless of if the results are valid) then you get better rewarded, be it grades, promotions, bonuses or publication.
While in most scientific feilds most errors would likely be accidental bias, in areas like testing diet supplements or doing studies funded by corporations these are well known deliberate issues.
Unfortunately most people have a very poor grasp on statistics and for that matter the scientific process so it's all too easy to make a lot of people believe false data.
We really do need to improve the systems at all levels. There are currently moves to make things better, but we will continue to have these problems for a very long time, especially with journals not publishing studies that show other studies to be wrong and publishing studies that didn't pre submit their methods of evaluation before commencing.
Okay, I have both locked down pretty well
I think that unproductive incentives is a common theme in every problem with society.
How can we tell this research isn't wrong
oh the endless loop - there have been a fair number of attempted replications recently that have found pretty dismal results. When you consider they are all in agreement, that biases exist, that incentives are skewed, that .05 is not all that low, that p-hacking occurs, it is fairly unambiguous that a sizeable fraction (if not a majority) of research is actually false.
+Veritasium Wouldn't the odds of the exception being wrong be higher, than the odds of the norm being wrong? There's a reason why there's such a thing as peer review, after all. The scientific model is there to make sure you can replicate the results and methods of published papers. If something doesn't stand up to peer review, it's bad science, as it means something didn't add up.
you don't
sadly low sample sizes are a very common problem due to lack of finances or various other reasons.
That's the point: when deciding which papers to publish, the scientific method isn't being respected. There's selection bias tending toward publishing mostly positive results and not the inconclusive ones, and there's a complete lack of respect for replication since those studies are often rejected outright.
148% of people don't really understand statistics.
I absolutely 101% understand this.
This stat has -176% chance of being stolen by me.
But... only 37% of statistics are actually right.
Up to 50% of people are less intelligent than the average.
78 % of all statistics are made-up on the spot.
This is Veritasium's greatest video to this day.
This is something I am learning a lot on reading studies for food health science. So many variables are not put into account in the final findings. Reminds me of the phrase, "If you look for something, you will find it"
Each and everytime I see some article that says "According to studies by scientists...", I always and always read with skepticism.
Good! Always read with skepticism. That only benefits science.
Yes, that's the point ;)
which means how much more skeptical we should be of everything else, "alternative news" sites, alternative medicine, health blogs, mom blogs, etc etc...
Read with skepticism and report them to the authorities!
And then to think that scientists are bound to produce more truth than anyone else, you need to question everything and everyone around you
I have an hypothesis. I think getting in car accidents decreases your chances of dying from cancer
...but increases your chances of dying in a car accident.
"I shall test this! >8/ " -Hopefully some scientist out there.
Good analogy.
False. Somebody just published a paper about that. You have 100% chance to die from cancer if you where in a car accident. It was a small sample size, about 1 man. He was a truck Driver in chernobyl and he has been in small accident once. He died from cancer.
"You have 100% chance to die from cancer if you where in a car accident."
so if you get in an accident, you will for sure die from cancer!
***** xD
thanks, now i don't know what to do with my life. i'm a senior in highschool wanting to study physics, but i have watched a ton of videos that explain the reaserch paper publication strategies, and the way academia works in general, and now i realise that the perfect knowledge making science world i wanted to be a part of is nothing like i though it was....
Yes. Results are not science until verified/replicated! This is the scientific method.
Very informative video, thank you.
Too bad the studies you see on Doctor Oz (the studies most of the sheep enjoy listening to) are never fact-checked because that would cut into profits.
Tell that to sociology majors and they'll call you a bigot.
Sadly though, the same can be true to a greater extent for legitimate science. Replication studies weren't getting funded much back then by the government or other sources precisely because it's not bombastic or groundbreaking enough to advance the field, so basically only a trace number of replication studies ever gets funded and published.
In short, landmark studies didn't get fact-checked and replicated a lot because it would cut into their grant money application and prevent them from conducting the studies in the first place.
Good thing it's changing nowadays though.
Sorry cant afford to replicate this experiment, the client didn't give us enough of their product to do further testing beyond the results they requested we deliver. We are a private laboratory and need to be profitable.
one the big problems are the big media who search for those crappy headlines: 1 chocolate bar a day or a cup of whine a day.
The media search for those because it make good clickbait and they will even distord the scientific research and sometimes use words as increase the chances of x things instead of saying increase the cances by 0.01% of x thing
Just the way they word it make it sound bigger than they are
Stop whining man.
+Marvin Y but it's a fact
+Cédric Raymond no, I'm just making a joke because you spelt wine wrong
But then could you not say its the fault of the human psychology, that people are drawn to unusual things making the media jumping on this crap inevitable? I am not saying the media are not to blame or that publishing that stuff isn't irresponsible but I do think everyone should take everything the mainstream media publishes about science with a large pinch of salt.
I just hope that most people are inherently skeptical. When I hear something stupid like "chocolate makes you skinny" my reaction is "bull pucky"
I'm curious about the comment you made at the end that "as flawed as our science may be, it is far and away more reliable than any other way of knowing that we have."
I'd love to see a video on:
1) What are the "other ways of knowing that we have?"
2) A critical evaluation on why science is better than those "other ways of knowing"
~ A loyal fan
well, there's using logical deduction to eliminate improbable causes.
have you ever heard of IB theory of knowledge? These are exactly the type of questions we discussed in class in high school, it really opens your mind
That's the scientific way of knowing, isn't it?
What is the "IB" stand for?
International Baccalaureate
Always excellent and insightful!!
If you publish a concept that the "Optometrist majority", does "NOT LIKE", well you will never get published.
Been there ... DONE THAT!
The problem Veritasium exposes in this video is the same thing Richard Feynman spoke about during a Cal Tech speech that was published in his book "Surely You're Joking, Mr. Feynman." Richard Feynman spoke about Cargo Cult Science; which comprises practices that have the semblance of being scientific, but do not in fact follow the scientific method.
In his speech, Feynman said,
"We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science."
It pretty much sums up the problem within the science community. The lack of integrity as a scientist, largely influenced by the lack of freedom given to scientists at select institutions, is the downfall to most careers in science and scientific research. Feynman ends his speech by giving the students much needed advice on how to be a better scientist by saying,
"So I wish to you-I have no more time, so I have just one wish for you-the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom."
I could've have said it better!
Thanks Eric , I know & like Richard Feynman . I'm going to get that book . it sounds fascinating . Thanks for the tip .
Thank you for posting this! I'm often commenting on people's sycophantic acceptance of anything scientific. I try to point out that science is not about accepting wholesale the word of authority figures, but about being skeptical. Testing the world around you, and doing so in a way so as to limit your biases. The way some people rush to defend their favorite scientists or pet theories has prompted accusations of cult like behavior from me on a number of occasions. It's nice to hear such a man as Feynman speaking out about this.
Of course I'm talking more about the lay persons blind acceptance of anything handed down on high from what they perceive as a high priest in the 'cargo cult'. The kind of person who parrots what they hear on NPR or PBS with _zero_ understanding of what they're talking about. You know this person, they wear a NASA t-shirt and are quick to comment on the fundamental nature of the universe despite receiving a 'D' in high school physics. They watch the Big Bang Theory and laugh along merrily. They have conjectures on the nature of black holes yet struggle to calculate a fifteen percent tip on their bill. They have a wealth of scientific and pseudo-scientific "fun facts" but no integrated understanding of any of it. You'd be hard pressed to find a single original thought floating around in their brain.
These people are the tributaries of the 'cargo cult' of science. And unfortunately they represent the majority of people who are at least interested in science. It's their votes the government counts on when allocating money to scientific institutions. It's their views the little pop science videos all over youtube count on for their ad revenue. None of these institutions are interested in actually teaching their tributaries how the scientific method works, the value of skepticism, or a groundwork understanding of the subjects they claim to love so dearly.
Long story short, the cult priests need the tributaries as much as the tributaries need the priests. And it's not going to end anytime soon. The only solution is to educate yourself. An undergraduate understanding of mathematics, and at least a conceptual understanding of some of the models we use in physics and engineering take hours of work but if you love these subjects put the work in and learn them. Most importantly, understand _what a model is_ and _what their limitations are_ . No real scientist claims to have a complete and true understanding of how the universe works! We do however have some _amazing models_ we can use to predict how the universe will behave.
TLDR: Don't be a cult member. Don't take people's word for it. Put the work in and learn it for yourself!
That book is one of the most awesome books I have ever read so far. It perfectly describes what science is today and also shows how beautiful the pursuit of science is.
It's not that scientists are put in positions where they are forced to surrender their integrity, it is their _willingness_ to surrender their integrity. Integrity holds little value for many people. Notoriety, money, prestige, security - these things are esteemed higher than integrity.
Integrity should be of infinite value but, alas, human nature is such that integrity is bought and sold and many times simply given away.
That kind of freedom can suck though.
But basically, capitalism is the enemy of truth, since it is the way of corruption. It's all about existential fears and breeding bad character traits like greed.
As just a stat grad student I did this a lot to pass classes. Imagine what if you make a living by publishing papers
This is an awesome explanation and it's going to be really fun reading everyone commenting with their confirmation bias while I read their comment with my own biases that I have about what biases they have
Oh sweet brain, such complex things
Exactly why I went into the comment section of this one. Lovely.
This is gold. It is like playing 4D chess with your own brain. With every agreement and disagreement with your own intuitions, you fall into the trap of asking the perpetual questions of "what if my bias is the bias showing me others' bias, and what if that itself is some bias or error in judgement I fail to consider." This becomes too much and can throw people (myself) off, into a spiral of extrapolating truth. I suppose the remedy of bias is not only itself recognizing bias,but is perhaps the understanding of updating beliefs based on a consistent or more reliable framework such as Bayesian Thinking. So, I do think beginning with investigating biases and attempting nuance by finding multiple sides to research or a thought, is a starting point. It is tiring playing mental chess and questioning yourself, however, it does sometimes provide some insight is getting closer to truth. It also makes it easier to detach ideas when new information is presented. Well, that is my mental state. It might be a bias of its own :). If we are emotionally invested in an idea, consciously or subconsciously, we tend to be more inelastic to new and valid evidence that doesn't support our intuitions. These are just my observations and of course, given valid criticism, I shall update them. :)
And this is why the method exists. When in doubt, test it yourself, never rely on trust or opinion, even your own. This is the most important part of it all, as multiple independent tests reduce the bias.
This is why when gauging any studies, you must look for indepenent confirmation: a single study can always go wrong in subtle ways. Even the methodology could be wrong, and then while it gets confirmed by others, different studies on the same subject cast a different light on it.
The more studies, the more confident you can be. Evidence over opinion.
This is one of the most important videos on youtube. Especially in todays world.
This is huge in my field, Structural Engineering, as people get way too lax about sample size. Thanks to testing things like full-sized bridge girders being incredibly expensive, samples sizes of 1-3 have become all too common, and no one does replication studies... Then that mentality bleeds over to things like anchor bolts that can be had for $5 a piece at any big box hardware store. It's getting dangerous out there!
I took a course on rigging moving loads. There I learned Working Load Limit is 10% of Minimum Breaking Strength.
That makes me feel safe.
@@LogicNotAssumed can you explain what you mean by rigging moving loads? Does this refer to loading up delivery vehicles and such or something else? Or is this 10% rule used for many different applications?
This is Crane operator stuff.
There is a certain "strain" that is allowed for fatigue reasons. (Strain is material stretch vs applied stress). Exceeding that strain, while still below the breaking strength, will result in weakening of the material with repeated use causing failure below it's published minimum strength.
E.g. steel might have a tensile strength of 110,000psi but a fatigue strength of only 63,000psi (63ksi/110ksi = only 57.27%).
So, for conservative use, most industries require robust safety factors to account for fatigue, use, damage, etc.
Commercial airliners are rated for +3.0g x 1.5 safety factor at maximum weight.
Bridges vary, depending on seismic requirements, etc. But it's not a good idea to cross an old country road bridge rated for 6 tons, with a 12 ton vehicle. You might survive, but the bridge will be damaged.
@@Triple_J.1 Strain is stretch per original length, like if you stretch 2" in what was originally a 100" rod, you've got 2% strain, or 0.02
Derek,
I am convinced that the whole reason for this reproducibility problem is money. Unfortunately, science/academia is run like a business. Profit is key. Journals need to make money, institutions need to make money, funding agencies need to convince the rest of the world that the tax dollars spent on scientific research are well spent - and that means results, tangible results. So the scientific effort is reduced to the production of results - as much and as fast as possible. This creates a very destructive pressure on the researcher.
I'm a graduate student currently working towards an academic career. I have been told by several profs in my field including my advisor that if I want to get a faculty job at a good institution after finishing my phd, I need to have at least 8-9 papers in my CV with at least 2-3 in a high impact journal. The reason is, of course, the sheer amount of competition; there are huge numbers of applicants and very few open positions. When hiring researchers, universities look at their previous research i.e., papers - and people can count much better than they can read. As a grad student, you can dedicate your 4-6 years to one fundamental problem, work on it rigorously, and -if things work out- end up publishing one or two papers with a huge impact on the field. But when you then go looking for jobs, you'll have trouble because people can count better than they can read. They'll say "Oh this guy only has 2 papers but this other guy has 15, let's hire the other guy." I know a lot of people in this situation - extremely bright grad students who cannot get faculty positions or even decent postdocs because they don't have enough papers in their CV. Many grad students who intend to stay in academia are aware of this, and there is no way you can publish at least 8 papers in 4-6 years without sacrificing rigor and/or reproducibility.
Sorry for the long comment, but this is something that constantly bothers me and I felt a need to say something. Hope you get a chance to read this, I'd be very interested in what you think.
8-9 papers? Heck no, rigour and precision is impossible at that level.
See, scientists are wrong. I knew eating less had nothing to do with weight loss, it's just my genetics.
I knew it all along.
But your conclusion was derived from genetic science experiments...which is wrong.
Yes I'm only fat because of genetics and psychological unhealthiness, food has nothing to do with it
+ThioJoe precisely!
didn't know you watched veritasium
As an electrical design engineer I had the mantra "Everything works great until the current flows." You can design a circuit, model it in software, have your peers review it, take all kinds of steps to eliminate risk, but in the end you have to apply power and let mother nature decide if you were right or wrong. I have to say that a majority of the time there was a mistake or two in the first prototype.
Ok
Nicely summarized! I am a scientist (engineering) and reproducibility is a huge problem. I think there is a lack of throughout scientific method/experimental design teaching as well. I had to learn on my own about all the possible drawbacks (cognition bias etc) and I am still unsure I do everything correctly.
Another important source of error can be listed for experimental science: it is literally impossible to control all variable in the environment (where the experiment is conducted), apart for very expensive facilities (ultra-clean rooms, cyclotrons...). Which means that a simple change of weather, some new vibrations (new road nearby the building), a new type of equipment (it is impossible to compare data from different groups that own the same machine, they are never the same - mostly after time pass and parts need to be replaced)... will differ the data set. All in all, it should be possible to by-pass it, if you had infinite resources and time. But since we don't (and as you show, it is hard to publish both negative and reproduced results), most researchers try to do the minimum amount of experiments. Sometimes not even reproducing their own data (because it will not be the same at all!).
Well, all is not loss, as most of the time, a hypothesis is often quite robust to our errors, and that being aware of those errors can help reducing them.
You may be missing the difference between what are called pure sciences and what are called applied sciences. Applied sciences are not true science, i.e. they do not apply the scientific method to arrive at conclusions through data. Often they use trends, probabilities, criteria and statistics to allow for conclusions when the factors of experimentation cannot be controlled for. I think this video is really only trying to debunk these applied sciences as not producing scientifically supported facts. The experimental or 'hard' sciences should be exempt from this critique if I am not mistaken.
You make a great point though, one I have always maintained, but on that note I would say don't forget that science never attempts to assert it has 'proved' something through the acquisition of its data but rather simply has 'found cause to support certain conclusions over others'. The conclusion that certain well-tested hypothesis are debunked due to a margin of error in the data, such as might be produced by a variance in the machines or proximal road construction etc etc, is far less tenable than it being explained, or even written off as they most likely are, as the consequence of such events. But things like scientific laws are so constantly observed under their expected conditions that we have never observed instances which could cause us to conclude they were not laws of the universe. To all intents and purposes laws are 'proven' but tomorrow could reveal observations which entirely destroy those conclusions based on today's observations, thus science can't 'prove' anything because at best science only produces conclusions appropriate for today's observations.
To add to your critique though, one of the things I like to bring to the table is something I think which is missed by even most hard scientists today. That is, the current theories which account for current observations could actually be 'in-discernibly incorrect while entirely observably aligned with the measurable parts of the real universe'. That is, it is still entirely possible that our universal theories are actually only a model that can superimpose, without us noticing it doesn't do so entirely or actually, due to the possibility of our incapacity to measure or experience certain parts of the universe. We could be fooled into thinking our theories are more accurate than they are because there is no guarantee we can experience, measure or even comprehend the universe in its entirety but we would have to do so to think there is not a possibility of an an unnoticeable overlay.
Man, that is really surprising. That is something that is definitely taught in a chemistry degree track in Analytical Chemistry courses, the injection of personal bias, the bias towards measurements that end in even numbers or five, etc etc. the list goes on. Being as logic and mathematics based as engineering is I'm surprised to hear that. I'm sorry you had that experience, man.
Phuck Gewgle - No, of course we can't measure every possible variable in the universe. We don't know what they are. And we don't know what we don't know. There could be an infinite amount of unknown variables. All we can do is model the variables we do know about and give them the changing values as measured over time. A model is a simplified abstraction of one small part or aspect of the universe. Models are man made. We use models for the purpose of prediction and control. They are tools. They are not "truth" in any final, exclusive, complete sense. All models are provisional. As soon as a better model comes along, we will drop the old one or relegate it to certain approximations, parameters or purpose. For instance, the Apollo program brought men to the moon and back to earth using only Newtonian Mechanics, good enough, no need for Relativity or Quantum Mechanics. For different purposes, Newtonian Mechanics will not do as well as Relativity or Quantum Mechanics. And the beat goes on... Cheers!
Wow, i'm impressed actually. As a follower of empirical sciences (i study dielectric rotational magnetic fields, and the unified field), getting through to people that statistics and studies are frequently heavily flawed isn't always easy.
The best method of delivering fact is delivering hard fact through retroduction. Reproducing and delivering impenetrable logic that confines a model to irrefutability. Abstract studies are pointless. If you wanna find out how to make people thin, you learn the physical chemical processes that increase body fats, and catch the problem at it's root. Experimenting blindly with random combinations of living habits is unimaginably inefficient.
Most simple predictable models were already proposed and studied, very little is left for any scientist in the world of "simplicity". The problem now is the increasing complexity of new models, and at this point you can't really "design" an experiment, you have to design general methods that run experiments stochastically in massive amounts. Rarely you can thoroughly test a complex multi-dimensional model or design an easy and encompassing experiment for it. Living habits is actually an example of a complex multi-dimensional model.
Aleksey Vaneev Well sure, but that's because the studies are impatient. If everything was learned retroductively and factually from the ground up, each process studied meticulously, there would be no mystery and no confusion. We would know each fundamental process and be able to compound them understandably into macro multi-dimensional models as such, because each dimension is understood in full with explanations.
Real-world systems can never be broken down into a graph of sub-systems with known relationships. Weather or human body, or economy are such systems which cannot be completely decomposed into elements. They are like systems of equations where variable A depends on B, C, D, and each variable depends on the others, in non-linear way. It works as a complete system, we see that it works, but if you start decomposing it into elementary things, they won't add up back, mainly because you just can't standardize (tie one fact to the other) nor detect all of system's elements.
Aleksey Vaneev Nono, they absolutely can, but not with Cartesian and Euclidean modelling. The real world is not made that way; it's an incommensurate system which behaves in a fractal sense. There are numerous elements at one scale that cohere to make a compound element have more presence at a larger scale. The standard mathematical systems require relative coordinate logic, which works on paper, but causes all of the apparent problems that we all face in the world of things not adding up.
If you don't know, Cartesian mathematics is working with x,y,z graphing, and Euclidean math starts with a point (you just decide where it starts) and then a uni-directional line. Reality works with bi-directional curved lines (spiralling into a hypotrochoid), as described in Newton's Third Law: recoiling inertia. All things in reality behave this way.
You can't divide something that does not have Cartesian dimensionality. The primest example is a magnet. If you chop a magnet in half anywhere, it will immediately form 2 fields that have identical geometry to the original, with N/S and inertial plane. You have to work from the bottom up, figuring out how the thing itself works before you can make any assumptions on how complex constructions using this thing works (even if you know how to use it, doesn't mean you know how it works fundamentally).
Well, what you are trying to say is an example attempt to generalize/standardize things. But they cannot. Some work one way, some work another way, in one dimensionality and another. That's why you can't build a model of a substantially complex system. You can do that in imagination, as a general point of view, but not in an actual model that can be computed and predicted. And without ability to predict there's no science.
Beautifully presented. P-hacking has been the plague of accurate and truthful research for decades.
I really hope people watch this video fully, because IMO the final message is extremely important.
However flawed the scientific method, it's the best system in place ad hoc to get the closest approximatation to "objective truth".
As a university student in the Netherlands, I can vouch that we really do learn about these flaws in the scientific method thoroughly. We are taught in statistics courses, research methods courses, and even general philosophical science courses to always be critical of data and papers we come across. We really go above and beyond the points you named in this video. I'm really glad about this aspect of my studies, I hope this is also the case in other universities world wide.
I remember the chocolate study. It may have been this one or a similar one being done about that time. I was a potential candidate for testing. However I turned it down when I was told they would be taking deep tissue samples... not by a nurse but by a 'Trained Technician', and not in a medical facility, but in a rented room at the local university. Those red flags were enough for me to tell them no, and to keep their $600 and the portioned food they were going to give me for my meals. I was poor as heck at the time and barely scrapping by. Still having a non-medical person digging into my body for samples was not better then ramen again for dinner.
This is a beautiful story
It set a high bar
Seems reasonable.
It would seem a lot of researchers are are engaging a pursuit to "prove" a hypothesis, rather than exploring scientifically for science sake.
This happens because if you don't pubblish, you will be out of a job. In many cases, the system does not promote high quality science
@@yaroslavsobolev6868 In short, science is provisional; it doesn't _prove_ anything is true but instead tells us what is _likely_ to be true.
@@yaroslavsobolev6868 Interesting. I've never actually heard of instrumentalism, but maybe that's more what I was attempting (and failed) to get at.
Somebody should make a journal that is dedicated to publishing replicated studies.
There should be grants for only doing replication studies, and replication studies should also give some level of recognition to the ones doing them.
Yes but right now most of the time replication will be rejected.
In the video he mentions that sites like this already exist.
retractionwatch.com exists. Also They need donations too.
Ironically since most basic research is government funded anyway there is absolutely no reason why this shouldn't already be happening.
Well this aged like wine
The problem with the approach to science is that nobody likes negative results. They aren't sexy. We always have to see "differences". And people feel pressure to find them no matter how tenuous they are. Because of this it's very difficult to correct problems in the literature.
And they all want big crazy findings. I made a comment above about the new study saying women are more attracted to altruistic guys. Journals know that popular media will be all over that so they will do anything they can to connect themselves to the study so they can have their name out there. Peer reviewed scientific journals are all about brand recognition just like any other business out there.
A lot of the biggest discoveries in science have been because of negative results. They are the sexiest forms of science.
Publication strategies of scientific findings are pretty unscientific. The dynamics of social prestige involved in publishing are clearly incompatible with the scientific method.
The xkcd "Jelly Beans" comic deserves a mention. I'm so glad it became popular because it illustrates the whole issue so well, and in just one frame. It should be required reading for the whole world!
@Fluffynator The scientists decide to test whether jelly beans cause acne. They do not get statistically significant results, so one of them suggests testing each of the 20 colours separately. When you break up the data into many categories, you increase your chances of a category showing statistically significant results by pure coincidence. This is essentially what happened with the chocolate study in the video. By monitoring many different categories of conditions (weight loss, sleep quality etc.), it was more likely that one of categories returned a false positive. The same thing happens in the comic. One of the 20 tested colours shows statistically significant results, which is not unexpected due to the number of categories they created. They publish the paper showing the (presumably) false positive with green jellybeans while the other 19 studies that correctly identified no relationship go unpublished and forgotten.
As a person that loves science but is not in the field, I’ve become quite disgusted by the lack of integrity shown by the university system. They have been corrupted to the core and need to be cleaned out. It’s become big business now and is not to be trusted if profit is the driving motivation, that’s not what universities are for.
I have no issue with for profit companies doing research and development as long as everyone knows where it’s coming from and is driven solely by profit and is treated as such.
I definitely understand that feeling. As a scientist who has spent a disenchantingly long time in academia, I still have faith in individual scientists and the prevailing winds of science overall. Look how far the world has come in such a short span of time (for good and bad). That progress is built laregly on a basis if good science; the bad stuff ends up getting filtered out. Universities absolutely operate for profit, but not everything that makes a profit is without merit in my eyes.
Apparently we need replication studies journal!
About darker skin, it makes sense that players with darker skin have bigger chance to get red carder, because lighter skin makes the player more pleasant looking. Did I just committed the ultimate sin of mankind: telling the truth?
@@seanleith5312 YOU NAZI!!
Jokes aside, you are very much in the right, and quite potentially right about the explanation as well
@@seanleith5312 Can you explain how?
@@Hekateras Here is how: take off your political correctness glasses, respect the facts, use reason over emotion. That's how.
@@seanleith5312 That's not an explanation. I asked you a pretty simple question.
"The Okay, the Bad and the Erotic" actually sounds like a reasonable movie name!
Yes!
Use me. I'll sign the release form.
Reminded me of "The Good, the Bad and the Ugly"
Interesting how dude also gone right from" Erotic" image to Slight "deviation" and then to "Pee" value phrase after phrase
The subtitle could be: "Get ready for a pee under one in twenty."
There's also the journal "Series of Unsurprising Results in Economics," which publishes results everyone would have expected.
I'm so glad someone finally spoke out about something I have been worried about for so long. I thought maybe it was just my misunderstanding because everyone else seemed to be deeply engaging in this toxic/false process 🤔, but now I know it is indeed problematic. Thank you so much!
@@anchordrop1476 That's very interesting. Could you tell me the the other ways your professor would want you to draw conclusions? Also do you know which types of models statisticians are favouring?
This is such an important video. It's so easy for people to find "a study" that can prove or disprove whatever you want for this very reason. That's why one of the first things I look for when someone shows a study/finding is the sample size.
Talik , why sample size over other parameters ?
I just looked at this video again, and I realized something: When you say that they don't publish as many negative results, you still assumed that they ONLY would publish TRUE negative results, not false ones.
true, though considering how few negative results get published, on that graphic it would only account for about one checkmark or maybe not even.
Academia is full of politics. Journals are trying to avoid vendictive publications.
That's what immediately stood out to me too.
Some people make fun of greek philosophers because some of their ideas are nothing but thoughts and speculation. Wait for the future perhaps they will make fun of us in the future for being biased
Yep, concerningly, 'thoughts and speculation' describes perfectly what is going on in parts of academia.
Philosophy is even more prone to bias.
computing science is not even a science, more like a prescription, too little testing of the hypothesis and too little formalism
@@MusaM8 Philosophy is like maths. The logic either adds up or does not after much scrutiny. Maths is pretty much the only field in research still maintaining integrity.
@@asumazilla Wrong maths does not obey the laws of nature. There are rules in maths that are based on the natural universe as we not it. If any of those are broken, then that maths is useless to us.
While getting a psych ba I wondered why journals are pretty much unregulated. The fact that a journal can publish findings then refuse to publish studies that disprove or refute them is troubling to say the least.