Nonspecific binding of antibodies is something I worry about too. We study a couple of genes in the lab and we have different antibodies aimed at their protein products. For a couple of them we have knockout animal models for controlling nonspecific signals, but for the rest we can only speculate about the results.
is science reliable? ...no it is not do we need science? ...yes we do can we live without it? ...yes we can how important is science? ...very very important do i like tacos? ...hell to the yes how are tacos made? ...science why am i asking these questions? i do not know study science stuff, kids.
I did a research project to complete my degree this year, and it really raised these issues to the forefront for me. We were testing whether or not arousal changed in people who played a game we had created. To do this we separated the subjects into two groups, and tested their arousal levels before and after they each played the game. The version of the game which each group played was also slightly different, to allow us to compare which factors of the game could change arousal in what way. After we got the results, we compared them in the appropriate way first, and it came back that it was not significant. I then noticed that if we compared the data in a totally different way, a way which was somewhat irrelevant to what we were actually looking for, we found a significant result. The latter result was that which we ended up presenting. The entire situation made me really uncomfortable, as I felt I was lying about the results of our research, but after talking to other members of my university (including someone who previously worked at a pharmaceutical company), I found out that it was a somewhat common practice. It just all seems dishonest to me.
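To make the problem described above concrete, here is a minimal simulation sketch (a purely hypothetical Python illustration, not the commenter's actual study or data): two groups with no real difference are compared several different ways, and only the "best" comparison is reported. The numbers (5 alternative analyses, 30 subjects per group) are assumptions for illustration, and for simplicity each analysis is treated as an independent look at the data.

```python
# Hypothetical sketch: trying several comparisons and keeping the one that
# "worked" inflates the false positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 10_000
n_per_group = 30
n_analyses = 5          # e.g. different outcome measures or groupings tried
alpha = 0.05

false_positives = 0
for _ in range(n_sims):
    # Both groups drawn from the SAME distribution: any "effect" is pure noise.
    # (Simplification: each analysis is an independent draw.)
    group_a = rng.normal(size=(n_analyses, n_per_group))
    group_b = rng.normal(size=(n_analyses, n_per_group))
    pvals = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    if min(pvals) < alpha:      # report whichever comparison looked significant
        false_positives += 1

print(f"Per-test false positive rate: {alpha:.0%}")
print(f"Rate when you keep the best of {n_analyses} analyses: "
      f"{false_positives / n_sims:.1%}")   # roughly 1 - 0.95**5, about 23%
```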
Well, a good amount of it is rooted in fact (measures of chemicals and activity in the brain, etc.). A good amount of psychology is also true, just not necessarily rooted in concrete fact. It's a slippery slope.
Kimberly Adams Like my nautical chum below you said, neuroscience is rooted in empirical facts/evidence/observation; psychology is a mistake. It's no more a science than homeopathy or astrology.
By defining science as a system that builds knowledge based upon testable explanations and predictions through the use of a particular methodology (the scientific method), the field of psychology qualifies as a scientific subfield. It's frustrating that many psychological studies have reproducibility errors, but there are many psychological theories that have been examined rigorously, a few examples being the bystander effect (the phenomenon where a greater number of bystanders correlates to a decreased likelihood of helping someone in need), as well as the relationship between trauma in early childhood and the onset of mental disorders in adulthood. There are robust theories in psychology; they just get buried under the "pop psychology" claims that make some argue the entire field's a pseudoscience. It's the greatest pet peeve of a psychology major (defending the field as a science). To any biology or chemistry majors out there, what's your least favorite question to hear someone ask? :)
I'm about to spend the entire day running stepwise regression for the first time to build a best fit model for my data... This video just sapped all my confidence in myself... Thanks Hank :?
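Since stepwise regression came up just above, here is a small cautionary sketch (my own illustrative Python, with made-up sizes of 100 observations and 40 candidate predictors, not any particular package's "stepwise" routine): a naive forward selection by p-value, run on pure noise, will usually still "find" significant predictors, which is exactly the kind of result that later fails to replicate unless it's validated on held-out data.

```python
# Minimal forward selection by p-value on pure noise (illustrative assumptions).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_obs, n_candidates = 100, 40
X = rng.normal(size=(n_obs, n_candidates))   # 40 candidate predictors, all noise
y = rng.normal(size=n_obs)                   # outcome unrelated to any of them

selected = []
improved = True
while improved:
    improved = False
    best_p, best_j = 0.05, None              # only add terms with p < 0.05
    for j in range(n_candidates):
        if j in selected:
            continue
        design = sm.add_constant(X[:, selected + [j]])   # constant + chosen terms
        p = sm.OLS(y, design).fit().pvalues[-1]          # p-value of the new term
        if p < best_p:
            best_p, best_j = p, j
    if best_j is not None:
        selected.append(best_j)
        improved = True

# With 40 noise predictors, selection usually picks a few "significant" ones.
print(f"Predictors selected at p < 0.05 despite y being pure noise: {selected}")
```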
It's a good thing Hank is doing the Philosophy series on Crash Course. Hopefully after they finish the series, scientists can use what they've learned to fix the problems with scientific studies.
My favorite story is from a Ph.D. doing a study on bats, where the funder was an environmental group wanting to prove that the agricultural practices of the local fruit farmers were affecting the bats' food supply. When he did the study, he found their main food source was actually local store dumpsters. He went back to the funders, and they increased the grant to remove the "human influence". So they did the study again, this time making sure the dumpsters were sealed or emptied daily. This pushed the bats back to the fruit farms, showing that the farmers' practices would affect the bats' food supply. So if you don't get the outcome you want, change the conditions to get the outcome you need.
One thing he didn't point out: Even if everyone is acting honestly and knows what they're doing (with statistical analysis, etc.), the fact that you can't publish negative results contributes massively to the replication crisis. It's not just the pressure to get a positive result leading to bad actors that causes issues. Let's say that ten different research groups all go after an extremely promising result. If p < 0.05 is the threshold and the effect turns out not to be real, there's still roughly a 40% chance (1 − 0.95^10) that at least one of those groups gets a significant result by chance alone, and that lone positive is the only one that ever gets published.
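A quick simulation of that scenario (my own Python sketch with assumed numbers: ten groups, 50 subjects per arm, no true effect) shows how often a non-existent effect still produces at least one publishable "positive":

```python
# Ten honest groups each test an effect that does not actually exist; only
# "significant" results get published, so the literature sees the false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_groups, n_per_arm, alpha = 5_000, 10, 50, 0.05

at_least_one_published = 0
for _ in range(n_sims):
    published = 0
    for _ in range(n_groups):
        treated = rng.normal(size=n_per_arm)   # no real treatment effect
        control = rng.normal(size=n_per_arm)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            published += 1                     # positive result -> gets published
    if published > 0:
        at_least_one_published += 1

# Analytically: 1 - (1 - 0.05)**10 is about 0.40
print(f"Share of dead-end leads with at least one published false positive: "
      f"{at_least_one_published / n_sims:.0%}")
```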
Having done several studies in linguistics, I have to say these problems are universal and not just related to science alone. Many of the problems mentioned, such as the inclusion of p-values, are something I had experienced firsthand, and even my supervisor said that "it is something that you need to know and need to include in an academic paper, but it has no significant value in the paper whatsoever". Going back to publishing studies, I firmly believe that pushing papers out in large quantities can damage the reputation of certain individuals, though it should be stated that I do hastily push out papers, as I have multiple papers in queue to send to supervisors, most of which have deadlines that need to be met.
HAHAH, I conducted some research in a laboratory at Mt. Sinai and we actually experienced similar trouble with a really simple oligo sample that was NOT of the sequence we'd ordered. Wasted a lot of time and PCR reagents trying to get our SDS-PAGE bands to actually show, and after trying everything under the sun we'd run out of our oligo, and the first try with the new solution worked like a charm. Really difficult to pinpoint problems like that when 10 different things and proportions could be the issue, or even someone's technique.
To add, replicating studies is also not considered worthwhile, and researchers are often pressured to conduct research that will receive more funding, as opposed to research on topics which interest them. Of course, choosing to research a topic which is more likely to have a profound [positive] impact is commendable, but in trying to publish "novel" research rather than a replication study, some of the resulting topics end up so contrived that they're barely applicable to anything.
Good show, thanks. Thanks also for saying "data are". Has anyone ever published a study showing the relative frequency of "data is" in scientific papers that are not actually referring to Brent Spiner's iconic Star Trek character?
This is why, when someone says "the science is in and the debate is over", it just keeps the debate alive for me. My favorite science teachers and public speakers say things like "this is what we think is going on" or other non-definitive statements.
Aren't the results of a study supposed to be replicated and peer reviewed before it's actually published in a journal or taken to a conference? Why is this problem popping up only now?
When I was in medical school I advocated among my classmates establishing The Journal of the Null Hypothesis. Of course, as a medical student, I lacked the knowledge and resources to create a whole new journal. I didn't, and still don't, know who might create that journal. I think it would really need to be many different journals based on subject matter. I also suspect it would be the heaviest journal on the shelves of university libraries. I think today it is more important than it was when I first conceived of it.
PLEASE READ: So, a few things I liked about this video. It provided a comprehensive assessment of the reproducibility crisis, taking information from different fields, and it looked at the problem from different perspectives (researchers, publishing standards, statistics, etc.). What I didn't like is the mischaracterization of psychology that people could take away from it, due to some missing info.

In the Open Science project (the one with the 100 studies), several psychologists (Daniel Gilbert, the TED Talk guy, and colleagues) analyzed the project's methods and found that several of the replications were poorly done (one, for example, was supposed to examine Americans' views on certain issues, but Italians were used instead). Plenty of others had problems more or less extreme than that. In addition, Gilbert showed that, due to sampling errors and too few replications, the replication statistic from the large project is probably heavily underestimated and definitely can't be generalized to all of psychology. In other words, 39% of studies replicating doesn't mean at all that only 39% of psychology is correct.

Also, replications have been used in the past and are still being used in psychology. There are many classic studies that are supported just fine, especially from distinguished researchers like Kahneman, Shelley Taylor, etc. (so no to the whole pseudoscience idea). In fact, behavioral genetics and cancer biology have it worse than psychology. Lastly, psychologists are probably more concerned than the public about this (and have known about it for 4 years now); they are devising new ways to vet studies, such as preregistration, to prevent experimenters from manipulating the results after analyzing the data. Keep in mind science is a process and an art where flawed practices are eventually replaced with better ones. So this is really a celebration for psychology research, as bad research practices are making their way out and more robust and stable knowledge is being cultivated. For more info, go to the APA website.
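One way to see why a raw replication percentage understates how sound a field is: statistical power. Here is a rough Python sketch (my own illustration with assumed numbers, not the Open Science Collaboration's analysis) in which every original finding is a genuinely real but small effect, yet modestly sized replications still succeed only about a quarter of the time.

```python
# Even real effects fail to "replicate" often when power is low, so a low
# replication rate is not the same thing as a low rate of true findings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n_per_group = 5_000, 40
true_effect = 0.3          # assumed small-but-real standardized effect

replicated = 0
for _ in range(n_sims):
    a = rng.normal(loc=true_effect, size=n_per_group)  # the effect really exists
    b = rng.normal(loc=0.0, size=n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        replicated += 1

print(f"Replication rate for a real d = {true_effect} effect with "
      f"n = {n_per_group} per group: {replicated / n_sims:.0%}")  # roughly 25-30%
```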
It's hard to confirm a cancer study. Every cancer is completely different. Some medicine can work extremely well on one patient and have no effect on some other patient.
Very interesting! I tried to do my undergraduate biology dissertation on the arbitrary nature of p-values and scientific significance, as I was getting a fair number of results around the significance level and it seemed strange to me that if they fell below a magical threshold then suddenly they were important results, but my supervisor wasn't interested... :/ Interesting to hear that it's getting more publicity now!
This is worrying. I've always believed in the scientific community as a pillar of reliability and honesty, and if this apparent problem isn't resolved then... damn :/ I hope steps are taken to make studies more reliable and accurate!
A favorite quote of mine "Science does not lie, but scientists do."
very true, I love it too
Now my favorite quote 😄
Some scientists *
I mean if they are calling shots 50 billion years in the future they are
Science, like religion, is based on faith and belief, not certainty.
This is a sentinel episode of SciShow. THANK YOU FOR THIS. IT IS INCREDIBLY IMPORTANT. BEST EPISODE YET.
A video on John Ioannidis's paper in PLoS Medicine would round out this concept in a nice way.
This video should be required viewing for every non-science media person prior to being allowed to report on anything science-related.
Don't you mean seminal
***** I see what you mean, but if you replace sentinel with guard or another synonym it wouldn't make sense, since it's not an adjective. It's just semantics anyway
I am actually absolutely shocked you are being so open about this. Skeptics have been pointing out the exact problems described in this video for years and only got mocked and ridiculed in response. Fanatical, dogmatic belief in the infallibility of the current publishing and peer review system has been so pervasive that I was not expecting open honesty from any science educators within my lifetime.
Within the community, most agree that the publishing system is broken, but there is little we can do to fix it because of lack of funding and how the community follows the highest impact journals with little regard to the science itself.
You're both right. It's very complicated and goes beyond the depth of what most uninterested people are willing to invest themselves in.
However, a scientific study being false doesn't mean its opposite conclusion is correct. Being a skeptic means being critical of the METHOD not the conclusions.
Being a skeptic is different from being logical. It is annoying, elitist, and alienating. I have seen skeptics shut down good participants who just didn't have the energy to explain every tiny detail.
Skeptics make good lawyers, if they're willing to put in the effort. I'm glad they are bringing the aforementioned errors in the science process to light so they may be addressed.
John Oliver just did a piece on this like a month ago
A phrase I heard thrown around in my Biomedical Research class is "publish or perish", and I think that's a pretty accurate summary of what you were describing from 5:24 to 6:34. It's definitely a real issue
I wonder how many people will interpret this video wrong.
How did you interpret the video?
The [number] is too damn high!
35 so far. Or they think what they read in really old fairytale books is true.
KK426LH
You should google "joke" and maybe "humor", they are some pretty interesting concepts, especially if they're new to you, or maybe just don't take things so literally.
+Zoltan slayer of trolls and lost sheeple there was no comedy in your comment, and that guy had no miserable tone; he was just stating a correction
Thank you very much for highlighting this! I feel like the worst factor is that replication studies are often considered "not worth the time" because, well, that thing has already been done and it wouldn't be as exciting for the publishing journals or even simply the scientists themselves.
More reliable than a 5000 year old story book
looool
I'm not even sure if literature existed in 3000 BC, and if it did, it must have been too primitive to write a "story book."
just so you know that book is very reliable. you can use it for a small height boost, to hit friends and to learn some choice things. pretty damn reliable that book is.
Funny, but irrelevant.
+John Smith you're hilarious
Very important topic!! Thanks for highlighting these issues.
I work in a biological sciences lab and holy crap THIS IS TOTALLY A THING AND ANNOYS THE BAGIZASS OUT OF ME. Not being able to publish results because of a lack of significance becomes extremely frustrating, because nobody wants to admit they've wasted an ungodly amount of time on something they thought would pan out. Changing the way you interpret the results can easily make data that shows little or no significance look very nice and very significant. Unfortunately, that's not proper science... but it happens a lot.
That sucks so much. We learn as much from our failures as successes as individuals so it's sad this doesn't apply to the scientific community as a whole.
Finally SciShow addresses the underlying problem of "the scientific publishing industry". I do agree with everything mentioned in this vid, such as:
+ The inconsistency of these studies
+ Human factors/errors
+ Lack of knowledge in math (which is being introduced/integrated into other fields now)
+ Pressure to publish "positive" results.
+ And the culprit MONEY, that's right MONEY
Well done on an unbiased report, SciShow!!!
The beautiful thing about science is that when we are wrong, it's not because there was a flaw in science, it's because there was a flaw in the PEOPLE doing it. We _CAN_ examine our beliefs and identify exactly where flaws might lie. It makes things more complicated, sure, but reality is complicated. It's a personal choice if someone, for instance, chooses to trust something radically flawed such as psychology. But we know it's flawed, and that performing legitimate scientific studies in that field is impossible (in the few cases where it IS possible, it turns out to be neuroscience, not psychology). If a subject is very important, and people care very strongly about it, we can go to the absolute limits of our willingness to tolerate error and ability to investigate the matter.
But in every single case where a thing previously held up as scientifically valid is found to be untrue, we can show exactly where prior scientists failed to meet the standards set by science itself. We believed leaded gasoline was safe because the standard of proof was stupendously bad: those doing the research did only a couple of horribly flawed studies that no one in their right mind should have trusted, but a few people cut corners to make their careers look better. And they're personally responsible for the deaths of millions of people, disease in many more, and an unknowable amount of human suffering as a result. They absolutely should have known better, and their names should really be more widely known so that people can spit when they think of them. And we should definitely remember them any time someone studies a handful of WEIRD (western, educated, industrialized, rich, democratic) undergrads and claims to have made a finding about the entire human race. And if those studies are used to back some side (regardless of whether it's "your" side or not) of an argument influencing public policy, they should be called out, and the people suggesting we affect people's lives based on them should be seen as the dangerous, shortsighted, biased, malicious people they are.
A real scientific study makes findings which are parsimonious - they are restricted to only the group from which their subjects were drawn (randomly). Any wider applicability of their findings may be reasoned about, and even conjectured, but only with ample warning to the reader that they are exactly that - mere conjecture. If a psychology study were to do this, and do it with integrity, the paper would end up being laughable. There would be 60 pages of irrelevant-sounding criteria that they didn't control for (doing a study on the effects of violent media on the youth? Did you take care to examine the diet of your subjects to ensure there was not a statistically abnormal number of vegetarians or junk-food-eaters or high-carb or high-fat or low-fiber people present? No? Then you have to explain that. I've never even seen a study on media effects on a population that controlled for consumption of the EXACT media they are claiming to have drawn conclusions about! Not once! 90% of their study participants could have just gotten off a weekend-long Halo LAN party binge and they would not have a clue, because they don't even care enough to ASK. The vast majority of studies also totally conflate witnessing actual violence in person and watching violence on a screen, making the completely baseless assumption that all of the brain processing involved in taking a flat image at the wrong proportions - while every single other sense is feeding completely contradictory sensory information - and figuring out that it's supposed to represent a scene with human beings has zero effect upon the brain's ability to process it differently from in-person experience of actual violence, which puts people in real danger. That, not violent media, makes me want to slap somebody.)
The terrible truth that most people don't want to realize is that there is no free lunch, and no simple answers. Even things which appear simple are complex. The desire for simplicity is understandable, an outgrowth of the base structure of the human brain, universal, and utterly wrong. Like most of our traits that came from long evolution, it is perfectly honed to make humans into survival machines - on an African savannah with no language skills and in a small tribe surrounded by abundant food and predatory animals where 'survival' meant making it a little past puberty and producing at least 1 child as young as possible. For a situation where we have language, complex social structures, no danger from predatory animals, and advanced stores of knowledge, everything that comes from that evolutionary development is misleading at best, and most often dangerously destructive. It's what enables a few researchers to trick themselves into thinking that cutting a few corners and not "being a hardass" about scientific rigor will be harmless, so that before they know it millions of people are profoundly and negatively affected by those little errors.
ikr
People on youtube don't really come for such long essays.
crazy808ish
Many don't, but some are interested. Luckily the interface makes it super easy to skip over anything of substance if you're not one for reading.
Good thoughts. You raised leaded gasoline and media studies as examples, but my mind immediately went to Zimbardo's prison experiment. Wildly unethical, poorly designed, and its questionable findings are still thought to apply universally to all of humanity.
Geoff Ronan I'd say that some of the findings do, but only because they're backed by many other studies... and also cross species boundaries and have a plausible neurological basis. But yes, there were serious problems with that experiment. I'm not sure they could have known that it was going to be unethical before the study was performed, though; most people would have assumed it was obvious that environment could not possibly change ethical, reasonable people into either overbearing or submissive animals.
So glad you guys made this video. I'm an undergraduate researcher and my PI and postdocs have been complaining about this for a while now.
You can't expect serious comments in less than 15 minutes after the initial upload time; the comment section is literally only shitposters
Like your comment? _-_
I hope you and your family get cancer
so creative.....uhhhh.
+Elephant Warrior interesting study indeed. We should officially research this phenomenon and see if it is in fact true or if it's just purely coincidental
the fact that anyone expects serious comments at all in this cesspool is blowing my mind.
Probably one of the MOST significant videos done to date. This is a VERY REAL and troubling issue, and it goes beyond just the replication issue. Many medical researchers are NOT even doing studies on NEW or ALTERNATIVE drugs/treatments/etc. in favor of doing a study on an OLD drug used for a different treatment or in a different way. This goes back to the search for positive results versus the search for NEW results. The entire system of publishing/review and MONEY is in (or should be in) crisis.
Hats off for bringing this topic to light... hopefully broad knowledge of how poor the situation is will encourage those in power to make positive changes... even if it means publishing NEGATIVE results. (gasp!)
Science is the most reliable thing we got, so I guess I'll take my chances.
Sir would you like to ride in my rocket ship? I promise it's safe 😏
Such an important topic! Thanks for covering it!
The science of science should be called, wait for it, "Scientology" wait, that's not right.
I wish I could work for SciShow... I would love to do research for their shows, it's amazing, and I can't learn enough!
I find it interesting that many religious believers see the built-in error correction of the scientific method as a flaw, rather than a feature.
Thank you! *Finally* someone is addressing this to the public - Keep up the good work!
Oh God, P-Values are my trigger words. I'm getting horrible flashbacks to my advanced statistics class. If this gets any worse, I'm going to have to create a Patreon.
_p-values_ , _p-values_
But what if the p-value shows that those flashbacks aren't happening because of the mention of p-values?
TRIGGERED
Yep, when I heard "P-values" I broke out in a cold sweat and a shiver ran down my spine...
I'll support your patreon bbygrl , you are a special snowflake don't ever let no one tell you youre a -rabidfeminist- p-value
It's a real shame that negative results never get published. Even learning that a method *doesn't* work can be important.
The scientific method is always reliable. But sometimes, mistakes happen.
Like your birth OHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
+Jordan Shank you realize that you just contradicted yourself, right? The method ALWAYS works but MISTAKES happen.
The point of the original comment was "The method is not the problem; it is the mechanism by which the method is utilized that is prone to failure."
Though, yes, the original comment does explicitly say something else.
Wrong. The scientific method suffers from a lot of problems in the social sciences. And you can't scientifically test math, or logic.
Superb video. Thanks for the great work!
Science is reliable, except if you're taking a science test.
true
Science is definitely not reliable...
it is; just saying it isn't doesn't prove that it isn't. Most often, people whose personal views are refuted by science make these claims
Well then I guess u won't mind if we take away ur unreliable internet, computer, phone, medicine....I could go on for months
Science is reliable....just that people sometimes aren't.....and money.
I am a high school science teacher and this is one of the best/most succinct videos I have ever watched on scientific studies and publishing. I will definitely be showing this to my AP classes.
I liked the one of John Oliver more though :-p
[CURRENT YEAR]
This is why our research group uses online databases to publish our raw measurement data. We do electrical engineering, especially in wireless communications technology, so we have been trained to do statistics appropriately.
But writing a paper to publish results is still hard, sometimes reviewers just don't have any expertise in the field and totally misinterpret it.
How data is interpreted is 100% of the issue... the truth could stare you in the face, but after enough rigorous data treatment a result could end up as nothing. Nothing could end up as something, with bias mixing it up further (whether with intent or not).
Peer-reviewed articles are good in theory, but in practice, as the quote attributed to Einstein goes:
Doing the same thing expecting different results is insanity.
I'm in college and I had to do educational research. The one major problem I came up against is that most research journals, magazines, and so on all require some sort of fee to access articles. The reason I'm bringing this up is that if a free online journal were to come out which allowed research articles to be published, the work could get out to more people, allow more researchers to test other teams' work, and also allow negative results to be published.
I'm glad someone beat me to this type of comment. You are completely right that you have to pay journals money in order to access articles, and they are not cheap. But one thing that might surprise you is that in those pay-to-access journals, if the author wants to make his/her work free to the public, he must pay the journal a fee to do so. There are "open access journals" that may, but usually do not, require their authors to pay in order to make their papers free. However, the consequence is that open access journals are currently perceived as "not prestigious", so high-ranking scientists tend not to publish in them, in order to maintain funding and reputation. You should check out this database of open access journals here: doaj.org/
Thank you very much! I will head there right now.
creationist are gonna use this argument xd
Dufuq is a cracionist? I know what a creationist is, but a cracionist?
Joe Sunder sorry I am learning english xd
**tips fedora**
Better than some who have spoken it all their life.
creationism isn't anti science.
It's probably your best video so far! great work!
The problem is: You can either do good science - or have a job.
In dietetics research, reputable research labs will have everyone working on the experiment complete research ethics training courses. The employee or volunteer then takes quizzes indicating that they understood the modules. That way, if there is any infraction, that person can be held accountable for their actions, because they knew it was wrong to alter the findings, tamper with data, etc. Some universities are also now offering at least a 1-credit-hour course in research, teaching students how to recognize flaws in research, where to find reputable research, and to always have their research peer reviewed by other reputable researchers.
So the answer to the question in the title is, "Not all the time, but scientists are trying to improve"?
Basically
I agree
The scientific method is 100% reliable if followed properly. Humans are not reliable.
If I do a test to see if the light in the fridge goes off when I close the door, I could put a camera inside and open and close the door 300 times to see if the light goes off. But if the person watching the camera lies to me, then I'll get incorrect results. Otherwise there is no fault in the method.
No, the scientific method is never 100%. There will always be anomalies and artifacts in results that can be misinterpreted as false positives and negatives. The methods are designed by humans, so to say one is reliable when the other isn't is just wrong - perhaps you mean the scientific principle/theory behind a method, which yes, in theory, is 100%, but again this is a model based on human interpretation of results - you are assuming from the evidence collected that a method behaves in a certain way. Basically the entire video is about how the methods of confirming antibody interactions are flawed.
Well, that is pretty much how science has always worked. Science isn't 100% accurate but it is self-correcting.
Profit-oriented organizations disguised as scientific communities are the worst of the problems. Always be skeptical about the results. Greed is the ultimate motivation for such organizations/companies.
This is crazy. I had no idea this was going on, to be honest.
I feel ya brother.
You should watch the show Adam Ruins Everything. He tells you a lot of stuff you didn't know was going on...
Maybe not as crazy as it's put here; read my post
Good. Next time someone argues with you, think carefully and research the subject. Don't just mindlessly demand sources.
+Garrett Robinson Actually the video title was a bit misleading. Did you notice how all of Hank's examples come from the medical community? I wish they had titled this video "is MEDICINE reliable" instead, because this does not apply to all of science as a whole. It's because medicine is more complex than any other science, simply because it deals with humans. Whether it's their physical health or mental health, humans are INCREDIBLY complex and very difficult, expensive, and time-consuming to study, and so repeat studies are rare. But they're only rare in medicine. The problems Hank discussed here really don't apply much to other branches of science, and certainly not to this degree.
wheeeww.. it's sooo refreshing to find unbiased, well-put-together content.
great job guys.. love it
Science is reliable, it's just very unpredictable sometimes, such as in the case of psychology studies where every human's life is different, so you can never really expect to get the same results as before unless it's the same people being tested. And even then, things might have happened in their life that changed how they feel anyway.
That's why the law of large numbers is a thing; the same applies to all branches of science, and that's why big sample sizes should be the standard.
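A tiny Python illustration of that point (my own toy numbers, nothing from the video): the spread of the estimated mean shrinks as the sample size grows, which is the law of large numbers at work.

```python
# With small samples the measured "effect" bounces around wildly; with large
# samples it settles near the true value (assumed here to be 0.2).
import numpy as np

rng = np.random.default_rng(4)
true_mean = 0.2

for n in (10, 100, 1_000, 10_000):
    # 200 repeated experiments, each estimating the mean from n noisy observations
    estimates = rng.normal(loc=true_mean, size=(200, n)).mean(axis=1)
    print(f"n = {n:>5}: estimates range from "
          f"{estimates.min():+.3f} to {estimates.max():+.3f}")
```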
What is the definition of the scientific method?
+helidodge observe... describe what you observed... think about how what you observed came to be... tell others about what you think happened... repeat until accepted... if accepted/confirmed independently, then what you think happened to cause your observations becomes accepted Theory
helidodge Observe a phenomenon, postulate a hypothesis about the phenomenon, try to disprove it via experiment. If enough people can't do it it becomes a theory.
The scientific method:
1. observable
2. measurable
3. repeatable
4. predictable
If any of these fails, then it's not science.
I'm glad this is being addressed, I feel like conflicting studies about the same subject are coming out every week
Title is a little misleading. Sounds like science is reliable. But scientific research and publications are not.
Being a scientist myself, I can really relate to the frustration of erroneous results in published journals and the heap of bad results hidden underneath the veneer of a few good ones. Scientists don't publish negative results because it makes their work look bad and people wouldn't read it. Fewer readers mean fewer citations, and therefore a lower impact factor (the metric used to judge how good the work is). It's sort of like a marketing strategy to only publish the best results, like how a shampoo company will advertise the benefits of using its shampoo and not the fact that it basically does nothing extraordinary.
Not to mention that some researchers prefer to keep the methodology extremely brief, almost as if they were trying to prevent the technique from being copied.
All the "S"s in his speech are sharp and louder compared to everything else, anyone know why? Is it just me?
It amazes me how scishow always uploads a video of the subjects of my exam 1 day before my exams.
Another problem, then: reform the teaching of maths. A lot of students are disgusted by mathematics, mostly because of competition (very elitist classes, with progression based on the few best people), humiliation (we should have the right and the time to fail and try until we understand, instead of relying on memory only), and a lack of pedagogy and money to be able to help everyone... Maths are difficult, and often not intuitive. I'm doing geology studies, and yep, the maths and physics teaching is awful...
I think you can apply that to the whole education system.
It seems that most kids end up hating most of what is taught in school.
Math, unlike art and English and many other subjects (don't stab me, liberal arts people, I'm sure they're challenging in their own ways), simply requires you to get frustrated and deal with being lost and confused until things "click." Many people are not used to this hurdle, and assume they're just "bad" at math. But they're not: math really is just that hard for 99% of people.
But since a solid grasp of mathematics is required for a vast majority of the best, most reliable job markets out there, kids should just be made to suck it up and get used to it. Sorry, but it's true. Life sucks; at least you're not in a slave state or being mauled to death by a tiger. I'm not saying it couldn't be taught better: there's always room for improvement. But I know so many parents who blame their kids' poor math grades on everything but their kid's lack of motivation. Gimme a break, people!!!
I disagree. I have seen several cases of kids who were complete failures at math or science. One of them was even considered borderline. That lasted until they found a good teacher who motivated them, until they began to study by themselves.
The one people thought was borderline, the last time I met him he had a Marie Curie grant.
So maybe sitting a lot of kids for hours in a class with a teacher who may or may not be motivated, just lecturing at them, is not a good method.
Also, English is not so easy when you are not a native speaker XD
animo005 True, true. Can't deny that the teacher makes a big difference. But you could have the best teacher in the world: the students STILL have to deal with being frustrated and confused while mastering new topics. And that's the point at which many parents start with their whinging...
Oh, and all due respect to anyone who has to learn English as their second language. It's a bastard of a language...
dev02ify lol... and for all I know, you're being completely serious! :-P
Last time I was this early... Science was actually reliable
God lol
lol
Well I mean, science never fits in with my ancient book of fairy tales inspired by DA LAWD GAWD. So how can it be reliable?
But it does...
+SuperSMT
Law of Conservation of Mass and Energy...
Big Bang theory and Dark Energy seem to violate the Law of Conservation of Mass and Energy. Can you reconcile these for me?
*****
Sujay seems to have summed up the Big Bang quite nicely. Dark energy isn't something taught in elementary school, unlike the Big Bang, so I will assume that you are strawmanning dark energy as well as the Big Bang. It's hard to assume otherwise when you've misinterpreted a much simpler theory.
Oreki Houtarou If all of the energy was contained in a near-infinitely dense point, then what caused the expansion? Since energy likes to remain in a stable state, what caused it to expand in the first place? How about this: an object at rest will stay at rest unless acted on by an outside force. What was the outside force that kicked off the Big Bang? Sorry guys, I was thinking a step further than the theory of the Big Bang. I admit I am no expert in astrophysics (or the other two branches). It only seems to me that something or someone kicked off the Big Bang. Now, dark energy definitely isn't for grade school, so let NASA give a description: "One explanation for dark energy is that it is a property of space. Albert Einstein was the first person to realize that empty space is not nothing. Space has amazing properties, many of which are just beginning to be understood. The first property that Einstein discovered is that it is possible for more space to come into existence. Then one version of Einstein's gravity theory, the version that contains a cosmological constant, makes a second prediction: "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands. As more space comes into existence, more of this energy-of-space would appear. As a result, this form of energy would cause the Universe to expand faster and faster. Unfortunately, no one understands why the cosmological constant should even be there, much less why it would have exactly the right value to cause the observed acceleration of the Universe." Go ahead and argue with Einstein. :D
thanks for these awesome videos scishow!!
The statistical knowledge of Psychologists is really bad.
Source: i'm a psychologist.
I am a scientist in training and this is a very interesting video. Thank you for making it! I am in school to be a microbiologist, and it is very eye-opening to know that some of the primers I've been using might not be exact. Maybe this is why my protein gel is not working! lol Keep up the good work!
Unfortunately, Hank repeats a common misconception when he states that "a p-value tells you the probability that the [positive] results of an experiment were a total coincidence," in other words the probability of the null hypothesis n given the positive result p, P(n|p). In fact, p-value is the probability of the positive result p given a null hypothesis n, P(p|n). These two concepts are mathematically different. Past conflation of P(n|p) with P(p|n) has resulted in false conviction and loss of life.
With a proper understanding of p-value and Bayesian probability, one should actually not be surprised that significantly more than 5% of studies with p ≤ 0.05 could not be replicated.
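To make that distinction concrete, here is a rough worked example (all the numbers are my own assumptions, not figures from the video): even when every study applies p ≤ 0.05 honestly, the share of "significant" findings that are actually false depends on how many tested hypotheses are real and on statistical power, and it can sit far above 5%.

```python
# Rough worked example (assumed numbers, purely illustrative): how many
# "significant" findings are false, even if everyone uses p <= 0.05 honestly.

alpha = 0.05        # chance of a significant result when the null is true
power = 0.80        # chance of a significant result when the effect is real
prior_real = 0.10   # assumed fraction of tested hypotheses that are actually true

n = 1000                                   # hypotheses tested
true_effects = n * prior_real              # 100 real effects
null_effects = n * (1 - prior_real)        # 900 null effects

true_positives = true_effects * power      # 80 real effects detected
false_positives = null_effects * alpha     # 45 nulls that still hit p <= 0.05

# P(null | significant): the quantity people often confuse with the p-value itself
p_null_given_sig = false_positives / (true_positives + false_positives)
print(f"Share of significant findings that are false: {p_null_given_sig:.0%}")  # about 36%
```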
this is a great overview, really appreciate the citations
So in short, it's not science that's unreliable, it's the people doing it?
Nonspecific binding of antibodies is something I worry about too. We study a couple of genes in the lab, and we have different antibodies aimed at their protein products. For a couple of them we have knockout animal models to control for nonspecific signals, but for the rest we just have to speculate about the results.
is science reliable? ...no it is not
do we need science? ...yes we do
can we live without it? ...yes we can
how important is science? ...very very important
do i like tacos? ...hell to the yes
how are tacos made? ...science
why am i asking these questions? i do not know
study science stuff, kids.
the most pointless comment ever^
I did a research project to complete my degree this year, and it really raised these issues to the forefront for me. We were testing whether or not arousal changed in people who played a game we had created. To do this we separated the subjects into two groups, and tested their arousal levels before and after they each played the game. The version of the game which each group played was also slightly different, to allow us to compare which factors of the game could change arousal in what way.
After we got the results, we compared them in the appropriate way first, and it came back that it was not significant. I then noticed that if we compared the data in a totally different way, a way which was somewhat irrelevant to what we were actually looking for, we found a significant result. The latter result was that which we ended up presenting. The entire situation made me really uncomfortable, as I felt I was lying about the results of our research, but after talking to other members of my university (including someone who previously worked at a pharmaceutical company), I found out that it was a somewhat common practice. It just all seems dishonest to me.
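For anyone curious how much that kind of re-slicing matters, here is a rough simulation (my own sketch, not the commenter's actual analysis; the number of alternative comparisons is an assumption): even when there is no real effect at all, letting yourself pick among several post-hoc comparisons pushes the false-positive rate well above the nominal 5%.

```python
# Rough simulation of the "pick your comparison" problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
n_subjects = 40
n_alt_splits = 10          # assumed number of "other ways" to slice the data

hits_planned = 0           # significant with the single pre-specified comparison
hits_any = 0               # significant with at least one post-hoc comparison

for _ in range(n_experiments):
    data = rng.normal(size=n_subjects)      # no real effect anywhere
    # Pre-specified test: first half vs second half
    if stats.ttest_ind(data[:20], data[20:]).pvalue <= 0.05:
        hits_planned += 1
    # Exploratory tests: random regroupings of the same subjects
    for _ in range(n_alt_splits):
        idx = rng.permutation(n_subjects)
        if stats.ttest_ind(data[idx[:20]], data[idx[20:]]).pvalue <= 0.05:
            hits_any += 1
            break

print(f"False-positive rate, one planned test: {hits_planned / n_experiments:.1%}")
print(f"False-positive rate, pick-your-test:   {hits_any / n_experiments:.1%}")
```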
Are we all pretending psychology isn't pseudoscience?
Well, a good amount of it is rooted in fact (measures of chemicals and activity in the brain, etc. etc.) A good amount of psychology is also true, just not necessarily rooted in concrete fact. It's a slippery slope.
+Kimberly Adams That's neurology and psychiatry; psychology doesn't deal in chemicals. It's borderline pseudoscience.
Kimberly Adams Like my nautical chum below you said, neuroscience is rooted in empirical facts/evidence/observation; psychology is a mistake. It's no more a science than homeopathy or astrology.
No, it's a science, it's just that explaining people is fucking hard.
By defining science as a system that builds knowledge based upon testable explanations and predictions through the use of a particular methodology (the scientific method), then the field of psychology qualifies as a scientific subfield.
It's frustrating that many psychological studies have reproducibility errors, but there are many psychological theories that have been examined rigorously, a couple of examples being the bystander effect (the phenomenon where a greater number of bystanders correlates with a decreased likelihood of helping someone in need) and the relationship between trauma in early childhood and the onset of mental disorders in adulthood. There are robust theories in psychology; they just sit underneath the "pop psychology" factoids that make some argue the entire field is a pseudoscience.
It's the greatest pet peeve of a psychology major (defending the field as a science). To any biology or chemistry majors out there, what's your least favorite question to hear someone ask? :)
NOW this is a good video. Hank, you have done it again.
first... oh god kill me.
Why would you want that?
you don't have to
*slow clap*
Ok.
no I'm busy
This is like finding out that the measuring tapes at a construction site are not all the same...
As an electronics engineer, calculations will only get you so far.
A dangerous, but important video, and published on my birthday! Good job.
Thank you
This is melting my brain ......ahhhh......
I'm about to spend the entire day running stepwise regression for the first time to build a best fit model for my data... This video just sapped all my confidence in myself... Thanks Hank :?
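For what it's worth, one way to keep a stepwise search honest is to score each candidate feature subset on held-out data rather than on the in-sample fit. Below is a minimal sketch using scikit-learn's SequentialFeatureSelector on synthetic data; the dataset, feature counts, and subset size are all illustrative assumptions, not a recipe for the commenter's real data.

```python
# Minimal sketch of forward stepwise selection scored by cross-validation.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Fake dataset: 15 candidate predictors, only 4 of which actually matter.
X, y = make_regression(n_samples=200, n_features=15, n_informative=4,
                       noise=10.0, random_state=0)

selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=4,    # assumed; in practice compare several sizes
    direction="forward",
    cv=5,                      # score each candidate subset on held-out folds
)
selector.fit(X, y)
print("Selected feature indices:", selector.get_support(indices=True))
```

Cross-validating the selection step is what keeps "best fit" from simply meaning "whichever subset happened to fit this one sample best."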
I am a science student, and this video does speak to my concerns about being told to use "prepared" materials bought in batches, like antibodies.
It's a good thing Hank is doing the philosophy series on Crash Course. Hopefully after they finish the series, scientists can use what they've learned to fix the problems with scientific studies.
Extremely good videos! :D
I'm a huge fan of your work!!
My favorite story is from a Ph.D. doing a study on bats, where the grant came from an environmental group that wanted to prove that the agricultural practices of the local fruit farmers were affecting the bats' food supply. When he did the study, he found their main food source was local store dumpsters. He went back to the funders, and they increased the grant to remove the "human influence." So they did the study again, this time making sure the dumpsters were sealed or emptied daily. This drove the bats back to the fruit farms, showing that the farmers' practices would affect the bats' food supply. So if you don't get the outcome you want, change the conditions until you get the outcome you need.
One thing he didn't point out: even if everyone is acting honestly and knows what they're doing (with statistical analysis, etc.), the fact that you can't publish negative results contributes massively to the replication crisis. It's not just the pressure to get a positive result creating bad actors. Let's say ten different research groups all go after an extremely promising result. If p < 0.05 is the bar, then even when the effect isn't real, there is a decent chance that at least one of those ten groups gets a "significant" result by chance, and that is the one that ends up published while the other nine null results stay in a drawer.
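A back-of-the-envelope check of that scenario (my own numbers, purely illustrative):

```python
# If an effect is NOT real and ten independent groups each test it at p < 0.05,
# the chance that at least one group gets a publishable "positive" result is:
p_at_least_one = 1 - (1 - 0.05) ** 10
print(f"{p_at_least_one:.0%}")   # roughly 40%
```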
this is a really important topic and i'm glad you made this episode.
Having done several studies in linguistics, I have to say these problems are universal and not limited to the natural sciences. Many of the problems mentioned, such as the inclusion of p-values, are things I have experienced firsthand; even my supervisor said "it is something that you need to know and need to include in an academic paper, but it has no significant value in the paper whatsoever." As for publishing, I firmly believe that pushing papers out in large quantities can damage an individual's reputation, though I should admit that I hastily push out papers myself, as I have multiple papers queued to send to supervisors, most of which have deadlines that need to be met.
Good episode. Important issue.
First step to solve a problem is to acknowledge it. It's a good thing that science does introspection from time to time.
very awesome topic to do an episode on!
Why can I like this video only once? I want to improve its positive results! Dammit!
Research can be so frustrating at times. I'm starting to rethink this whole career decision.
simply amazing
This was exactly the thrust of my PhD!
I'm in psychology statistics. I'm taking my final tomorrow, and this video helped me. Thanks!!!
HAHAH, I conducted some research in a laboratory at Mt. Sinai, and we actually experienced similar trouble with a really simple oligo sample that was NOT the sequence we'd ordered. We wasted a lot of time and PCR reagents trying to get our SDS-PAGE bands to actually show, and after trying everything under the sun we ran out of that oligo; the first try with the new batch worked like a charm. It's really difficult to pinpoint problems like that when 10 different things and proportions could be the issue, or even someone's technique.
To add, replicating studies is also not considered worthwhile, and researchers are often pressured to pursue research that will receive more funding, as opposed to topics that actually interest them. Choosing to research a topic that is more likely to have a profound [positive] impact is commendable, of course, but in the rush to publish "novel" research and avoid doing a replication study, some of the topics chosen end up having little real applicability.
Good show, thanks. Thanks also for saying "data are". Has anyone ever published a study showing the relative frequency of "data is" in scientific papers that are not actually referring to Brent Spiner's iconic Star Trek character?
This is why, when someone says "the science is in and the debate is over," it just keeps the debate alive for me. My favorite science teachers and public speakers say things like "this is what we think is going on," or other non-definitive statements.
Aren't the results of a study supposed to be replicated and peer reviewed before it's actually published in a journal or taken to a conference? Why is this problem only popping up now?
Thanks for an idea for a paper for my PhD in economics.
When I was in medical school I advocated among my classmates establishing The Journal of the Null Hypothesis. Of course, as a medical student, I lacked the knowledge and resources to create a whole new journal. I didn't, and still don't, know who might create that journal. I think it would really need to be many different journals based on subject matter. I also suspect it would be the heaviest journal on the shelves of university libraries. I think today it is more important than it was when I first conceived of it.
This is an amazing show..
Excellent video
PLEASE READ: So a few things I liked about this video. It provided a comprehensive assessment of the reproducibility crisis taking information from different fields. It also looked at the problem from different perspectives (researchers, publishing standards, statistics, etc.).
What I didn't like is perhaps the mischaracterization of psychology that people could glean from it, due to some missing info. In the Open Science project (the one with the 100 studies), several psychologists (Daniel Gilbert, the TED Talk guy, and colleagues) analyzed the project's methods and found that a number of the replications were poorly done (one, for example, was supposed to analyze Americans' views on certain issues, but Italians were used as subjects instead). Other problems were more or less extreme than that.
In addition, Gilbert had shown that due to sampling errors and not enough replications, the statistic of replications from the large project is probably heavily underestimated & definitely can't be generalized to all of psychology. In other words, 39% of studies replicating doesn't mean at all that 39% of psychology is correct.
Also, replications have been used in the past and are still being used in psychology. There are many classic studies that are supported just fine, especially from distinguished researchers like Kahneman, Shelley Taylor, etc. (so no to the whole pseudoscience idea). In fact, behavioral genetics and cancer biology have it worse than psychology.
Lastly psychologists are probably more concerned than the public about this (and have known about it for 4 years now), they are devising new ways to accept new studies such as preregistering the studies to prevent experimenters from manipulating the results after analyzing the data, etc. Keep in mind science is a process and an art where flawed practices are eventually replaced with better ones. So yeah, this is a celebration for psychology research as bad research practices are making their way out and more robust and stable knowledge is being cultivated.
For more info, go to the APA website.
It's hard to confirm a cancer study. Every cancer is completely different; a medicine can work extremely well on one patient and have no effect on another.
SciShow, you will need to make another video, much like this one, to convince me. Thanks.
Incorrect sample sizes are a significant factor as well, and that too is often driven by money concerns.
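On that point, an a-priori power calculation is the usual sanity check before the budget decides the sample size. Here is a minimal sketch with statsmodels; the effect size is an assumption (a "medium" standardized difference), not a number from the video.

```python
# Quick a-priori power calculation for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # assumed effect size
                                          alpha=0.05,
                                          power=0.8)
print(f"Subjects needed per group: {n_per_group:.0f}")   # roughly 64
```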
Can you do one on cerebral aneurysms? I had one rupture at 19 and would love to learn more!
This is absolutely terrifying
Very interesting! I tried to do my undergraduate biology dissertation on the arbitrary nature of p-values and scientific significance, as I was getting a fair number of results near the significance level and it seemed strange to me that results just below a magical threshold were suddenly important. My supervisor wasn't interested, though... :/ Interesting to hear that it's getting more publicity now!
This is worrying. I've always believed in the scientific community as a pillar of reliability and honesty; if this apparent problem isn't resolved then... damn :/ I hope steps are taken to make studies more reliable and accurate!
Great video!
Flash back of my cell biology course last semester.
That's why we have quality assurance and goods inward inspections.
Science itself isn't what should be put in question. It's our methodology that is flawed, whether individual or collective.