The Toxic Metric Ruining Academia [Researchers' Worst Nightmare]

  • Published on 15 Sep 2024

Comments • 248

  • @me0101001000
    @me0101001000 9 months ago +105

    Hirsch was one of my professors in undergrad. I did not enjoy his classes, and it seemed like he was forced to teach when he would rather just be running his lab. I didn't like him as a professor, but learning about this "contribution" to academia made me pretty upset.

    • @ajthevillageidiot
      @ajthevillageidiot 8 months ago

      Wow, he really made an impression on you huh?

    • @jessicascoullar3737
      @jessicascoullar3737 8 months ago +15

      I had a Cell Biology lecturer who was forced to lecture. He couldn’t speak English and no one could understand a word he said. He had published over 400 papers so was considered a great researcher and therefore had to teach. I don’t know why universities do this.

    • @me0101001000
      @me0101001000 8 months ago +14

      @@jessicascoullar3737 I think you do know why. You just hate it. I do, too.
      These big name researchers bring in a lot of money via grants, fellowships, and contracts. He may not even do any of the work himself, but instead just throws an army of postdocs and students at his projects over and over again. He gets lots of money, the university takes a healthy share of it, and the students get prestigious publications to their name. Why does he need to worry about teaching, mental health, or work-life balance, when he's the biggest cash cow in the department?
      I oversimplified the situation of course, but that's often how it goes.

  • @sunway1374
    @sunway1374 9 months ago +62

    I'll give you a concrete example of why the H-index is a bad way to tell if someone is a good researcher. James Maynard - professor of mathematics at the University of Oxford, FRS, Fields medalist, ... His H-index is 13. In his own department, there are postdocs who haven't done anything significant with a higher H-index, and some professors who stopped doing research themselves decades ago have H-indexes over 50.

    • @nolanalexander8696
      @nolanalexander8696 8 months ago +4

      This is sad, and it happened at Oxford, where someone is already considered "privileged" or "elite" once they're tenured or hired at one of its institutions. So even with few publications in some "big (predatory?) journal", once they get tenure and that Oxford credential, no one cares about their H-index anymore, because they are at the top already. All while students, researchers and lecturers at universities in third-world countries, in Southeast Asia, are running like hell to raise their H-index just to be seen as equal to those from "elite, world-class, western" universities.

    • @sunway1374
      @sunway1374 8 months ago +7

      @@nolanalexander8696 Not disagreeing on anything you say. But I was just giving an extreme example of why h-index is useless to indicate how good a researcher is. Although it is extreme (in the sense that James Maynard has a low h-index for being an extraordinary researcher), it is not special. Given two researchers from the same field with different h-index values, we would have no idea which one is a better researcher with just this info.

    • @jedediahjehoshaphat
      @jedediahjehoshaphat 8 months ago

      Agreed, the assessment of significance in academia should be based solely on the novelty of research papers. And in this regard, the H-index is a cancer.

    • @Neverendless92
      @Neverendless92 8 months ago

      Not only that, the Google h-index takes self-citations into account, so I know of people who have a higher h-index because they include self-citations in their works. I hate it here....

    • @bardsamok9221
      @bardsamok9221 3 months ago

      "It's sad!", "It's bad!" - get over yourselves, hysterical complainer nonsense.
      No one who needs a Fields medalist is going to reject them because of their h-index.
      Such nonsense.
      Obviously publication quality will be checked, as it was long before the h-index.

  • @TheYoungtrust
    @TheYoungtrust 9 months ago +86

    Journals are a racket too. Science made major leaps before the post-World-War-Two era in which these kinds of journals came to exist, and it has been slowly stagnating since. With the internet, we don't even need publishers.

    • @bradbellomo6896
      @bradbellomo6896 8 months ago +8

      Do you think we need peer-review?

    • @TheYoungtrust
      @TheYoungtrust 8 months ago

      @@bradbellomo6896 We need to verify results, not suck the *** of publishers

    • @HaoSunUW
      @HaoSunUW 8 months ago +4

      I'll tell you what my supervisor told me: if you want an academic job, you've got to have publications in the top conferences; WAOA isn't gonna cut it. I'm not saying the work's not good, but nobody's got time to read all your papers.

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 8 months ago

      @@bradbellomo6896 Open Review is a thing, saying only publishers can facilitate peer review is foolish.

    • @graemem111
      @graemem111 8 months ago +1

      And now? Let’s talk predatory…

  • @murffmjtube
    @murffmjtube 9 months ago +42

    Banger video -- so good. I'm a third year PhD candidate and this one hits home. I look at my Chair and Committee who have many thousands of citations and it feels like an impossible climb without taking a very tactical POV to start racking up citations . . . In practice, this means perhaps one manuscript per month, building a pipeline, and cranking out papers like a formulaic spam machine, all to appease the academic metric gods . . .

    • @sunway1374
      @sunway1374 9 months ago +4

      Ya. You have to find out who the academic metric gods in your community are and not upset them. They determine whether you get a position, your career track, whether you get a grant, whether you get a nice office, how much admin work you are asked to do, etc... If you think "I will let my good work speak for itself...", it's not enough.

    • @-astrangerontheinternet6687
      @-astrangerontheinternet6687 9 months ago

      9:09 Why did there need to be a system for ranking how successful a scientist is? Why popularity contests at all?
      Deal with the facts, not the reputation of the author.

    • @sunway1374
      @sunway1374 9 months ago +1

      @@-astrangerontheinternet6687 "Deal with the facts. Instead of the reputation of the author." - That is what all the metrics try to do. What facts are you suggesting we look at? Because if you have a better answer, we could all use it.

    • @liangyuaq-qoyunlu407
      @liangyuaq-qoyunlu407 8 months ago

      @@sunway1374 All you need is to allow the flexibility to consider other factors. It's ok to use a metric, but humans don't need to enslave themselves to a metric like a robot.

    • @sidewalkslacker
      @sidewalkslacker 8 months ago

      Anyone outside the main cycle is going to suffer a lot

  • @mallninja9805
    @mallninja9805 8 months ago +24

    I'm not an academic, but I have been a manager in several capacities. I rail against trivial single-number metrics any time anyone proposes them. They will almost always fail to quantify the thing you _actually_ want to measure. As a general rule, people who are measured by the metric will tend toward the path of least resistance to maximizing the metric, and it almost always ends up punishing whatever behavior you're actually trying to encourage and/or degrading whatever you're producing. "How productive is X person? And how does X compare to Y" are complicated questions, no matter what your field.
    So then you develop a composite metric that does a better job of taking into account all the things people do, and your management glazes over when you try to tell them how it works. They don't want to think. They just want to know that Joe Bob is a 3 so they're justified cutting him loose while Bobbi Sue is a 10 so they're justified in bringing her on.

    • @tetronym4549
      @tetronym4549 8 months ago +1

      Please become a manager in more capacities, we fucking need you

    • @schwarzerritter5724
      @schwarzerritter5724 8 months ago

      An objective test for a subjective problem will measure conformity and nothing more.

    • @bardsamok9221
      @bardsamok9221 3 months ago

      @@schwarzerritter5724 That's a false statement.

    • @schwarzerritter5724
      @schwarzerritter5724 3 months ago

      @@bardsamok9221 How so?

  • @TVWJ
    @TVWJ 9 months ago +27

    The function of scientific literature is to communicate findings and ideas to the public. Nothing more, nothing less. Citations are there to give the reader of an article the necessary background information to understand the arguments in the publication at hand. There is no judgement in there. These are not good metrics of "quality". However, they have become that because it is easy to count, and people are just too lazy to make a real assessment of scientific quality. What is 'scientific quality' in the first place? Quality would mean: good data and/or a solid argumentation and conclusions.
    Impact is related to relevance. However, relevance is not constant: what is relevant today might not be tomorrow, and vice versa. Therefore, any measure of 'impact' on society is also biased towards popular topics. CO2 emissions and climate change are 'hot' today (for good reason, though), but it also means that a mediocre contribution in this field can have a larger impact than, e.g., a breakthrough in mathematics, the next step in curing cancer, or a better understanding of the psychology of stress, just to name a few. These other items can also end up being a major step forward; it is just that today we do not recognize it. Strange example: people figured out the mathematics of the binary number system in the 1600s, but it was regarded as a 'curiosity' in those days, so no 'impact'. I do not need to explain the impact it has today. Was it therefore "bad science" back then, that has now become "good"?

  • @ChronusZed
    @ChronusZed 8 months ago +40

    Math PhD student here. Even without the really egregious kinds of gaming, the incentives created by this system are seriously broken. I feel pressured to publish incomplete solutions to problems that I'm capable of solving completely, because I feel like I'll get more citations if there are obvious holes remaining for other people to fill.

    • @jaguatiricaimediata5305
      @jaguatiricaimediata5305 8 months ago +2

      Mate, that sounds really bad... as in any game, the top spots are often occupied by the exploiters, not the ones who actually play well... this can really hurt the collective science effort.

    • @davidpadillagarza1146
      @davidpadillagarza1146 8 months ago

      Math Ph.D. graduate here. Don't do that. A citation pointing out a gap or a mistake will be a point against you.

    • @ChronusZed
      @ChronusZed 8 months ago +2

      @@davidpadillagarza1146 I think you misunderstood what I wrote. I'm not saying I intentionally add mistakes to my papers; that would be stupid. I'm saying I feel pressured to publish partial results so there remain open problems for others to work on instead of complete results.

    • @davidpadillagarza1146
      @davidpadillagarza1146 8 months ago

      @@ChronusZed Ok, maybe I misunderstood. Publishing partial results is a possibility, and several people do it. However, don't think that if it gives you more citations it is a good move. In math, people care more about journal prestige than about citations. A complete result has a better chance of getting into a good journal than a partial result.

    • @davidpadillagarza1146
      @davidpadillagarza1146 8 months ago

      It is also possible to publish several partial results instead of one complete result. But this ends up being more or less the same thing.

  • @debjithoreroy9784
    @debjithoreroy9784 9 months ago +22

    Fully agreed.
    I am an Economics graduate (2022) and have been looking forward to a PhD programme since then. I want to work on topics related to Environmental, Natural Resource, Ecological and Climate Economics. Now the issue begins here. In Economic research, the 'hot cake topic' at universities internationally is poverty and development, and in my country, India, it is agricultural economics. I am not saying these two aren't important topics; they are tremendously important, no doubt. But the issue is that almost everyone is running towards these topics, so the number of academics we can approach to be our PhD supervisors is very, very small, which makes the competition toxic. This primarily leads to two situations:
    1. Graduates like me change their subject of interest, at least temporarily, only to get entry into the academic world. But spending four to five years on a topic that you are not really passionate about... doing a nonacademic job would be better, I guess.
    2. The second scenario is to keep waiting for an opportunity in your desired field. This means your age keeps increasing with each passing year. Hence, if you genuinely want to make a career in academia, you have to helplessly choose option 1.

    • @DSScully
      @DSScully 9 months ago +5

      Sadly

    • @debjithoreroy9784
      @debjithoreroy9784 9 months ago +2

      ​@@DSScully 😢

    • @forstuffjust7735
      @forstuffjust7735 8 months ago

      Environmental economics is something you can easily find a supervisor for in Europe, though

  • @gozzilla78
    @gozzilla78 9 months ago +19

    Kudos, Andy: this is one of your best videos! Fully agree. Fitting people (and institutions, because also universities are ranked) on the real line is a crazy thing to do. Except that finance does it all the time: we’ve all got our price tags pierced on our ears.

  • @Tupinamba77
    @Tupinamba77 9 months ago +20

    Absolutely agree with your criticism. The great thinkers and academics of the past wouldn't be able to get a job nowadays.

    • @sunway1374
      @sunway1374 8 months ago +4

      Many of the great scientists of the past struggled during their lifetimes too. I have read biographies of many of them, and nearly all include some description of them struggling in this respect: certain topics being preferred, political problems, personal conflicts, immigration issues, funding problems, racism, gender bias, etc., etc.

  • @darkknightq863
    @darkknightq863 9 months ago +19

    lol, it's like academia literally turned into a video game meta, where each topic is assigned to a tier list

  • @ulaat4215
    @ulaat4215 8 months ago +4

    Totally, a million percent agree with you. This video should be spread on all academic platforms

  • @edwardsallow8931
    @edwardsallow8931 9 months ago +11

    Just from the title I knew it was the H index.

  • @philippetrov4881
    @philippetrov4881 8 months ago +6

    In Bulgaria we have our own metric, based on different criteria. Most importantly, in some scientific fields (pedagogy and some humanities fields) we divide the points gained from an article by the number of authors. This solved lots of problems :)
    Sadly the government is on its way to remove that and unify everything. So the sad practice of "one person writes an article, half the department are co-authors and the other half cite it" may come back.
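
The division rule described above is simple arithmetic, sketched here for concreteness (the function name and signature are my own, purely illustrative):

```python
def per_author_points(article_points, n_authors):
    """Split an article's points equally among its co-authors,
    as the Bulgarian scheme described in the comment does."""
    return article_points / n_authors

# A 30-point article with half the department (10 people) listed as
# co-authors is worth only 3 points to each, which blunts the
# mass-co-authorship trick the comment warns about.
print(per_author_points(30, 10))  # 3.0
```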

  • @TheSandkastenverbot
    @TheSandkastenverbot 8 months ago +13

    I worked as a scientist at a German university for about 5 years. In this time I had almost no time to work on anything I thought would contribute to science. Every single project was designed so that you couldn't possibly screw it up, and the results were trivial. Then there's teaching, writing articles, and going to conferences. There was just no time to do creative work. Except for a few topics, you can forget about high-risk, high-reward work.

    • @michen25
      @michen25 8 months ago

      I feel you.

  • @pufopc8749
    @pufopc8749 8 months ago +5

    Metrics are the problem because even if they can't be gamed, which is usually not true, they tell only one side of the story. Academics at good universities used to be judged by whether they could or could not do something, and whether they did or did not discover something. Rather than comb 20 quasi-useless papers for nuggets of wisdom, academics chose to use metrics to evaluate their prospective candidates and grant recipients. This definitely saves a lot of time, but it also eliminates the person who only wanted to do science, not games and politics. Which brings us to the true metric, which is grant money. And that is obtained by playing games and politics. Without the silent takeover of academia by academic politicians and hustlers, the H-index would be an interesting idea in the social sciences and nothing more.

  • @vasyavasilich7659
    @vasyavasilich7659 9 months ago +9

    I actually don't understand how the number of citations makes you or your work better; it just doesn't make sense. Was Einstein a genius because of his h-index or number of citations or whatever? Hell no. And where are the Einsteins of our century? Science is not about competition, it's about creativity and sharing knowledge

    • @jasonali4122
      @jasonali4122 9 months ago +2

      People are using it. As a scientist, what's the point of doing research that very few people can make use of?

  • @alexfierro5945
    @alexfierro5945 8 months ago +1

    Great video. I left academia for the private sector for this and other related reasons (my current H-index is 23 and i10-index 36). The video omits another major drawback of citation and h-index metrics: they are independent of author ordering. That is, you accumulate the same citations whether you are the first or the N-th co-author. In my view, only the first author should get full credit (with the 2nd and 3rd receiving some form of partial credit). This approach, however, could cause scientists to "fight" for first place and limit collaboration.
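
A position-weighted variant of citation credit like the one proposed above could look like this minimal sketch (the weights and the function name are my own illustrative assumptions, not an established metric):

```python
def positional_credit(author_position, citations):
    """Scale a paper's citations by the author's byline position:
    full credit for the first author, partial credit for the second
    and third, and a small residual share for everyone after that.
    The weights are arbitrary choices for illustration."""
    weights = {1: 1.0, 2: 0.5, 3: 0.25}
    return weights.get(author_position, 0.1) * citations

# A paper with 100 citations under this scheme:
for pos in (1, 2, 3, 7):
    print(pos, positional_credit(pos, 100))
```

An h-index computed over such weighted counts would reward first-authored work far more than trailing co-authorships, at the cost of the collaboration incentives the comment worries about.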

  • @YamanoRyuu
    @YamanoRyuu 9 months ago +5

    There is no hope, there is only death.

  • @lorenzomasia9024
    @lorenzomasia9024 9 months ago +5

    In the old times before the h-index, you knew people by their outstanding papers... you had to read them, digest them, understand them and keep them in your mind. From this cumbersome work a researcher used to develop a "mindset", innovate, and propose new paths... now it's lots of predatory publishers and numbers upon numbers... what was once quality is now volume.

  • @FrancescoDondi
    @FrancescoDondi 8 months ago +2

    So basically we've turned academia into a social network. A like is a like, a citation is a citation; "haha yeah" and "world-shattering revelation" count the same. Churning out things that are barely above 0 in value, but consistently, trumps everything.

  • @thescoringcompany165
    @thescoringcompany165 9 months ago +3

    The h-index is the most horrible thing ever. It's so easy to game that anyone who uses it should have their academic badge revoked. All you need to do is "publish" 2n articles, each citing all your previous articles. That guarantees you an h-index of n.
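
The arithmetic behind that claim checks out and can be verified with a short sketch (the function names are my own, purely illustrative): if you publish 2n papers and each cites every earlier one, paper k collects 2n - k citations, which pins the h-index at exactly n.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

def self_citation_scheme(n):
    """Publish 2n papers, each citing all earlier ones.
    Paper k is then cited by papers k+1 .. 2n, i.e. 2n - k times."""
    total = 2 * n
    return [total - k for k in range(1, total + 1)]

for n in (3, 10, 25):
    print(n, h_index(self_citation_scheme(n)))  # h-index comes out to exactly n
```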

  • @fkxfkx
    @fkxfkx 9 months ago +6

    I’m developing the I index.
    It’s 12 articles with 12 references.

  • @robertmazurowski5974
    @robertmazurowski5974 9 months ago +5

    So basically, if there is a branch of science with only several scientists working on it, and they put out maybe 1 paper per year of actually good research, without doing the paper-mafia stuff, they will have a low h-index despite doing good work?

    • @pufopc8749
      @pufopc8749 8 months ago +2

      Their careers will go up in smoke really fast.

    • @sirbaguette8378
      @sirbaguette8378 8 months ago +1

      Yeah, pretty much. Every science has its own niche branches that don't garner much interest. If you happen to be interested in such a niche branch, it'll be difficult, if not near impossible, to garner any substantial citations.

  • @Thefare1234
    @Thefare1234 8 months ago +1

    The h-index is detrimental to those of us interested in theoretical work in statistics. These papers are extremely difficult to write and can only be understood by the few people who can then use the theory to build new applied methods. On the other hand, doing simple statistical analysis for medical doctors in clinical trials or other straightforward applied fields can quickly lead to hundreds of citations. This metric discourages innovation and favors conventional research.

  • @jameskingsbery3644
    @jameskingsbery3644 8 months ago +1

    I'm an outsider, but it doesn't seem like there is a consensus about what academia is even for. It seems like most academics will say that their career is about the acquisition and creation of knowledge. But, looking at behaviors, it seems like the point of academics is to secure grant money and (where applicable) patents.
    It also strikes me as strange that very little emphasis is put on teaching. I've seen in my own field of computer science the result of that, with many young software engineers not knowing the most basic aspects of the field.

  • @-astrangerontheinternet6687
    @-astrangerontheinternet6687 9 months ago +1

    It’s a popularity contest. Social publication is still social media.

  • @user-zb3op6vz3c
    @user-zb3op6vz3c 5 months ago

    The analogy goes like this. Imagine you are a student about to take your A-level mathematics exam in two weeks. You see a new mathematics textbook from a different publisher in a bookshop, buy it and return home. One day around 5.30 pm, you attempt 5 random questions from 5 chapters: Matrices, Complex Numbers, Vectors, Differentiation and Integration. You spend about 25 minutes on them, scribbling your working on 5 different sheets of paper. Around 6 pm you stop and go out to play basketball with your friends. You return home around 7.30 pm, take a bath and have dinner with your family. Back at your study table, you check your 5 scribbled answers against the book's answer section. To your horror, you got them all wrong. But you say, "Hey, I have published 5 papers, right? See, I am holding 5 sheets of paper."
    Another student somewhere out there did the same thing, but he attempted 2 questions from 2 chapters, Differential Equations and Numerical Methods, and got them all right: 2 out of 2, while you got 0 out of 5. But you say, "Hey, I published more papers than him: 5, he only got 2. I deserve to be an assistant professor, right? If I keep publishing more, I get promoted to full tenured university professor, and if I keep going, then onward to university president or chancellor, or maybe a future director of a research institute, right?"
    You find out that your friend Thomas did the same thing and also got them all wrong, 0 out of 5 from the same 5 chapters. But that's okay, since you are going to put a reference at the end of your 5 papers citing Thomas's work, and Thomas returns the favor and cites yours.
    Now you have it all: published papers, citations, h-index, impact factors, research grants, etc...

  • @Nudo1985
    @Nudo1985 8 months ago +1

    The biggest issue is that this is not about the science, it is about power. Measurement in a social setting will inevitably run into the "you get what you measure" problem, and humans are clever... the consequence ends up being a system that divides. In my view these types of systems have no place in academia, as they enforce existing structures and limit curiosity.

  • @ThePowerLow
    @ThePowerLow 9 months ago +5

    It is so funny how you say "Jorge, you should not have published it mate!". Too many metrics half working while ruining lives.

  • @MAOGHA
    @MAOGHA 8 months ago +1

    The h-index is pretty bad for "hard to publish" areas like mathematics where you may win a Fields Medal or Abel Prize but end up with a single digit h-index. Not at all representative of academic merit but academic "cunning".

  • @ElijahHunter77
    @ElijahHunter77 9 months ago +7

    How is your pie-pie?😂😂😂 Andy, you are hilarious!😂😂😂

  • @jeffsmith9420
    @jeffsmith9420 8 months ago +1

    The problem with h-indexes, but also with valuing highly competitive journals, is that it ignores people who work in niches and who also do non-academic work. For example, if I do research on a relatively specific topic that might be of interest to only a couple of other researchers, my h-index will be lower regardless of the quality of the research I do, whether in terms of analysis, method, or ultimate societal impact. Also, with highly competitive journals you are more or less playing the lottery if the journal has a 1/1000 acceptance rate; at that level there really is no rational way to determine what is quality and what is not. The other major problem is that this privileges the already privileged at large prestigious schools, since they are likely the editors at the journals and will ensure that only their people get published. This is a huge problem in the social sciences and humanities. Personally, I would rather see a system where anything gets published so long as it meets certain rather low standards for quality. This would let people get their ideas out there and would limit the kind of high-school popularity contest that academia has become.

  • @normangoldstuck8107
    @normangoldstuck8107 8 months ago +2

    I am quite proud of my h-index of 18 because I have never held an academic position or had a research grant in my life. I'm a reproductive endocrinologist and still teach students and give the University they attend the publication credit but am not obligated to do so.

  • @atheneaberdeen9926
    @atheneaberdeen9926 8 months ago

    Thank you Andy for this video. I didn't know about the H-index, but I did decide not to pursue a PhD when I realized that what many institutions/supervisors wanted was just variations on a theme. Critical thinking, especially on non-popular subjects as in the Humanities, gets the thumbs down. Keep up the good work.

  • @StupidusMaximusTheFirst
    @StupidusMaximusTheFirst 9 months ago +4

    I absolutely agree. This is a horrible idea. It says nothing. Forcing people to produce as many papers as possible, sounds equally horrible. No wonder there's so much garbage science. I bet Isaac Newton's h-index must suck big time. On the other hand, I'm certain multiple PhD professor Zrinc Von Papierz and his successful papers on the relation between dark matter and cabbage, or PhD Von Der Linen on decision making and USB embeddings, top the h-index.

    • @RedVio972
      @RedVio972 8 months ago +1

      Most of Newton's major works are books or letters to other scholars, so they wouldn't even count in the h-index

    • @StupidusMaximusTheFirst
      @StupidusMaximusTheFirst 8 months ago

      ​@@RedVio972 so you would say that is a zero (0) h-index?

    • @RedVio972
      @RedVio972 8 months ago +2

      @@StupidusMaximusTheFirst Pretty much, yes. And whatever article citations Newton might have from his contemporaries are in non-indexed works, so they would also not be counted

  • @r-d-v
    @r-d-v 8 months ago +1

    I’m a PhD student and a musician - and it feels like academia is not dissimilar to the music industry. Jumping on trends to get the views, advertising yourself doing irrelevant things because it will increase exposure, stuff like that. Neither are meritocracies essentially.

    • @walkerorr341
      @walkerorr341 3 months ago

      Collaborations like features, papers that are basically diss tracks…

  • @miguel.angoitia
    @miguel.angoitia 9 months ago +2

    Unfortunately, even with the nice words and statements of the new chief of the evaluation body in Spain, I don't really believe Spanish evaluators are likely to widen the scope and leave the impact-factor criteria behind. The new evaluation process requires top JCR journals AND quality indicators. That AND is the key point: quantity is compulsory; it's not quantity of JCR papers OR quality of research. From now on, you will have to justify the quality of your contribution, but... you will always be required to have a minimum number of top JCR papers. From my point of view (and I would love to be wrong), it's going to be harder to get a positive evaluation.

  • @sangeethvaheesan3173
    @sangeethvaheesan3173 8 months ago

    So prioritising quantity over quality. When money and profit are involved, real scientific innovation and real collaboration go down the drain

  • @tribalypredisposed
    @tribalypredisposed 9 months ago +6

    1) Academics are not powerless, they are just unwilling to exercise their power. The universities provide nothing that a group of PhDs cannot provide themselves by creating a professor-owned and -run university where things are run well: no armies of adjuncts, and hiring based on rational efforts to identify the best candidates. Stop being such incredible cowards.
    2) If you DO want to use some sort of metric involving publications, it should greatly favor novelty and multi-disciplinarity, the latter at least in the social sciences. It should also be weighted by the average impact of the journal one publishes in. If the average paper in journal X gets fifteen citations and your paper gets fifty, that seems much more significant than your paper getting fifteen citations too. If your paper published in a Political Science journal is getting cited by authors in History and Women's Studies, that also seems much more important than just getting cited by the ten other academics specializing in the stucco-coated tripod pottery of Teotihuacan. Not that hyper-specialization is bad, but we need to also reward the mavericks who apply, for example, evolutionary psychology to the study of Greek literature for the first time.
    3) As a rule, the professors in a given discipline should not be involved in hiring decisions for their discipline at all. First, because they may be defending their orthodoxy, may be jealous of a superior scholar, or may not want their ideas challenged. Second, because a real expert will be able to explain everything simply enough that at least professors in other fields can understand and judge their work. Lastly, because explaining things simply enough for students is also a very important part of the job of being a professor.
    4) Going a step further, professors now have far too much power over their students' futures in academia. I think professors from a completely different university should be the ones PhD students defend their dissertations to. And let's put a stop to the letters of recommendation that some professors quite literally make you perform sex acts for, too.

  • @HaoSunUW
    @HaoSunUW 8 months ago +1

    Look, at the end of the day we're gonna need an easy metric, and it's likely gonna get gamed. My advisor told me: "If you want an academic job you gotta publish in the top conferences. I'm not saying your work isn't good, it's just that there are too many candidates to read all their work thoroughly."

  • @cybernd78
    @cybernd78 9 months ago +4

    The software industry has a long history of harmful metrics. Academia could have learned from them.

  • @scottmiller2591
    @scottmiller2591 9 months ago +1

    Nice demonstration of Goodhart's and Campbell's laws.

  • @MohammedDahmash-iu7ot
    @MohammedDahmash-iu7ot 7 months ago

    I never wholeheartedly believed in such indices as one of the most important things a researcher should strive for! Finally, a fellow PhD holder somehow agrees.. such a relief 😅

  • @powerpointnight3710
    @powerpointnight3710 8 months ago +1

    This is basically Goodhart's law in practice. One would think that academics would be the first to pick up on that. Sigh

  • @TheThreatenedSwan
    @TheThreatenedSwan 8 months ago

    It's really a part of the constant battle between communicating things that are real vs people just manipulating language for personal status

  • @Eldiran1
    @Eldiran1 8 months ago

    As a French person who went to university and knows doctoral students who struggle with this, I 100% agree.
    I don't know how it works elsewhere in the world, but here you have to pay, sometimes with your own money, to be published; you have to pay to read your peers, and you have to review them for free!
    It's absurd!

  • @___________mrivan___________
    @___________mrivan___________ 9 months ago +2

    The pp-index part straight up just killed me. How can one be so bad at naming things?

    • @piktuliz
      @piktuliz 8 months ago

      That is because you have a small pp-index. And the professor wants to show everyone his big pp.

  • @qwertyuuytrewq825
    @qwertyuuytrewq825 8 months ago +1

    That's like when you choose the wrong reward function in reinforcement learning )

  • @lingdocs
    @lingdocs 8 days ago +1

    So basically, this is turning research into social media, where people are incentivised to turn out garbage in search of likes and shares.

  • @KS-bt8ko
    @KS-bt8ko 7 months ago

    I've seen this all too much. Even typing out my story here would require me to go have a Valium afterwards.
    I give people the analogy of NBA draft night requiring draft nominees to write a 5000-word essay on why they should be picked by an NBA team, and this essay being the ONLY metric used to choose players.
    Foundations and governments set up a system of checks and balances to support researchers by introducing metrics, like collaborative capacity, much like the h-index. Like any such system, it eventually gets to a point where people learn to game the system to get the funds, with varying degrees of actual talent.
    And then you introduce things like for-profit journals.
    In my own (former) field, cancer research, I am absolutely convinced very few people would be dying of cancer in the Western world if only we had supported scientists and given funding in a proper, unbiased manner. Whenever I read about a "game changer" result, I wonder how much further we'd be if only the proper people got the proper funds.

  • @ElijahHunter77
    @ElijahHunter77 9 months ago +5

    Andy🎉! Go! Make it to 200K before 2024!
    (This comment did NOT use any AI to be generated--pure fingers typed it in 😂😂😂)

  • @eliotcougar
    @eliotcougar 8 months ago +1

    My h-index is 12… I don't care… I just do experiments, keep devices in working order, and get included as a co-author everywhere…

  • @juan-fernandogomez-molina645
    @juan-fernandogomez-molina645 8 months ago +1

    The problem with the h-index is that it does not track ideas or scientific contributions, but rather papers. As a result, it makes it harder, not easier, to determine who deserves credit for originality and research rigor.
    Single numbers are a simplified way to measure this. We need at least vectors to capture the full picture.
    By the way, congratulations on your research on photovoltaic cells. As you know, perovskites degrade over time and are not as durable as silicon-based solar cells. However, the final merit of a photovoltaic cell - like the merit of a researcher - is not whether it is more efficient than others (although this is important too), but rather its intellectual value.
    Along those lines, as Einstein pointed out, we should not strive to be men of success, but rather men of value.
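For reference, the single number being criticized here has a simple definition: the h-index is the largest h such that h of one's papers each have at least h citations. A minimal sketch with made-up citation counts, showing how a whole citation vector collapses to one integer:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break       # sorted descending, so no later paper can qualify
    return h

# Seven papers, wildly different impact profiles, one number out:
print(h_index([50, 18, 6, 5, 4, 1, 0]))  # 4
```

Note that [50, 18, 6, 5, 4, 1, 0] and [4, 4, 4, 4, 0, 0, 0] yield the same h of 4, which is exactly the information loss the comment above objects to.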

  • @SC-bs7jd
    @SC-bs7jd 8 months ago +4

    It also steers people toward areas that are easier to publish in, like simulations and data analysis, leaving experimentalists (like me) behind. Another demonstration of how toxic academia is.

    • @mariaa.2240
      @mariaa.2240 8 months ago

      What do you mean by that? 😯

    • @Magnus-n2t
      @Magnus-n2t 8 months ago

      Strongly disagree. I work on simulations that I code myself. I cite many experimental papers to verify my model. I rarely see any experimentalist citing modelling papers.

  • @gibbs-13
    @gibbs-13 5 months ago

    Some people write review papers because they are often cited and increase the h-index. However, review papers are not original research papers. Therefore, the h-index is nonsense for judging researchers.

  • @user-zb3op6vz3c
    @user-zb3op6vz3c 6 months ago

    If you want the government to nurture your research and academic career, starting as an assistant professor and going all the way to emeritus professor, then the number of publications, impact factors, citations, and research grants will control and dictate your lifelong career. Patents come in at a later stage of your research career.

  • @michen25
    @michen25 8 months ago

    Totally agree with the bad effects of what Hirsch did. I have colleagues with a lot of review papers and a Hirsch number of 17, but in reality they do not produce any real research. I am doing research and struggling to push my Hirsch number higher.

  • @imthatnggruponthatnag7784
    @imthatnggruponthatnag7784 9 months ago +2

    Thanks for introducing me to Jorge Hirsch. I know who to hate now.

  • @g_leuenberger
    @g_leuenberger 8 months ago +1

    The h-index was intended for theoretical physicists only. I know a far better metric for theoretical physicists, not based on citations, and I'll publish this performance metric in spring.

  • @Mnogojazyk
    @Mnogojazyk 9 months ago +1

    The h-index reminds me of what the Science Citation Index and Social Sciences Citation Index were often used for thirty-odd years ago.

    • @erkinalp
      @erkinalp 8 months ago

      SCI and SSCI are a different kind of index. They are not metrics but pools that track citations. Some institutions only consider papers submitted to venues covered by those indexes as acceptable submissions/citations.

  • @frankpulmanns6685
    @frankpulmanns6685 8 months ago

    I think publish-or-perish has been a problem for far longer than since 2005. The index is just the latest (?) expression of it.

  • @MrSubsound90
    @MrSubsound90 8 months ago

    I've seen a lot of researchers work to meet this metric without really giving consideration to the type or quality of their research, since it's often the only way to stay employed... which often means their family eating and living indoors. It especially incentivizes profitable private research over more nebulous governmental work, or even cost- or harm-reduction studies.

  • @scottmiller2591
    @scottmiller2591 9 months ago +1

    Self-citing authors who are the only citation of their older papers are the worst - there are exceptions, but this is usually just gamification of the academic process. Being paid by the paper/citation is the root of this problem.

  • @michen25
    @michen25 8 months ago

    6:00 I teach ethics and academic integrity. And what you said here is exactly what I tell students when we discuss what it means to publish on a topic that is not trending.

  • @Daniel_Zhu_a6f
    @Daniel_Zhu_a6f 8 months ago

    Any sort of formal qualification system favors people who have the qualification but don't necessarily have the skill. Reducing people to a few metrics, so that each one can be easily replaced, has always been a dream of technocrats, and with the invention of computers it finally became possible. The implications are huge; potentially it may put the dream of democracy to an end. The possibilities for fraud are limitless: farming ratings, hidden discrimination and censorship, etc. Whatever can happen with views, monetization, and blocking on YouTube/Facebook is probably already happening in scientific publishing - except that YouTube will never attract the kind of money that pharmacological, materials-science, or climate research can bring to (or take from) corporations. A better index is probably not a permanent solution to the much larger problem of treating very flaky metrics as proof of one's ability.

  • @Gordy-io8sb
    @Gordy-io8sb 2 months ago

    I don't think the Hirsch index is exactly wrong, though. A single number can actually measure a complex statistic pretty well.

  • @bradbellomo6896
    @bradbellomo6896 8 months ago

    Publishing large volumes of unoriginal ideas was a problem before the h-index, and part of the h-index's popularity was an attempt to get away from gaming metrics by publishing lots of bad papers. The thinking was that rewarding citations × papers is better than rewarding a large number of citations or a large number of papers alone. The real problem is the compulsive need to compare scientists. If I take X dollars of funding and produce work worth at least X dollars, I am a successful scientist, at least in the eyes of people who have a legitimate need to judge scientists. If my work is worth more than that of someone who took the same amount of money, but both of us underdelivered, I am not better in any meaningful way. If we both produced above expectations for the funding we took, we both deserve to continue getting funding, tenure, and careers as scientists, regardless of which of us was more successful by any metric.

  • @jamesboswell9324
    @jamesboswell9324 8 months ago +1

    May I propose the BS-index as an alternative?

  • @sundancebilson-thompson414
    @sundancebilson-thompson414 8 months ago

    I think the single most important consideration, at least if you're making a hiring decision, is "what does this person do in their spare time?". If they go camping, or perform in an amateur circus, or do just about anything distinct from the field in which they're being considered for a job there's a good chance they're going to "reset" their brain on a regular basis, and be more creative, exploratory, and productive in their research. Numerical metrics like the H-index are too easy to game, and too likely to mislead.

  • @KenFullman
    @KenFullman 8 months ago

    It's not the first time an originator has seen his work misused. The first guy to suggest that CO2 was causing climate change was actually joking. Unfortunately it got picked up as a hot topic and then politics became involved, so it's now got to the level of DON'T YOU DARE DENY IT. All because he suggested that burning fossil fuels could prevent the next ice age arriving.

  • @reinux
    @reinux 8 months ago

    At this point, radial symmetry in a corporate (or government) logo is an immediate red flag for me.

  • @iqao
    @iqao 8 months ago

    It all comes down to funding, I bet. Corporations don't have the time to determine the "best" person to give the money to. "Oh hey, here's an easy way to sort them all! Good! TAKE MY MONEY!"

  • @figgettit
    @figgettit 8 months ago

    You have to place this in the context of the financialisation of the university to understand it.

  • @bunkertons
    @bunkertons 8 months ago

    Yeah, this is what I'm noticing in my new degree. I really want to earn a Ph.D., but my GOD, I am so sick of this garbage.

    • @MrDamning
      @MrDamning 7 months ago

      Either leave, or design your PhD to get you into industry if your specialisation has an R&D field, or pivot.

  • @blankseventydrei
    @blankseventydrei 8 months ago

    This is sad, and I am glad I got my PhD before this. I know many great scientists who did not publish much because their projects did not get the results they were hoping for. This was not a problem for their professors, who saw that the students knew the material, constructed great experiments, and were thoughtful with their conclusions. These factors seem to be embraced more by administrators, who may not know the material or have the time to read everything to understand a candidate, and so use this as a quick guide. But these factors are like money: when someone is hungry and needs a job, they will do whatever gets them the job, and the cycle starts again. What needs to happen is for someone at the top with a high h-factor to step up and say, "Let's reject it even though it benefits me." Like sheep, people will follow.

  • @AdrianCHOY
    @AdrianCHOY 9 months ago +1

    0:55 He’s Argentine!

  • @33brtl
    @33brtl 8 months ago

    It's a classic consequence of Goodhart's law - i.e. optimizing to the evaluation criteria or "algorithms".

  • @Estarlio
    @Estarlio 8 months ago

    Can you subtract the years since publication on a curve? It seems that would solve a lot of the issues you raise. It's a proxy either way, but humans inherently want to reduce decisions to binary choices - knowing that, you want to build into the proxy the best approximation of reality you can.
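One way to read the "curve" suggestion: weight each citation by its age, so stale citations count for less. A hedged sketch, assuming exponential decay with an arbitrary five-year half-life (an illustration of the idea, not an established metric):

```python
import math

def decayed_citations(citation_years, now, half_life=5.0):
    """Sum of citations weighted by recency.

    A citation from y years ago counts exp(-ln(2) * y / half_life),
    so its weight halves every `half_life` years.
    """
    return sum(
        math.exp(-math.log(2) * (now - year) / half_life)
        for year in citation_years
    )

# Citations from this year, 5 years ago, 10 years ago:
# weights 1 + 0.5 + 0.25
print(round(decayed_citations([2024, 2019, 2014], now=2024), 2))  # 1.75
```

The half-life is the knob: a short one rewards currently active work, a long one approaches the raw count.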

  • @barumbadum
    @barumbadum 9 months ago +1

    Great content

  • @anothersquid
    @anothersquid 8 months ago

    It's a fact that if you can measure it, someone can and will game it.

  •  8 months ago

    Very important question. Thanks for this video.

  • @Drganguli
    @Drganguli 8 months ago

    Yes, the h-index has destroyed the fun of doing research, because comparison with others, especially in fertile fields like Chemistry and Materials Science, ruins the happiness of many. Now administrators are comparing researchers from Chemistry with Sociology or Mathematics in terms of h-index.

  • @user-zb2st6zi6j
    @user-zb2st6zi6j 8 months ago

    In 1905, Einstein had an h-index of 5. Obviously someone you wouldn't want to hire.

  • @chronobot2001
    @chronobot2001 8 months ago

    It's all about bean counting.
    This metric becomes the end goal rather than developing the actual science.

  • @bradlockerbie
    @bradlockerbie 3 months ago

    How often is it used to make decisions about the careers of others by people who have no idea how it is calculated?

  • @georgesamaras2922
    @georgesamaras2922 8 months ago

    You really can't quantify the consequences of a scientific publication. You may do it 'locally', in a graph-theoretic way - how many references/connections/uses this paper has in other works, rather like Google's early PageRank algorithm, or as a proxy for a scientist's popularity in the community - but when we widen our view we cannot fathom the consequences of a particular body of work for broader society. It sounds like the h-index would prevent people from publishing 'novel' ideas, though one could argue whether anything is truly novel in the modern era. Academia sounds like YouTube or the stock market: chase the latest hot thing, write about it, make videos about it, write papers about it. I guess the common denominator is money/funding - putting food on the table - and it is guided by our collective attention. Great things come out when you give scientists creative freedom, as was the case with Bell Labs, where they didn't have to worry about their salary. But you could also argue the internet was created because the military wanted continuity of data in case the USA got nuked, in the broader context of a Cold War that held our collective attention for 40 years. We need to find a balance between moonshots and what puts food on the table.

  • @ericrawson2909
    @ericrawson2909 8 months ago

    We need quality, not quantity.

  • @Andrew-rc3vh
    @Andrew-rc3vh 9 months ago +2

    I don't think any index, even a multidimensional one, is any good. What you want is for an expert to be measured by an even greater expert, so for one thing, the cost of compiling any index properly is prohibitive. Also, think about what the judging expert does: his brain has billions of neurons, and those neurons shift slightly for each person he meets. With this neural matrix, the second part is to ask it questions, and each question will give you a different output. So what you would normally do is a complex analysis of a complex entity, and that analysis can give you accurate mappings from questions to gradings. One question might be: if we employ this person, will he make us money? A different score may be forthcoming if instead you ask what the chance is of the person winning the Nobel Prize, or getting our fusion reactor to work. The stupid h-index crunches all information down to one number and then you expect that to give you the right answer to every question. To put it bluntly, you are asking the impossible. Yes, we all like stuff to be simple, but there comes a point where you can't compress the data any more - a bit like a computer compressing an image. The h-index compresses to one monochrome pixel.

    • @DanceAffectionist
      @DanceAffectionist 8 months ago

      We scientists all know what it means if another scientist has published a lot of papers and they are not cited (or cited a couple of times). The h-index partly reflects that, but it is not perfect and should be replaced. Meanwhile, expert-based evaluation is not a solution at all. Who are these extremely honest, fully objective, all-knowing experts?! Experts are people, and their expertise is subjective and biased to a lesser or greater degree. Who is going to pay for these expert evaluations? Taxpayers again?! Human experts are not a solution. The expert-evaluation paradigm is already failing in European (e.g. Horizon) project-proposal evaluations, where proposals with the maximum score are not funded. Now, no indicator is perfect! The h-index could be improved by removing self-citations (as a minimum). Then the index should take into account the number of academic employment years. And then the h-index should be only one part of a well-constructed compound/multidimensional indicator that more comprehensively measures the popularity and impact of published works. And that multidimensional indicator should be only part of a comprehensive periodic performance evaluation of a scientist. Finally, further advancement of AI/LLMs will dramatically transform the whole publication paradigm (not in 2-3 years, but in some 7-10 years IMHO).
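The two adjustments proposed here (drop self-citations, account for years of employment) are straightforward to sketch; dividing h by career length is essentially Hirsch's own m-quotient. A toy version with hypothetical inputs, where `total_cites[i]` and `self_cites[i]` refer to the same paper:

```python
def adjusted_h(total_cites, self_cites, career_years):
    """h-index computed over non-self citations, divided by career length."""
    # Net citations per paper after removing self-citations, sorted descending.
    net = sorted((t - s for t, s in zip(total_cites, self_cites)), reverse=True)
    # For a descending list, papers satisfying c >= rank form a prefix of length h.
    h = sum(1 for rank, c in enumerate(net, start=1) if c >= rank)
    return h / max(career_years, 1)

# Four papers; the first has heavy self-citation.  Net counts: 20, 10, 6, 3 -> h = 3.
print(adjusted_h([30, 12, 7, 3], [10, 2, 1, 0], career_years=10))  # 0.3
```

This is only the first two layers of the compound indicator the comment describes; the point is that both corrections are cheap once the raw citation data is available.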

    • @Andrew-rc3vh
      @Andrew-rc3vh 8 months ago

      @@DanceAffectionist It all depends on how valuable an accurate answer is to you. I'd suggest an expert would be the right way of going about it if we are asking whether the chap should be employed on the team. I don't think grading humans the way you might grade nuts and bolts is very sensible. It is, indeed, degrading.

  • @KiaAzad
    @KiaAzad 9 months ago +2

    I propose the BaP index.
    It's based on measuring the benefit of a paper to humanity, and its potential to benefit humanity.
    B can be measured by just counting the people who benefitted from the research.
    P is up to the publications, sci-fi writers, and the imagination of other scientists.

  • @phyarth8082
    @phyarth8082 8 months ago

    Salami-slicing research: a paper is split into n pieces to inflate the number of papers a scientist publishes. It is so annoying that some papers are freely available to the public but some are not, and you miss the point: you read a 5-page paper when in reality it is 20 pages and 15 of them are not available. Some projects are long and you must divide the research into themes.
    There are other big flaws of the h-index: first the pp-measuring contest, then the "paper mill", which is a consequence of salami slicing. Newton would also have needed to publish "Principia Mathematica" as 60 small papers :) And third is "fluff": half of new research is filler or a rehash of the old, which is connected to salami slicing, just as a TV series starts with a small synopsis of the previous episode.

  • @peterhall6656
    @peterhall6656 8 months ago

    A Russian academic friend of mine once compared the h-index's role in academic life to the use of soap and a rope. Think about it for a second - or maybe longer if you want to implement the theory.

  • @cedriclothritz7281
    @cedriclothritz7281 9 months ago

    1:25 Ooh, I see you're at 999 citations! Are you doing a special when you reach 1000?😉

  • @PhilipFaster
    @PhilipFaster 8 months ago

    Really? I've never heard of any institution using the h-index alone to judge someone's work.
    No offense, but this is completely new to me.

  • @2trichoptera
    @2trichoptera 8 months ago

    Quantity over quality

  • @ralphbecket
    @ralphbecket 8 months ago

    I would bet a very decent bottle of single malt that the grievance-studies people all have stellar h-indices.

  • @herbertdaly5190
    @herbertdaly5190 8 months ago

    6:50 Yeah we can... big number vs little number. Now do University rankings....

  • @AaronBondSU
    @AaronBondSU 9 months ago +1

    Are you familiar with Goodhart's law? This is something I think a lot of fields run into.

  • @Flan67
    @Flan67 7 months ago

    H-index became the academic videogame score.

  • @emmanuelameyaw9735
    @emmanuelameyaw9735 9 months ago

    Most of the professors who taught me at the University of Ghana, Charles University, Soka University, and Tohoku University had an h-index of 4 or less. 😊