Continuation of updates in Wegs' retracted studies and papers thread, but under a broader title to better reflect the range of material that becomes contiguous with the narrower topic.
Debunking the Dunning-Kruger effect – the least skilled people know how much they don’t know, but everyone thinks they are better than average
https://theconversation.com/debunki...ne-thinks-they-are-better-than-average-195527
INTRO: John Cleese, the British comedian, once summed up the idea of the Dunning–Kruger effect as, “If you are really, really stupid, then it’s impossible for you to know you are really, really stupid.” A quick search of the news brings up dozens of headlines connecting the Dunning–Kruger effect to everything from work to empathy and even to why Donald Trump was elected president.
As a math professor who teaches students to use data to make informed decisions, I am familiar with common mistakes people make when dealing with numbers. The Dunning–Kruger effect is the idea that the least skilled people overestimate their abilities more than anyone else. This sounds convincing on the surface and makes for excellent comedy. But in a recent paper, my colleagues and I suggest that the mathematical approach used to show this effect may be incorrect... (MORE - details)
Ah, so most people think they are better than average, not necessarily the best, at something. I think that’s a common misconception about the Dunning-Kruger effect. This reminds me of a quote from Bertrand Russell: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.”
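The statistical point the article is gesturing at can be illustrated with a minimal sketch (my own illustration, not the authors' actual analysis; the sample size, the uniform distributions, and the variable names are all hypothetical assumptions). It generates test scores and self-assessments that are completely unrelated to each other, then bins people by actual-score quartile the way classic Dunning–Kruger charts do:

```python
# Minimal sketch (hypothetical data, not the paper's analysis): simulate
# actual scores and self-assessments that are statistically independent,
# then bin people by actual-score quartile as Dunning-Kruger plots do.
# Even with no real link between skill and self-assessment, the bottom
# quartile appears to "overestimate" and the top quartile to
# "underestimate", because each group's self-assessment sits near the
# overall average.

import random
import statistics

random.seed(0)

N = 10_000  # hypothetical sample size
# Independent, uniform draws on a 0-100 percentile scale (an assumption).
actual = [random.uniform(0, 100) for _ in range(N)]
self_assessed = [random.uniform(0, 100) for _ in range(N)]

# Sort people by actual score and split into quartiles.
people = sorted(zip(actual, self_assessed))
quartile_size = N // 4

for q in range(4):
    chunk = people[q * quartile_size:(q + 1) * quartile_size]
    mean_actual = statistics.mean(a for a, _ in chunk)
    mean_self = statistics.mean(s for _, s in chunk)
    print(f"Quartile {q + 1}: actual ≈ {mean_actual:5.1f}, "
          f"self-assessed ≈ {mean_self:5.1f}, "
          f"gap ≈ {mean_self - mean_actual:+5.1f}")
```

With unrelated data, the "gap" column runs from strongly positive in the bottom quartile to strongly negative in the top one, which is the same shape usually presented as evidence for the effect.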
_
Saudi universities entice top scientists to switch affiliations — sometimes with cash
https://www.nature.com/articles/d41586-023-01523-x
Some institutions arrange for highly cited researchers to change their main affiliations, which boosts their position in global university rankings.
_
If there's one sacred place in the entire universe, free from corruption, you'd think it'd be within science.
Well, not really. It is after all a human activity and scientists are not saints. But at least scientists don't choose their profession for the money, so there is perhaps a reason to expect most of them to be more or less honest.
Yea, I’d like to believe that most scientists go into their profession for altruistic reasons, but science has become a business; look at the healthcare industry, for example. Money is a powerful motivator to breed corruption, even in science. This isn’t to say scientists shouldn’t profit from their discoveries and experiments; we need good scientists for progressive inventions, therapies and medicines, for example. Unfortunately though, science isn’t immune to greed and corruption.
One of the current problems is the pressure to publish in order to retain tenure and advance careers. This has become absurd and leads to a deluge of shoddy papers. A BBC radio programme I heard recently said that a significant proportion of the papers that emanate from China, in particular, are bogus, produced by "paper mills" that fake all or parts of the research and write it up for authors who are too busy or desperate. The deluge of papers is such that even the peer review process can be overwhelmed, so that reviewers are not able to fully examine the papers they sign off. It seems to be worst in the social sciences. There are now people who have devoted themselves to searching out dud papers, using sophisticated algorithms to spot data copied from elsewhere and so forth.
We need more gamekeepers to catch the poachers, and in my view a more intelligent, humane and respectful way to judge the value of research, rather than by the sheer volume of stuff produced.
But having said all that, science is ultimately self-correcting. Bad research can't be reproduced and will eventually be smoked out. But it can waste a lot of people's time before that happens.
That’s true about science being “self correcting,” but by then how much damage has already been done? You can’t lead the horse back into the barn. Like all of these retraction articles I’ve posted in my other thread and that C C has found - it’s great that retractions were made, but will anyone care? Once a story is out there, even science-based, it’s difficult to reel it back in and say “hey everyone, that was wrong and this is actually the correct data.” Mistakes happen, but some of these retractions aren’t due to mistakes.
Well no, actually, it does get reeled in and the horse does get back into the barn. It's not like politics, in which a lie can get traction and be impossible to snuff out. The distinguishing feature of science is that it relies on reproducible observation. If someone writes a paper with poor research, and people do further work in that area assuming what is in the paper is correct, they will fail. And then they will find out, soon enough, that the paper they relied on was no good. Research claims are tested, in effect, because they become part of the structure on which further research is built. But it is true that it can take a while and involve people wasting their time trying to do something that doesn't work.
That makes sense. I was comparing this with politics (in terms of how retractions are handled) - you’ve helped me see that the two are not the same. Scientists, like anyone else, are fallible and mistakes can happen, but how is misconduct getting past peer review?
Usually it doesn't, of course, but it depends on what the misconduct is. A reviewer can take apart conclusions that don't follow from the data presented, can criticise poor experimental methodology, or point out that results don't seem consistent with previous work in the field. But if someone chooses to fake actual data points, there's not much a reviewer can do to detect that. He or she can't be expected to repeat the actual research themselves, after all.