Today, researchers publish more than ever before. To secure a position in a top department or to achieve tenure, new assistant professors must already publish twice as much as their peers did in the early 1990s. Nobel laureate Peter Higgs said that he would not be deemed “productive” enough for academia in today’s world. Yet merely publishing more papers does not suffice: the number of citations those papers receive is the true currency of science.
Rudolf Weigl, the Polish biologist who invented the first effective vaccine against typhus, described the practice of publishing many papers as ‘duck shit’: just as ducks leave a trail of droppings as they wander about a yard, scientists hastily publish articles with partial results, the products of underdeveloped thought. This is one of the unfortunate outcomes of the evaluation game in today’s science, in which researchers attempt to follow various evaluation rules and meet metrics-based expectations.
Counting scholarly publications has been practiced for two centuries. In Russia from the 1830s, the number of papers professors published each year was used to determine their salaries. The Soviet Union and various socialist countries developed national research evaluation systems before the Western world did. The effects of those practices are still felt today.
In my talk, I will use the concept of the ‘evaluation game’, developed in my recent book (The Evaluation Game: How Publication Metrics Shape Scholarly Communication, CUP 2023), to show how it can enrich our understanding of the ways in which researchers, institutions, and other stakeholders respond to the pressures generated by metrics and research evaluation exercises.
I will offer a fresh take on the origins and effects of metrics in academia, as well as suggest ways to improve research evaluation. I will show that the phenomenon of predatory publishing has not only geopolitical aspects, but also that publishing in so-called predatory journals might be perceived at the (semi)periphery as a justified and rational way to act in accordance with institutional loyalty.