This study introduces a novel theoretical framework for understanding peer review and academic evaluation. Drawing on the work of Michèle Lamont, Randall Collins, and Pierre Bourdieu, the research develops a 'logics of judgment' theory to explain how different evaluative cultures shape academic assessments. Three key logics are distinguished:
Logic of Truth – Evaluation based on methodological, theoretical, and logical correctness.
Logic of the Scholarly Community Game (game-S) – Consideration of disciplinary conventions, scientific biographies, and markers of prestige.
Logic of the Evaluation Game (game-E) – Alignment with national and international metricization policies and bibliometric indicators.
By analyzing 195 Polish habilitation proceedings and 474 peer reviews using a mixed-methods approach, the study explores how these logics function in academic decision-making, particularly in cases with conflicting assessments. The findings contribute to broader discussions on the transformation of evaluative cultures in semi-peripheral academic systems under pressures of internationalization and economization. The presentation will address the following research questions:
How do different academic systems balance traditional disciplinary norms with emerging metric-driven evaluation criteria?
To what extent does the increasing reliance on bibliometric indicators shape peer review cultures?
How do scholars navigate conflicting evaluation logics in promotion decisions?
What lessons can be drawn from semi-peripheral academic systems for understanding global trends in research assessment?