On 28th November OxTalks will move to the new Halo platform and will become 'Oxford Events' (full details are available on the Staff Gateway).
There will be an OxTalks freeze beginning on Friday 14th November. This means you will need to publish any events you already have planned to OxTalks by then, as there will be no facility to publish or edit events during the fortnight that follows. During the freeze, all events will be migrated to the new Oxford Events site. It will still be possible to view events on OxTalks during this time.
If you have any questions, please contact halo@digital.ox.ac.uk.
Zoom: zoom.us/j/99057170141?pwd=H6jZR72T3cJPLOU8iq5jSWNxz8YbBV.1
Meeting ID: 990 5717 0141; Passcode: 421752
Abstract: Generative large language models (LLMs) are increasingly used in the social sciences for data generation and text annotation, yet concerns remain about their biases and performance. This talk addresses these issues in two parts. First, we examine political biases in LLM output by analyzing responses to sensitive political questions across languages spoken in politically divergent societies. Focusing on OpenAI’s GPT-3.5 and GPT-4, we find that model outputs are more conservative in languages associated with conservative societies, and that GPT-4 tends to produce more left-leaning responses than GPT-3.5. Second, we evaluate LLM performance on complex annotation tasks using specialized political science texts. We propose a memory-based annotation approach, where the model retains its own prior classifications. This method significantly outperforms few-shot chain-of-thought prompting, suggesting a new direction for improving LLM-based annotation tasks.
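To make the memory-based idea concrete, the sketch below shows one plausible reading of it: the model's own prior (text, label) decisions are fed back into the prompt for each new item, in contrast to few-shot prompting with a fixed set of human-written examples. Everything here is illustrative rather than the speaker's actual implementation: the call_llm stub, the prompt wording, the label set, and the memory_size cutoff are all assumptions.

```python
# A minimal sketch of memory-based annotation, assuming the "memory" is simply
# the model's own prior classifications prepended to each new prompt.
# call_llm, the labels, and memory_size are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion call (e.g. an OpenAI client).
    Replace with a real API call; here it returns a dummy label."""
    return "neutral"

def annotate_with_memory(texts, labels=("left", "right", "neutral"), memory_size=20):
    memory = []   # the model's own prior (text, label) decisions
    results = []
    for text in texts:
        # Show the model its most recent classifications as context.
        prior = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in memory[-memory_size:])
        prompt = (
            f"Classify each text as one of: {', '.join(labels)}.\n"
            f"Your previous classifications:\n{prior or '(none yet)'}\n\n"
            f"Text: {text}\nLabel:"
        )
        label = call_llm(prompt).strip().lower()
        memory.append((text, label))   # retain the model's own output
        results.append(label)
    return results

if __name__ == "__main__":
    print(annotate_with_memory([
        "The policy expands welfare spending.",
        "The bill cuts corporate taxes.",
    ]))
```

Under this reading, the design choice being tested is self-consistency: because the model conditions on its own earlier labels, it can keep borderline cases coherent across a long annotation run, which fixed few-shot chain-of-thought examples cannot do.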