I am an associate professor at the University of Copenhagen, Department of Computer Science, where I head the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. I also co-head the research team at CheckStep Ltd, a content moderation start-up. My main research interests are fact checking, low-resource learning and explainability.
Before starting my faculty position, I was a postdoctoral research associate in Sebastian Riedel's Machine Reading group at UCL, mainly investigating machine reading from scientific articles. Prior to that, I was a research associate in the Sheffield NLP group, a PhD student in the Department of Computer Science at the University of Sheffield, a research assistant at AIFB, Karlsruhe Institute of Technology, and an undergraduate student in Computational Linguistics at Heidelberg University.
I currently hold a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. I am president of the ACL Special Interest Group on Representation Learning (SIGREP), co-founder of Widening NLP (WiNLP), and maintain the BIG Directory of members of underrepresented groups and supporters in Natural Language Processing.
- December 2021: 2 papers accepted to AAAI 2022, on diagnostics-guided explanation generation and few-shot cross-lingual stance detection!
- September 2021: I've passed the defence of my higher doctoral thesis (doktordisputats, habilitation) titled 'Towards Explainable Fact Checking' and have been awarded the title of doctor scientiarum (dr.scient.)!
- September 2021: My DFF Sapere Aude Research Leader project on 'Learning to Explain Attitudes on Social Media' is finally kicking off: four new members have joined CopeNLU today.
- August 2021: 3 papers on stance detection, exaggeration detection, and counterfactually augmented data accepted to EMNLP 2021!
- May 2021: 2 papers on interpretability and scientific document understanding accepted to ACL 2021!
- April 2021: Paper on multi-hop fact checking of political claims accepted to IJCAI 2021!
- April 2021: We're calling for submissions to a special issue of the Cambridge University Press Journal of Natural Language Engineering (NLE) on NLP Approaches to Offensive Content Online. Submissions are due on 31 August 2021.
- April 2021: I'm still looking to hire a PhD student in explainable AI. Feel free to reach out informally before applying if you have any questions about the position.
- April 2021: I wrote a new blog post, where I examine the relationship between notability, research impact, gender & institutional affiliations of NLP researchers.
- February 2021: We're calling for papers and talk proposals for the 2021 Conference for Truth and Trust Online. Submissions are due on 30 July (papers) and 13 August (talk proposals).
- January 2021: Slides and lab exercises for our ALPS tutorial on Explainability for NLP are now available on GitHub.
- January 2021: Paper on typological blinding of cross-lingual models accepted to EACL 2021!
- January 2021: I have been appointed as head of a newly created Natural Language Processing section at the Department of Computer Science, University of Copenhagen.