I am an associate professor at the University of Copenhagen, Department of Computer Science, where I head the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. I also co-head the research team at CheckStep Ltd, a content moderation start-up. My main research interests are fact checking, low-resource learning and explainability.
Before starting my faculty position, I was a postdoctoral research associate in Sebastian Riedel's UCL Machine Reading group, mainly investigating machine reading from scientific articles. Prior to that, I was a Research Associate in the Sheffield NLP group, a PhD student in the Computer Science department at the University of Sheffield, a Research Assistant at AIFB, Karlsruhe Institute of Technology, and a Computational Linguistics undergraduate student at the Department of Computational Linguistics, Heidelberg University.
I currently hold a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. I am president of the ACL Special Interest Group on Representation Learning (SIGREP), co-founder of Widening NLP (WiNLP), and maintain the BIG Directory of members of underrepresented groups and supporters in Natural Language Processing.
- May 2021: 2 papers on interpretability and scientific document understanding accepted to ACL 2021!
- April 2021: Paper on multi-hop fact checking of political claims accepted to IJCAI 2021!
- April 2021: We're calling for submissions to a special issue of the Cambridge University Press Journal of Natural Language Engineering (NLE) on NLP Approaches to Offensive Content Online. Submissions are due on 31 August 2021.
- April 2021: I'm still looking to hire a PhD student on explainable AI. Feel free to reach out informally before applying if you have any questions about the position.
- April 2021: I wrote a new blog post, where I examine the relationship between notability, research impact, gender & institutional affiliations of NLP researchers.
- February 2021: We're calling for papers and talk proposals for the 2021 Conference on Truth and Trust Online. Submissions are due on 30 July (papers) and 13 August (talk proposals).
- January 2021: Slides and lab exercises for our ALPS tutorial on Explainability for NLP are now available on GitHub.
- January 2021: Paper on typological blinding of cross-lingual models accepted to EACL 2021!
- January 2021: I have been appointed as head of a newly created Natural Language Processing section at the Department of Computer Science, University of Copenhagen.
- November 2020: I feel truly honoured to have received a DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media', which will allow me to do blue-skies research and expand my research group CopeNLU. Want to join the team? Read more here.
- September 2020: I joined CheckStep Ltd, a content moderation start-up, where I co-lead the research team.
- September 2020: 7 papers accepted to EMNLP 2020! Topics include fact checking, explainability, domain adaptation, and more.
- May 2020: Paper on explaining model transfer accepted to UAI 2020!
- April 2020: 2 papers accepted to ACL 2020, on explainable fact checking and on script conversion.