I am an associate professor at the Department of Computer Science, University of Copenhagen, where I head the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. I also co-head the research team at CheckStep Ltd, a content moderation start-up. My main research interests are fact checking, low-resource learning, and explainability.

Before starting my faculty position, I was a postdoctoral research associate in Sebastian Riedel's UCL Machine Reading group, mainly investigating machine reading from scientific articles. Prior to that, I was a Research Associate in the Sheffield NLP group, a PhD student in the Computer Science department at the University of Sheffield, a Research Assistant at AIFB, Karlsruhe Institute of Technology, and a Computational Linguistics undergraduate student at the Department of Computational Linguistics, Heidelberg University.

I currently hold a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. I am president of the ACL Special Interest Group on Representation Learning (SIGREP), co-founder of Widening NLP (WiNLP), and maintainer of the BIG Directory of members of underrepresented groups and supporters in Natural Language Processing.

For more details, see my publications and CV. More information on how to join us in Copenhagen can be found here.


  • April 2021: I'm still looking to hire a PhD student on explainable AI. Feel free to reach out informally before applying.
  • April 2021: I wrote a new blog post, where I examine the relationship between notability, research impact, gender & institutional affiliations of NLP researchers.
  • January 2021: Slides and lab exercises for our ALPS tutorial on Explainability for NLP are now available on GitHub.
  • January 2021: Paper on typological blinding of cross-lingual models accepted to EACL 2021!
  • January 2021: I have been appointed as head of a newly created Natural Language Processing section at the Department of Computer Science, University of Copenhagen.
  • November 2020: I feel truly honoured to have received a DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media', which will allow me to do blue-skies research and expand my research group CopeNLU. Want to join the team? Read more here.
  • September 2020: I joined CheckStep Ltd, a content moderation start-up, where I co-lead the research team.
  • September 2020: 7 papers accepted to EMNLP 2020! Topics include fact checking, explainability, domain adaptation, and more.
  • May 2020: Paper on explaining model transfer accepted to UAI 2020!
  • April 2020: 2 papers accepted to ACL 2020, on explainable fact checking and on script conversion.