I am a full professor at the University of Copenhagen, Department of Computer Science, where I head the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. I am also a co-lead of the Pioneer Centre for Artificial Intelligence. My main research interests are fact checking, low-resource learning and explainability.
Before taking up my faculty position, I was a postdoctoral research associate in Sebastian Riedel's UCL Machine Reading group, mainly investigating machine reading from scientific articles. Prior to that, I was a Research Associate in the Sheffield NLP group, a PhD student in the Computer Science department at the University of Sheffield, a Research Assistant at AIFB, Karlsruhe Institute of Technology, and a Computational Linguistics undergraduate student at the Department of Computational Linguistics, Heidelberg University.
I currently hold a prestigious ERC Starting Grant on 'Explainable and Robust Automatic Fact Checking', as well as the Danish equivalent of that, a DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. I am a member of the Danish Young Academy, a unit under the Royal Danish Academy of Sciences and Letters. I am also the Vice President of SIGDAT which organises the EMNLP conference series, a co-founder of Widening NLP (WiNLP), and maintain the BIG Directory of members of underrepresented groups and supporters in Natural Language Processing.
For more details, see my publications and CV. More information on how to join us in Copenhagen can be found here.
- January 2023: I am still looking to recruit PhD students and postdocs to join my research group CopeNLU from September 2023: 1 postdoc on explainable fact checking in the context of my ERC StG fellowship ExplainYourself and 1 PhD student on Fair and Accountable NLP in the context of a Carlsberg-funded project on analysing employer descriptions in job ads. The application deadline for both positions is 24 May 2023. See the links above for more information on the respective positions, and reach out to me by email if you have further questions.
- May 2023: 4 papers accepted to ACL 2023! The papers make contributions within faithfulness of explanations, measuring intersectional biases, event extraction and few-shot stance detection.
- April 2023: I'm extremely grateful to the Hartmann Foundation for honouring me with the Hartmann Diploma Prize, which is awarded to younger people who are expected to make a valuable contribution to Danish society.
- March 2023: I have joined the Pioneer Centre for Artificial Intelligence as a co-lead of the Speech and Language collaboratory. In this context, my research group CopeNLU has moved to the Østervold Observatory, located in the Botanical Gardens in central Copenhagen.
- February 2023: Excited to be giving an invited talk on Modelling Information Change in Scientific Communication at AAAI 2023!
- January 2023: I am recruiting PhD students and postdocs to join my research group CopeNLU: 1 PhD student & 1 postdoc on explainable fact checking in the context of my ERC StG fellowship ExplainYourself; 1 PhD student in Social Data Science; 1 PhD student on Fair and Accountable NLP in the context of a Carlsberg-funded project on employer images in job ads. There is also the possibility to apply for an open call for PhD and postdoc fellowships via the Danish Data Science Academy. See the links above for more information on the respective positions, and reach out to me by email if you have further questions.
- November 2022: I've been awarded a prestigious ERC Starting Grant on explainable fact checking! Read more in ERC's official press release, as well as in this post on the CopeNLU website.
- October 2022: Popular science article in the Danish press about work on quantifying gender biases towards politicians on Reddit and its relevance to the 2022 Denmark general election now online.
- October 2022: I've been promoted to full professor, making me the youngest ever female full professor in Denmark!
- October 2022: We're looking to recruit a PhD student on explainable machine learning, to be supervised by my colleague Christina Lioma and me. More information and application link here.
- October 2022: Two papers on scientific document understanding accepted to EMNLP 2022!
- October 2022: Happy to share that our paper Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing was accepted to appear in the MDPI Journal "Information".
- October 2022: Popular science article about fact checking and the impact of false information on democracy now online.
- September 2022: Our paper "TempEL: Linking Dynamically Evolving and Newly Emerging Entities" was accepted to the NeurIPS Datasets and Benchmarks Track!
- September 2022: A brief summary of my habilitation thesis "Towards Explainable Fact Checking" is now published in the Springer journal "KI - Künstliche Intelligenz".
- August 2022: 2 survey papers accepted to ACM Computing Surveys (CSUR), on question answering datasets, and on contrastive learning in NLP!
- August 2022: 2 papers on probing QA models accepted to COLING 2022!
- August 2022: Our paper on studying gender biases towards politicians on Reddit was accepted to PLoS ONE!
- July 2022: I'm serving as a programme co-chair for EACL 2023.