Welcome!

I am a professor of machine learning at Humboldt-Universität zu Berlin. My research focuses on natural language processing (NLP), i.e., methods that enable machines to understand human language. This spans research topics such as transfer learning, few-shot learning and semantic parsing, as well as application areas in large-scale text analytics. My research is operationalized in the form of the open-source NLP framework Flair, which allows anyone to use state-of-the-art NLP methods in their research or applications. My group and I, together with the open-source community, maintain and develop the Flair framework.

If you'd like to know more, check out my publications, the Flair NLP project, the Universal Proposition Banks, or contact me.


Latest News


Latest Publications

Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions. Max Dallabetta, Conrad Dobberstein, Adrian Breiding and Alan Akbik. ArXiv, 2024. [pdf]

OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs. Patrick Haller, Ansar Aynetdinov and Alan Akbik. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics: System Demonstrations, NAACL 2024. [pdf]

BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models. Jacek Wiland, Max Ploner and Alan Akbik. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL 2024.

HunFlair2 in a cross-corpus evaluation of named entity recognition and normalization tools. Mario Sänger, Samuele Garda, Xing David Wang, Leon Weber-Genzel, Pia Droop, Benedikt Fuchs, Alan Akbik and Ulf Leser. ArXiv, 2024. [pdf]

PECC: Problem Extraction and Coding Challenges. Patrick Haller, Jonas Golde and Alan Akbik. 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024.

SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity. Ansar Aynetdinov and Alan Akbik. ArXiv, 2024. [pdf]

Large-Scale Label Interpretation Learning for Few-Shot Named Entity Recognition. Jonas Golde, Felix Hamborg and Alan Akbik. 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024. [pdf]

Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune? Max Ploner and Alan Akbik. 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024. [pdf]

more publications


Main Research

Flair NLP. My main current line of research focuses on new neural approaches to core NLP tasks. In particular, we leverage character-level neural language modeling to learn latent representations that encode "general linguistic and world knowledge". These representations are then used as word embeddings to set new state-of-the-art scores for classic NLP tasks such as multilingual named entity recognition and part-of-speech tagging. Check out the project overview page for more details.
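
To make this concrete, here is a minimal usage sketch, assuming a standard Flair installation; identifiers such as "glove", "news-forward" and "ner" refer to pre-trained models distributed with the framework. It stacks the character-level language model representations with classic word embeddings and applies a pre-trained named entity tagger:

    # Minimal sketch: contextual string embeddings used as word embeddings,
    # plus a pre-trained NER tagger (model identifiers are Flair's standard ones).
    from flair.data import Sentence
    from flair.embeddings import FlairEmbeddings, StackedEmbeddings, WordEmbeddings
    from flair.models import SequenceTagger

    # stack classic word embeddings with forward/backward character-LM embeddings
    embeddings = StackedEmbeddings([
        WordEmbeddings("glove"),
        FlairEmbeddings("news-forward"),
        FlairEmbeddings("news-backward"),
    ])

    # embed an example sentence; each token now carries a contextual vector
    sentence = Sentence("George Washington went to Washington.")
    embeddings.embed(sentence)

    # tag the same sentence with a pre-trained named entity recognition model
    tagger = SequenceTagger.load("ner")
    tagger.predict(sentence)
    for entity in sentence.get_spans("ner"):
        print(entity)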

Universal Proposition Banks. In this line of research, I am investigating methods for semantically parsing text data in a wide range of languages, such as Arabic, Chinese, German, Hindi, Russian and many others. In order to train such parsers, we are automatically generating Proposition Bank-style resources from parallel corpora. We are making all resources publicly available, so check out the project overview page for more details and the generated Proposition Banks.
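
To illustrate what such a projected resource contains, here is a purely illustrative, made-up example (not taken from the released data): a Proposition Bank-style annotation links a target-language predicate to an English PropBank roleset and labels its arguments with that roleset's numbered roles.

    # Purely illustrative: a Proposition Bank-style frame for the German sentence
    # "Sie kaufte ein Buch" ("She bought a book"), with the predicate linked to
    # the English PropBank roleset buy.01, as projected from a parallel corpus.
    frame = {
        "sentence": "Sie kaufte ein Buch",
        "predicate": "kaufte",
        "roleset": "buy.01",       # English PropBank frame for "buy/purchase"
        "arguments": {
            "A0": "Sie",           # the buyer
            "A1": "ein Buch",      # the thing bought
        },
    }
    print(frame)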


Alan Akbik

Professor of Machine Learning
Humboldt-Universität zu Berlin
alan [dot] akbik [ät] hu-berlin [dot] de