Welcome!

I am a professor of machine learning at Humboldt-Universität zu Berlin. My research focuses on natural language processing (NLP), i.e., methods that enable machines to understand human language. This spans research topics such as transfer learning, few-shot learning and semantic parsing, as well as application areas in large-scale text analytics. My research is operationalized in the form of the open source NLP framework Flair, which allows anyone to use state-of-the-art NLP methods in their research or applications. Together with my group and the open source community, we maintain and develop the Flair framework.

If you'd like to know more, check out my publications, the Flair NLP project, the Universal Proposition Banks, or contact me.


Latest News

  • News (17.08.2020): Flair v0.6 released: a major biomedical NLP upgrade that adds state-of-the-art models for biomedical NER!
  • News (24.05.2020): Flair v0.5 released, with tons of new models, embeddings and datasets, support for fine-tuning transformers, and greatly improved sentiment models!

Latest Publications

FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP. Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter and Roland Vollgraf. Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL 2019. [pdf]

Pooled Contextualized Embeddings for Named Entity Recognition. Alan Akbik, Tanja Bergmann and Roland Vollgraf. Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL 2019. [pdf]

more publications


Main Research

Flair NLP. My main current line of research focuses on new neural approaches to core NLP tasks. In particular, we present an approach that leverages character-level neural language modeling to learn latent representations that encode "general linguistic and world knowledge". These representations are then used as word embeddings to set new state-of-the-art scores for classic NLP tasks such as multilingual named entity recognition and part-of-speech tagging. Check out the project overview page for more details.
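The core idea of these contextual string embeddings can be illustrated with a toy sketch: a character-level model is run over the entire sentence, and the hidden state at each word's last character serves as that word's embedding, so the same word receives different embeddings in different contexts. The sketch below uses a tiny recurrent net with random, untrained weights purely for illustration; it is not Flair's actual trained language model.

```python
import math
import random

random.seed(0)

HIDDEN = 8    # embedding dimensionality (toy choice, not Flair's)
CHARS = 128   # ASCII character inventory

# Random per-character input vectors and recurrent mixing weights.
# These are illustrative stand-ins for a trained character-level LM.
char_vecs = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(CHARS)]
W = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN)] for _ in range(HIDDEN)]

def step(h, ch):
    """One recurrent step: mix previous hidden state with the character input."""
    x = char_vecs[ord(ch) % CHARS]
    return [math.tanh(sum(W[i][j] * h[j] for j in range(HIDDEN)) + x[i])
            for i in range(HIDDEN)]

def word_embeddings(sentence):
    """Run the character model over the whole sentence and return one
    embedding per word: the hidden state at the word's last character."""
    h = [0.0] * HIDDEN
    embs = []
    for pos, ch in enumerate(sentence):
        h = step(h, ch)
        at_word_end = ch != ' ' and (pos + 1 == len(sentence)
                                     or sentence[pos + 1] == ' ')
        if at_word_end:
            embs.append(h)
    return embs

# The word "bank" appears in two different sentences; because the model has
# consumed different character prefixes, its two embeddings differ.
e1 = word_embeddings("the bank of the river")  # "bank" is word index 1
e2 = word_embeddings("money in the bank")      # "bank" is word index 3
```

Because the hidden state summarizes everything read so far, the embedding of "bank" reflects its surrounding context, which is what lets downstream taggers disambiguate word senses.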

Universal Proposition Banks. In this line of research, I am investigating methods for semantically parsing text data in a wide range of languages, such as Arabic, Chinese, German, Hindi, Russian and many others. In order to train such parsers, we are automatically generating Proposition Bank-style resources from parallel corpora. We are making all resources publicly available, so check out the project overview page for more details and the generated Proposition Banks.
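The generation step can be pictured as annotation projection: semantic role labels on the English side of a parallel corpus are transferred to the target-language side along word alignments. The sketch below is a deliberately simplified illustration of that transfer; the label scheme, sentences, and one-to-one alignment are toy data, not the actual pipeline.

```python
def project_labels(src_labels, alignment, tgt_len):
    """Project per-token labels from a source sentence onto a target
    sentence via word alignments (list of (src_idx, tgt_idx) pairs)."""
    tgt_labels = ["O"] * tgt_len
    for s, t in alignment:
        if src_labels[s] != "O":
            tgt_labels[t] = src_labels[s]
    return tgt_labels

# English: "He gave her a book", toy PropBank-style roles for the verb "gave"
src = ["A0", "V:give.01", "A2", "O", "A1"]
# German: "Er gab ihr ein Buch", with a toy one-to-one alignment
align = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
projected = project_labels(src, align, 5)
```

In practice the alignment is rarely one-to-one, so the real pipeline additionally has to filter noisy alignments and resolve conflicts, but the core transfer step follows this pattern.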


Alan Akbik

Professor of Machine Learning
Humboldt-Universität zu Berlin
alan [dot] akbik [ät] hu-berlin [dot] de