Tassilo J. Klein, Ph.D.

Principal Research Scientist & Research Manager

📍 SAP AI CTO Office 💡 LLMs, NLP, Structured Data AI 🌐 LinkedIn · Google Scholar

🧑‍💻 About Me

I am a Principal Research Scientist and Research Manager in the SAP AI CTO Office, working on Natural Language Processing (NLP), large language models (LLMs), and machine learning for enterprise structured data.
My work spans from advancing foundational AI techniques to delivering enterprise-ready systems — including knowledge-augmented LLMs, privacy-preserving AI, and intelligent agents for complex workflows.

Previously, I was a postdoctoral research fellow at Harvard Medical School and Brigham & Women’s Hospital in Boston, and a postdoctoral research associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.
I earned my Ph.D. from the Technical University of Munich (TUM) at the intersection of medical imaging and machine learning.

Member of the European Laboratory for Learning and Intelligent Systems (ELLIS).


🎯 Research Focus

Large language models (LLMs) and NLP; representation learning for structured/tabular data; few-shot & self-supervised learning; multi-modal AI; intelligent agents


📄 Selected Publications & Projects

[2025.05] New preprint available on foundation models for tabular data in enterprises

arXiv

[2025.05] Paper accepted at ACL 2025 on Contrastive Perplexity for Controlled Generation: An Application in Detoxifying Large Language Models

arXiv

[2024.10] Two papers accepted at the NeurIPS 2024 Table Representation Learning Workshop

[2023.05] Paper accepted at ACL 2023 on low-shot contrastive learning of sentence representations

arXiv View on GitHub Download Model

[2022.02] Paper accepted at ACL 2022 on self-supervised sentence representation learning

arXiv View on GitHub

[2021.08] Paper at EMNLP 2021 on Contrastive Language Model Refinement for Commonsense Reasoning

arXiv View on GitHub video

[2021.08] Paper at EMNLP 2021 on Contrastive Self-Supervised Learning for Commonsense Reasoning

arXiv View on GitHub

[2021.04] Co-organized workshop on Self-Supervised Learning for Reasoning and Perception accepted at ICML 2021

[2021.02] Paper accepted at IPMI 2021 on self-supervised representation learning for medical imaging (acceptance rate 30.0%)

arXiv

[2020.09] Presentation on commonsense reasoning in AI

video Medium

[2020.04] Paper accepted at ACL 2020 on contrastive self-supervised commonsense reasoning (acceptance rate 17.6%)

arXiv View on GitHub video

[2020.02] Paper accepted at NeuroImage

arXiv

[2019.10.20] Paper on Multi-Domain Learning accepted at ICCV 2019 (acceptance rate 25.0%)

arXiv

[2019.05.14] Short paper on commonsense reasoning accepted at ACL 2019 (acceptance rate 18.2%)

arXiv View on GitHub Open Notebook

[2019.02.25] Paper accepted at CVPR 2019 (acceptance rate 25.2%)

arXiv View on GitHub

[2017.02.01] Paper accepted at NeuroImage

arXiv View on GitHub


🤝 Community & Mentorship


Last updated — 14 August 2025