Students enrolled in this Pro-Seminar will send their favorite topics (possibly with priorities) to Simon Klüttermann (simon.kluettermann@cs.tu-dortmund.de <a href="mailto:simon.kluettermann@cs.tu-dortmund.de">write me</a>) by <b>08.04.2022</b>. We will assign topics based on your choices by <b>15.04.2022</b>. If you are uncertain about your choice, we will meet shortly before the deadline to answer your questions (probably in the first week of April).
After you are assigned a topic, you will also be assigned a supervisor from our group to help you with any questions you might have. If you have general questions, you can always write to one of us (see Contacts below).
We will not offer a presentation course; please take the one offered by the faculty instead. The course will be held in <b>English</b>.
We will distribute the presentations over 1-3 days in the <b>second half of July</b>. Each presentation should be between 25 and 30 minutes long. You will also have to hand in a written report about your topic before the deadline on <b>16.09.2022</b>. Finally, you should learn to engage critically with any given topic. To practice this, you will be assigned two other students' reports to comment on critically by the end of the semester. You need to participate in every part of this seminar to pass it.
In the Proseminar, you will learn how to familiarize yourself with a topic, research related literature, and answer questions about it. To this end, you need to read the chapters assigned to you and use different sources to verify and extend the statements made there. In addition, by listening to and engaging with the other presentations, you will gain a comprehensive understanding of interpretable machine learning methods.
<h4>Abstract from "Interpretable Machine Learning" by Christoph Molnar</h4>
<p>
Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.
</p>
<p>
After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.
</p>
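<p>
To give a flavor of the model-agnostic methods mentioned above, the sketch below computes permutation feature importance with scikit-learn: a fitted model is treated as a black box, each feature column is shuffled in turn, and the resulting drop in test accuracy indicates how much the model relied on that feature. The library and dataset choices here are illustrative assumptions, not part of the seminar materials.
</p>
<pre>
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular classification task would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit some model and treat it as a black box from here on.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column n_repeats times and measure the mean
# drop in test accuracy; a large drop marks an important feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most important features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, mean_drop in ranked[:5]:
    print(f"{name}: {mean_drop:.3f}")
</pre>
<p>
Note that permutation importance is computed from the model's behavior alone, without inspecting its internals, which is exactly what "model-agnostic" means in the book.
</p>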
<p>
All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
</p>
<p>
This seminar relies on the book <b>Interpretable Machine Learning - A Guide for Making Black Box Models Explainable</b> by Christoph Molnar. The book is freely available at <a href="https://christophm.github.io/interpretable-ml-book/">https://christophm.github.io/interpretable-ml-book/</a>. Please note that the book was written by a single person and thus probably contains some errors, so consulting additional sources is especially important in this seminar.
</p>