<!-- Begin Inhalt -->
<!--AG <div id="inhalt" style="text-align: left;" class="inhalt_breit"> -->
<div id="inhalt" class="inhalt_breit">
<div class="wrapper">
<a name="PageTop"></a>

<h2>Proseminar Wintersemester 2021/2022</h2>
<h1>Interpretable Machine Learning</h1>
<h3>Prof. Dr. Emmanuel Müller - Informatik LS9</h3>

<br>

<h3>Procedure</h3>
<p>
Students enrolled in this proseminar should send their favorite topics (possibly with priorities) to simon.kluettermann(at)cs.tu-dortmund.de by the <b>08.04.2022</b>. We will assign topics based on your choices by the <b>15.04.2022</b>. If you are uncertain about which topic to choose, we will meet once beforehand to answer your questions. The exact date depends on when we can get a room, but it will probably be in the first week of April.
</p>
<p>
After you are assigned a topic, you will also be assigned one of us as a supervisor to help you with any questions you might have. For more general questions, you can always write to chiara.balestra(at)cs.tu-dortmund.de or to simon.kluettermann(at)cs.tu-dortmund.de.
</p>
<p>
We will not offer a separate presentation course; please take the one offered by the faculty. The seminar will be held in English.
</p>
<p>
We will distribute the presentations over 1-3 days in the <b>last week of July</b>. Each presentation should be between 25 and 30 minutes long.
You will also have to hand in a written report on your topic by mid-September (Friday, <b>16.09.2022</b>).
Finally, it is important to us that you learn to engage critically with any given topic. To practice this, you will be given the reports of two other students to critique by the end of the semester.
You must participate in every part of this seminar in order to pass.
</p>

<h3>Goals and Criteria for a successful seminar</h3>
<p>
In this proseminar you will learn how to familiarize yourself with a topic, research related literature, and answer questions about it. For this it is important not to rely solely on the chapter assigned to you, but to use additional sources to verify all statements made.
Also, by listening to and engaging with the other presentations, you will gain a broad understanding of interpretable machine learning methods.
</p>

<br>

<h3>Content</h3>
<h4>Abstract from "Interpretable Machine Learning" by Christoph Molnar</h4>
<p>
Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.
</p>
<p>
After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.
</p>
<p>
All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
</p>

<p>
<b>Literature</b>
</p>
<p>
This seminar is based on the book "<b>Interpretable Machine Learning</b> - A Guide for Making Black Box Models Explainable" by Christoph Molnar. The book is freely available at <a href="https://christophm.github.io/interpretable-ml-book/">https://christophm.github.io/interpretable-ml-book/</a>.
Please note that this book is written by a single author and thus probably contains some errors, so finding alternative sources is extremely important here.
</p>
<!-- this is a comment tag, kept only so I don't have to look up the table syntax later

A preliminary choice of topics (as this still depends on the number of students) follows:
</p>
<table cellspacing="0" border="1">
<tbody><tr>
<td>Topic</td>
<td>Chapter</td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/rueping2005d.pdf">Interpreting Classifiers by Multiple Views</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/chen2018c.pdf">Learning to Explain: An Information-Theoretic Perspective on Model Interpretation</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/guidotti2019a.pdf">A Survey Of Methods For Explaining Black Box Models</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/weiss2018a.pdf">Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/shrikumar2017a.pdf">Learning Important Features Through Propagating Activation Differences</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/montavon2018a.pdf">Methods for Interpreting and Understanding Deep Neural Networks</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Safety and Verification</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Testing and Adversarial Attack</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Interpretability</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://arxiv.org/pdf/1903.12261.pdf">Benchmarking neural network robustness to common corruptions and perturbations</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td>"Unsichtbare Frauen: Wie eine von Daten beherrschte Welt die Hälfte der Bevölkerung ignoriert", Caroline Criado-Perez. The book is available in the chair's library and in the university library, but not online; only the introductory chapter plus one further chapter of your choice are to be presented.</td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/caliskan2017a.pdf">Semantics derived automatically from language corpora contain human-like biases</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/floridi2018a.pdf">AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/saha20a.pdf">Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://arxiv.org/pdf/1607.02533.pdf">Adversarial examples in the physical world</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/calders2010a.pdf">Three naive Bayes approaches for discrimination-free classification</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/garcia2020a.pdf">Fair-by-design matching</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/zehlike2017a.pdf">FA*IR: A Fair Top-k Ranking Algorithm</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/zafar2017a.pdf">Fairness Beyond Disparate Treatment Disparate Impact: Learning Classification without Disparate Mistreatment</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/biega2018a.pdf">Equity of Attention: Amortizing Individual Fairness in Rankings</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/joachims2017a.pdf">Unbiased Learning-to-Rank with Biased Feedback</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/yadaf2019a.pdf">Fair Learning-to-Rank from Implicit Feedback</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/yurochkin2020a.pdf">Training Individually Fair ML Models With Sensitive Subspace Robustness</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/munn2020a.pdf">Angry by design: toxic communication and technical architectures</a></td>
</tr>
</tbody></table>
-->

</div> <!-- id="wrapper" -->
</div> <!-- id="inhalt" -->

<!-- End Inhalt -->