updated chiaras modifications

This commit is contained in:
Simon Klüttermann 2022-01-05 11:09:46 +01:00
parent b4dfda2dfc
commit 1229689e36
1 changed file with 13 additions and 119 deletions


@@ -15,25 +15,18 @@
 <h3>Procedure</h3>
 <p>
-Students who are enrolled in this Pro-Seminar will send their favorite topics (possibly with priorities) to simon.kluettermann(at)cs.tu-dortmund.de until the <b>08.04.2022</b>. We will assign topic based on your choices until the <b>15.04.2022</b>. If you are uncertain about which topic to choose, we will meet shortly before once and answer your questions. The exact date depends on when we can get a room, but will probably be in the first week of april.
+Students enrolled in this Pro-Seminar will send their favorite topics (possibly with priorities) to Simon Klüttermann (<a href="mailto:simon.kluettermann@cs.tu-dortmund.de">simon.kluettermann@cs.tu-dortmund.de</a>) by <b>08.04.2022</b>. We will assign topics based on your choices by <b>15.04.2022</b>. If you are uncertain about your choice, we will meet shortly before the deadline to answer your questions (probably in the first week of April).
 </p>
 <p>
-After you are assigned a topic, you will also be assigned a supervisor from us to help you with questions you might have. If you have more general questions you can also always write to chiara.balestra(at)cs.tu-dortmund.de or to simon.kluettermann(at)cs.tu-dortmund.de.
+After you are assigned a topic, you will also be assigned a supervisor from us to help you with any questions you might have. If you have general questions, you can also always write to one of us (see Contacts below).
+We will not provide a presentation course; you must take the one offered by the faculty. The seminar will be held in <b>English</b>.
 </p>
 <p>
-We will not have a special presentation course, you will have to take the one offered by the faculty. Also we will hold the course in english.
+We will distribute the presentations over 1-3 days in the <b>second half of July</b>. Each presentation should be between 25 and 30 minutes long. You will also have to hand in a written report about your topic before the deadline on <b>16.09.2022</b>. Finally, you shall learn to engage critically with any given topic; to practice this, you will be assigned two other students' reports to comment on critically by the end of the semester. You need to participate in every part of this seminar to pass it.
 </p>
-<p>
-We will distribute the Presentations over 1-3 days in the <b>last week of Juli</b>. Every Presentation should be between 25 and 30 minutes long.
-Finally you will have to hand in a written report about your topic until the mid of September (Friday the <b>16.09.2022</b>)
-Finally, it is important to us, that you learn to be critical with any given topic. To train this, you will be given two reports of other students to critize until the end of the semester.
-You need to participate in every part of this seminar to be able to pass it.
-</p>
 <h3>Goals and Criteria for a successful seminar</h3>
 <p>
-In the Proseminar you shall learn how to work yourself into a topic, research related literature and answer questions to this topic. For this it is important to not rely on the chapter given to you and use different sources to verify all statements made.
-Also, by listening and engaging with the other presentations, you will get a wide understanding of interpretable machine learning methods.
+In the Proseminar, you will learn how to work your way into a topic, research related literature, and answer questions about it. To this end, you need to go beyond the chapters assigned to you and use different sources to verify and extend the statements made. By listening to and engaging with the other presentations, you will also gain a comprehensive understanding of interpretable machine learning methods.
 </p>
 <br>
@@ -54,116 +47,17 @@ All interpretation methods are explained in depth and discussed critically. How
 <b> Literature</b>
 </p>
 <p>
-This Seminar is based on the book "<b>Interpretable Machine Learning</b> - A Guide for Making Black Box Models Explainable" by Christoph Molnar. This book is available for free here <a href="https://christophm.github.io/interpretable-ml-book/">https://christophm.github.io/interpretable-ml-book/</a>.
-Please note that, as this book is only written by a single person and thus probably contains some errors. So finding alternative sources is extremely important here.
+This seminar relies on the book <b>Interpretable Machine Learning - A Guide for Making Black Box Models Explainable</b> by Christoph Molnar, which is available for free at <a href="https://christophm.github.io/interpretable-ml-book/">https://christophm.github.io/interpretable-ml-book/</a>. Please note that this book was written by a single person and thus probably contains some errors, so finding alternative sources is all the more important in this seminar.
 </p>
-<!-- this is a comment tag, I only keep this to not have to look for a table syntax later
-A prelimenary choice of Topics (as this still depends on the number of students) follows:
+<p>
+<h3>Contacts <a href="mailto:simon.kluettermann@cs.tu-dortmund.de?cc=chiara.balestra@cs.tu-dortmund.de">(Write us)</a></h3>
+</p>
+<p>
+chiara.balestra@cs.tu-dortmund.de
+</p><p>
+simon.kluettermann@cs.tu-dortmund.de
+</p>
<table cellspacing="0" border="1">
<tbody><tr>
<td>Topic</td>
<td>Chapter</td>
</tr>
<tr>
<td>Explainability</td>
<td> <a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/rueping2005d.pdf">Interpreting Classifiers by Multiple Views</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/chen2018c.pdf">Learning to Explain: An Information-Theoretic Perspective on Model Interpretation</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/guidotti2019a.pdf">A Survey Of Methods For Explaining Black Box Models</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/weiss2018a.pdf">Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/shrikumar2017a.pdf">Learning Important Features Through Propagating Activation Differences</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/montavon2018a.pdf">Methods for Interpreting and Understanding Deep Neural Networks</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Safety and Verification</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Testing and Adversarial Attack</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/huang2020a.pdf">A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability -- Interpretability</a></td>
</tr>
<tr>
<td>Explainability</td>
<td><a href="https://arxiv.org/pdf/1903.12261.pdf">Benchmarking neural network robustness to common corruptions and perturbations</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td>"Unsichtbare Frauen: Wie eine von Daten beherrschte Welt die Hälfte der Bevölkerung ignoriert", Caroline Criado-Perez. Das Buch ist in der Lehrstuhlbibliothek und auch in der Uni-Bibliothek, aber nicht online verfügbar; vorzutragen ist nur das Einleitungskapitel und ein weiteres Kapitel der eigenen Wahl. </td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/caliskan2017a.pdf">Semantics derived automatically from language corpora contain human-like biases</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/floridi2018a.pdf">AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/saha20a.pdf">Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics</a></td>
</tr>
<tr>
<td>Ethics and Bias</td>
<td><a href="https://arxiv.org/pdf/1607.02533.pdf">Adversarial examples in the physical world</a></td>
</tr>
<tr>
<td>Fairness</td>
<td> <a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/calders2010a.pdf">Three naive Bayes approaches for discrimination-free classification</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/garcia2020a.pdf">Fair-by-design matching</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/zehlike2017a.pdf">FA*IR: A Fair Top-k Ranking Algorithm</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/zafar2017a.pdf">Fairness Beyond Disparate Treatment Disparate Impact: Learning Classification without Disparate Mistreatment</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/biega2018a.pdf">Equity of Attention: Amortizing Individual Fairness in Rankings</a></td>
</tr>
<tr>
<td>Fairness</td>
<td> <a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/joachims2017a.pdf">Unbiased Learning-to-Rank with Biased Feedback</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/yadaf2019a.pdf">Fair Learning-to-Rank from Implicit Feedback</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/yurochkin2020a.pdf">Training Individually Fair ML Models With Sensitive Subspace Robustness</a></td>
</tr>
<tr>
<td>Fairness</td>
<td><a href="https://www-ai.cs.tu-dortmund.de/LEHRE/SEMINARE/WS2122/TrustworthyAIMachineLearning/munn2020a.pdf">Angry by design: toxic communication and technical architectures</a></td>
</tr>
</tbody></table>
-->
 </div> <!-- id="wrapper" -->
 </div> <!-- id="inhalt" -->