<!DOCTYPE html>
<html class="no-js" lang="en" prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb#">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="utf-8">
<link rel="alternate" hreflang="x-default" href="https://ls9-www.cs.tu-dortmund.de/" />

<base href="http://ls9-www.cs.tu-dortmund.de/">
<title>Pro-Seminar - Interpretable Machine Learning</title>
<link rel="canonical" href="http://ls9-www.cs.tu-dortmund.de/">
<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css" integrity="sha384-WskhaSGFgHYWDcbwN70/dfYBj47jz9qbsMId/iRN3ewGhXQFZCSftd1LZCfmhktB" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="/static/css/style.css">
<!--<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>-->
<!--<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>-->
<!--<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>-->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-175065860-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());

gtag('config', 'UA-175065860-1');
</script>

</head>
<body>
<div id="top-menu">
<img src="/static/icons/menu.png" alt="menu button" height="40">

</div>
<div id="header-band">
<img src="/static/images/header_band.png" alt="background band">
</div>
<div id="page-wrapper" class="container-fluid">
<div id="top-link">
<a href="http://www.tu-dortmund.de/" target="_blank">TU Dortmund</a>
</div>
<div id="header-section">
<img src="/static/images/header_w.png" alt="Chair of Data Science and Data Engineering">
</div>

<div id="main-section">
<div id="content-section">

<h1>Proseminar Winter Semester 2021/2022</h1>
<h1>"Interpretable Machine Learning"</h1>

<br>

<h2>Content from the book "Interpretable Machine Learning" by Christoph Molnar</h2>
<p>
Machine learning has great potential for improving products, processes, and research. However, computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.
</p>
<p>
After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.
</p>
<p>
All interpretation methods are explained in-depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
</p>

<p>
<b>Literature</b>
</p>
<p>
This seminar relies on the book <b>Interpretable Machine Learning - A Guide for Making Black Box Models Explainable</b> by Christoph Molnar. The book is freely available at <a href="https://christophm.github.io/interpretable-ml-book/">https://christophm.github.io/interpretable-ml-book/</a>. Please note that the book was written by a single person and thus probably contains some errors, so finding and consulting alternative sources is all the more important in this seminar.
</p>

<br>

<h2>Enrollment Procedure</h2>
<p>
Students enrolled in this Pro-Seminar should send their favorite topics (possibly with priorities) to <a href="mailto:simon.kluettermann@cs.tu-dortmund.de?cc=chiara.balestra@cs.tu-dortmund.de">Simon Klüttermann</a> by <b>08.04.2022</b>. We will assign topics based on your choices by <b>15.04.2022</b>. If you are uncertain about your choice, we will meet shortly before the deadline (probably in the first week of April) to answer your questions.
</p>
<p>
After you are assigned a topic, you will also be assigned a supervisor from our group who can help you with any questions you might have. If you have general questions, you can always write to me (see Contacts below).
We will not offer a presentation course, so you must take the one provided by the faculty. Also, the seminar will be held in <b>English</b>.
</p>
<p>
We will distribute the presentations over one to three days in the <b>second half of July</b>. Each presentation should be between 25 and 30 minutes long. You will also have to hand in a written report on your topic before the deadline on <b>16.09.2022</b>. Finally, you will learn to give and respond to constructive criticism on your own and on the other presented research topics. To practice this, you will be assigned two other students' reports to comment on critically by the end of the semester. You need to participate in every part of this seminar to pass it.
</p>
<h3>Goals and Criteria for a Successful Seminar</h3>
<p>
In the Pro-Seminar, you will learn how to familiarize yourself with a topic, research the related literature, and answer questions about it. To this end, you need to read the subchapters assigned to you and use different sources to verify and extend the statements made there. Also, by listening to and engaging with the other presentations, you will gain a comprehensive understanding of interpretable machine learning methods.
</p>
<p>
We will use this <a href="https://moodle.tu-dortmund.de/user/index.php?id=32741">Moodle room</a> for further organisation.
</p>
<h3>Contacts</h3>
<p>
<a href="mailto:simon.kluettermann@cs.tu-dortmund.de?cc=chiara.balestra@cs.tu-dortmund.de">Simon Klüttermann</a>
</p>
</div>
</div>

<div id="footer-section">
<a href="https://www.tu-dortmund.de/impressum/" target="_blank">Impressum</a>
| © TU Dortmund 2020 |
<a href="https://www.tu-dortmund.de/datenschutz/" target="_blank">Data Protection</a>
</div>
</div>
</body>
</html>