
Understanding normalizing flows and how they can be used in generative modeling

Introduction

In this post, I will explain what normalizing flows are and how they can be used in variational inference and in designing generative models. The material in this article mostly comes from [Rezende and Mohamed, 2015], which I believe is the first paper to introduce the concept of flow-based models (the title of this article is almost identical to the paper’s title). Many other interesting papers have followed up on this one and used flow-based models to solve other tasks. …
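As a minimal preview of the core mechanism, the sketch below pushes samples from a simple base density through an invertible transform and tracks the resulting density with the change-of-variables formula; stacking several such maps is what gives a “flow.” This is an illustrative sketch under my own assumptions: the one-dimensional affine map and the names a and b are not from the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative one-dimensional affine flow z' = a*z + b (invertible for a != 0).
# Change of variables: log q(z') = log q0(z) - log|df/dz| = log q0(z) - log|a|
a, b = 2.0, 1.0

z = np.random.randn(5)                        # samples from the base density q0 = N(0, 1)
z_prime = a * z + b                           # push the samples through the flow
log_q = norm.logpdf(z) - np.log(np.abs(a))    # log-density of the transformed samples

# Sanity check against the known closed form: z' ~ N(b, a^2)
print(np.allclose(log_q, norm.logpdf(z_prime, loc=b, scale=np.abs(a))))  # True
```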


6. SHAP (SHapley Additive exPlanation) Values

A Unified Approach to Interpreting Model Predictions

explanation model: any interpretable approximation of the original model

Six current explanation methods (LIME, DeepLIFT, Layerwise relevance propagation, Classic Shapley value estimation, Shapley sampling values, and Quantitative input influence) all use the same additive explanation form:

g(z′) = ϕ₀ + Σᵢ ϕᵢ z′ᵢ,  i = 1, …, M

where the z′ᵢ are binary values, the ϕᵢ are real values, and M is the number of features.

attributing an effect ϕᵢ to each feature
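To make this additive form concrete, here is a minimal sketch; the ϕ values and the function name additive_explanation are hypothetical, chosen purely for illustration:

```python
import numpy as np

def additive_explanation(phi, z):
    """Evaluate g(z') = phi_0 + sum_i phi_i * z'_i for a binary mask z'."""
    return phi[0] + np.dot(phi[1:], z)

phi = np.array([0.5, 0.2, -0.1, 0.4])   # hypothetical base value phi_0 and effects phi_1..phi_3
z = np.array([1, 0, 1])                 # z'_i = 1 if feature i is present, 0 otherwise
print(additive_explanation(phi, z))     # 0.5 + 0.2 + 0.4 = 1.1
```

The methods listed above differ in how they estimate the ϕᵢ, not in this final additive form.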

7. Integrated Gradients

Proposed in the paper “Axiomatic Attribution for Deep Networks” (ICML 2017).

Numerically obtaining high-quality integrals adds computational overhead.

Takes an axiomatic approach and proposes two fundamental axioms, which are Sensitivity and…
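As an illustration of the idea, below is a minimal NumPy sketch of integrated gradients that approximates the path integral from a baseline to the input with a midpoint Riemann sum; the toy model F(x) = Σᵢ xᵢ² and every name in it are my own illustrative assumptions, not code from the paper:

```python
import numpy as np

# Toy differentiable model F(x) = sum(x_i^2); its gradient is 2*x.
def grad_F(x):
    return 2.0 * x

def integrated_gradients(x, baseline, grad_fn, steps=50):
    # Midpoint Riemann sum approximating the path integral of the gradient
    # along the straight line from the baseline to the input.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)   # per-feature attributions

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
print(integrated_gradients(x, baseline, grad_F))  # ≈ x**2 = [1, 4, 9]
```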


Welcome to the second article of the “A Review of Different Interpretation Methods in Deep Learning” series! As its name suggests, this series aims to introduce you to some of the most important interpretation (i.e., explanation) methods in deep learning. As a brief introduction, interpretation methods can help you understand how and why a deep neural network arrives at a particular prediction, and whether a model’s highly accurate predictions can be trusted.

Before proceeding any further, I highly recommend going through the first article of this series, which is available here, as some of…


In order to build trust in machine learning models and move toward integrating them into our everyday lives, we need to build “transparent” models that can explain why they predict what they predict. To this end, researchers have proposed many methods that help us gain insight into how these “black-box” models work. In this post, I will go through some of the most frequently used interpretation methods in deep learning, one of the most fruitful areas of research in machine learning. As plenty of interpretation methods exist, I will cover only those that are applied to image classification neural networks…

Mohammadreza Salehi

Lifelong learner
