
2021-: Explaining and Improving Deep Networks for Vision and Language Tasks (Prasad Tadepalli)

Private · 11 members

About

Explanations for image classification with deep neural networks typically rest on different kinds of 'activation maps' that show which parts of the image are responsible for the classification. However, because the functions computed by neural networks are non-linear, such maps do not explain the behavior of the network on different sub-images of the image: different parts of the image contribute to the classification at varying levels depending on which other parts are visible. Structured Attention Graphs (SAGs) explicate this behavior of the network in the form of a graph over activation maps. Our extensive user studies reveal that SAGs, presented through an interactive user interface, give users significantly more insight in predicting the behavior of the network under unseen conditions. In the current proposal, we seek to extend this work in three directions: 1) build a fully interactive interface for SAGs in which users can query the network's behavior on any sub-image of the image, 2) develop and evaluate algorithms for generating complete SAGs for any desired confidence level, and 3) extend SAGs to work with different images of the same class and generate class-level explanations.
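The core idea above, a graph whose nodes are subsets of image patches that still classify confidently, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the patch names, the `toy_score` function standing in for querying the real network on a masked image, and the threshold value are all hypothetical, not the project's actual algorithm.

```python
from itertools import combinations

def sag_edges(patches, score, tau=0.9):
    """Build a SAG-like DAG over patch subsets (illustrative sketch).

    Nodes are subsets of patches whose (mock) classifier confidence
    on the corresponding masked image reaches the threshold `tau`;
    an edge links a subset to each child obtained by removing one
    patch while still meeting the threshold.
    """
    # Enumerate all patch subsets that reach the confidence threshold.
    nodes = [frozenset(c)
             for r in range(len(patches), 0, -1)
             for c in combinations(patches, r)
             if score(frozenset(c)) >= tau]
    # Connect each node to its confident one-patch-smaller children.
    edges = []
    for s in nodes:
        for p in s:
            child = s - {p}
            if child in nodes:
                edges.append((s, child))
    return nodes, edges

def toy_score(subset):
    # Hypothetical stand-in for the network's confidence when only
    # `subset` of the patches is visible; a real SAG would query the
    # trained classifier on the masked image instead.
    weights = {"beak": 0.5, "wing": 0.3, "tail": 0.2}
    return sum(weights[p] for p in subset)

nodes, edges = sag_edges(["beak", "wing", "tail"], toy_score, tau=0.8)
```

With this toy scoring, the full image and the {beak, wing} sub-image both exceed the threshold, so the graph shows that the "tail" patch can be removed without losing a confident classification, which is exactly the kind of sub-image behavior a single activation map cannot convey.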

Info

  • Private: Only approved members can view this group.

  • Visible: Shown to site visitors.

  • Created: June 15, 2021

  • Created by: PPI Center
