2020-2021: Visual Analytics for Scalable AI Debugging (Minsuk Kahng)

Private · 6 members

About

Even state-of-the-art AI models fail to make correct predictions for many data instances, which is a serious issue, especially in safety-critical applications (e.g., medical diagnosis, self-driving cars). How can we help people effectively analyze when an AI model fails and infer why it fails, so that they gain actionable insights for improving model accuracy? We propose a new human-in-the-loop approach to this challenging problem: a novel visual analytics tool for debugging AI models, with a focus on the large datasets used to train them. We will build on our experience developing visual analytics systems for industry-scale deep learning models (e.g., ActiVis, deployed at Facebook). Specifically, the proposed tool will visually summarize large numbers of failed cases using scalable data visualization techniques and suggest potential reasons for these mispredictions rooted in common dataset-related issues such as insufficient data, distribution shift, or labeling errors. With this information, users can take concrete actions to improve model accuracy (e.g., adding more outdoor images to the training set if such images are underrepresented), which reduces debugging time and fosters trust in AI.
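To illustrate the kind of dataset-level failure summary described above, here is a minimal sketch (not the actual tool) that groups misclassified instances by a metadata attribute and computes per-slice error rates; slices with high error rates and few examples can hint at insufficient training data for that slice. The `scene` attribute and the records are hypothetical.

```python
from collections import Counter

def summarize_failures(records):
    """Summarize failures per metadata slice.

    records: iterable of (attribute_value, is_correct) pairs.
    Returns {attribute_value: (num_failed, num_total, error_rate)}.
    """
    failed, total = Counter(), Counter()
    for attr, correct in records:
        total[attr] += 1
        if not correct:
            failed[attr] += 1
    # High error rate on a small slice may indicate insufficient
    # data for that slice; a uniformly high rate may point to
    # distribution shift or labeling errors instead.
    return {a: (failed[a], total[a], failed[a] / total[a]) for a in total}

# Hypothetical image classifications tagged with a scene attribute
records = [("indoor", True), ("indoor", True), ("indoor", False),
           ("outdoor", False), ("outdoor", False), ("outdoor", True)]
summary = summarize_failures(records)
```

A real system would visualize such summaries at scale rather than print them, but the underlying grouping logic is the same: a disproportionate error rate on the "outdoor" slice would suggest adding more outdoor images to the training set.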

Info

  • Private

    Only approved members can view this group.

  • Visible

    Shown to site visitors.

  • Created

    June 15, 2021

  • Created by

    PPI Center
