CIKM 2017 Workshop on Interpretable Data Mining (IDM) – Bridging the Gap between Shallow and Deep Models

November 2017 - Pan Pacific Singapore

Abstract

Intelligent systems built upon complex machine learning and data mining models (e.g., deep neural networks) have shown superior performance on various real-world applications. However, their effectiveness is limited by the difficulty of interpreting their prediction mechanisms, i.e., how the results are obtained. In contrast, the results of many simple or shallow models, such as rule-based or tree-based methods, are explainable but not sufficiently accurate. Model interpretability enables systems to be clearly understood, properly trusted, effectively managed, and widely adopted by end users. Interpretations are necessary in applications such as medical diagnosis, fraud detection, and object recognition, where valid reasons are significantly helpful, if not required, before actions are taken based on predictions. This workshop focuses on interpreting the prediction mechanisms or results of complex computational models for data mining by taking advantage of simple models that are easier to understand. We aim to exchange ideas on recent approaches to the challenges of model interpretability, identify emerging fields of application for such techniques, and provide opportunities for relevant interdisciplinary research and projects.

Topics

Topic areas for the workshop include (but are not limited to) the following:

  • Interpretable machine learning

  • Interpretable deep learning

  • Information fusion and knowledge transfer

  • Anomaly detection with interpretability

  • Healthcare analytics

  • Social computing

  • Computer vision

  • Human-centric computing

  • Visual analytics

  • Human-computer interaction in data mining

  • Interactive modeling between humans and intelligent systems