DaML Seminar

In this seminar, we learn together about trending data mining and machine learning (DaML) techniques. The fall 2019 seminar is scheduled monthly on Fridays.


9/13 Learning with Small Data

Date: 9/13 Friday 2:30pm-5:00pm

Location: W219 Westgate Building (location changed from E205 Westgate Building)

Abstract: Though we are in the era of big data, we frequently face real-world problems with only small amounts of (labeled) data. Can we still make machines learn from small data? In this seminar, we will cover state-of-the-art machine learning techniques for handling the small-data issue.

Outline:

    1. Data: augmentation
      1. Augmentation by using labeled data (presented by Guanjie Zheng)

        1. Hand-crafted rule-based augmentation (application-specific techniques)
        2. Feature space augmentation

          1. SMOTE (Chawla et al, 2002) 
          2. Autoencoder (Bengio et al., 2007, Vincent et al., 2010)
        3. Adversarial models
          1. Adversarial training (Goodfellow et al., 2014)
          2. Generative adversarial networks (GAN) 
            1. GAN for image (Goodfellow et al., 2014)
            2. GAN for NLP: seqGAN (Yu et al., 2016)
            3. GAN for RL: Generative adversarial imitation learning (GAIL) (Ho et al., 2016)
      2. Augmentation by using unlabeled data (presented by Hua Wei)

        1. Semi-supervised learning

          1. Co-training (Blum and Mitchell, 1998)
          2. Graph-based methods (Zhu et al., 2003)
          3. Self-training
            1. kNN propagation
            2. Pseudo-labeling (Lee, 2013)
          4. Entropy minimization
          5. Consistency regularization (Laine and Aila, 2016; Tarvainen and Valpola, 2017)
          6. MixMatch: combination of self-training, entropy minimization, and consistency regularization (Berthelot et al., 2019)
        2. Active learning

    2. Model: knowledge transfer

      1. Transfer knowledge from the model learned from similar datasets (presented by Huaxiu Yao)

        1. Transfer learning

          1. Fine tuning (target domain with labeled data)
          2. Unsupervised transfer learning (target domain with no labeled data)
            1. Discrepancy-based method: Loss function considering the discrepancy between source and target (Long et al., 2016)
            2. Adversarial method (Tzeng et al., 2017)
        2. Multi-task Learning

        3. Meta-learning

          1. Gradient-based: MAML (Finn et al., 2017)
          2. Non-parametric: Prototypical Networks (Snell et al., 2017)
          3. Task heterogeneity
      2. Transfer knowledge from domain experts (presented by Porter Jenkins)

        1. Enriching representations using knowledge graphs
          1. ConceptNet (Speer et al., 2017)
          2. Healthcare (Ma et al. 2018)
        2. Regularizing the loss function

          1. Adding prior in Bayesian model 
          2. Adding prior in discriminative model (Ma et al. 2018)
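
As a concrete illustration of the feature-space augmentation item above, SMOTE (Chawla et al, 2002) can be sketched in a few lines: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority-class neighbors. The sketch below is a minimal NumPy version for illustration only; the function name and signature are ours, not from any reference implementation.

```python
import numpy as np

def smote(X_minority, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between each
    sample and one of its k nearest minority neighbors (Chawla et al., 2002)."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    # pairwise distances within the minority class; exclude self-matches
    d = np.linalg.norm(X_minority[:, None, :] - X_minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # indices of the k nearest minority neighbors of each minority sample
    nn = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                      # pick a minority sample
        j = nn[i, rng.integers(min(k, n - 1))]   # pick one of its neighbors
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)
```

Because each synthetic point lies on a segment between two real minority samples, the augmented data stays inside the minority class's convex hull, which is what makes SMOTE safer than naive noise injection.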

Slides: https://docs.google.com/presentation/d/1NtFw5YE2WK9xdaEytTyuy1WO3nhvR6O5wohvZVIZT18/edit?usp=sharing

Speaker:

Audience: This seminar is open to the public; feel free to forward this information to interested people. The room has limited seating (54). The audience should be familiar with basic machine learning and data mining techniques, since this seminar targets advanced techniques.

We had a full room at the September seminar!


10/11 Interpretable Machine Learning

Date: 10/11 Friday 2:30pm-5:00pm

Location: E205 Westgate Building

Abstract: Machine learning models have shown success in terms of prediction and classification accuracy, but in order to use such models to make policies, it is important to interpret them first. In this seminar, we will cover traditional and trending techniques for interpretable ML models.

Outline:

  • Intrinsically Interpretable Models
    • Target: Model (Presented by Chacha Chen)
      • Linear regression 
      • Logistic regression
      • Decision tree
      • Equation-based model [Schmidt, M., et al, Science 2009]
      • Other traditional approaches
    • Target: Sample (Presented by Fenglong Ma)
      • KNN (Instance-based)
      • Attention-based models [Bahdanau et al, ICLR 2015; Xu et al, ICML 2015; Ma et al, KDD 2017; Vaswani et al, NeurIPS 2017]
  • Post Hoc Interpretable Models
    • Target: Model (Presented by Fenglong Ma)
      • Permutation feature importance [Altmann et al, Bioinformatics 2010]
      • Representative instance generation [Nguyen et al, NeurIPS 2016]
    • Target: Sample (Presented by Wenbo Guo)
      • Model-specific explanation (treat the networks as white boxes)
        • Perturbation-based important feature identification [Fong et al, ICCV 2017;  Dabkowski et al, NeurIPS 2017]
        • Gradient-based saliency maps [Sundararajan et al, ICML 2018; Zhang et al, USENIX Security 2020]
      • Model-agnostic explanation  (treat the networks as black boxes)
        • Auxiliary model-based explanation [Ribeiro et al, KDD 2016; Guo et al, NeurIPS 2018]
        • Instance-based explanation (Presented by Xinyang) [Koh and Liang, ICML 2017; Yeh et al, NeurIPS 2018]
  • Evaluation (Presented by Xinyang Zhang)
    • Qualitative evaluation [Murdoch et al, ICLR 2018; Simonyan et al, ICLR Workshop 2014]
    • Quantitative evaluation [Dabkowski and Gal, NeurIPS 2017; Guo et al, CCS 2018; Yeh et al, NeurIPS 2019]
  • Open Questions 
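
To make the post hoc, model-agnostic part of the outline concrete, permutation feature importance [Altmann et al, Bioinformatics 2010] is easy to sketch: shuffle one feature column at a time and record how much the model's score drops. The sketch below is a minimal NumPy illustration under our own assumed interface (the model is any callable `X -> predictions`, and the function name and parameters are ours, not from a specific library).

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, rng=None):
    """Post hoc, model-agnostic importance: permute one feature at a time
    and measure the average drop in the model's score."""
    rng = np.random.default_rng(rng)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

Because it only needs predictions, this treats the model strictly as a black box, which is exactly the model-agnostic setting contrasted with white-box saliency methods above.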

Slides: https://docs.google.com/presentation/d/1KhAYjxee_bup281ro3hCmR2mKUtT5zdbiGOniSstIEQ/edit?usp=sharing

Speaker:

Audience: This seminar is open to the public; feel free to forward this information to interested people. The room has limited seating (54). The audience should be familiar with basic machine learning and data mining techniques, since this seminar targets advanced techniques.

Photos on 10/11  (credit: Jordan Ford, Penn State)



11/8 Towards Robust Machine Learning Models

Date: 11/8 Friday 2:30pm-5:00pm

Location: E205 Westgate Building

Outline: TBD

Speaker:


12/6 TBD

Date: 12/6 Friday 2:30pm-5:00pm

Location: E205 Westgate Building

Outline: TBD

Speaker: TBD


Interested in contributing or collaborating? Send an email to Prof. Jessie Li (jessieli@psu.edu).