- mitigate small data sets: use auxiliary tasks, domain knowledge
- auxiliary tasks: transfer learning, multi-task learning, meta-learning
- transfer learning (domain adaptation): supervised vs. unsupervised
- 10:20 unsupervised transfer learning: no labeled data in target domain
- goal: source and target data map to a similar feature space after embedding
- MMD (maximum mean discrepancy): distance of distributions
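A minimal sketch of the MMD idea in numpy: the (biased) squared MMD with an RBF kernel compares the average within-sample kernel values to the average cross-sample values, so it is near zero when two samples come from the same distribution and grows when they differ. Kernel choice and the `gamma` bandwidth here are illustrative assumptions, not from the notes.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # pairwise squared Euclidean distances between rows of a and b
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # biased estimate of squared maximum mean discrepancy
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
print(same, shifted)  # the shifted pair yields a much larger MMD
```

In unsupervised domain adaptation, a term like `mmd2(embed(source), embed(target))` is added to the training loss to push the embedded source and target distributions together.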
- 14:11 multi-task learning
- hard-parameter sharing
- soft-parameter sharing
- layer routing
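Hard-parameter sharing, the first variant above, can be sketched as one shared trunk feeding separate task heads; all dimensions and the two example tasks are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical sizes: input, shared hidden layer, and two task outputs
d_in, d_hidden, d_task_a, d_task_b = 8, 16, 3, 1

# hard-parameter sharing: one shared trunk, a separate head per task
w_shared = rng.normal(size=(d_in, d_hidden))
w_head_a = rng.normal(size=(d_hidden, d_task_a))  # e.g. a 3-class head
w_head_b = rng.normal(size=(d_hidden, d_task_b))  # e.g. a regression head

def forward(x):
    h = np.tanh(x @ w_shared)          # representation shared by all tasks
    return h @ w_head_a, h @ w_head_b  # task-specific outputs

out_a, out_b = forward(rng.normal(size=(5, d_in)))
print(out_a.shape, out_b.shape)  # (5, 3) (5, 1)
```

Soft-parameter sharing would instead give each task its own trunk and penalize the distance between the trunks' weights, rather than tying them to a single `w_shared`.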
- 16:55 meta-learning