Machine Learning Online Open Course, Session 3: Logistic Regression (.ppt)

Published 2017-03-18 (Guizhou)


5. Generative vs. Discriminative Model
Rachel Zhang (/abcjennifer)

A generative classifier models the class-conditional density p(x|y) and the class prior p(y), and classifies via Bayes' rule. A discriminative classifier models the posterior p(y|x) directly. Which one is more accurate?

Comparison criteria:
- Easy to fit?
- Can the classes be fit separately?
- Handles missing features easily? (Marlin 2008)
- Can handle unlabeled training data? (Lasserre 2006; Liang 2007)
- Can handle feature preprocessing?
- Provides calibrated probabilities?

Easy to fit? Naïve Bayes: model description and model fitting.

Dealing with missing data: the mechanism matters. Data may be MCAR (missing completely at random), MAR (missing at random), or otherwise NMAR (not missing at random). Missing data must be handled both at testing time and at training time.

References
Ng, A. Y. and Jordan, M. I. (2002). On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NIPS 14.
Nemirovski, A. and Yudin, D. (1978). On Cezari's convergence of the steepest descent method for approximating saddle points of convex-concave functions. Soviet Math. Dokl. 19.
Kushner, H. and Yin, G. (2003). Stochastic Approximation and Recursive Algorithms and Applications. Springer.
Bottou, L. (1998). Online algorithms and stochastic approximations. In D. Saad (Ed.), Online Learning and Neural Networks. Cambridge University Press.
Bach, F. and Moulines, E. (2011). Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS.
Bottou, L. (2007). Learning with large datasets. NIPS tutorial.
Duchi, J., Hazan, E., and Singer, Y. (2010). Adaptive subgradient methods for online learning and stochastic optimization. In Proc. COLT.
Sutskever, I. et al. (2013). On the importance of initialization
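The generative/discriminative contrast above can be sketched in code. Below is a minimal numpy illustration (our own, not from the slides; names such as fit_gaussian_nb and fit_logreg are made up for this example): a Gaussian Naïve Bayes classifier estimates p(y) and per-feature p(x_j|y) and applies Bayes' rule, while logistic regression fits p(y|x) directly by gradient ascent on the log-likelihood.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Generative: estimate class prior p(y) and per-feature Gaussian p(x_j|y)."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    return classes, priors, means, vars_

def predict_gaussian_nb(model, X):
    classes, priors, means, vars_ = model
    # log p(y=c) + sum_j log N(x_j | mu_cj, var_cj); pick the argmax class
    log_post = np.log(priors) - 0.5 * (
        np.log(2 * np.pi * vars_).sum(axis=1)
        + (((X[:, None, :] - means) ** 2) / vars_).sum(axis=2))
    return classes[np.argmax(log_post, axis=1)]

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Discriminative: fit p(y=1|x) = sigmoid(w.x + b) by gradient ascent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)       # gradient of the log-likelihood
    return w

def predict_logreg(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# Toy data: two Gaussian blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

acc_nb = np.mean(predict_gaussian_nb(fit_gaussian_nb(X, y), X) == y)
acc_lr = np.mean(predict_logreg(fit_logreg(X, y), X) == y)
```

Note how the generative model is fit class by class in closed form, while the discriminative model requires iterative optimization over all the data jointly, which mirrors the "easy to fit / fit classes separately" criteria on the slide.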

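One concrete advantage listed on the slide is that generative models handle missing features easily at testing time. A minimal sketch of why (our own illustration, not code from the slides): under Naïve Bayes the features are conditionally independent given the class, so integrating a missing feature out of p(x|y) simply drops its factor from the product. The fitted model values below are hypothetical.

```python
import numpy as np

def nb_log_posterior(priors, means, vars_, x):
    """Per-class log posterior (up to a constant), skipping NaN features."""
    obs = ~np.isnan(x)                          # mask of observed features
    ll = -0.5 * (np.log(2 * np.pi * vars_[:, obs])
                 + (x[obs] - means[:, obs]) ** 2 / vars_[:, obs]).sum(axis=1)
    return np.log(priors) + ll

# Hypothetical fitted Gaussian NB model: two classes, two features, unit variance.
priors = np.array([0.5, 0.5])
means = np.array([[-1.0, -1.0], [1.0, 1.0]])
vars_ = np.ones((2, 2))

# Second feature is missing at test time; classification still works
# using only the observed feature.
scores = nb_log_posterior(priors, means, vars_, np.array([2.0, np.nan]))
pred = int(np.argmax(scores))
```

A discriminative model has no such marginalization for free: p(y|x) is defined only for complete inputs, so missing features must first be imputed or the model retrained on the observed subset.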