In this video, we'll learn about K-fold cross-validation and how it can be used for selecting optimal tuning parameters, choosing between models, and selecting features. We'll compare cross-validation with the train/test split procedure, and we'll also discuss some variations of cross-validation that can result in more accurate estimates of model performance. This is the seventh video in the series "Introduction to machine learning with scikit-learn".

Read more about this video here:
http://blog.kaggle.com/2015/06/29/scikit-learn-video-7-optimizing-your-model-with-cross-validation/

The IPython notebook shown in the video is available on GitHub:
https://github.com/justmarkham/scikit-learn-videos

== RESOURCES ==

Documentation on cross-validation:
http://scikit-learn.org/stable/modules/cross_validation.html

Documentation on model evaluation:
http://scikit-learn.org/stable/modules/model_evaluation.html

GitHub issue on negative mean squared error:
https://github.com/scikit-learn/scikit-learn/issues/2439

An Introduction to Statistical Learning (book):
http://www-bcf.usc.edu/~gareth/ISL/

K-fold and leave-one-out cross-validation (video):
https://www.youtube.com/watch?v=nZAM5OXrktY

Cross-validation the right and wrong ways (video):
https://www.youtube.com/watch?v=S06JpVoNaA0

Accurately Measuring Model Prediction Error:
http://scott.fortmann-roe.com/docs/MeasuringError.html

An Introduction to Feature Selection:
http://machinelearningmastery.com/an-introduction-to-feature-selection/

Cross-Validation: The Right and Wrong Way (notebook):
http://nbviewer.ipython.org/github/cs109/content/blob/master/lec_10_cross_val.ipynb

Cross-validation pitfalls (paper):
http://www.jcheminf.com/content/pdf/1758-2946-6-10.pdf

== SUBSCRIBE! ==

https://www.youtube.com/user/dataschool?sub_confirmation=1

== LET'S CONNECT! ==

Blog: http://www.dataschool.io
Newsletter: http://www.dataschool.io/subscribe/
Twitter: https://twitter.com/justmarkham
GitHub: https://github.com/justmarkham
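
== CODE SKETCH ==

Below is a minimal, illustrative sketch of two ideas from the video: using cross-validation to tune a parameter and to compare models. It is not the video's exact code, and the dataset and parameter values are chosen only for illustration. It assumes scikit-learn 0.18 or later, where cross_val_score lives in sklearn.model_selection (the video itself imports it from the older sklearn.cross_validation module).

# Illustrative sketch (not the video's exact code); assumes scikit-learn >= 0.18,
# where cross_val_score lives in sklearn.model_selection rather than the older
# sklearn.cross_validation module used in the video.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Parameter tuning: estimate out-of-sample accuracy for candidate values of K
for k in [1, 5, 15, 25]:
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
    print('K=%d: mean cross-validated accuracy = %.3f' % (k, scores.mean()))

# Model comparison: run the same 10-fold procedure on a different model
logreg = LogisticRegression(max_iter=1000)
print('logreg: mean cross-validated accuracy = %.3f'
      % cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())

Because every observation is used for both training and testing across the 10 folds, the mean of the fold scores is a less variable estimate of out-of-sample performance than a single train/test split.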