Packt Publishing, 2017. — 450 p. — ISBN: 978-1-78829-575-8.
True PDF
Complex statistics in Machine Learning worry a lot of developers. Knowing statistics helps you build strong Machine Learning models that are optimized for a given problem statement. This book will teach you all it takes to perform the complex statistical computations required for Machine Learning. You will learn the statistics behind supervised learning, unsupervised learning, reinforcement learning, and more, and work through real-world examples that illustrate the statistical side of Machine Learning. You will also design programs for performing tasks such as model building, parameter fitting, regression, classification, density estimation, and more.
By the end of the book, you will have mastered the required statistics for Machine Learning and will be able to apply your new skills to any sort of industry problem.
Journey from Statistics to Machine Learning
Statistical terminology for model building and validation
Machine learning terminology for model building and validation
Machine learning model overview
Parallelism of Statistics and Machine Learning
Comparison between regression and machine learning models
Compensating factors in machine learning models
Machine learning models - ridge and lasso regression
Logistic Regression Versus Random Forest
Maximum likelihood estimation
Logistic regression – introduction and advantages
Random forest
Variable importance plot
Comparison of logistic regression with random forest
Tree-Based Machine Learning Models
Introducing decision tree classifiers
Comparison between logistic regression and decision trees
Comparison of error components across various styles of models
Remedial actions to push the model towards the ideal region
HR attrition data example
Decision tree classifier
Tuning class weights in decision tree classifier
Bagging classifier
Random forest classifier
Random forest classifier - grid search
AdaBoost classifier
Gradient boosting classifier
Comparison between AdaBoost and gradient boosting
Extreme gradient boosting - XGBoost classifier
Ensemble of ensembles - model stacking
Ensemble of ensembles with different types of classifiers
Ensemble of ensembles with bootstrap samples using a single type of classifier
K-Nearest Neighbors and Naive Bayes
K-nearest neighbors
KNN classifier with breast cancer Wisconsin data example
Tuning of k-value in KNN classifier
Naive Bayes
Probability fundamentals
Understanding Bayes' theorem with conditional probability
Naive Bayes classification
Laplace estimator
Naive Bayes SMS spam classification example
Support Vector Machines and Neural Networks
Support vector machines working principles
Kernel functions
SVM multilabel classifier with letter recognition data example
Artificial neural networks – ANN
Activation functions
Forward propagation and backpropagation
Optimization of neural networks
Dropout in neural networks
ANN classifier applied on handwritten digits using scikit-learn
Introduction to deep learning
Recommendation Engines
Content-based filtering
Collaborative filtering
Evaluation of recommendation engine model
Unsupervised Learning
K-means clustering
Principal component analysis – PCA
Singular value decomposition – SVD
Deep autoencoders
Model building technique using encoder-decoder architecture
Deep autoencoders applied on handwritten digits using Keras
Reinforcement Learning
Introduction to reinforcement learning
Comparing supervised, unsupervised, and reinforcement learning in detail
Characteristics of reinforcement learning
Reinforcement learning basics
Markov decision processes and Bellman equations
Dynamic programming
Grid world example using value and policy iteration algorithms with basic Python
Monte Carlo methods
Temporal difference learning
SARSA on-policy TD control
Q-learning - off-policy TD control
Cliff walking example of on-policy and off-policy of TD control
Applications of reinforcement learning with integration of machine learning and deep learning
Further reading