(2025, D. T. McGuines, Ph.D.)

The current version is 2025.WS.

This document contains the material of the course Machine Learning and Data Science 2, taught at MCI in the Mechatronik Design Innovation B.Sc. degree programme. It is part of the module MECH-B-5-MLDS-MLDS2-ILV.

All code in this document is written in Python v3.13.7, or in SageMath where explicitly stated.

This document was compiled with LuaTeX v1.22.0, and all editing was done in GNU Emacs v30.1 using the AUCTeX and org-mode packages.

This document is based on the following books and resources, listed in no particular order:

Neural Networks: Methodology and Applications by Gérard Dreyfus, Springer
Python for Data Analysis: Data Wrangling with pandas, NumPy, and IPython by Wes McKinney, O'Reilly
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron, O'Reilly
TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning by B. Ramsundar and R. B. Zadeh, O'Reilly
AI and Machine Learning for Coders by L. Moroney, O'Reilly
Neural Networks and Deep Learning by C. C. Aggarwal, Springer
Python Machine Learning by S. Raschka et al., Packt
Machine Learning with Python Cookbook by C. Albon, O'Reilly
CS229 Lecture Notes by A. Ng et al.
Lecture Notes on Machine Learning by A. Migel et al.


This document is not intended for publication and has been designed solely for educational purposes.

The current maintainer of this work, who is also the primary lecturer, is D. T. McGuines, Ph.D. (dtm@mci4me.at).

1  Support Vector Machines
1.1  Introduction
1.2  Linear Support Vector Machine Classification
1.3  Nonlinear Support Vector Machine Classification
1.4  Regression
2  Decision Trees
2.1  Introduction
2.2  Training and Visualising Decision Trees
2.3  Making Predictions
2.4  The CART Training Algorithm
2.5  Regression
2.6  Sensitivity to Axis Orientation
3  Ensemble Learning and Random Forests
3.1  Introduction
3.2  Bagging and Pasting
3.3  Random Forests
3.4  Boosting
3.5  Bagging v. Boosting
3.6  Stacking
4  Dimensionality Reduction
4.1  Introduction
4.2  Main Approaches to Dimensionality Reduction
4.3  Principal Component Analysis (PCA)
4.4  Random Projection
4.5  Locally Linear Embedding
5  Unsupervised Learning
5.1  Introduction
5.2  Clustering Algorithms
5.3  Gaussian Mixtures
6  Introduction to Artificial Neural Networks
6.1  Introduction
6.2  From Biology to Silicon: Artificial Neurons
6.3  Implementing Multi-layer Perceptrons (MLPs) with Keras
7  Computer Vision using Convolutional Neural Networks
7.1  Introduction
7.2  Visual Cortex Architecture
7.3  Convolutional Layers
7.4  Pooling Layer
7.5  Implementing Pooling Layers with Keras
7.6  CNN Architectures
7.7  Implementing a ResNet-34 CNN using Keras
7.8  Using Pre-Trained Models from Keras
7.9  Pre-Trained Models for Transfer Learning
List of Acronyms