Machine learning theory made easy.
So, you want to master machine learning. Even with experience in the field, you may still feel that something is missing: a look behind the curtain.
Have you ever found the learning curve so steep that it was difficult even to start? Or the theory so dry and seemingly irrelevant that you couldn't get beyond the basics?
If so, I am building something for you. I am working to create the best resource out there for studying the mathematics of machine learning.
Join the early access and be a part of the journey!
Math explained, as simple as possible.
Every concept is explained step by step, from elementary to advanced. No fancy tricks or mathematical magic. Intuition and motivation first, technical explanations second.
Open up the black boxes.
Machine learning is full of mysterious black boxes. Looking inside them allows you to be a master of your field and never be in the dark when things go wrong.
Be a part of the process.
This book is being written in public. With early access, you’ll get each chapter as I finish it, along with a personal hotline to me. Is something not clearly explained? Is a concept not motivated with applications? Let me know, and I’ll get right on it!
This is what is covered in detail
Structure of vector spaces: norms and inner products
Linear transformations and their matrices
Eigenvectors and eigenvalues
Solving systems of linear equations
Special matrices and their decomposition
Function limits and continuity
Minima, maxima, and the derivative
Basics of gradient descent
Partial derivatives and gradients
Minima and maxima in multiple dimensions
Gradient descent in its full form
Integration in multiple dimensions
The mathematical concept of probability
Distributions and densities
Information theory and entropy
Fundamentals of parameter estimation
Maximum likelihood estimation
The Bayesian viewpoint of statistics
Bias and variance
Measuring predictive performance of statistical models
The taxonomy of machine learning tasks
Linear and logistic regression
Fundamentals of clustering
Principal Component Analysis
Most common loss functions and what’s behind them
Regularization of machine learning models
t-distributed stochastic neighbor embedding (t-SNE)
Logistic regression, revisited
Loss functions, from a neural network perspective
Stochastic gradient descent
The Lookahead optimizer
The convolutional layer, in-depth
Dropout and BatchNorm
Fundamental tasks of computer vision
AlexNet and ResNet
Generative Adversarial Networks
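To give a taste of how the topics above connect math to code, here is a minimal, illustrative sketch (not taken from the book) of the basic gradient descent idea covered in the list: minimizing a simple one-dimensional function by repeatedly stepping against its gradient.

```python
# Illustrative sketch: plain gradient descent minimizing
# f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3).
# The function name and parameters are just for this example.

def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Step against the gradient `steps` times, starting from x0."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# converges toward x = 3, the minimum of (x - 3)^2
```

The book develops exactly why and when this simple update rule converges, from derivatives in one dimension up to stochastic gradient descent and modern optimizers.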
Want to find out more?
Listen to Practical AI’s interview with Tivadar about the book!
Help me write the book you want.
Get early access now!