Machine Learning Crash Course - [Berkeley]
Daniel Geng and Shannon Shih created a 4-part (and ongoing) crash course on machine learning, available on Berkeley's ML blog. The first part was posted in November 2016 and the last one in July 2017, so it's quite recent. Here's what it includes:
- Part 1 - Introduction, Regression/Classification, Cost Functions, and Gradient Descent
- Part 2 - Perceptrons, Logistic Regression, and SVMs
- Part 3 - Neural Networks
- Part 4 - Bias and Variance
This is truly a crash course: the first part starts with introductory ML concepts, but it takes off quickly and gets you into the math of ML, which, in my opinion, is fundamental. It doesn't go deep into the programming side, because once you understand the concepts you can pick from a multitude of languages for the implementation.
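To give you a taste of the math from Part 1, here's a minimal sketch of gradient descent fitting a line by minimizing a mean-squared-error cost. This is my own toy Python example, not code from the course; the data, learning rate, and step count are arbitrary choices for illustration.

```python
import numpy as np

# Toy data: y is roughly 3*x + 2, plus a little noise (made-up numbers).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

# Model: y_hat = w*x + b; cost: mean squared error over the dataset.
w, b = 0.0, 0.0
learning_rate = 0.1  # assumed value; too large diverges, too small crawls

for step in range(500):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the MSE cost with respect to w and b.
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Step downhill, against the gradient.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # should end up near 3 and 2
```

The same few lines of logic port to basically any language, which is why the course can afford to stay focused on the concepts.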
The graphics are stellar and should help with the understanding. Plus, there are a few interactive graphics that you should really play with, as they deepen your grasp of the concepts; I'd especially recommend the ones in Part 3 on neural nets. Enjoy the learning!
To stay in touch with me, follow @cristi
Cristi Vlad, Self-Experimenter and Author
Funny thing: the perceptron project was originally expected to take no more than a few months to solve.
Humans tend to underestimate the complexity of certain tasks :)
Awesome! I've been learning about gradient descent and neural networks through the content put out by @jackeown.
This looks like a perfect course to help me understand his content and really get a grasp on it.
@cristi - thanks for pulling together another learning resource. Great to digest on my commutes. Definitely adding this set for the ride home tonight. Perused them and they seem like great introductions to the topics.
this is indeed a good read while commuting! :)
Excellent article, brother! Hey, could you help me out with a vote? I would thank you from the heart!
wow. really?
Love this post. Good info. Thanks!