Machine Learning itself is an umbrella term that covers a vast number of uses and applications, much like Computer Science. It is a topic that has gained a lot of interest over the past decade. The stigma is that it is often associated with page-long computations; thankfully, frameworks like scikit-learn have mostly eliminated that. However, as my friend Kabir likes to say: "I think it is just as important to know what exactly goes on under the hood."
Defined by Arthur Samuel in 1959, Machine Learning gives "computers the ability to learn without being explicitly programmed." In other words, the computer teaches itself. How, you may ask? Well, Machine Learning, a subset of AI, uses algorithms and models to analyze and learn from data. One such model is the Neural Network, a system that takes several inputs and produces a single output. Artificial Neural Networks attempt to mimic the fundamental network of the human brain: an interconnected web of neurons transmitting patterns of electrical signals.
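That "several inputs, one output" idea can be sketched as a single artificial neuron: a weighted sum of the inputs passed through an activation function. This is only an illustration; the weights and bias below are arbitrary example values, not learned ones.

```python
# A single artificial neuron: several inputs, one output.
# The weights and bias are arbitrary example values.

def step(x):
    """Step activation: fire (1) if the combined signal is strong enough."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(total)

output = neuron(inputs=[1, 0, 1], weights=[0.5, -0.6, 0.8], bias=-1.0)
print(output)  # 0.5 + 0.8 - 1.0 = 0.3 >= 0, so the neuron fires: 1
```

Real networks chain thousands of these neurons together in layers, but each one is doing exactly this: combine inputs, weigh them, and decide whether to fire.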
A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E.
―Tom Mitchell, Carnegie Mellon University
In the quote above, Machine Learning is described at the fundamental level. Say you have a task (T) such as predicting stock prices. In that case, the experience (E) is your data set, and the performance (P) is the accuracy of the algorithm's predictions, which should improve as the algorithm sees more data.
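To make T, E, and P concrete, here is a toy sketch (the labels and predictions are made up): the task is classification, the experience is a set of labeled examples, and the performance measure is the fraction of predictions the model gets right.

```python
# Toy illustration of Mitchell's definition (all values are invented):
# T = predict a label, E = the labeled examples, P = accuracy.

def accuracy(predictions, labels):
    """Performance measure P: fraction of predictions that match the labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

labels      = [1, 0, 1, 1, 0]
predictions = [1, 0, 0, 1, 0]  # the model got 4 of 5 right

print(accuracy(predictions, labels))  # 0.8
```

"Learning" simply means this number goes up as the model is exposed to more experience E.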
The thing is, machines aren't human. Sure, a computer can calculate the square root of 603,729, but can it discriminate between a fish and a bird? For these tasks, machine learning is essential.
Neural Networks, at their core, are adaptable algorithms, and that adaptability is what allows them to learn. Most of the time, this is done by adjusting the weights. If the network produces the desired output, the weights are left alone. However, if it produces an undesired, or "poor," output, the system adapts and alters the weights in order to improve future results.
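A classic instance of this adjust-the-weights idea is the perceptron learning rule: when the output is wrong, each weight is nudged in proportion to the error and its input; when the output is right, nothing changes. Here is a minimal sketch that learns the logical AND function (the learning rate and epoch count are arbitrary choices):

```python
# Perceptron learning rule: adjust the weights only when the output is wrong.
# Training data: the logical AND function, as (inputs, target) pairs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total >= 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)  # 0 when the output is already correct
        if error != 0:  # "poor" output: adapt the weights
            for i in range(len(weights)):
                weights[i] += learning_rate * error * x[i]
            bias += learning_rate * error

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

Modern networks use gradient descent and backpropagation instead of this simple rule, but the principle is the same: wrong answers drive weight updates.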
There are three types of learning: supervised, unsupervised, and reinforcement learning.
The output of a dataset (the label) depends on the inputs (the features) and the model being used. Informative features generally help, because they give the model more signal to work with, which leads to more accurate labels (though irrelevant features can just as easily hurt). Common examples of machine learning models include linear regression, decision trees, and neural networks.
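The features-and-labels vocabulary can be shown with a tiny classifier. This sketch uses a hand-rolled nearest-neighbour rule rather than a real library, and the fruit measurements are invented purely for illustration:

```python
# Features (inputs) -> label (output) with a 1-nearest-neighbour model.
# The dataset is invented: each example is ([weight_g, length_cm], label).
training = [
    ([150, 7.0], "apple"),
    ([170, 7.5], "apple"),
    ([120, 14.0], "banana"),
    ([130, 15.0], "banana"),
]

def predict(features):
    """Label a new example with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training, key=lambda example: distance(example[0], features))
    return nearest[1]

print(predict([160, 7.2]))   # apple
print(predict([125, 14.5]))  # banana
```

Here the two features (weight and length) carry real signal, so the model can separate the labels; a feature like "day of purchase" would add nothing.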
Now that we've got a brief overview, let's head over and build some projects!
Build a Feedforward Neural Network
Build a Stock Prediction Algorithm