Machine learning focuses on applications that learn from experience and improve their decision-making or predictive accuracy over time.
The investor Marc Andreessen famously portrayed a future where smartphones, the world wide web, and cloud computation would be commonplace.
“Software is eating the world”– Marc Andreessen, 2011
We now live in that software-eaten world. However, technology continues to develop at an astounding speed, and nipping at software’s heels is Artificial Intelligence.
“Software ate the world, now AI is eating software” – Terry Singh, 2019
This brings us to one of the most popular fields of AI: machine learning. Last week we covered the goals of AI. Let’s return to those ideas to shape our understanding of machine learning.
How does machine learning fit into the picture?
The term machine learning was coined by Arthur Samuel, who stated:
“Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.”
Later, Tom Mitchell offered a more formal definition: a computer program learns from experience E with respect to some task T and performance measure P if its performance at T, as measured by P, improves with experience E.
There are two important points to emphasize from these definitions:
Learning a behavior
There’s a distinction between AI that thinks and AI that acts. One goal of AI is a system that thinks like a human (or more rationally than a human); another is a system that behaves like a human (or more rationally than a human). Machine learning focuses on learning a behavior (or task): a computer learns from many iterations of a base action.
Not explicit programming
Consider a calculator. To give us the various functions like plus, minus, multiply, and divide, the device has been programmed to perform these calculations. It can take an input and produce an output at inhuman speed, but the calculator cannot do more than what it has been programmed to do. Machine learning is different because it learns from data (i.e. from experience) and infers behavior. Data can include all sorts of things – numbers, text, photos, video, clicks, etc.
The process of machine learning is thus: collect data, train the model with the data, and test the model. If the model works well, we can keep using it and improving the model using future data. If it doesn’t work, we choose a new model and begin again.
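The collect → train → test loop can be sketched in a few lines of Python. This is a minimal illustration, not a real pipeline: the data follows a made-up linear rule (y = 3x + 2), and the “model” is just a line fit by least squares. The point is that the rule is inferred from the data rather than programmed in, unlike the calculator.

```python
# A minimal sketch of the machine learning loop: collect data, train a
# model, test it on unseen data. The rule y = 3x + 2 is hypothetical.

def train(data):
    """Fit a line y = w*x + b by ordinary least squares."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
    b = mean_y - w * mean_x
    return w, b

# 1. Collect data (synthetic points following the hidden rule y = 3x + 2).
train_data = [(x, 3 * x + 2) for x in range(10)]
test_data = [(20, 62), (25, 77)]

# 2. Train the model with the data.
w, b = train(train_data)

# 3. Test the model on data it has never seen.
max_error = max(abs((w * x + b) - y) for x, y in test_data)
print(round(w), round(b), max_error < 1e-6)
```

If the test error were large, we would go back, pick a different model, and begin again, exactly as the process above describes.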
Another understanding is that machine learning is a combination of computer science and statistics. Michael I. Jordan (no, not the basketball legend), a leading figure in the field of machine learning, commented on Reddit:
“When Leo Breiman developed random forests, was he being a statistician or a machine learner? When my colleagues and I developed latent Dirichlet allocation, were we being statisticians or machine learners? Are the SVM and boosting machine learning while logistic regression is statistics, even though they're solving essentially the same optimization problems up to slightly different shapes in a loss function? Why does anyone think that these are meaningful distinctions?”
For Jordan, statistics and machine learning solve problems in a similar way. Differentiating between the fields is only a name game and doesn’t contribute anything actually meaningful.
Another important idea Jordan touches upon is that in engineering, theory and practice are often blurred. Machine learning takes core concepts and transforms them into engineering systems. Jordan claims that separating theory and practice doesn’t make sense since the two are so closely intertwined. Now that we have a better understanding of machine learning, let’s turn to deep learning.
What about deep learning?
Deep learning is a class of machine learning algorithms that use multiple layers to extract higher-level features from the data. Basically, deep learning is machine learning but...
“on steroids: it uses a technique that gives machines the enhanced ability to find—and amplify—even the smallest patterns.”
Let’s consider an image of a cat. One layer might identify the edges of the cat’s body. Other, higher layers might identify ears, whiskers, or eyes. Deep learning is a powerful tool because its techniques can recognize even the most minute patterns.
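The idea of stacked layers can be sketched in plain Python. This is only an illustration with made-up weights: real networks learn their weights from data, and each layer here just transforms its input into a new set of features, the way early layers might respond to edges and later layers to ears or whiskers.

```python
# A toy two-layer "network" forward pass. All weights are arbitrary,
# fixed numbers chosen for illustration; a trained network would have
# learned them from data.

def relu(v):
    """A common activation: keep positive values, zero out the rest."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i inputs[i]*weights[i][j] + biases[j]."""
    return [
        sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
        for j in range(len(biases))
    ]

x = [0.5, -1.0, 2.0]                                         # raw "pixel" values
h = relu(dense(x, [[1, 0], [0, 1], [1, 1]], [0.0, 0.0]))     # low-level features (e.g. edges)
y = relu(dense(h, [[0.5, -0.5], [0.25, 1.0]], [0.1, 0.0]))   # higher-level features (e.g. ears)
print(y)
```

Stacking more layers of this kind is what makes the network “deep” and lets it build progressively higher-level features from the raw input.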
In 2012, Andrew Ng did exactly that: he taught a neural network to recognize a cat using data from 10 million YouTube videos 😱. Deep learning enabled engineers to find practical applications for machine learning and push the boundaries of the field even further!
The second part of this article will focus on how problem solving works in machine learning. Next week we will discuss the categories of solutions in ML and how to identify a solution through your data. In the meantime, read up more on preparing your data for machine learning.
Looking to understand if a machine learning solution is right for you?