Deep Learning is a subclass of Machine Learning algorithms whose distinguishing feature is a higher level of complexity. Deep Learning therefore belongs to Machine Learning; the two are by no means opposing concepts. We use the term shallow learning for those machine learning techniques that are not deep.
Let’s start by placing them in our world:
What does this high-level complexity rest on?
In practice, Deep Learning means a neural network with multiple hidden layers. We explained the basics of neural nets in the post From the Neuron to the Net, where we already introduced deep learning as a special kind of super-net:
This increase in the number of layers and in the network’s complexity is what we call deep learning: something like a conventional network on steroids.
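To make this concrete, here is a minimal sketch of a shallow net next to a deep one (written with PyTorch purely for illustration; the layer sizes are arbitrary):

```python
import torch.nn as nn

# A "shallow" network: a single hidden layer between input and output.
shallow_net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # hidden layer
    nn.Linear(128, 10),               # output layer
)

# A "deep" network: the same idea, but with several hidden layers stacked.
deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
    nn.Linear(128, 64),  nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),                # output layer
)
```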
Why is this complexity an advantage?
Knowledge flows along an extensive sequence of layers and, as with humans, the information is learnt step by step. The first layers focus on learning simple, concrete concepts, while the deeper layers use what has already been learnt to soak in more abstract concepts. This process of constructing representations of the data is known as feature extraction.
Their complex architecture gives deep neural nets the ability to perform feature extraction automatically. In contrast, in conventional machine learning, or shallow learning, this task is carried out outside the algorithm: people (teams of data scientists, not machines) are in charge of analyzing the raw data and turning it into valuable features.
The fundamental advantage of Deep Learning is that these algorithms can be trained directly on raw, unstructured data, so they have access to all the information it contains. This powerful condition gives them the chance to learn richer, more useful representations.
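As a rough, hypothetical sketch of that difference (the feature function below is invented just for illustration): in shallow learning a person writes something like `handcrafted_features` and a classic classifier is trained on its output, whereas a deep net receives the raw pixels and learns its own internal features during training.

```python
import numpy as np

# Shallow learning: a human designs the features by hand.
def handcrafted_features(image):
    """Hypothetical hand-designed features for a grayscale image."""
    return np.array([
        image.mean(),                            # overall brightness
        image.std(),                             # contrast
        np.abs(np.diff(image, axis=0)).mean(),   # vertical edge strength
        np.abs(np.diff(image, axis=1)).mean(),   # horizontal edge strength
    ])
# A classic classifier (logistic regression, SVM, ...) is then trained
# on these few hand-picked numbers instead of on the image itself.

# Deep learning: the network receives the raw pixels directly, and its
# hidden layers build their own features (edges, shapes, ...) while training.
```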
Maybe now you are wondering…
How many layers does it take before we call it Deep Learning?
There is no universal definition of where shallow learning ends and deep learning begins. However, the most widely accepted convention is that multiple hidden layers mean Deep Learning. In other words, we speak of Deep Learning from at least 3 nonlinear transformations, i.e. >= 2 hidden layers + 1 output layer.
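Under that convention, the smallest architecture we would call "deep" looks something like this (again a PyTorch sketch with arbitrary sizes):

```python
import torch.nn as nn

# Two hidden layers + one output layer = three nonlinear transformations,
# the usual minimum for speaking of "deep" learning.
minimal_deep_net = nn.Sequential(
    nn.Linear(20, 16), nn.Tanh(),     # hidden layer 1
    nn.Linear(16, 8),  nn.Tanh(),     # hidden layer 2
    nn.Linear(8, 1),   nn.Sigmoid(),  # output layer
)
```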
Is there any Deep Learning apart from neural networks?
I couldn’t find a complete consensus about this either. Nevertheless, it seems that everything about Deep Learning is related, at least indirectly, to neural nets. So, I agree with those who affirm that without neural networks, deep learning would not exist.
When do we need Deep Learning?
The Universal Approximation Theorem (UAT) states that a single hidden layer, with a finite number of neurons, is enough to approximate any continuous function we might be looking for. This is an impressive statement for two reasons. On the one hand, the theorem proves the immense capacity of neural networks. But, on the other hand… Does it mean that we never need Deep Learning? No, breathe deeply, it doesn’t mean that…
The UAT doesn’t specify how many neurons that layer must contain. Although a single hidden layer may be enough to represent a specific function, it can be far more efficient to learn it with a network of multiple hidden layers. Furthermore, when training a net we are looking for the function that best generalizes the relationships in the data: even if a single-hidden-layer network is able to represent the function that best fits the training examples, that does not mean it will generalize better to data outside the training set.
This is very well explained in the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville:
In summary, a feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly. In many circumstances, using deeper models can reduce the number of units required to represent the desired function and can reduce the amount of generalization error.
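To get a feeling for the "infeasibly large" part, here is a rough sketch comparing parameter counts; the sizes are arbitrary and parameter count is only a crude proxy for capacity, so take it as an illustration, not a proof:

```python
import torch.nn as nn

def n_params(model):
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters())

# One very wide hidden layer, in the spirit of what the UAT allows...
wide_net = nn.Sequential(
    nn.Linear(100, 10_000), nn.ReLU(),
    nn.Linear(10_000, 1),
)

# ...versus several narrower hidden layers.
deep_narrow_net = nn.Sequential(
    nn.Linear(100, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

print(n_params(wide_net))         # 1,020,001 parameters
print(n_params(deep_narrow_net))  # 46,081 parameters
```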
What is the difference between Machine Learning and Artificial Intelligence?
You can find a detailed answer in this entry.
What is Machine Learning?
Well… Maybe in your case you are putting the cart before the horse… Try starting with this.
To sum up…
Deep Learning is a subclass of Machine Learning: basically, neural networks that use multiple hidden layers. Their complexity allows these algorithms to perform feature extraction on their own. Since they can deal with raw data, they have access to all the information it contains, and so they can potentially find better solutions.