Deep learning (deep neural network)

Deep learning is an aspect of artificial intelligence (AI) that is concerned with emulating the learning approach that human beings use to gain certain types of knowledge. At its simplest, deep learning can be thought of as a way to automate predictive analytics.

While traditional machine learning algorithms are linear, deep learning algorithms are stacked in a hierarchy of increasing complexity and abstraction. To understand deep learning, imagine a toddler whose first word is dog. The toddler learns what a dog is (and is not) by pointing to objects and saying the word dog. The parent says, “Yes, that is a dog,” or, “No, that is not a dog.” As the toddler continues to point to objects, he becomes more aware of the features that all dogs possess. What the toddler does, without knowing it, is clarify a complex abstraction (the concept of dog) by building a hierarchy in which each level of abstraction is created with knowledge that was gained from the preceding layer of the hierarchy.

How deep learning works

Computer programs that use deep learning go through much the same process. Each algorithm in the hierarchy applies a nonlinear transformation on its input and uses what it learns to create a statistical model as output. Iterations continue until the output has reached an acceptable level of accuracy. The number of processing layers through which data must pass is what inspired the label deep.
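As a rough illustration of those stacked nonlinear transformations, here is a minimal sketch in Python using NumPy. The layer sizes, random weights and input are placeholder assumptions, not part of the original article; the point is only that each layer applies a nonlinear transformation to the output of the layer before it.

```python
# A minimal sketch of "stacked nonlinear transformations": each layer
# multiplies by weights, applies a nonlinearity, and feeds its output
# to the next layer. Sizes and data are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three layers of increasing abstraction: raw input -> hidden -> hidden -> score
layer_sizes = [(64, 32), (32, 16), (16, 1)]
weights = [rng.normal(scale=0.1, size=shape) for shape in layer_sizes]

x = rng.normal(size=64)          # stand-in for raw input features (e.g., pixels)
for w in weights[:-1]:
    x = relu(x @ w)              # nonlinear transformation at each hidden layer
score = x @ weights[-1]          # final layer produces the model's output
print(score)
```

In a real system, training would adjust those weights iteratively until the output reaches an acceptable level of accuracy.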

In traditional machine learning, the learning process is supervised, and the programmer has to be extremely specific when telling the computer what to look for when deciding whether an image contains a dog. This laborious process is called feature extraction, and the computer’s success rate depends entirely upon the programmer’s ability to accurately define a feature set for “dog.” The advantage of deep learning is that the program builds the feature set by itself, without supervision. Unsupervised learning is not only faster, but it is usually more accurate.
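The contrast between hand-crafted feature extraction and learned features can be sketched in a few lines of Python. The two hand-crafted “features” below (mean brightness and an edge measure) are purely hypothetical stand-ins, not a real feature set for “dog”; the random filter stands in for weights that training, rather than the programmer, would shape.

```python
# Hypothetical contrast: programmer-defined features vs. learned features.
import numpy as np

def hand_crafted_features(image: np.ndarray) -> np.ndarray:
    """Traditional ML: the programmer decides what to measure."""
    brightness = image.mean()
    vertical_edges = np.abs(np.diff(image, axis=1)).mean()
    return np.array([brightness, vertical_edges])

# Deep learning: a first-layer filter plays the role of a feature
# detector, and training (not the programmer) decides what it detects.
rng = np.random.default_rng(0)
learned_filter = rng.normal(size=(8, 8))   # initialized randomly, tuned by training

image = rng.random((8, 8))                 # placeholder 8x8 "image"
print(hand_crafted_features(image))        # programmer-defined feature vector
print((image * learned_filter).sum())      # one learned "feature" response
```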

Initially, the computer program might be provided with training data: a set of images for which a human has labeled each image “dog” or “not dog” with meta tags. The program uses the information it receives from the training data to create a feature set for dog and build a predictive model. In this case, the model the computer first creates might predict that anything in an image with four legs and a tail should be labeled “dog.” Of course, the program is not aware of the labels “four legs” or “tail”; it simply looks for patterns of pixels in the digital data. With each iteration, the predictive model becomes more complex and more accurate.
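A minimal sketch of that iterative loop, assuming a toy dataset of flattened images with placeholder “dog”/“not dog” labels (random here for illustration), might look like this in Python with NumPy. Each pass over the data adjusts the model’s parameters to reduce its error, which is the sense in which the predictive model improves with each iteration.

```python
# Toy training loop: labeled examples in, iteratively refined model out.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 256))            # 100 flattened 16x16 "images"
labels = rng.integers(0, 2, size=100)      # 1 = "dog", 0 = "not dog" (placeholder)

w = np.zeros(256)                          # the predictive model's parameters
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):                    # each iteration refines the model
    preds = sigmoid(images @ w)            # current guesses for each image
    grad = images.T @ (preds - labels) / len(labels)
    w -= lr * grad                         # adjust weights to reduce error

accuracy = ((sigmoid(images @ w) > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With real labeled images in place of the random placeholders, the same loop would gradually learn pixel patterns that separate “dog” from “not dog.”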

Because this process mimics a system of human neurons, deep learning is sometimes referred to as deep neural learning or deep neural networking. Unlike the toddler, who will take weeks or even months to understand the concept of “dog,” a computer program that uses deep learning algorithms can be shown a training set and, within a few minutes, sort through millions of images, accurately identifying which ones contain dogs.

To achieve an acceptable level of accuracy, deep learning programs require access to immense amounts of training data and processing power, neither of which was easily available to programmers until the era of big data and cloud computing. Because deep learning programs can build complex statistical models directly from their own iterative output, they are able to create accurate predictive models from large quantities of unlabeled, unstructured data. This is important as the internet of things (IoT) continues to become more pervasive, because most of the data humans and machines create is unstructured and unlabeled.

Use cases today for deep learning include all types of big data analytics applications, especially those focused on natural language processing (NLP), language translation, medical diagnosis, stock market trading signals, network security and image identification.

Using neural networks

An advanced type of machine learning algorithm, known as a neural network, underpins most deep learning models. Neural networks come in several different forms, including recurrent neural networks, convolutional neural networks, artificial neural networks and feedforward neural networks, and each has benefits for specific use cases. However, they all function in somewhat similar ways: data is fed in, and the model figures out for itself whether it has made the right interpretation or decision about a given data element.
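As a minimal sketch, assuming PyTorch, two of the network families named above can be defined in a few lines; both map an input to a single “dog”/“not dog” score, but the convolutional version keeps the image’s spatial structure. Layer sizes here are arbitrary choices for illustration.

```python
# Two network families: plain feedforward vs. convolutional.
import torch
import torch.nn as nn

feedforward = nn.Sequential(          # treats the image as a flat list of numbers
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

convolutional = nn.Sequential(        # filters slide over the image's 2D structure
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)

x_flat = torch.randn(4, 784)          # batch of 4 flattened 28x28 images
x_img = torch.randn(4, 1, 28, 28)     # same images with spatial structure kept
print(feedforward(x_flat).shape)      # torch.Size([4, 1])
print(convolutional(x_img).shape)     # torch.Size([4, 1])
```

The feedforward network discards the pixel layout entirely, while the convolutional network’s filters exploit it, which is one reason convolutional networks dominate image-related use cases.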

Neural networks involve a trial-and-error process, so they need massive amounts of data to train on. It’s no coincidence that neural networks became popular only after most enterprises embraced big data analytics and accumulated large stores of data. Because the model’s first few iterations involve somewhat educated guesses about the contents of an image or parts of speech, the data used during the training stage must be labeled so the model can see whether its guess was accurate. This means that, though many enterprises that use big data have large amounts of data, unstructured data is less helpful. Unstructured data can be analyzed by a deep learning model once it has been trained and reaches an acceptable level of accuracy, but deep learning models can’t train on unstructured data.

Examples of deep learning applications

Because deep learning models process information in ways similar to the human brain, they can be applied to many tasks people do. Deep learning is currently used in most common image recognition tools, natural language processing and speech recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language translation services.

Limitations of deep learning

The biggest limitation of deep learning models is that they learn through observation. This means they only know what was in the data they trained on. If a user has only a small amount of data, or the data comes from one specific source that is not necessarily representative of the broader functional area, the models will not learn in a way that is generalizable.
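To see what a failure to generalize looks like in practice, here is a minimal sketch in Python with NumPy, using placeholder data and a simple logistic-regression-style model (illustrative assumptions, not the article’s own example). The model trains on one slice of the data and is evaluated on a held-out slice it has never seen; a large gap between the two accuracies signals memorization rather than generalizable learning.

```python
# Checking generalization: train on one slice, evaluate on a held-out slice.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 64))                 # placeholder feature vectors
y = rng.integers(0, 2, size=200)          # placeholder labels

X_train, X_test = X[:150], X[150:]        # train on one slice of the data...
y_train, y_test = y[:150], y[150:]        # ...evaluate on data it never saw

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(64)
for _ in range(200):
    preds = sigmoid(X_train @ w)
    w -= 0.1 * X_train.T @ (preds - y_train) / len(y_train)

train_acc = ((sigmoid(X_train @ w) > 0.5) == y_train).mean()
test_acc = ((sigmoid(X_test @ w) > 0.5) == y_test).mean()
print(f"train: {train_acc:.2f}  held-out: {test_acc:.2f}")  # expect a gap
```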


Source: searchenterpriseai.techtarget.com
