This work proposes an unsupervised fusion framework based on deep convolutional transform learning. The great learning ability of convolutional filters for data analysis is well acknowledged. The success of convolutive features owes to the convolutional neural network (CNN). However, CNN cannot perform learning tasks in an unsupervised fashion. In a recent work, we showed that this shortcoming can be addressed by adopting a convolutional transform learning (CTL) approach, where convolutional filters are learnt in an unsupervised fashion. The present paper aims at (i) proposing a deep version of CTL, (ii) proposing an unsupervised fusion formulation that takes advantage of the proposed deep CTL representation, and (iii) developing a mathematically sound optimization strategy for performing the learning task. We apply the proposed technique, named DeConFuse, to the problem of stock forecasting and trading. A comparison with state-of-the-art methods (based on CNN and the long short-term memory network) shows the superiority of our method for reliable feature extraction.

In the last decade, the convolutional neural network (CNN) has enjoyed tremendous success in different types of data analysis. It was initially applied to images in computer vision tasks. The operations within the CNN were believed to mimic the human visual system. Although such a link between human vision and CNN may exist, it has been observed that deep CNNs are not exact models of human vision. For instance, biologists consider that the human visual system consists of about 6 layers, not the 20+ layers used in GoogLeNet. Neural network models have also been used for analyzing time series data. Until recently, long short-term memory (LSTM) networks were almost exclusively the neural network models used for time series analysis, as they were supposed to mimic memory and hence were deemed suitable for such tasks. However, LSTMs are not able to model very long sequences, and their training is hardware intensive. Owing to these shortcomings, LSTMs are being replaced by CNNs. The reason for the great results of CNN methods for time series analysis (and 1D data processing in general) is not well understood. One possibility may lie in the universal function approximation capacity of deep neural networks rather than in their biological semblance. The research in this area is primarily led by its success rather than by its understanding.

An important point to mention is that the performance of CNNs is largely driven by the availability of very large labeled datasets. Google's FaceNet and Facebook's DeepFace architectures are trained on 400 million facial images, a significant proportion of the world's population. These companies are easily equipped with gigantic labeled facial image datasets, as the images are "tagged" by their respective users. This probably explains their tremendous success in facial recognition tasks: in this problem, deep networks reach almost 100% accuracy, even surpassing human capabilities. However, when it comes to tasks that require expert labeling, such as facial recognition from sketches (requiring forensic expertise) or ischemic attack detection from EEG (requiring medical expertise), the accuracies become modest. Indeed, such tasks require expert labeling that is difficult to acquire, thus limiting the size of the available labeled datasets.

The same concern is shared by a number of machine learning researchers, including Hinton himself, who are wary of supervised learning. In an interview with Axios (see Footnote 1), Hinton mentioned his "deep suspicion" of backpropagation, the workhorse behind all supervised deep neural networks. He even added that "I don't think it's how the brain works" and "We clearly don't need all the labeled data." It seems that Hinton is hinting towards unsupervised learning frameworks. Unsupervised learning techniques do not require targets/labels to learn from data. This approach typically benefits from the fact that data is inherently very rich in its structure, unlike targets, which are sparse in nature. Thus, it does not take into account the task to be performed while learning about the data, avoiding the need for the human expertise that supervised learning requires. More on the topic of unsupervised versus supervised learning can be found in a blog post by DeepMind (see Footnote 2). In this work, we would like to keep the best of both worlds, i.e., the success of convolutive models from CNN and the promises of unsupervised learning formulations. With this goal in mind, we developed convolutional transform learning (CTL).
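As a rough illustration of how convolutional filters can be learnt without any labels, the sketch below minimizes a transform-learning-style objective: a data-fidelity term tying the filter responses to a feature code, a Frobenius/log-det regularization that discourages degenerate (rank-deficient) filters, and a sparsity penalty with non-negativity on the features. This is a minimal sketch under assumed settings — the toy data, filter count, kernel size, penalty weights, and optimizer are hypothetical choices for illustration, not the exact formulation or implementation from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy batch of 1D signals: 8 series of length 64 (hypothetical data).
X = torch.randn(8, 1, 64)

M, K = 4, 8                                      # number of filters, kernel size
T = torch.randn(M, 1, K, requires_grad=True)     # convolutional filters (learnt)
Z = torch.zeros(8, M, 64 - K + 1, requires_grad=True)  # feature codes (learnt)

opt = torch.optim.Adam([T, Z], lr=0.01)
mu, lam, beta = 0.01, 0.1, 0.001                 # illustrative penalty weights

for _ in range(200):
    opt.zero_grad()
    conv = F.conv1d(X, T)                        # filter responses T * x
    fit = 0.5 * (conv - Z).pow(2).sum()          # data fidelity: responses ~ codes
    Tm = T.view(M, K)
    # Frobenius + log-det term keeps the stacked filter matrix well conditioned
    reg = mu * Tm.pow(2).sum() - lam * torch.logdet(Tm @ Tm.t() + 1e-3 * torch.eye(M))
    loss = fit + reg + beta * Z.abs().sum()      # sparsity penalty on the codes
    loss.backward()
    opt.step()
    with torch.no_grad():
        Z.clamp_(min=0)                          # non-negativity on the features
```

No labels appear anywhere in the loop: the filters are shaped only by the structure of the data itself, which is the point of contrast with supervised CNN training. A deep variant would stack several such convolutional layers before the feature code.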