Skoltech is an international, research-focused graduate university founded by a group of world-renowned scientists in 2011. Skoltech's curriculum focuses on technology and innovation, offering Master's programs in 11 technological disciplines. Students receive rigorous theoretical and practical training, design their own research projects, participate in internships, and gain entrepreneurial skills, all in English. The faculty comprises active researchers with international credentials and achievements.

Researchers from Skoltech have participated in NIPS, the largest global forum on cognitive research

Scientists from the Bayesian Methods Research Group, led by Prof. Dmitry Vetrov, have presented a paper describing a novel approach to the efficient use of neural networks at the prestigious NIPS conference in Montreal, Canada. Their method, called tensorization of neural networks, reduces the memory requirements of neural networks without degrading their quality.

A few years ago, mankind entered the Big Data era. The volume of data is growing rapidly in many areas, and the amount of data available for analysis has begun to outpace the growth of the digital technologies used to analyze it. Standard analytical tools are of little use on huge datasets because they do not scale well, while analyzing a relatively small, arbitrarily chosen subsample of a huge dataset loses vital information. Deep neural networks can use all of the available information and can be trained with scalable stochastic optimization methods. They currently demonstrate state-of-the-art performance in many domains of large-scale machine learning, such as computer vision, speech recognition, and text processing. The problem is that they have substantial hardware requirements (e.g., memory), which limits how far the number of neurons in a neural network can be increased.

This challenge can be addressed with tensors from multilinear algebra. A tensor is an organized multidimensional array of numerical values; the number of elements in a tensor grows exponentially with the number of its dimensions. At the same time, modern mathematical techniques can re-express a tensor in a much more compact format, often with almost no loss of information. Tensorizing a neural network can be compared to file archiving. Dmitry Vetrov's team found a way to reduce the memory needed to store a fully-connected layer of a neural network (in modern neural networks, such layers can take over 99% of the total memory consumed by the network) by a factor of up to 700,000. Remarkably, this technology allows neural networks not only to be stored in this compressed format, but also to be trained in it.
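As a rough illustration of where the savings come from, the sketch below compares the parameter count of a dense fully-connected layer with that of the same layer stored in Tensor Train (TT) format, the compressed representation used in the paper. The function names, mode sizes, and TT-ranks here are illustrative assumptions for a small example, not the paper's exact settings.

```python
import math

def full_params(in_modes, out_modes):
    """Parameters of a dense weight matrix of shape prod(in_modes) x prod(out_modes)."""
    return math.prod(in_modes) * math.prod(out_modes)

def tt_params(in_modes, out_modes, ranks):
    """Parameters of the TT cores: core k has shape ranks[k] x (m_k * n_k) x ranks[k+1]."""
    return sum(ranks[k] * m * n * ranks[k + 1]
               for k, (m, n) in enumerate(zip(in_modes, out_modes)))

# A 1024 x 1024 layer, with each dimension factored into 5 modes of size 4,
# and all internal TT-ranks set to 4 (the boundary ranks are always 1).
in_modes = out_modes = [4, 4, 4, 4, 4]
ranks = [1, 4, 4, 4, 4, 1]

dense = full_params(in_modes, out_modes)    # 1024 * 1024 = 1,048,576 parameters
tt = tt_params(in_modes, out_modes, ranks)  # only 896 parameters
print(dense, tt, round(dense / tt))         # compression factor of roughly 1170x
```

The parameter count of the TT representation grows only linearly in the number of modes (for fixed ranks), while the dense matrix grows exponentially, which is why larger layers compress even more dramatically.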

The method described in the paper opens up major opportunities for applying neural networks, for example, on mobile devices. The new technology makes it possible to store a neural network on a mobile phone without sending a signal to a server and back, as is done now. This allows speech recognition and image analysis problems to be solved directly on mobile devices, which is important when the mobile connection is unstable or slow. The technique has another use as well.

“Currently, there is a revolution connected with the widespread introduction of deep neural networks, whose abilities in a number of cognitive tasks (e.g., image understanding) already exceed the abilities of human intellect. Experiments show that the deeper and wider a neural network is, the better it performs. Unfortunately, there are technological limitations on the depth and width of neural networks. Tensorization algorithms help remove the restrictions on the width of a network, which will allow training neural networks with billions and trillions of neurons, rather than hundreds of thousands, as is currently the case,” said Dmitry Vetrov.

Link: http://arxiv.org/abs/1509.06569

* NIPS is the largest global forum on cognitive research, artificial intelligence, and machine learning, rated A* by the international CORE ranking. CORE divides well-known international scientific conferences into four categories: A* (top), A (excellent), B (good), and C (satisfactory). NIPS has been held since 1987, and papers submitted to the conference pass a rigorous competitive selection.

* The Skolkovo Institute of Science and Technology (Skoltech) is a private graduate research university in Skolkovo, Russia, a suburb of Moscow. Established in 2011 in collaboration with MIT, Skoltech educates global leaders in innovation, advances scientific knowledge, and fosters new technologies to address critical issues facing Russia and the world. Applying international research and educational models, the university integrates the best Russian scientific traditions with twenty-first century entrepreneurship and innovation.


Contact information:
Skoltech Communications
+7 (495) 280 14 81
