2016-03-27

tiny-cnn

I know that Deep Learning is the future of anything related to intelligent robotics. Since 2012 it has been the most disruptive technology in the field, making several decades' worth of carefully hand-engineered and tweaked code bases obsolete literally overnight. And as a result, all the big players in IT such as Google, Facebook, Nvidia etc. are pouring their biggest bucks into this retro area of research.



Retro? Yes, because research had been done in this field for decades before it was more or less forgotten. Why was it forgotten? After numerous stabs at getting working implementations of the many advanced models based on biological principles like neurons and synapses, it was collectively deemed too resource-intensive for the computer hardware of the time.

But along with fancy virtual reality headgear, the "neural network artificial intelligence" of 1980s and 1990s sci-fi has now resurfaced, this time with real promise (for VR, see Oculus Rift).

How? Well, because of the unfathomable increase in computers' capacity to process, store and communicate data, combined with the unfathomable increase in the number of computers connected together in easily accessible clusters and farms such as AWS, combined with maybe the most important parameter of all: the unfathomable amount of readily tagged media available for training purposes (read: cat pictures on YouTube).




NOTE: These graphs really do not do my statement justice unless you grasp that the scale is exponential, and notice the part where it says "human brain" to the right.

Suddenly the dusty old models from the 1990s could be plugged into a new computer and give results literally overnight.

I am truly fascinated by this new-old technology that suddenly promises that our computers in the near future may understand our desires to an ever-growing degree. A technology that makes self-driving cars and intelligent speech-driven assistants something we get to see not merely at some point before we die, but something we can buy within a decade or even sooner. And who knows what kind of crazy tech we will depend on once this first generation of deep learning has reshaped our lives?

I am a true beginner in Machine Learning and Deep Learning, and I intend to use OctoMY™ as a vessel for learning about it. It is my ambition to make DL an integral part of OctoMY™ as soon as possible, putting it in the hands of the hobby robotics enthusiasts out there. Because, as fascinating as DL is, almost no-one outside the field understands what it is or the significance it carries.

But where would a beginner like myself start? It is really a jungle out there in terms of available software libraries/frameworks/toolboxes that promise varying degrees of features, performance and integration.

So, in my quest to find the perfect deep learning library/framework/toolbox to use from OctoMY™ I found this useful presentation of tiny-cnn, and I decided to try it out.

According to the project page on GitHub, tiny-cnn is

A header only, dependency-free deep learning framework in C++11
It also promises integration with models trained in Caffe. I will give it a shot and we will go from there. Stay tuned!
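
To give a feel for what the library looks like, here is a rough sketch of a small LeNet-style MNIST network, adapted from the example in the tiny-cnn README as I remember it at the time of writing. The network<loss, optimizer> template, the operator<< layer chaining and the parse_mnist_* helpers come from that documentation, but the exact signatures may differ between versions, so treat this as illustrative rather than copy-paste ready.

```cpp
#include <fstream>
#include <vector>
#include "tiny_cnn/tiny_cnn.h"

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

int main() {
    // The network is parameterized by loss function and optimizer;
    // layers are chained together with operator<<.
    network<mse, adagrad> net;
    net << convolutional_layer<tan_h>(32, 32, 5, 1, 6)    // 32x32 input, 5x5 kernel, 1 -> 6 feature maps
        << average_pooling_layer<tan_h>(28, 28, 6, 2)     // 2x2 pooling on 28x28x6
        << fully_connected_layer<tan_h>(14 * 14 * 6, 120)
        << fully_connected_layer<identity>(120, 10);      // 10 output classes

    // Load MNIST-formatted training data (file names are placeholders);
    // the padding arguments grow the 28x28 digits to 32x32.
    std::vector<label_t> train_labels;
    std::vector<vec_t>   train_images;
    parse_mnist_labels("train-labels.idx1-ubyte", &train_labels);
    parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2);

    // Train with mini-batches of 30 for 10 epochs.
    net.train(train_images, train_labels, 30, 10);

    // Serialize the learned weights to disk.
    std::ofstream ofs("tiny-cnn-weights");
    ofs << net;
    return 0;
}
```

Being header-only, the whole thing should in principle drop into the OctoMY™ build simply by adding the headers to the include path, which is exactly the kind of low-friction dependency I am after.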
