2016-03-27

tiny-cnn

I know that Deep Learning is the future of anything related to intelligent robotics. Since 2012 it has been the most disruptive technology in the field, making several decades' worth of carefully hand-engineered and tweaked code bases obsolete literally overnight. And as a result, all the big players in IT, such as Google, Facebook and Nvidia, are pouring their biggest bucks into this retro area of research.



Retro? Yes, because research had been done in this field for decades before it was more or less forgotten. Why was it forgotten? After numerous stabs at getting a working implementation of the many advanced models based on biological principles like neurons and synapses, it was collectively deemed too resource-intensive for the computer hardware of the day.

But along with fancy virtual reality headgear, the "neural network artificial intelligence" of 1980s and 1990s sci-fi has now resurfaced, this time with actual real promise (for VR, see Oculus Rift).

How? Because of the unfathomable increase in computers' capacity to process, store and communicate data; combined with the unfathomable increase in the number of computers connected together in easily accessible clusters and farms such as AWS; combined with maybe the most important parameter of all: the unfathomable amount of readily tagged media available for training purposes (read: cat pictures on YouTube).




NOTE: These graphs really do not do my statement justice unless you grasp that the scale is exponential, and notice the part where it says "human brain" to the right.

Suddenly the dusty old models from the 1990s could be plugged into a new computer and give results literally overnight.

I am truly fascinated by this new-old technology that suddenly promises that our computers in the near future may understand our desires to an ever-growing degree. A technology that makes self-driving cars and intelligent speech-driven assistants not something we might get to see before we die, but something we can buy within a decade or even sooner. And who knows what kind of crazy tech we will depend on once this first generation of deep learning has reshaped our lives?

I am a true beginner in Machine Learning and Deep Learning, and I intend to use OctoMY™ as a vessel for learning about them. It is my ambition to make DL an integral part of OctoMY™ as soon as possible, putting it in the hands of the hobby robotics enthusiasts out there. Because, as fascinating as DL is, almost no-one outside the field understands what it is or the significance it carries.

But where would a beginner like myself start? It is a real jungle out there in terms of available software libraries/frameworks/toolboxes, all promising varying degrees of features, performance and integration.

So, in my quest to find the perfect deep learning library/framework/toolbox to use from OctoMY™ I found this useful presentation of tiny-cnn, and I decided to try it out.

According to the project page on GitHub, tiny-cnn is:

A header only, dependency-free deep learning framework in C++11
It also promises interoperability with models developed in Caffe. I will give it a shot and we will go from there. Stay tuned!
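
To get a feel for the API before committing, here is a minimal sketch based on my reading of the tiny-cnn README. The exact layer names, template parameters and constructor arguments are assumptions on my part and may well differ between versions:

    #include "tiny_cnn/tiny_cnn.h"

    using namespace tiny_cnn;
    using namespace tiny_cnn::activation;

    int main() {
        // Loss function (mean squared error) and optimizer (adagrad)
        // are chosen via template parameters.
        network<mse, adagrad> net;

        // A tiny LeNet-style stack: 32x32 input, 5x5 convolution with
        // 6 feature maps, 2x2 average pooling, then two dense layers.
        net << convolutional_layer<tan_h>(32, 32, 5, 1, 6)
            << average_pooling_layer<tan_h>(28, 28, 6, 2)
            << fully_connected_layer<tan_h>(14 * 14 * 6, 120)
            << fully_connected_layer<identity>(120, 10);

        // Run a (here: blank) 32x32 input through the untrained net.
        vec_t in(32 * 32, 0.0);
        vec_t out = net.predict(in);
        return out.empty() ? 1 : 0;
    }

If that compiles and runs as painlessly as the "header only, dependency-free" claim suggests, it will be a strong candidate for OctoMY™.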

2016-03-20

The blog formerly known as "Devol Robot Project" is dead. Long live "The OctoMY™ Blog"!

The title of this post really says it all. I have decided to change the direction of this blog. From this day forth it will no longer be the ramblings of a curious learner without direction. It will, au contraire, be the ramblings of a curious learner WITH direction!



All the old posts will remain, the design will remain, and for now the old URL will remain as well. In fact nothing will be changed but the title, tagline, logo and editorial direction that this blog takes.

The new direction will be something like 40% OctoMY™ news, 40% robotics tidbits and 20% actually useful content like tutorials and cheat sheets.

Qt5.6 is out now

OctoMY™ is in many ways a showcase of Qt. Not only because I happen to think that Qt is the best development platform ever, but because it also happens to be a perfect fit for the project.

For each feature I want to add, there seems to be a perfect Qt module ready to handle it. It is mind-blowing. Need to make a new parser with dynamic loading of shared objects that allows controlling Arduino boards over serial, cross-platform? Check. Need to gather real-time data from GPS, accelerometer, gyro, compass and temperature sensors on Android and stream it over UDP, together with video from 2 cameras attached via USB, to display in fancy 3D-accelerated views in a desktop application running on Ubuntu -or- in another Android application? Check. Compile all my icons in SVG format directly into the binary? Check. I could go on and on about this, but I won't.
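
To show how compact just the serial case is, here is a minimal sketch of talking to an Arduino-style board with QtSerialPort. The device node and the command string are assumptions for the sake of the example:

    #include <QCoreApplication>
    #include <QSerialPort>
    #include <QDebug>

    int main(int argc, char *argv[]) {
        QCoreApplication app(argc, argv);

        QSerialPort port;
        port.setPortName("/dev/ttyACM0");         // assumed device node
        port.setBaudRate(QSerialPort::Baud9600);

        if (!port.open(QIODevice::ReadWrite)) {
            qWarning() << "Could not open port:" << port.errorString();
            return 1;
        }

        // Send a command and read back whatever the board answers.
        port.write("#0P1500\r\n");                // hypothetical servo command
        port.waitForBytesWritten(1000);
        if (port.waitForReadyRead(1000)) {
            qDebug() << "Reply:" << port.readAll();
        }
        return 0;
    }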

Instead I want to celebrate the arrival of the latest version, Qt5.6, because it has (as always) an impressive list of cool new features and bug fixes that I have been eagerly waiting for:

  • Support for C++14
  • Updated Qt3D (I need a way to render the robot state, and the last version of Qt3D left a lot to be desired)
  • Updated Camera support (this was really buggy on my platforms)
It also promises a lot of bug fixes going forward, because it is a long-term support release.



2016-03-17

Water-cooled servo

I came across this really innovative solution to a common problem in robotics, namely the lack of power.

The hopelessly American solution, as employed by robotics specialist Boston Dynamics and seen in footage of their legendary BigDog robot among others, is simply to put a big, noisy petroleum engine into the robot, capable of driving the hydraulics system with the power needed to make it jump around like a gazelle on steroids.

Now another approach has surfaced that has more intelligence and lateral thinking to it. SCHAFT is a relatively small Japanese robotics company recently purchased by Google. They compete in the DARPA Robotics Challenge, and have won at least one competition.

According to them, the reason for their win is simply that they have managed to create "much stronger muscles" in their robots (a higher power-to-weight ratio in their servo motors). Their idea boils down to a brilliantly simple concept: pump more current into the motors than they are rated for, but keep them from burning up by applying enough cooling.

SCHAFT robot motor and controller

So how do they provide more power? They solved this simply by raising the voltage of the power source beyond what a normal motor would endure, and putting a capacitor bank between the power source (battery) and the motors, capable of delivering enough current even for the "spikes".

And how is cooling solved? They have constructed sealed motors that allow them to circulate non-conductive liquid coolant through the motors. The heated liquid passes through a passive heat exchanger that effectively dissipates the excess heat to the air around the robot.
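
The cooling really is the crux: resistive (copper) losses in the windings grow with the square of the current, so the extra torque comes at a steep thermal price. A back-of-the-envelope sketch with made-up numbers (not SCHAFT's actual figures):

    #include <cstdio>

    int main() {
        // Made-up example values, purely for illustration.
        const double R       = 0.5;   // winding resistance, ohms
        const double I_rated = 5.0;   // rated current, amps
        const double I_over  = 15.0;  // overdriven current, amps

        // Torque scales roughly linearly with current, but resistive
        // heating (P = I^2 * R) scales with its square.
        printf("Rated:      %.1f W of heat\n", I_rated * I_rated * R); // 12.5 W
        printf("Overdriven: %.1f W of heat\n", I_over * I_over * R);   // 112.5 W
        // 3x the current gives roughly 3x the torque, but 9x the heat
        // to remove. Hence the liquid cooling.
        return 0;
    }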

I will definitely look at ways of using this concept once I get the chance!




2016-03-08

Initial development of gait planning in OctoMY™

I started working on an initial MVP gait planning code for OctoMY™. I decided it did not have to be perfect, but it needed to exist :P

Initial OctoMY™ gait planning test code that shows a 2D representation of limbs (white lines) and their targets (blue circles).


There are a lot of excellent tutorials and working code out there, but I decided to roll my own for a few simple reasons (a minimal sketch of what that involves follows the list):


  • The solutions I found were tied up in existing robot projects and not "stand-alone".
  • I found the code to be cumbersome and unstructured, at least from the perspective of a beginner like me.
  • I want to learn, tweak and one day outperform the alternatives.
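
The core building block for a 2D limb like the ones in the screenshot is a two-link inverse-kinematics solve. Here is a minimal stand-alone sketch (not the actual OctoMY™ code) using the law-of-cosines solution:

    #include <cmath>
    #include <cstdio>

    // Given limb segment lengths l1, l2 and a target (x, y) relative to
    // the hip joint, compute the two joint angles in radians.
    // Returns false if the target is out of reach.
    bool solveTwoLinkIK(double l1, double l2, double x, double y,
                        double &hip, double &knee) {
        const double d2 = x * x + y * y;
        const double d  = std::sqrt(d2);
        if (d < 1e-9 || d > l1 + l2 || d < std::fabs(l1 - l2)) {
            return false; // target unreachable (or degenerate)
        }
        // Law of cosines gives the knee (elbow) angle...
        knee = std::acos((d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2));
        // ...and the hip angle is the direction to the target minus
        // the inner angle of the triangle formed by the two segments.
        hip = std::atan2(y, x)
            - std::acos((d2 + l1 * l1 - l2 * l2) / (2.0 * l1 * d));
        return true;
    }

    int main() {
        double hip = 0.0, knee = 0.0;
        if (solveTwoLinkIK(50.0, 50.0, 60.0, -40.0, hip, knee)) {
            printf("hip=%.2f rad, knee=%.2f rad\n", hip, knee);
        }
        return 0;
    }

A gait planner then boils down to moving each limb's target along a cyclic path (lift, swing, plant, drag) and feeding the resulting angles to the servos.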

2016-03-04

QR Code added to OctoMY™

I added a widget for rendering QR codes to OctoMY™.


I also incorporated zbar into the project, but implementing the camera-based QR scanner I want will have to wait due to nasty video/camera bugs in Qt5 (crossing fingers for Qt5.6)!

It will be used for multi-factor transfer of security-sensitive data such as key pairs.
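
For the curious: a QR-rendering widget essentially boils down to painting a matrix of black/white modules as filled squares. Here is a minimal sketch (not the actual OctoMY™ widget) that assumes some encoder library has already produced the module matrix:

    #include <QWidget>
    #include <QPainter>
    #include <QVector>

    // Paints a pre-computed QR module matrix. Producing the matrix
    // (via a QR encoder library) is outside the scope of this sketch.
    class QRWidget : public QWidget {
    public:
        explicit QRWidget(QWidget *parent = nullptr) : QWidget(parent) {}

        void setModules(const QVector<QVector<bool> > &m) {
            modules = m;
            update(); // schedule a repaint
        }

    protected:
        void paintEvent(QPaintEvent *) override {
            if (modules.isEmpty()) {
                return;
            }
            QPainter p(this);
            p.fillRect(rect(), Qt::white);
            const int n = modules.size();
            const qreal cell = qMin(width(), height()) / qreal(n);
            p.setPen(Qt::NoPen);
            p.setBrush(Qt::black);
            for (int y = 0; y < n; ++y) {
                for (int x = 0; x < modules[y].size(); ++x) {
                    if (modules[y][x]) {
                        p.drawRect(QRectF(x * cell, y * cell, cell, cell));
                    }
                }
            }
        }

    private:
        QVector<QVector<bool> > modules;
    };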


2016-03-02

OctoMY™ update

I have been busy recently with OctoMY™.

3 Friends - Agent + Hub + Remote. OctoMY™ tiers.


This project has turned out to be all-consuming of my limited spare time, for the following reasons:
  • It's winter here and that means pitch dark and sub-zero, so I tend to stay indoors rather than in the dark cold workshop.
  • I am long-term flat broke so I can't afford the constant stream of cool gadgets to keep me engaged.
  • The project really has taken a turn for the serious. It implements the architectural plans I have for the Devol robot project and more, so working on it doesn't "steal" time from this project. Au contraire.
  • Even for a project with the irrational level of ambition that OctoMY™ undoubtedly has, I am actually getting somewhere! Fueled by constant "success", the project is progressing with astonishing velocity. Let's just hope it lasts.
OK, so with that out of the way, what really is the status of OctoMY™ as of today?
  1. Main project structure is set up with separate apps for agent, hub and remote.
  2. Some basic profile work like colors, fonts etc. has been prepared, together with some basic logos and icons for the UI.
  3. Compiling to both desktop and Android targets is working flawlessly. It really was a breeze to get going with Android, in fact much easier than I had first anticipated. Way to go Qt5!
  4. Serial communication with servotor32 to control the servos of the Hexy robot is working flawlessly using a simple API.
  5. An initial version of network communication over UDP, using the novel QoS/on-demand API created specially for OctoMY™, is working as MVP+. This is in fact the code that started it all in the first place.
  6. Gathering and real-time transmission of telemetry/odometry data from available device sources, including gyro, accelerometer, compass, light sensor, temperature sensor, pressure sensor and GPS, is working.
  7. Incorporation of realistic aviation widgets working.
  8. Automatic platform ID generation and matching logo-based identicon generation working.
  9. Incorporation of basic tile based map view is working.
  10. Initial display of "pose" in 3D using OpenGL has started; however, this has been put on hold while waiting for the better Qt3D 2.0 support scheduled for inclusion in Qt5.6, due soon (which will basically make this part 12x simpler).
  11. Incorporation of GEAR dynamics/kinematics into the project build, although it is not being used for anything yet.
  12. Incorporation of flex/qlalr into the qmake build, for use when developing the "mandate description language" which is the core of the "top-level" layer in the OctoMY™ project. This took days of frustration and added 2 wrinkles to my forehead that were instantly revoked the moment I was rewarded with the knowledge that I am in fact the only entity in this universe capable of setting up flex+qlalr+qmake to make a successful build of a parser that takes input from a QString (a sketch of the general wiring follows this list).
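
For anyone attempting the same, the general mechanism is qmake's "extra compilers". A rough sketch of the flex half as I understand it (file names and flags are assumptions, and the qlalr side needs an analogous rule):

    # Run flex over every listed .l file and feed the generated
    # .cpp into the build as a regular source file.
    FLEXSOURCES = mandate.l

    flex.name = flex
    flex.input = FLEXSOURCES
    flex.output = ${QMAKE_FILE_BASE}.lexer.cpp
    flex.commands = flex -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_IN}
    flex.variable_out = SOURCES
    flex.CONFIG += target_predeps
    QMAKE_EXTRA_COMPILERS += flex
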
What remains before there is an MVP available for OctoMY™?
  1. Gait stuff (look at adapting something from phoenix code, or even better adapt GEAR to do it from scratch).
  2. An actual functioning paradigm of remote control (touch simulation of joystick to move forward and turn etc).
  3. MVP for the "mandate description" parser so that at least a basic "remote->hub->agent" configuration can be set up.

This sounds simple, but I know for a fact that it's when integrating the parts that the complexity goes up. Having this astounding list of features cooperate without triggering all kinds of bugs will be a challenge.