2016-01-07

OctoMY Update: Some sensors working in remote app on Android device

New in this commit:


  • Made it compile and run on Android
  • Added some sensors to the remote application (see the QtSensors sketch after this list)
  • Many small fixes all over
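
As a taste of the sensor work, here is a minimal sketch of how reading the accelerometer could look with Qt's QtSensors module. It only shows the QtSensors API; how the remote application actually wires the readings in is not shown here.

// Minimal QtSensors example: stream accelerometer readings.
// Assumes QT += sensors in the .pro file; this is illustrative,
// not the actual OctoMY remote code.
#include <QCoreApplication>
#include <QAccelerometer>
#include <QAccelerometerReading>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QAccelerometer sensor;
    QObject::connect(&sensor, &QAccelerometer::readingChanged, [&sensor]() {
        QAccelerometerReading *r = sensor.reading();
        qDebug() << "accel x:" << r->x() << "y:" << r->y() << "z:" << r->z();
    });
    sensor.start(); // begins delivering readingChanged() signals

    return app.exec();
}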


2016-01-04

OctoMY™ on GitHub

I decided to wrap my current code into a project and put it on GitHub. I also made a project page on Google Sites and bought a domain name for it. The logo I quickly sketched out in Inkscape looks like this:


Logo for eight-legged madness!
The logo in SVG format can be downloaded here

Currently I have only posted my work-in-progress code, which compiles without errors on Ubuntu but does not really do anything useful yet. I will keep this updated as I progress in making the code more useful!





2016-01-01

Overall Architecture Revision 2016

It's a new year (2016) and time to update the old power & electronics schematic as well as the old software stack schematic with all the new ideas that have accumulated since then.

In summary, here are the changes:


  • Software
    • The Roboard is out of the equation in favor of an Android device as the "strategic controller", connected via a USB OTG cable to a general-purpose servo controller such as an Arduino as the "real-time controller". This has several benefits: Android is an awesome platform that packs more relevant features per penny than any other, and there is a plethora of new devices to choose from in all sizes and shapes to accommodate any changes in the project's requirements (it also plays along nicely with Moore's law when it comes to performance). The same can be said of Arduino.
    • All tiers above the embedded level are now developed in C++ using my new universal platform of choice for anything, Qt5. Since Qt5 is now easy to deploy to Android, it was an easy choice. For the embedded side I am planning to select a usable subset of C++ on AVR that gives me at least some possibility of code sharing (a sketch of the kind of shared header I have in mind follows after this list). I am also looking at adopting Google Protobuf.
    • All the hand-crafted vision code, such as clsurf, will be replaced with deep learning algorithms. This is a bold move, but I really think the time is better spent learning this new paradigm than carefully hand-crafting and hand-optimizing a bunch of low-level feature detection routines. So far the Caffe framework seems like the most promising starting point for this endeavor.
    • There is more focus on reliable real-time communication with a central command, and remote control takes center stage. The reason is twofold. In the short term, having a remote control available will speed up development by providing a tool to efficiently debug the robot. In the longer term, many of the algorithms proposed for the robot require hardware that is simply not feasible in embedded form factors. For clsurf, a decent mobile graphics accelerator would probably cut it, but with the transition to deep-learning-based algorithms across the board I will need to rely on offloading them to cheap rented cloud resources and/or stationary clusters during development. Eventually, in a few years' time, Moore's law will catch up with the project and a final platform for embedding everything into the robot will be available.
    • Some of the dots have received concrete suggestions attached to them:
      • Logic Inference: Open Mind Common Sense (Cyc was deemed too proprietary)
      • Event Inventory: RocksDB (see the sketch after this list)
      • Comms: A new home-brew protocol on top of UDP, as none of the alternatives seemed appropriate (the Qt plumbing I expect to build it on is sketched after this list).
      • Text-to-speech: espeak, possibly extended with MBROLA voices, for now (see the sketch after this list).
      • Video Out: Qt5 to the rescue; lean hard on all the awesome features it already provides.
      • Pose manager / Gait calculator (IK+): I will probably end up writing my own using GEAR, based on the ideas expressed in the famous Phoenix hexapod robot project's code (a bare-bones IK example follows after this list).
      • SLAM: Try to use Deep Learning.
      • Object detection: Try to use Deep Learning.
      • Audio input: Many ears.
  • Hardware
    • The robot will mimic a specific spider, at least in spirit, aesthetics and anatomic proportions. Perhaps most important is that the robot will have 8 legs instead of 6.
    • The starter motor and the generator are now the same physical device, thanks to some new and clever thinking.
    • I am not so sure anymore about the choices of actuators and structural build materials. I have pondered several alternatives, but none strike me as an obvious choice any longer. The main problem is that I don't have hands-on experience with any of them and feel the need to play with them more before making a decision. Another limiting factor is that my CNC machine is still not operable. Thus, the focus is on getting working prototypes of the software and select sub-systems like the power train up and running, and hopefully that will give me some insight into what my definite requirements are.
    • I might switch the battery to more modern lithium-ion for better energy density. I am not sure whether I want to tackle battery management in my own circuit or just use some off-the-shelf product that lets the rest of the circuitry pretend it is an ordinary lead-acid battery.
    • Adding a super-capacitor bank as a cushion for the power train has crossed my mind.
    • For cameras, the new thinking removes some problems. Now the more the merrier, and quality is not as important, so just buying Logitech cameras by the bucket will do. I will first get some software working with a single camera before shelling out, though.
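
To make the C++ code sharing mentioned under "Software" a bit more concrete, here is a sketch of the kind of restricted-subset header I have in mind: plain fixed-size structs, no heap, no exceptions, no STL, so the same file should compile with both avr-gcc and the host compiler. The names and field layout are purely illustrative, not actual OctoMY code.

// shared/pose_message.hpp -- hypothetical header shared between the Qt5
// "strategic controller" and the AVR "real-time controller".
// Restricted C++ subset: fixed-size POD types, no heap, no exceptions, no STL.
#ifndef OCTOMY_SHARED_POSE_MESSAGE_HPP
#define OCTOMY_SHARED_POSE_MESSAGE_HPP

#include <stdint.h>

namespace shared {

const uint8_t LEG_COUNT = 8;
const uint8_t JOINTS_PER_LEG = 3;

// One servo target in centidegrees (avoids floating point on the AVR).
struct JointTarget {
    int16_t centiDegrees;
};

// A full body pose, sent from the strategic to the real-time controller.
struct PoseMessage {
    uint32_t sequence;                               // monotonically increasing
    JointTarget joints[LEG_COUNT * JOINTS_PER_LEG];  // 24 joints for 8 legs
    uint16_t durationMs;                             // time to reach the pose
};

} // namespace shared

#endif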
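
For the event inventory, here is a minimal sketch of what using RocksDB could look like, assuming the stock RocksDB C++ API; the key scheme and payloads are made up for illustration.

// Sketch of an event inventory backed by RocksDB. Link with -lrocksdb.
#include <rocksdb/db.h>
#include <iostream>
#include <string>

int main()
{
    rocksdb::Options options;
    options.create_if_missing = true;

    rocksdb::DB *db = nullptr;
    rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/octomy_events", &db);
    if (!status.ok()) {
        std::cerr << "open failed: " << status.ToString() << std::endl;
        return 1;
    }

    // Store an event keyed by a timestamp-like prefix so range scans stay ordered.
    db->Put(rocksdb::WriteOptions(), "event:0000000001", "booted");

    std::string value;
    if (db->Get(rocksdb::ReadOptions(), "event:0000000001", &value).ok()) {
        std::cout << "event 1: " << value << std::endl;
    }

    delete db;
    return 0;
}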
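
For the comms protocol, nothing is designed yet; the sketch below only shows the Qt5 UDP plumbing I expect to build it on. The packet header (magic number plus sequence number) and the peer address are placeholders.

// Minimal QUdpSocket plumbing for the planned home-brew protocol.
#include <QCoreApplication>
#include <QUdpSocket>
#include <QDataStream>
#include <QDebug>

static const quint32 OCTOMY_MAGIC = 0x0C703717; // hypothetical magic number

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QUdpSocket socket;
    socket.bind(QHostAddress::AnyIPv4, 8123);

    QObject::connect(&socket, &QUdpSocket::readyRead, [&socket]() {
        while (socket.hasPendingDatagrams()) {
            QByteArray datagram(int(socket.pendingDatagramSize()), '\0');
            QHostAddress sender;
            quint16 senderPort = 0;
            socket.readDatagram(datagram.data(), datagram.size(), &sender, &senderPort);

            QDataStream in(datagram);
            quint32 magic = 0, sequence = 0;
            in >> magic >> sequence;
            if (magic == OCTOMY_MAGIC) {
                qDebug() << "packet" << sequence << "from" << sender.toString();
            }
        }
    });

    // Send one packet to a peer (address is a placeholder).
    QByteArray out;
    QDataStream stream(&out, QIODevice::WriteOnly);
    stream << OCTOMY_MAGIC << quint32(1);
    socket.writeDatagram(out, QHostAddress("192.168.1.10"), 8123);

    return app.exec();
}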
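
For text-to-speech, the simplest possible approach is to shell out to espeak. Here is a sketch that assumes espeak is installed and, optionally, the MBROLA en1 voice; it is not how OctoMY will necessarily integrate it.

// Sketch of text-to-speech by shelling out to espeak via QProcess.
#include <QCoreApplication>
#include <QProcess>
#include <QStringList>
#include <QString>

void say(const QString &text)
{
    QStringList arguments;
    arguments << "-v" << "mb-en1" << text; // fall back to e.g. "en" without MBROLA
    QProcess::startDetached("espeak", arguments);
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    say("OctoMY is alive");
    return 0;
}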
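
Finally, to illustrate the kind of math the pose manager / gait calculator will need, here is a bare-bones planar two-link IK solve using the law of cosines. This is not GEAR or Phoenix code, just the textbook starting point, and the link lengths are placeholders.

// Bare-bones planar 2-link inverse kinematics (law of cosines) for one leg.
#include <cmath>
#include <cstdio>

struct JointAngles {
    double hip;   // radians
    double knee;  // radians, interior angle between femur and tibia
    bool reachable;
};

// femur/tibia lengths are placeholders; target (x, y) is in the leg's plane.
JointAngles solveLegIK(double x, double y, double femur = 50.0, double tibia = 80.0)
{
    JointAngles result {0.0, 0.0, false};
    const double d2 = x * x + y * y;
    const double d = std::sqrt(d2);
    if (d < 1e-9 || d > femur + tibia || d < std::fabs(femur - tibia)) {
        return result; // target out of reach (or degenerate at the hip)
    }
    // Law of cosines gives the knee angle, then the hip is the direction to the
    // target minus the interior angle between the femur and the target line.
    const double cosKnee = (femur * femur + tibia * tibia - d2) / (2.0 * femur * tibia);
    result.knee = std::acos(cosKnee);
    const double cosInner = (femur * femur + d2 - tibia * tibia) / (2.0 * femur * d);
    result.hip = std::atan2(y, x) - std::acos(cosInner);
    result.reachable = true;
    return result;
}

int main()
{
    JointAngles a = solveLegIK(90.0, -40.0);
    std::printf("reachable=%d hip=%.3f knee=%.3f\n", a.reachable, a.hip, a.knee);
    return 0;
}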