2016-12-28

Wheeled rig update

So I put in some time on the wheeled rig:

OctoMY™ Wheeled rig with speakers, buzzer, warning light and electronics mounted.
Side closeup showing the battery mounting bracket and port holes where wires will be fed into the electronics box.
Front closeup showing the stereo speakers and alarm buzzer.


New features include:

  • Created and mounted two separate stainless steel speaker housings made from the caps of awesome waterwell™ bottles.
  • Mounted a piezo deterrent/attention grabbing buzzer.
  • Created a mounting bracket from stainless steel wire to hold the lead acid battery.
  • Found the perfectly sized and shaped weatherproof electrical box for mounting all the electronics for the robot.
  • Created a mounting plate for the electrical box from an old plastic plate I had lying around my shop.
  • Mounted an LED warning/attention-grabbing flashlight on top of the electrical box.
Still on the TODO list:
  • Mount all the electronics in a smart way inside the electronics box
  • Connect all the electronics together and test it
  • Protect the wires from wear/damage/water
  • Protect the whole rig from water
  • Paint job? Not sure about this

2016-12-26

Merry Christmas & Happy New Year 2017!

A little late, but better late than never! Here are the season's best wishes to you, your friends & family from the OctoMY™ Project. I never used to have a New Year's resolution, but this year is different.

My personal New Year's resolution for 2017 is to most definitely release the first binary version of OctoMY™, even if it is in a bare and simple form.

OctoMY™ seasonal greetings 2017

2016-12-20

OctoMY™ Official License set to LGPLv3 or commercial.

The official license of the OctoMY™ project is now inspired by the official Qt project license, namely a dual license that is either LGPLv3 or proprietary. Anyone can use OctoMY under the terms of the LGPLv3, while a proprietary license exists just in case we need it at some point.

NOTE: Some parts of the OctoMY source code will always be under open source licenses only.

You can see the full OctoMY license here.


So why was this model chosen? Two reasons. First of all, the LGPL license guarantees everyone involved full access to the source code forever. It puts the O in Open Source.

Secondly, the proprietary license prepares the project to accept contributions from sponsors that may have a hard time approaching a purely open source project. This is purely preemptive; we don't plan to accept such contributions at this time.


2016-12-18

Two new widgets

I have been making tremendous progress on OctoMY™ development lately. Despite being sick several times and super stressed out at work, I managed to put in some serious coding effort.

My latest innovations include two new widgets:

The first is the user interface for remote-controlling wheeled robots, where the first channel is throttle and the second is steering angle. It looks like this:

OctoMY™ Remote interface for RC Car type robot.
Relevant code is here.

It works as expected: the red dot follows your movements along the wheel to steer, while the yellow bar grows with the distance you move upward away from the wheel, indicating higher throttle. If you move below the wheel, the bar turns blue and you will start reversing (or braking, depending on your controller).
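
For the curious, the mapping boils down to a few lines of math. Here is a minimal C++ sketch of my own with hypothetical names; the actual widget code is in the repository linked above:

#include <algorithm>
#include <cmath>

// A minimal sketch of the control mapping, assuming coordinates relative
// to the wheel centre with y growing upward. Hypothetical names only;
// the real widget code lives in the repository linked above.
struct ControlState {
    float steering; // [-1, 1], full left to full right
    float throttle; // [-1, 1], negative means reversing/braking
};

ControlState mapPointer(float x, float y, float maxReach)
{
    const float pi = 3.14159265f;
    ControlState out;
    // Steering: the angle of the red dot around the wheel centre,
    // normalized so that straight up is 0.
    out.steering = std::clamp(std::atan2(x, y) / pi, -1.0f, 1.0f);
    // Throttle: vertical distance from the wheel centre; moving upward
    // grows the yellow (forward) bar, while moving below the wheel gives
    // a negative value, shown as the blue (reverse/brake) bar.
    out.throttle = std::clamp(y / maxReach, -1.0f, 1.0f);
    return out;
}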

The second interface is part of the agent configuration program and allows the user to set up the number of actuators and the correct mapping of named agent outputs to indexed controller input channels. It looks like this:

OctoMY™ Agent interface for mapping outputs to servos.
Relevant code is here.

To use it, simply select the number of actuators you need in the spin-box at the top. Proceed to give them their logical names using the rename button. Finally, map them to indexed servos by clicking the buttons to connect them in order. I was especially happy with the way my "virtual wires" turned out.
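
Under the hood, the mapping itself is little more than a table from output names to channel indices. A minimal sketch of the idea, illustrative only and assuming Qt types (the real implementation is in the repository linked above):

#include <QMap>
#include <QString>

// Illustrative only; not the actual OctoMY™ code. Each named agent
// output maps to an indexed controller input channel, which is all
// the "virtual wires" need to remember.
class ActuatorMapping {
    QMap<QString, int> mChannelForOutput;
public:
    void map(const QString &outputName, int channelIndex) {
        mChannelForOutput[outputName] = channelIndex;
    }
    // Returns -1 when the output has not been mapped yet.
    int channelFor(const QString &outputName) const {
        return mChannelForOutput.value(outputName, -1);
    }
};

// Example usage: map the steering servo of an RC car to channel 0.
//   ActuatorMapping mapping;
//   mapping.map("steering", 0);
//   int channel = mapping.channelFor("steering"); // 0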

2016-12-16

iMacwear W1 First impressions

Good news: I managed to build, deploy and run the OctoMY™ Agent without any changes to the source code! It crashed a few times, and I have a bad feeling that these crashes are due to running out of memory. I have not been able to debug that fully yet, but will continue down this trail soon. Still, it made it through deployment, and having the Agent's eyes looking back at me from the small screen is a delight!

OctoMY™ Agent running on iMacwear W1 Android Smart Watch


After my last post about the iMacwear W1 unboxing, I have now had some time to form a first impression.

At first I was not sure the device was actually running Android as it claimed, but I soon figured out that this is because it runs a custom version of Android called "FunOS". I have not managed to find any information about this OS online, but in practice it means that many stock Android applications have been replaced with less resource-intensive and more compact ones.

The main UI and navigation make sense once you get used to them, and getting used to them takes about 15 minutes. I find it lacking a bit in the aesthetics department; the icons are not well executed. But the UX is OK, and as we all know, form follows function!

I had some problems getting the device to register with Ubuntu. The device reports the USB vendor ID of HTC with a product ID of 2008 (0bb4:2008), so if you need a line for your Android udev rules file, it will look like this:

# iMacwear W1
SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", ATTR{idProduct}=="2008", MODE="0666", OWNER="<username>"
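
This typically goes in a file like /etc/udev/rules.d/51-android.rules (the exact file name is up to you), with <username> substituted for your own login. After adding it, reload the rules with sudo udevadm control --reload-rules and replug the device.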

I found that enabling developer options in the settings on the device and installing the mtpfs package helped a little, but it was still unstable.

sudo apt-get install mtpfs

In the end, just retrying a lot makes it work, and once it works it will keep working for some time. I also found that you have to keep the pogo connector completely still the whole time, because unseating it just a little will break the USB handshake and with it the connection/debug session/whatever.


iMacwear W1 Unboxing

I recently purchased the iMacwear W1 full Android smart watch for testing with the OctoMY™ software. I will put it up as a recommended device on the shop page as soon as I get the OctoMY™ agent to run without problems. But before that, let's do the unboxing!

Neat gift box with magnetic lock

Tidily arranged contents

Top compartment contains user manual

Watch is strapped to black velvet cushion


Polishing cloth under the watch

All the box content side by side

Accessories box open in the short end

Contains pogo USB data/charging cable in plastic bag

Pogo cable removed from bag

Closeup of pogo USB connector showing gold plated pogo pins and magnets

Back side of watch showing wrist band

Protective film for screen (I removed it once already)

Backside of watch

Closeup of backside showing pogo pad and sim slot.

pogo cable connected

After 3 hours of charging, showing main watch screen.

After removing protective film
 
Main menu

2016-10-19

4-Wheeled rig build

I currently have 3 "rigs" I want to integrate OctoMY™ with. The hexy hexapod robot from arcbotics has been featured a few times before on this blog, but the other two have yet to be exposed. This post is about a 4-wheeled rig based on an old Traxxas TMaxx nitro engine RC truck.

This is how the truck looked initially (with the cover off).

At this stage I have stripped everything except the steering servo (which will be used).

Just a random "shop" image. This room is now painted gray and tidied up to look like a TV studio.
This is how it looks now with 12V Lead acid battery, RC ECU and mobile holder.

Closeup of RC-ECU and separate power for it.

Underside shows the geared electric motor (notice it is now only 2WD).

Closeup of mobile holder in front.
Right now it works as an RC car, but I will soon integrate it with OctoMY™. The current blocker is that the final step in discovery needs to be implemented (handoff, storage and use of recent connection parameters from discovery through to pairing). This part in turn needs some more UI work to allow sensible multiplexing between devices in the remote.

2016-09-02

How to get going with Vulkan & NVIDIA on Ubuntu 14.04 and 16.04

First, I found that there really was no support for Vulkan in Ubuntu 14.04 LTS, at least not without a lot of custom configuration that I would rather avoid. So I decided to upgrade my system to Ubuntu 16.04.
Getting started with Vulkan in Ubuntu with nVidia.


Next I tried to find my way by following a lot of different tutorials and advice. Nothing seemed to work for me, and getting Vulkan on my laptop was looking impossible until I came across this simple advice, which I will shamelessly steal and extend with my own observations. So here are the steps:
  1. Open this page on your tablet/phone
  2. Hit <CTRL> + <ALT> + <F1> to go to a text-only terminal
  3. Log in using your normal username and password
  4. Get rid of the old NVIDIA drivers, libraries, configurations and other gunk (this step is optional, but if you tried some stuff before getting here it can be well worth starting fresh): # sudo apt-get remove --purge nvidia*
  5. Add the official PPA repository for Vulkan: # sudo apt-add-repository ppa:canonical-x/vulkan (then # sudo apt-get update)
  6. Install an NVIDIA driver that supports Vulkan (NOTE: version 367 was recommended at the time I wrote this): # sudo apt-get install nvidia-367
  7. Install the Vulkan libs: # sudo apt-get install libvulkan1 mesa-vulkan-dev mesa-vulkan-drivers
At this point you may reboot and try the vulkaninfo command from the terminal to see what Vulkan support is found on your system.

TIP: If you have a dual GPU setup, which is common nowadays, your secondary inferior (usually Intel) GPU might get in the way of your Vulkan setup, and this page shows you how to disable it.

Now it's time to look at and run some code. The official "wrangler lib" and SDK for Vulkan is called "LunarG", and it is not in the Ubuntu repos. It contains libraries, tools and example code for you to play with. You have to download it and unpack/build/install it the old-fashioned way:

  1. Download the SDK from here: https://lunarg.com/vulkan-sdk/
  2. Once downloaded, # chmod +x the file to be able to run it.
  3. Run the file, copy the extracted folder to some location and add the following path variables:
export LD_LIBRARY_PATH=$HOME/VulkanSDK/1.0.21.1/x86_64/lib
export VK_LAYER_PATH=$HOME/VulkanSDK/1.0.21.1/x86_64/etc/explicit_layer.d

You can put these in some script that is run on login or boot, like ~/.bash_profile.

NOTE: You may need to adjust the paths to match the location where you put the SDK. ~/.local/, /opt/ and /usr/local/ are good candidates.
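
Once the SDK is in place, you can also verify things from code. The following is a minimal C++ program of my own making (not part of the SDK; it assumes the Vulkan headers and loader are installed) that creates an instance and lists the GPUs Vulkan can see. Build it with something like g++ -std=c++11 vulkancheck.cpp -lvulkan -o vulkancheck:

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main()
{
    // Describe our application to the Vulkan loader.
    VkApplicationInfo app = {};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "vulkancheck";
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    // Creating the instance fails if no Vulkan-capable driver is found.
    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "Failed to create Vulkan instance\n");
        return 1;
    }

    // Enumerate and print the physical devices (GPUs) Vulkan can see.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());
    for (VkPhysicalDevice device : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(device, &props);
        std::printf("Found GPU: %s\n", props.deviceName);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}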

Vulkan Path Tracer

Vulkan, the new graphics API set to replace OpenGL and OpenCL in one fell swoop, holds great promise. But it seems that adoption of this new technology is slow. It has been 6 months already, and only a few tutorials, examples and projects have emerged that claim Vulkan support.

Since parts of what OctoMY™ will do require enormous amounts of compute power, I have decided to explore the possibility of using Vulkan in OctoMY™. I am thinking about porting the smallpt path tracer (similar to a raytracer but more realistic at the expense of requiring more compute time) project to Vulkan. I might also look at smallptGPU and smallptGPU2, which are ports of smallpt to OpenCL, for inspiration.

If I ever get there, the project will be called smallptVulkan.

2016-08-31

Per pixel labeling in cn24

I just found this project which shows great promise for use in OctoMY™.

cn24 is a portable and embeddable set of tools for building, training and using neural networks for per-pixel classification/labelling of input images.



It is refreshing to see a university project with such modern standards! It is available on GitHub with Travis CI builds. It strives to be easy to use, performant and platform independent. It takes great care not to introduce any unnecessary dependencies, and it even sports a commercial-friendly 3-clause BSD license! This aligns perfectly with our core values.

I hope to use this project for the many classification problems that OctoMY™ will handle in the future.

Status update for OctoMY™

There is a new feature page up here that shows the status of each sub-component of the OctoMY™ project. I will keep it updated as development moves along.

2016-08-30

FYI: tiny_cnn was renamed to tiny_dnn

Just a tiny heads-up (pun intended)...

The tiny_cnn project I previously introduced on this blog has now been renamed to tiny_dnn.

https://github.com/tiny-dnn/tiny-dnn

2016-08-28

OctoMY™ Identicons

One of the central ideas in OctoMY™ was to have unique identities for each robot that could be recognized easily by the display of an identicon image. This identicon would retain some character so that you would instantly know that it was related to OctoMY™.

In the first implementation I simply generated a hash of the MAC address for the first non-wifi ethernet adapter found on the device as the basis for generating the identicon. The identicon graphics itself was simply the original OctoMY™ logo with some changes applied to the limbs and colors.

This proved to be very effective, and it became clear right away that there was something to the identicon idea. However, there were some problems with the approach. First, it would be very easy to "change" or "spoof" the identity of a robot simply by changing the MAC address. Second, there was no guarantee that the device would have a MAC address or network interface at all. Finally, what if we decided to scrap the old hardware and move the software to new hardware? The robot would "lose its mind"!

In the current iteration I have coupled the identity to the security aspects of the platform; the identity is now a secure hash of the full text of the public-key for the robot in PEM format. Benefits of this are many:

  • The robot may have an identity regardless of what hardware it runs on and how many ethernet devices are present.
  • An identity requires a key-pair, guaranteeing that the robot has the needed security in place.
  • By verifying a message using RSA and generating the id and identicon directly from the related pub-key, you know that the identicon is a real one and actually relates to the robot you are communicating with (paranoids would compare the full hash text as well, not just the identicon graphics).
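
The derivation itself is simple. Here is a minimal sketch of the idea using Qt (illustrative only, with hypothetical names; not the actual OctoMY™ code):

#include <QByteArray>
#include <QCryptographicHash>

// Illustrative sketch, not the actual OctoMY™ code: derive a stable
// identicon seed from the PEM text of a node's public key. The same key
// always yields the same seed, independent of hardware, MAC addresses
// or network interfaces.
QByteArray identiconSeedFromPubKey(const QByteArray &pubKeyPEM)
{
    return QCryptographicHash::hash(pubKeyPEM, QCryptographicHash::Sha256);
}

The resulting bytes can then drive the parameters of the identicon graphics, such as the direction and bend of the limbs.
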
Another thing that popped up as a problem was that I now had beautiful, charming identicons for the robots themselves (OctoMY™ Agents), but there was no similar way of positively identifying the remotes and hubs that would interact with the robots! What I ended up doing was simply making the same identicon generator produce identicons for OctoMY™ Remotes and OctoMY™ Hubs as well. And the results speak for themselves!

OctoMY™ Agent, Remote & Hub identicons in purple and green.

Notice how each tier has a different identicon personality while retaining the color palette.

The "personality" of the OctoMY™ Agent identicon varies the direction and bend of its limbs.

OctoMY™ Remote & Hub vary the radius/thickness of the rings and the rotation of the brain respectively.

2016-08-10

Anatomy of a hydraulic cylinder

Hydraulic cylinders look like this on the inside:



The pin eye and clevis are where the cylinder is mounted to the appliance, and are the points between which the cylinder exerts its force.

The barrel is the body of the cylinder where the piston and rod can slide in and out.

The ports are where the hydraulic fluids are pressed in or out.

The wiper keeps dirt from reaching the gland. O-rings seal the crevices between parts. The seals and wear rings keep the rod wear-free and leak-free.

The nut keeps the piston securely attached to the rod.


2016-07-27

OctoMY™ Birth Certificate

In a previous post I pondered whether I should have robots be mortal or immortal, or let the user choose. In the end I decided to let all robots be immortal, as "forcing" a robot to be mortal would be an artificial barrier at best. Clever OctoMY™ users would be able to "resurrect" their robots simply by performing a low-level copy anyway.

Instead of inviting a fight that is impossible to win, I decided to just ignore the implied fate of OctoMY™ robots by saying "if you are deleted you are no more, if you are copied you will live, and the copying happens at the mercy of your owner's skills".

Here is an early screenshot of the birth certificate in OctoMY™:

OctoMY™ Agent - Birth Certificate



2016-05-28

Deep Learning Terms For Toddlers

The best way to learn is to teach others. That is why I am making this list of terms related to deep learning in a way that is meant to be easy to understand for total beginners like myself.

This list tries to start with the basics, and tries to be readable from start to finish by only referencing items that are already covered earlier. It also intentionally spreads out the buzzwords so they can be grasped one by one.

It will try to stick to one term when several terms mean the same thing, and it will try to clear up ambiguities and overlaps when possible.

But first a fair warning/standard disclaimer: I am doing this only in the hope that I personally will learn from it, and I share it with you for its entertainment value only.

  • Nerve Cell: See Neuron.
  • Neuron: The kind of cell that brains are made of. Works by sending a pulse to other neurons when it receives a pulse. It only sends out a pulse if it receives pulses fast enough or strong enough for its liking. Neurons are modeled in software as "Artificial Neurons" to form "Artificial Neural Networks".
  • Synapse: A connection between the output of one Neuron and the input of another. This is where pulses travel between neurons. In a computer model synapses may be greatly simplified.
  • Transfer Function: See Activation Function.
  • Activation Function: The function in a neuron that triggers an output based on the sum of its inputs. There are several types (see the section below, and the code sketch after this list), but the most relevant for deep learning is called ReLU ("Rectified Linear Unit").
  • Neural Network: A number of neurons connected into a network. The number of neurons, number of connections, and the way in which the connections are made (aka the "architecture") can vary greatly and are all deciding factors for how the neural network will work, and what it can be used for.
  • Layer: In a Neural Network neurons can be arranged in layers. In each layer the neurons typically have their outputs connected only to neurons in the next layer of the model. Several layers containing different numbers of neurons can be connected, and each layer will then serve a separate purpose in the network. Connections in a layer typically go from one neuron in the previous layer to several neurons in this layer. Layers have different names depending on their use:
    • Input: Special purpose layer where data is first entered into the network
    • Output: Special purpose layer where results exit from the network
    • Hidden: "Normal" layer that simply stores or processes data through its neurons.
    • Pool: See this article.
    • Soft-max: See this article.
  • Connection topology:
    • Fully-connected: All neurons are connected to all others (on the order of N² connections).
    • Locally-Connected: Only neighboring neurons are connected.
  • Connection Weight: See Weight.
  • Weight: Neurons trigger when the value they receive from their connected inputs reaches a certain level. The value from each of its inputs is adjusted according to the weight before entering the activation function, so even if 10 neurons all send us a strong pulse, they may be weighted down to 0 and not cause activation.
  • Learning: What the network "learns" is simply the values that it stores in its weights. Exactly what a network learns we cannot know, because the process by which the learning takes place is indirect and very complex, just like in a real brain. However, we can use the network and its knowledge simply by passing pulses through it and recording the output. There are several forms of learning:
    • Supervised: The program knows the ground truth for training data and knows when the output from the network is good or bad, and can correct the network weights thereafter.
    • Unsupervised: The program does not know what is good or bad, but judges its performance by trying to replicate the input after it has gone through a network whose architecture does not allow the input to simply be copied verbatim, thereby forcing the network to "learn something, and demonstrate it later".
    • Back-propagation: A method of supervised learning where the weights of the connections in a neural network are adjusted to minimize the error of the output generated by the network when compared to ground truth.
    • Reinforcement Learning: Learning by reinforcing good behavior, useful when training data is sparse and extensive labeled examples are not available.
  • Network Types: As mentioned in the section about Neural Networks, they can be arranged in all sorts of ways. Here is a list of network types:
    • Perceptron: A single-layer neural network that classifies input as either "in" or "out" (a.k.a. binary classification). For example, it can determine if the input is an image of a dog or not.
    • MLP ("Multi Layer Perceptron"): A perceptron with multiple layers; see "Perceptron".
    • CNN ("Convolutional Neural Network"):
    • FNN ("Feedforward Neural Network"): An artificial neural network where the neurons do not form cycles (as opposed to recurrent neural networks).
    • RNN ("Recurrent Neural Network"): A network where neurons form "cycles" (as opposed to feed-forward neural networks). This imposes fewer restrictions on which connections may be made between neurons in the network. Several related concepts:
      • Fully Recurrent Network:
      • Hopfield Network:
      • Elman Network:
      • Jordan Network:
      • ESN ("Echo State Network"):
      • LSTM ("Long Short Term Memory Network"):
      • BRNN ("Bi-Directional Recurrent Neural Network"):
      • CTRNN ("Continuous Time Recurrent Neural Network"):
      • Hierarchical:
      • Recurrent Multilayer Perceptron:
      • Second Order Recurrent Neural Network:
    • Cognitron: An early implementation of a self-contained multi-layered neural network that used unsupervised learning.
    • Neocognitron: Evolution of the cognitron that addressed some of its shortcomings.
    • Topographic map: See Self Organizing Feature Map
    • Kohonen map: See Self Organizing Feature Map
    • Kohonen network: See Self Organizing Feature Map
    • SOM ("Self Organizing Map"): See Self Organizing Feature Map
    • DBN ("Deep Belief Network")
    • SOFM ("Self Organizing Feature Map"): A network that applies competitive learning to create a "map" that organizes itself to map the input space.
    • GSOM ("Growing Self Organizing Map"): A variant of SOM where nodes are added to the map following some heuristic. Invented to overcome the problem that deciding on a map size that works well is difficult; in GSOM the map starts small and grows adaptively until it is "big enough".
    • Diabolo Network: See Autoencoder.
    • Autoassociator: See Autoencoder.
    • Autoencoder: A non-recurrent neural network with the same number of inputs as outputs and with at least one hidden layer, that when receiving as input the value X is trained not to generate some output Y but to reconstruct its input X. Autoencoders are inherently well suited to unsupervised learning.
      • Denoising autoencoder: An autoencoder that is trained on corrupted versions of the input in order to make it learn more robust features.
      • VAE ("Variational Autoencoder"): A generative model that is similar in architecture to a normal autoencoder, but has a completely different usage.
  • Ground truth: When training a neural network we supply an input and expect an output. The ideal expected output is called ground truth.
  • Deep Neural Network: The deepness of a neural network refers to the number of layers in the network, where typical deep networks have more than 2 hidden layers.
  • Oldschool methods: Not related to neural networks, but common hand-crafted methods to solve problems that have since been surpassed by deep learning approaches.
    • SIFT.
    • SURF.
  • Boltzmann Machine: A wobbly and fun neural network that is run continuously to reach different states. Despite their intriguing nature, such networks are useless unless they are restricted in particular ways.
  • RBM ("Restricted Boltzmann Machine"): A neural network that can learn the probability distribution of its inputs.
  • Generative model: A model that generates random output conforming to a preset distribution.
  • Stochastic neural network: A neural network that has either stochastic activation functions or random weights assigned to it. Used in training to help avoid getting stuck in local minima.
  • Greedy Learning: Train each component of a network on its own instead of trying to train the whole network at once.
  • Convnet: See Convolutional Neural Network.
  • Training protocol: How a network is trained
    • Purely supervised
    • Unsupervised layerwise, supervised classifier on top
    • Unsupervised layerwise, global supervised fine-tuning
  • Intermediate representations: What the network or architecture actually learned at each level.
  • Reward Function: See Objective Function
  • Profit Function: See Objective Function
  • Fitness Function: See Objective Function
  • Cost function: See Objective Function.
  • Loss function: See Objective Function.
  • Objective Function: A function that maps an event to a cost or score. If we wish to minimize it, the function may be referred to as a loss function; if we wish to maximize it, we refer to it as a reward function.
  • Optimization: Finding the input that provides the best output from an objective function. For a loss function this means the lowest output; for a reward function it means the highest output.
  • Gradient: The slope of a function at a given point; picture a smooth landscape with dips and hills, where the gradient points in the direction of the steepest uphill slope.
  • Gradient Descent: Finding the nearest local minimum by repeatedly stepping in the direction of the steepest downward slope (see the code sketch after this list).
  • Incremental gradient descent: See Stochastic Gradient Descent.
  • SGD ("Stochastic Gradient Descent"): A gradient descent method that estimates the gradient from randomly chosen subsets of the training data at each step, making each step cheap at the cost of some noise, in an effort to find a local minimum.
  • Conjugate gradient: An alternative to gradient descent, used among other things for solving systems of linear equations.
  • FPROP ("Forward Propagation"): To feed data into a neural network to get the resulting output value from the network. May also be called "testing" the network. The output can be compared with the ground truth/real value, and the weights of the network can be adjusted based on the deviation.
  • BPROP ("Back Propagation"): To minimize error in the network, you propagate backwards by finding the derivative of the error with respect to each weight and then subtracting this value (scaled by the learning rate) from the weight value.
  • Energy based unsupervised learning:
    • PSD ("Predictive Sparse Decomposition"): TODO.
    • FISTA ("Fast Iterative Shrinkage-Thresholding Algorithm"): TODO.
    • LISTA ("Learned Iterative Shrinkage-Thresholding Algorithm"): TODO.
    • LcoD ("Learning Coordinate Descent"): TODO.
    • DrSAE ("Discriminative Recurrent Sparse Auto-Encoder"): TODO.
  • Deep Learning: Broad term relating to working with "deep networks" and related technology to solve problems.
  • Manifold: A 2D surface wrapped in 3D space (think of a curled-up sheet of paper) to form a shape that can be treated as a Cartesian coordinate system locally, but where each point in the map actually has 3 coordinates as well.
  • Entangled Data Manifold:
  • Linear separability: When a line (in 2D), plane (in 3D) or hyperplane (in n-D) can be found such that two distinct sets of points can be completely separated by it.
  • Hebb's rule: In a biological brain synapses that are used often are strengthened while synapses that are not will weaken.
  • Feature Extractor: Something that derives values (features) from input data, intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps.
  • Invariant Feature Learning: TODO.
  • Trainable Feature: Something that can be understood by a neural network. In deep learning each layer typically learns features at a different level of abstraction: low-level features are learned in layers close to the input, while progressively more complex and abstract high-level features are learned in each successive layer.
    • Image Recognition
      • Pixel
      • Edge
      • Texton: Textons refer to fundamental micro-structures in generic natural images and the basic elements in early (pre-attentive) visual perception.
      • Motif
      • Part
      • Object
    • Text
      • Character
      • Word
      • Word group
      • Clause
      • Sentence
      • Story
    • Speech
      • Sample
      • Spectral Band
      • Sound
      • Phone
      • Phoneme
      • Word
  • Classification: Determine to which category an observation belongs.
  • SVM ("Support Vector Machine"):
  • CV ("Computer Vision"): The field of study relating to processing of visual data in computers such as images and videos.
  • Feature: An artifact such as a pattern in an image or a phone in an audio sample that can be reliably detected, tracked, parameterized and stored by feature recognizers for processing by feature classifiers.
    • Haar like features: See Haar Features.
    • Haar features:
  • Integral image: An image representation, or look-up table, where a sum of intensities is stored per pixel instead of the direct image intensity. This simple structure is used to optimize the performance of some computationally expensive image processing algorithms that depend on these sums.
  • Feature Engineering: Hand-crafting of feature detectors.
  • Stride: How many units (such as pixels or neurons) a sliding window travels between iterations when feeding (image) data into a network, either during training or testing.
  • Training: Changing a network to make it better at solving the problem for which it is being designed.
  • Testing: Using a network without changing it to either see how well the network is working during training or to actually use the output in production.
  • Ventral Pathway: A pathway in the mammalian brain where visual input is recognized. It has several stages, each with its own intermediate representation, just like a deep neural network.
  • Sparse Modeling:
  • Neuron Cluster:
  • Shared Weights:
  • ICA ("Independent Component Analysis"):
  • Training set:
  • Receptive field: The region of the input (for vision, the area of the visual field) that a given neuron responds to; neurons in later layers have larger receptive fields and detect concepts of higher levels.
  • Datasets:
    • Barcelona
    • Imagenet
    • SIFT Flow
    • Stanford Background
    • NYU RGB-Depth Indoor Scenes
    • RGB-D People
    • MNIST http://yann.lecun.com/exdb/mnist/
    • INRIA http://pascal.inrialpes.fr/data/human/
    • GTSRB http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset
    • SVHN ("Street View House Numbers"): http://ufldl.stanford.edu/housenumbers/
    • Music:
      • GTZAN: http://marsyas.info/downloads/datasets.html
  • RGB-D ("Red Green Blue Depth"): Image format with 3 channels devoted to color and one last channel devoted to depth (z-buffer).
  • Classification loss:  TODO.
  • Reconstruction loss:  TODO.
  • Sparsity Penalty:  TODO.
  • Inhibition Matrix:  TODO.
  • Invariant Features:  TODO.
  • Lateral Inhibition:  TODO.
  • Gibbs measure: See Gibbs distribution.
  • Gibbs distribution:  TODO.
  • Semantic Segmentation: Dividing an image into regions that fit different labels.
  • Scene parsing: TODO.
  • Laplacian Pyramid: TODO.
  • Labeling: See Semantic Labeling.
  • Semantic Labeling: Labeling every pixel of an image with the object it belongs to.
  • Overfitting: Training resulting in the network simply memorizing the training data instead of inferring actual knowledge from it.
  • Regularization: Method to avoid overfitting by adding a term to the cost function that penalizes large weights, prioritizing small weights alongside minimizing the original cost.
  • Cross-entropy cost function: A cost function that reduces learning slowdown when sigmoid neurons saturate, compared to the quadratic cost.
  • Dropout:
  • Epoch: One full pass through the training set during training.
  • Learning rate annealing:
  • Pre-training: Training a network using a different method before the actual training starts, to get a good set of initial weights suited to the particular training in question. Usually done to avoid common errors in training due to bad initial weights.
  • Deep Architecture: A way to organize a set of networks to conduct deep learning
    • Feed-Forward: multilayer neural nets, convolutional nets
    • Feed-Back: Stacked Sparse Coding, Deconvolutional Nets
    • Bi-Directional: Deep Boltzmann Machines, Stacked Auto-Encoders
  • Spatial Pooling:
    • Sum or max
    • Non-overlapping / overlapping regions
    • Role of pooling:
      • Invariance to small transformations
      • Larger receptive fields (see more of input)
  • Retinal mapping: See Retinotopy.
  • Retinotopy: The mapping of input from the retina to neurons.
  • Activation function types:
    • Sigmoidal function: A function with an S-like shape (a continuous transition from 0 to 1).
    • Sigmoid: 1/(1+exp(-x))
    • TanH
  • Winner-take-all: Let neurons compete for activation.
  • Soft-max: TODO.
  • ReLU ("Rectified Linear Unit"):
    • Simplifies backpropagation
    • Makes learning faster
    • Avoids saturation issues
    • Preferred option
  • Simple Cell:  TODO.
  • Complex Cell: TODO.
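
To make a few of the terms above concrete, here is a small self-contained C++ sketch of my own (an illustration, not taken from any particular library) showing the common activation functions and gradient descent on a toy one-dimensional loss function:

#include <cmath>
#include <cstdio>

// The common activation functions listed above.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
double tanhAct(double x) { return std::tanh(x); }
double relu(double x)    { return x > 0.0 ? x : 0.0; }

int main()
{
    // Gradient descent on the toy loss L(w) = (w - 3)^2, whose
    // gradient is dL/dw = 2 * (w - 3). We repeatedly step against
    // the gradient, i.e. toward the steepest downward slope.
    double w = 0.0;                  // initial "weight"
    const double learningRate = 0.1;
    for (int step = 0; step < 50; ++step) {
        const double gradient = 2.0 * (w - 3.0);
        w -= learningRate * gradient;
    }
    std::printf("w converged to %.4f (expected 3.0)\n", w);
    std::printf("sigmoid(0)=%.2f tanh(0)=%.2f relu(-2)=%.2f relu(2)=%.2f\n",
                sigmoid(0.0), tanhAct(0.0), relu(-2.0), relu(2.0));
    return 0;
}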

2016-05-27

Existential woe; should robots be designed to be immortal or mortal?

While working on the pairing of nodes within the OctoMY™ ecosystem, I have hit upon an interesting, almost scary design choice: should the robots in OctoMY™ be immortal, or should their soul die with the hardware?

In reality, there are 3 valid choices:

  • All robots are immortal: when the hardware dies, simply copy the software to new hardware to resume the robot's existence.
  • All robots are mortal: when the hardware dies, copying the software to another robot will automatically generate a "new soul", and the old one will be lost forever.
  • Leave the choice up to the owner of the robot, so that when the robot is "born" and the first configuration is completed, the robot will either be immortal or mortal depending on the user's preference.
This peculiar problem stems from the way nodes identify themselves when communicating. To inspire any form of trust when communicating with each other, the nodes must identify themselves with a signature string that is directly related to the public key used to securely transfer secrets among themselves.

If it is NOT related to the key then it would be easy to "spoof" the signature of another node and all sorts of security problems would arise.

Further, simply using the pub-key itself as a signature would be valid, but this would allow the pub-key to be copied should the old hardware die, hence the immortality.

Currently the robot signature is generated from some unique hardware parameters that differ between nodes. Combining this unique "hardware fingerprint" into the signature of the node would mean that copying software from one node to another would yield a new and completely different signature, and so the "old signature would die with the old hardware", suggesting that the robot would be mortal.

I have been torn between the 3 alternatives above. On the one hand, I really think that both robots and human beings of the future will be immortal. But on the other hand, owning a robot today that "will die one day and there is nothing you can do to change that" gives it a human dimension.

I think the 3rd choice, of letting the user decide may be best, that way the user can sign the birth certificate of their robot themselves. After all, who am I to decide the mortality of your robots?

2016-05-12

What is OctoMY™ ?

This post will be an introduction to concepts of OctoMY™.

I am putting all my spare time into this overly ambitious project, and there are some great things in store. However instead of waiting, I will let some details trickle out on the web early, just for good measure.

Ok so what is OctoMY™ ?

It is free and open-source software that you should be able to easily use in your hobby robot project to do a bunch of cool stuff.

Cool stuff like what? I can hear you say.

Well, this is where the "overly ambitious" part comes in. I know that for this project to be successful I need to cram in some pretty cool stuff! At the same time, I know that there needs to be a lot of basic boring stuff in place too, because cool stuff usually relies on boring stuff to work. From this notion the following plan has emerged.

OctoMY™ Agent
There are 4 "tiers" in the model:
  1. Agent: This is your robot
  2. Remote: This is your laptop or mobile device used to control said robot
  3. Hub: This is your server running in the cloud or in your basement (or even in your laptop) used to keep track of multiple robots and share and store data between them.
  4. Zoo: This is a central service run by the OctoMY™ project here. It is used to brag about your robot online and allow the public to see it. Basically it's a Facebook for your robots.
OctoMY™ Remote
Agent and Remote will be available as apps readily downloadable from Google Play, or as binaries for Ubuntu. Hub will most likely be available as a Docker image, or as Ubuntu/Debian binaries. Zoo is just there, running in the cloud.

OctoMY™ Hub
OctoMY™ will try to handle all the boring communication stuff and security issues for you so that your robot will have privacy and stay safe.

That was the boring bit. Now for a list of cool stuff that this "boring" platform can enable:
  • Controlling swarms of quad-copters from your phone.
  • Having an army of hexapod robots roam an area and generate a common 3D map that can be used for accurate simultaneous localization and mapping (SLAM).
  • Having your robot "pick up strangers" online and exchanging "love letters" with them.
  • Let crowds see live streams from your robot.
  • Send commands to your robots via twitter.
  • Let your robot collaborate on tasks with the robots of other OctoMY™ users online.
  • Letting others easily use the configurations and adaptations you make for your robot.
OctoMY™ Zoo
This list was just a quick one, severely limited by my imagination really. The point is that with a common software stack like this, and with many eager enthusiasts working on it, the possibilities are virtually endless.

Hopefully I will soon have some software up here that is worth downloading.


2016-04-30

Blog studio in the making

I have not invented a name for it yet, the blog studio that I am currently setting up in my basement to support the marketing effort for the OctoMY™ project.

One thing is for sure: I am really looking forward to broadcasting from it!

The OctoMY™ blog studio WIP.

2016-03-27

tiny-cnn

I know that Deep Learning is the future of anything related to intelligent robotics. Since 2012 it has been the most disruptive technology in the field, making several decades' worth of carefully hand-engineered and tweaked code bases obsolete literally overnight. And as a result, all the big players in IT such as Google, Facebook, Nvidia etc. are pouring their biggest bucks into this retro area of research.



Retro? Yes, because research had been done for decades in this field before it was kind of forgotten. Why was it forgotten? After numerous stabs at getting working implementations of the many advanced models based on biological principles like neurons and synapses, it was collectively deemed too resource intensive for the computer hardware of the time.

But along with fancy virtual reality headgear, the "neural network artificial intelligence" of 1980s and 1990s sci-fi has now resurfaced, this time with actual real promise (for VR, see Oculus Rift).

How? Well, because of the unfathomable increase in computers' capacity to process, store and communicate data, combined with the unfathomable increase in the number of computers connected together in easily accessible clusters and farms such as AWS, combined with maybe the most important parameter: the unfathomable amount of readily tagged media for training purposes (read: cat pictures on YouTube).




NOTE: These graphs really do not do my statement justice unless you grasp that the scale is exponential, and notice the part where it says "human brain" to the right.

Suddenly the dusty old models from the 1990s could be plugged into a new computer and give results literally overnight.

I am truly fascinated by this new-old technology that suddenly promises that our computers in the near future may understand our desires to an exponentially growing degree. A technology that makes self-driving cars and intelligent speech-driven assistants something we get to see not sometime before we die, but something we can buy within a decade or even sooner. And who knows what kind of crazy tech we will depend on once this first generation of deep learning has reshaped our lives?

I am a true beginner in Machine Learning and Deep Learning, and I intend to use OctoMY™ as a vessel for learning about it. It is my ambition to make DL an integral part of OctoMY™ as soon as possible, putting it in the hands of the hobby robotics enthusiasts out there. Because, as fascinating as DL is, almost no one outside the field understands what it is or the significance it carries.

But where would a beginner like myself start? It is really a jungle out there in terms of available software libraries/frameworks/toolboxes, all promising varying degrees of features, performance and integration.

So, in my quest to find the perfect deep learning library/framework/toolbox to use from OctoMY™ I found this useful presentation of tiny-cnn, and I decided to try it out.

According to the project page on GitHub, tiny-cnn is

A header only, dependency-free deep learning framework in C++11
It promises integration with models developed in Caffe. I will give it a shot and we will go from there. Stay tuned!

2016-03-20

The blog formerly known as "Devol Robot Project" is dead. Long live "The OctoMY™ Blog"!

The title of this post really says it all. I have decided to change the direction of this blog. From this day forth it will no longer be the ramblings of a curious learner without direction. It will, au contraire, be the ramblings of a curious learner WITH direction!



All the old posts will remain, the design will remain, and for now the old URL will remain as well. In fact, nothing will change but the title, tagline, logo and editorial direction that this blog takes.

The new direction will be something like 40% OctoMY™ news, 40% robotic tidbits, and 20% actually useful content like tutorials and cheat sheets.