Showing posts with label tutorial. Show all posts

2017-05-28

Short introduction to HTM

Jeff Hawkins is an engineer, businessman, neuroscientist and inventor who has spent his life in pursuit of a grand unified theory of how our brain works. He has a noteworthy list of accomplishments behind him, and has come a long way in his quest.

I first heard about him when I watched his TED talk. From there my curiosity took over, and I started looking into the research he is conducting with his company Numenta. I will try to summarize the basics of his theory in this post.

If you want to learn more, a good place to start is his book "On Intelligence".

So what is HTM? HTM is an acronym for Hierarchical Temporal Memory, and the name captures three important aspects of the algorithm in the theory. The part of our brain that contributes most to intelligence is the "neocortex".

In contrast to the rest of the brain, which has evolved for longer and is therefore much more specialized, the relatively young neocortex has a homogeneous structure that is reused throughout.

The cortex is a 2 mm thick sheet of neurons about the size of a large napkin, crumpled to fit inside our cranium. The sheet is divided into 6 layers of neurons that are connected in a particular way.



Small patches of the cortex represent stages in a hierarchy, and the connections between the neurons are what dictate the boundaries of each patch. Signals pass from lower stages up to higher stages and back down.


The lowest stages are connected to our senses such as the eyes, ears and skin via mechanisms of the old brain. The signals passed on from there consist of patterns of impulses that make their way up the stages, and it is temporal sequences of these patterns that are learned at each stage.



And perhaps the most important part of the theory is this: each stage tries to remember the signals coming from lower stages, and once it has learned them, to predict them. Only the signals that have not been predicted, or "learned", pass on to the next stage unchanged, so at every stage the impulses are refined and condensed into more abstract, higher-level information.

When a stage sees a new signal that it cannot predict, the signal is sent up to the next stage until some stage recognizes it. If no stage recognizes the input, it is forwarded to the hippocampus, which sits at the logical top of the hierarchy. The hippocampus will keep the unknown information around for some time, until it is no longer useful. Hopefully the stages below will by then have managed to learn it; if they have not, it is simply discarded.

Beyond this introductory description of HTM, there are many important details that flesh out how this relatively simple algorithm can explain our intelligence and sense of being.

I can warmly recommend reading the book "On Intelligence".

2017-01-15

Canuckistanian Dictionary

I have compiled this rudimentary Canuckistanian Dictionary, so that you may also follow the teachings of The One True Canuckistanian.

Schmoo Any liquid, be it hydrophilic or hydrophobic, homogeneous or pasteurized, that has a viscosity between that of Canuckistanian Beaver Semen at one hour after ejaculation (9.35 +/- 0.99 centipoise) and Canuckistanian Maple Syrup (175 +/- 25 centipoise).
Chooch
  1. To move, either orthogonally, laterally or radially, or any combination of the aforementioned.
  2. To let pixies pass through the wires of a contrivance.
See also "Pixies".
Pixies Small, usually obedient particles that are expected to form lines inside wires to make contrivances chooch. See also: "Chooch", "Angry Pixies".
Angry Pixies Pixies that for some reason are no longer obedient, and may for no reason at all step out of line. See also: "Pixies".
Bumblefuckery An activity that may result in a contrivance ceasing to chooch. See also "Pop fart and give up the ghost".
(The room formerly known as) wife's sewing room A traditional in-home style Canuckistanian man-cave. Dedicated to learning from books and staying in front of the screen, with the occasional work on brain boxes. Tempered at around 23 Dungrees.
(The) shop A traditional out-of-home style Canuckistanian man-cave. Dedicated to performing bumblefuckery with the bridgeport. Tempered at around 15 Dungrees.
(The) bridgeport An age old contrivance for manually and forcefully opening snail-mail, cracking plexiglass, bending and annealing end-mills and making vodka glasses from sheets of copper.
(The) screen Special purpose contrivance that allows Canuckistanians to learn stuff from the interwebs and youtubes.
Keep your dick in a vise Have a nice day. See also "Keep your stick on the ice".
Brainbox A contrivance dedicated to the controlled conversion of pixies into angry pixies, thereby making one or more other contrivances chooch.
Sexpert Dumb fuck.
Enginerd Engineer
Stupidvisor Supervisor
Frankenstein Unit of measure for temperature.
  • °Dungree = ( °Frankenstein - 32) × 5/9
  • °Frankenstein = °Dungree × 9/5 + 32
See also: "Dungrees"
Dungrees Unit of measure for temperature.
  • °Dungree = ( °Frankenstein - 32) × 5/9
  • °Frankenstein = °Dungree × 9/5 + 32
See also: "Frankenstein"
Vijeo Sequence of images displayed in rapid succession with accompanying audio.
Wank overload An event where too much wank happened.
Cromulent Important
Redonculous Ridiculous
Jelly 4 ur Jam Bang for your buck
Bifocals Glasses
Canuckistan The land of the polar beers.
(Canuckistanian) Copeks Primary unit of wealth in Canuckistan. See also "(Canuckistanian) Pesos".
(Canuckistanian) Pesos Secondary unit of wealth in Canuckistan. See also "(Canuckistanian) Copeks".
Scientician Learned Canuckistanian.
Scienticious Actions carried out exclusively by learned Canuckistanian. See also "Scientician".
Yada yada yada I don't feel like elaborating now.
It's the circle of life At this very moment, I am having an epiphany.
Dog bless the nanny state Idiomatic exclamation of pure satisfaction equivalent to "God bless the government of Canuckistan for looking after its citizens"!
Pop fart and give up the ghost To stop chooching.
Flog it like a rented mule To start chooching.
Son of a diddely Mild expression of disappointment.
Safety squint engaged Eyes squinting momentarily to avoid damage. See also "Bifocals".
Fantabulous Mild expression of approval.
Skookum Of right Canuckistanian quality.
Keep your stick on the ice Good day. See also "Keep your dick in a vise".
Rippums Rotations Per Minute.
Meat hook abortion Contrivance of unknown or non-Canuckistanian origin. See also "Chinesium".
Cheese grade Of considerable softness.
Bob's your auntie Let's try this.
The sharpest bowling ball in the shed Round Contrivance typically kept by land owners of Canuckistan in out-door sheds. Used for measuring relative mental capacity.

2016-09-02

How to get going with Vulkan & NVIDIA on Ubuntu 14.04 and 16.04

First, I found that there really was no support for Vulkan in Ubuntu 14.04 LTS, at least not without a lot of custom configuration that I would rather avoid. So I decided to upgrade my system to Ubuntu 16.04.
Getting started with Vulkan on Ubuntu with NVIDIA.


Next I tried to find my way by following a lot of different tutorials and advice. Nothing seemed to work for me, and getting Vulkan on my laptop was looking impossible until I came across this simple advice, which I have shamelessly stolen and extended with my own observations. So here are the steps:
  1. Open this page on your tablet/phone
  2. Hit <CTRL> + <ALT> + <F1> to go to a text-only terminal
  3. Log in using your normal username and password
  4. Get rid of the old nvidia drivers, libraries, configurations and other gunk (this step is optional, but if you tried some stuff before getting here it can be well worth starting fresh): # sudo apt-get remove --purge nvidia*
  5. Add official ppa repository for vulkan: # sudo apt-add-repository ppa:canonical-x/vulkan
  6. Install an nvidia driver that supports Vulkan (NOTE: version 367 was the recommended one at the time of writing): # sudo apt-get install nvidia-367
  7. Install Vulkan libs: # sudo apt-get install libvulkan1 mesa-vulkan-dev mesa-vulkan-drivers
At this point you may reboot and run the vulkaninfo command from a terminal to see what Vulkan support is found on your system.

TIP: If you have a dual-GPU setup, which is common nowadays, your secondary (usually integrated Intel) GPU might get in the way of your Vulkan setup, and this page shows you how to disable it.

Now it's time to look at and run some code. The official SDK for Vulkan is made by LunarG, and it is not in the Ubuntu repos. It contains libraries, tools and example code for you to play with. You have to download it and unpack/build/install it the old-fashioned way:

  1. Download the sdk from here: https://lunarg.com/vulkan-sdk/
  2. Once downloaded, chmod +x the file to be able to run it.
  3. Run the file, copy the extracted folder to some location and add the following environment variables:
export LD_LIBRARY_PATH=$HOME/VulkanSDK/1.0.21.1/x86_64/lib
export VK_LAYER_PATH=$HOME/VulkanSDK/1.0.21.1/x86_64/etc/explicit_layer.d

You can put these in a script that runs on login, such as ~/.bash_profile.

NOTE: You may need to adjust the paths to match the location where you put the SDK. ~/.local/, /opt/ and /usr/local/ are good candidates.









2016-08-10

Anatomy of a hydraulic cylinder

Hydraulic cylinders look like this on the inside:



The pin eye and clevis are where the cylinder is mounted to the appliance, and they are the points between which the cylinder exerts its force.

The barrel is the body of the cylinder where the piston and rod can slide in and out.

The ports are where the hydraulic fluid is pushed in and out.

The wiper keeps dirt from reaching the gland. O-rings seal the crevices between parts. The seals and wear rings keep the rod wear-free and leak-free.

The nut keeps the piston securely attached to the rod.


2014-08-20

Reading Emcotronic M1 ROMs

I have successfully read the ROMs from the M1 mainboard using the TOP853 ROM programmer.

The process was surprisingly smooth. I was prepared for all sorts of problems along the way, but it just worked. Uncanny...

I ran the TOP853 software in VirtualBox to read the ROMs into .bin files, and then I used bokken from the Ubuntu repos to parse and look at the files. This is how I did it:



Open the chip select dialog in TOP853.

Type in 27256 and select EPROM.

Select a long delay for reliable reading.

Start reading the chip into the buffer.

Save the buffer to .bin file.
 
Open bokken and select the .bin file.

You can browse the disassembled code.
 
You can view the strings table.
The plan now is to buy a few replacement ROMs (modern variants with identical pin-out), copy the images over and see if they work. Once they work, I can start modifying the code slightly.

Figuring out what to change in the code, and where, might seem harder than it is. I thought about how to do it, and the idea I came up with is that the strings tell me something about what the code does, so all I have to do is follow the strings and see which code pushes them around. Then I identify which strings relate to which functions (such as loading/saving MSD data) and hijack those routines by jumping to an unused location where I have space for my own code.

Well, in theory at least.

2013-02-09

OpenGL source code for Oculus Rift

If you are at all interested in gadgets, graphics or games, you have surely picked up news about the up-and-coming disruptive technology that is the Oculus Rift. If you haven't, then you should definitely check it out.

I have already ordered my developer kit, and I am looking forward to using the head-mounted display as a way to display the UI for my robot.

Since there isn't really much to go on yet when it comes to code examples and such, I decided to create example source code for the Oculus Rift using OpenGL. A little tutorial, if you will, for rendering a scene in OpenGL in a way that will be adaptable to the VR gadget when it arrives.

First, some explanation. When you render in 3D, you define what is called a frustum, the "view volume". It is usually shaped like a truncated pyramid extending from your viewpoint in the scene, in the direction you are viewing.


Regular perspective frustum in OpenGL

This is fine when you are rendering to a regular monitor. However, when you want to display the result on any form of stereoscopic display, such as a 3D TV, VR goggles or similar, you have to render TWO frustums, one for each eye.

Stereoscopic perspective frustum in OpenGL

Since most stereoscopic displays today have a moderate field of view, the image will not be very immersive at all. The Oculus Rift changes this by boosting the field of view (also known as the view angle) to 110 degrees. This covers a much larger part of what we are able to perceive, and together with the stereoscopic 3D effect it gives a very immersive experience.

Wide angled stereoscopic perspective frustum (Oculus Rift style) in OpenGL

So how is this done in OpenGL? This entry in the OpenGL FAQ sums it up really nicely.

What are the pros and cons of using glFrustum() versus gluPerspective()? Why would I want to use one over the other?
glFrustum() and gluPerspective() both produce perspective projection matrices that you can use to transform from eye coordinate space to clip coordinate space. The primary difference between the two is that glFrustum() is more general and allows off-axis projections, while gluPerspective() only produces symmetrical (on-axis) projections. Indeed, you can use glFrustum() to implement gluPerspective(). However, aside from the layering of function calls that is a natural part of the GLU interface, there is no performance advantage to using matrices generated by glFrustum() over gluPerspective().
Since glFrustum() is more general than gluPerspective(), you can use it in cases when gluPerspective() can't be used. Some examples include projection shadows, tiled renderings, and stereo views.
Tiled rendering uses multiple off-axis projections to render different sections of a scene. The results are assembled into one large image array to produce the final image. This is often necessary when the desired dimensions of the final rendering exceed the OpenGL implementation's maximum viewport size.
In a stereo view, two renderings of the same scene are done with the view location slightly shifted. Since the view axis is right between the “eyes”, each view must use a slightly off-axis projection to either side to achieve correct visual results.

The glFrustum call, in other words, allows you to set up a projection matrix with the necessary offset. But how should we go about rendering the scene? The Oculus Rift expects the images for the two eyes to be rendered side by side, so we simply render the scene twice, using the proper viewport each time. Again, from the OpenGL FAQ:

9.060 How can I draw more than one view of the same scene?
You can draw two views into the same window by using the glViewport() call. Set glViewport() to the area that you want the first view, set your scene’s view, and render. Then set glViewport() to the area for the second view, again set your scene’s view, and render.
You need to be aware that some operations don't pay attention to the glViewport, such as SwapBuffers and glClear(). SwapBuffers always swaps the entire window. However, you can restrain glClear() to a rectangular window by using the scissor rectangle.
Your application might only allow different views in separate windows. If so, you need to perform a MakeCurrent operation between the two renderings. If the two windows share a context, you need to change the scene’s view as described above. This might not be necessary if your application uses separate contexts for each window.
Without further ado, here is my working code for a stereoscopic view, which I think will work pretty well with the Oculus Rift from what I have gathered. It might need some tweaking with respect to the projection, as they have been talking about "adjusting for fisheye effect". However, I assume that will be easy to do with a custom projection matrix.

/*
 * StereoView.hpp
 *
 *  Created on: Feb 7, 2013
 *      Author: Lennart Rolland
 */

#ifndef STEREO_VIEW_HPP_
#define STEREO_VIEW_HPP_

#include <cmath> // for tan() in the constructor
#include "GLStuff.hpp"
#include "View.hpp"

using namespace std;
// Degrees-to-radians conversion factor
const float DTR = 0.0174532925f;
// Intraocular distance (distance between eyes, should match the real distance between the eyes of the viewer when realism is a goal)
const float IOD = 0.5f;

class StereoView: public View {
private:

 class Eye {
 private:
  float left;
  float right;
  float bot;
  float top;
  float translation;
  float near;
  float far;
 public:

  Eye(float lf, float rf, float bf, float tf, float mt, float near, float far) :
    left(lf), right(rf), bot(bf), top(tf), translation(mt), near(near), far(far) {
  }

  void apply() {
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity();
   //Set view frustum
   glFrustum(left, right, bot, top, near, far);
   //Translate to cancel parallax
   glTranslatef(translation, 0.0, 0.0);
   glMatrixMode (GL_MODELVIEW);
  }
 };

 int w, h;
 float aspect, top, right, shift, distance;
 Eye eyeLeft, eyeRight;
 bool useViewports;

 void init(void) {
  glMatrixMode (GL_PROJECTION);
  glLoadIdentity();
  glMatrixMode (GL_MODELVIEW);
  glLoadIdentity();
 }

 void drawSceneInstance(Scene &scene, Engine &e) {
  glPushMatrix();
  //Translate to screen plane
  glTranslatef(0.0, 0.0, distance);
  scene.render(e);
  glPopMatrix();
 }

 void selectEye(bool left) {
  //Use viewports
  if (useViewports) {
   const int w2 = w / 2;
   glViewport(left ? 0 : w2, 0, w2, h);
   glScissor(left ? 0 : w2, 0, w2, h);

   glEnable (GL_SCISSOR_TEST);
   glClearColor(left ? 1.0 : 0, 0, left ? 0 : 1.0, 1.0);
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glDisable(GL_SCISSOR_TEST);

   //cout << "viewport left:" << left << "\n";
  }
  //Use native OpenGL stereo back buffers
  else {
   glDrawBuffer(left ? GL_BACK_LEFT : GL_BACK_RIGHT);
   //cout << "buffer left:" << left << "\n";
  }
 }

public:
 StereoView(int w = 1280, int h = 720, bool useViewports = true, float near = 3.0, float far = 30.0, float fov = 110, float screenZ = 10.0, float distance = -10.0) :
   // glFrustum expects (left, right, bottom, top, near, far); each eye gets an asymmetric frustum shifted to its side
   w(w), h(h), aspect(double(w) / double(h)), top(near * tan(DTR * fov / 2)), right(aspect * top), shift((IOD / 2) * near / screenZ), distance(distance), eyeLeft(-right + shift, right + shift, -top, top, IOD / 2, near, far), eyeRight(-right - shift, right - shift, -top, top, -IOD / 2, near, far), useViewports(useViewports) {
 }

 virtual ~StereoView() {
 }

 void resize(int w, int h) {
  float fAspect, fHalfWorldSize = (float) (1.4142135623730950488016887242097 / 2);
  glViewport(0, 0, w, h);
  glMatrixMode (GL_PROJECTION);
  glLoadIdentity();
  if (w <= h) {
   fAspect = (GLfloat) h / (GLfloat) w;
   glOrtho(-fHalfWorldSize, fHalfWorldSize, -fHalfWorldSize * fAspect, fHalfWorldSize * fAspect, -10 * fHalfWorldSize, 10 * fHalfWorldSize);
  } else {
   fAspect = (GLfloat) w / (GLfloat) h;
   glOrtho(-fHalfWorldSize * fAspect, fHalfWorldSize * fAspect, -fHalfWorldSize, fHalfWorldSize, -10 * fHalfWorldSize, 10 * fHalfWorldSize);
  }
  glMatrixMode (GL_MODELVIEW);
 }

 void renderView(Scene &scene, Engine &e) {
  init();
  gluLookAt(pos.x, pos.y, pos.z, dir.x, dir.y, dir.z, up.x, up.y, up.z);
  //Clear color and depth for all buffers
  glDrawBuffer (GL_BACK);
  glViewport(0, 0, w, h);
  //Left eye
  selectEye(true);
  eyeLeft.apply();
  drawSceneInstance(scene, e);
  //Right eye
  selectEye(false);
  eyeRight.apply();
  drawSceneInstance(scene, e);
  glDrawBuffer(GL_BACK);
  glViewport(0, 0, w, h);
  glDisable (GL_SCISSOR_TEST);
 }

};

#endif /* STEREO_VIEW_HPP_ */