Funding of OctoMY™ and general status

Progress on OctoMY™ has, as promised, slowed down in 2020. A lot has happened behind the scenes, and in this post I hope to better explain what has been going on.

In one sentence: I put development of OctoMY™ on hold temporarily while working on a project that can fund the future development of OctoMY™.

Why did I do this? I realized that organically growing OctoMY™ as an open source project, as I first planned, was not feasible from its current state. I would have had to devote all my time to selling the concept to unsuspecting developers rather than advancing the development status, and even then I would risk attracting very few and/or very junior contributors to the project.

So I decided to switch gears and start another, secret, unrelated project that could generate some revenue. I will not disclose any details about this project because they are truly not relevant (and I do not wish to attract any attention to it, as it is a strictly commercial B2B project).

However, after exactly one year of focusing 100% of my development time on this new project (let's call it "FK"), I feel I at least owe blog readers a status update!

The status is that I have gone through development hell on FK, and I am finally emerging victorious on the other end. I created an MVP over 6 months ago and was ready to launch, but decided to create a second MVP (the biggest no-no in the history of IT) that suffered major feature creep. We live and we learn. The important positive takeaways are:

  • The FK project is 95% ready to launch.
  • The FK that launches now is much better than the original MVP, making it much more likely to succeed.
  • The FK codebase is reusable, modular and useful, far from the hot mess it was at the start. In fact, many of the useful parts have now been open sourced and rebranded as octomy projects in gitlab.
  • I learned a lot of new technologies during my one year of FK development: Kubernetes & Docker, Python, FastAPI, Flask, ... The list goes on and on...
  • I also had the opportunity to form some opinions on good practices along the way, which I would have had to do anyway for the OctoMY™ server side. I simply practiced on the FK project instead of on OctoMY™ directly.
  • And maybe most importantly: when the FK project launches, I hope to see a solid revenue stream that will eventually trickle into OctoMY™ in the form of my time (I will be able to work on FK and OctoMY™ full time instead of just in my spare time).

I hope this status update is sufficient to explain why it appears that development stopped completely. If you look at the octomy group in gitlab you will see that this is far from the case: 7 (!!) new projects were added 2 weeks ago.

And with that I wish you all a happy 2021!


Deploying Python packages to PyPI using gitlab and twine

While working on the web-minify project, I had to figure out just how it is possible to deploy a Python package to PyPI from gitlab. Here are my findings; hopefully they are useful to someone else!

Key information

  • The gitlab docker executor runs each job in a separate container.
  • No data is shared between jobs by default, so you have to use build artifacts to share files between jobs.
  • The build job can thus prepare the package in the dist/ folder, ready for deployment, and then mark the dist/ folder as a build artifact.
  • The deploy job will then have the dist/ folder available (all jobs subsequent to the one that defined the artifact have access to it).
  • The deploy job can then invoke twine to upload the package to PyPI.
  • Twine takes the username and password from a configuration file, or from the TWINE_USERNAME and TWINE_PASSWORD environment variables.
  • To avoid storing the password in the source code, a TWINE_PASSWORD environment variable is set in the gitlab CI/CD settings of the project.
  • PyPI supports uploading packages using API tokens instead of a username/password. In this mode the username is set to the literal string "__token__" and the password is the long token string that PyPI hands you when you create the token, instead of your actual personal password. This gives some granularity of permissions: you can create a token that can only access one project instead of all the projects of a user. Anyway, the token is set as a gitlab variable called TWINE_PASSWORD.
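Putting those pieces together, a minimal .gitlab-ci.yaml could look roughly like this. This is a sketch, not the actual web-minify pipeline: the job names, the python:3.9 image tag and the tags-only deploy rule are my assumptions.

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  image: python:3.9
  script:
    - pip install build
    - python -m build            # creates dist/*.tar.gz and dist/*.whl
  artifacts:
    paths:
      - dist/                    # shared with later jobs as a build artifact

deploy:
  stage: deploy
  image: python:3.9
  script:
    - pip install twine
    # TWINE_PASSWORD is defined as a CI/CD variable in the gitlab project
    # settings; with a PyPI API token the username is the literal __token__
    - TWINE_USERNAME=__token__ twine upload dist/*
  only:
    - tags                       # assumption: only deploy tagged releases
```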

You can look at the .gitlab-ci.yaml and Makefile of the web-minify project to see an example of how this is done.

Good luck!


Docker registry credentials as secretGenerator with kustomize

So you just discovered your new friend kustomize, and now you want to convert all your pesky secrets into secretGenerators. Good for you!

But... what about the docker registry credentials secret? It seems like magic that just works. Fear not, here is the definitive guide to how you can convert it to secretGenerator!

So let's assume you start with the following working setup:

Inside secret-docker-reg.yaml we have the following YAML:

apiVersion: v1
kind: Secret
metadata:
  name: docker-reg
  namespace: whatever
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded contents of secret-docker-reg.json>

Inside secret-docker-reg.json we have the following JSON:

{"auths":{"https://registry.gitlab.com":{"username":"yourusername","password":"SOME SECRET PASSWORD STRING","auth":"SOME SECRET AUTH STRING"}}}
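As a side note, the "auth" field in this JSON is, by docker convention, just the base64 encoding of "username:password". You can generate it yourself like this (dummy credentials, obviously):

```shell
# The auth field of .dockerconfigjson is base64("username:password").
# printf avoids the trailing newline that echo would sneak into the encoding.
printf '%s' 'user:pass' | base64
# prints: dXNlcjpwYXNz
```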

Alternatively your secret-docker-reg.yaml may look like this:

apiVersion: v1
kind: Secret
metadata:
  name: docker-reg
  namespace: whatever
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL3JlZ2lzdHJ5LmdpdGxhYi5jb20iOnsidXNlcm5hbWUiOiJ5b3VydXNlcm5hbWUiLCJwYXNzd29yZCI6IlNPTUUgU0VDUkVUIFBBU1NXT1JEIFNUUklORyIsImF1dGgiOiJTT01FIFNFQ1JFVCBBVVRIIFNUUklORyJ9fX0=

As you can see, the difference is that in the first example the credentials are kept in a separate .json file referenced from the YAML (secret-docker-reg.json), while in the second the JSON content has been base64 encoded directly into the YAML file.

Now, to convert this to a kustomize secretGenerator, you will need to keep the JSON in a separate file. In the first example you are already good to go. In the second, simply copy the rather long base64 string (eyJhdXRocyI6eyJodHRwczovL3JlZ2lzdHJ5LmdpdGxhYi5jb20iOnsidXNlcm5hbWUiOiJ5b3VydXNlcm5hbWUiLCJwYXNzd29yZCI6IlNPTUUgU0VDUkVUIFBBU1NXT1JEIFNUUklORyIsImF1dGgiOiJTT01FIFNFQ1JFVCBBVVRIIFNUUklORyJ9fX0=) into a separate file such as temp.txt and run the following command:
cat temp.txt | base64 -d > secret-docker-reg.json

This decodes the base64 back into the JSON file we want. NOTE: you could also use an online base64 conversion tool, but since this string contains the credentials to your docker registry, you should only use one you really trust, and only over https.
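To convince yourself that the decode step is lossless, you can do a quick round trip with a dummy stand-in for the real credentials file (file names follow the post; GNU coreutils base64 assumed):

```shell
# Dummy stand-in for the real credentials JSON
printf '%s' '{"auths":{"https://registry.gitlab.com":{"username":"u"}}}' > secret-docker-reg.json

# Encode it, like the inlined variant of the YAML would carry it
base64 < secret-docker-reg.json > temp.txt

# Decode it back, exactly as in the command above
base64 -d temp.txt > decoded.json

# The round trip must reproduce the original bytes exactly
cmp secret-docker-reg.json decoded.json && echo "round-trip OK"
```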

What remains is to create the kustomize secretGenerator stanza:

secretGenerator:
- name: docker-reg
  type: kubernetes.io/dockerconfigjson
  files:
  - .dockerconfigjson=secret-docker-reg.json

Things to watch out for:

  • The secretGenerator stanza must be inside a kustomization.yaml file as it is part of kustomize and NOT part of kubernetes/kubectl.
  • The .json file needs to be at the same level as, or below, the kustomization.yaml file from which it is referenced. If not, you will get nasty errors about this.
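For reference, a complete minimal kustomization.yaml could look like the sketch below. The resources entry and the use of disableNameSuffixHash are my additions, not from the original setup: disabling the name-suffix hash keeps the generated secret named exactly "docker-reg", so existing imagePullSecrets references do not have to change.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whatever

# Assumption: keep the generated secret named exactly "docker-reg" so that
# imagePullSecrets references in existing manifests keep working unchanged.
generatorOptions:
  disableNameSuffixHash: true

secretGenerator:
- name: docker-reg
  type: kubernetes.io/dockerconfigjson
  files:
  - .dockerconfigjson=secret-docker-reg.json

resources:
- deployment.yaml   # assumption: whatever manifests your project already has
```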

And that's it! I hope this saved your ass like it would have saved mine if I had found it before I knew. Oh wait...


Happy Anniversary 2020!

The 7th of January 2020 marks the four-year anniversary of the OctoMY™ project. The project has made some amazing progress as always, and just a few months ago a new strategy was formed that may slow development down for a while.

More on that later.

Suffice it to say, I am as happy as ever to work on this project, and I am looking forward to an amazing 2020.


Productivity matrix

I have been thinking about productivity lately. I have dubbed my idea the "swat matrix team", deriving from the fact that it describes how to organize teams in a way that both resembles "swat" teams and leverages the skills of individual team members as a "matrix". (Sorry Neo, you are not the inspiration this time.)
Swat robots. Image credit Morgan Allen.
The gist of it is that we create a small team of resources, each an expert in their own domain, working together as a unit (a "swat" team). For example, you have one dev-ops, one UX, one back-end and one front-end specialist working on the same project.

All should take part in code review for each other and try to learn each other's trades. In fact, the learning should be formalized in rotating bi-weekly "apprenticeship designations", where pairs of team members are responsible for mentoring each other in their respective fields of expertise. For example, dev-ops will teach deployment in kubernetes to UX, and UX will teach user testing to dev-ops.

The actual project work is the learning material. In this example, 25% of UX tasks will be assigned to the UX apprentice, with the tasks that have the most learning potential assigned first.

The idea is that by doing things you are not comfortable with you will be on high alert and your attention to detail and best practices will be heightened. By having an expert by your side you can maintain confidence that the result will pass the bar.

For larger projects multiple swat teams will work together. The apprenticeship is maintained within each team, and each team is responsible for one easily separable part of the project. Rotation of the members will happen between teams, so that the UX of one team will swap with the UX of another, facilitating cross-team knowledge sharing while still maintaining coherent teams.