Muscle Memory Optimized Fractal UI, designs I did back in 2010

I want to share some old designs I worked on back in 2010.

Back then I was designing a completely new type of UI based on the fractality of everything, at our company adHD Oy Finland, together with Jyri Hovila, Taina Peltola, Jani Salminen and Aki Tåg.

Our company didn’t really take off; we had ambitious plans, but we never got further than the design stage and some simple UI prototypes.

Our idea was to build a Linux-based operating system with a completely new UI paradigm. Jyri had the idea for the Linux-based operating system, and I was thinking about the UI side.

SuperYber konsepti.001

We were pushing away from the notion of computers having to imitate books or other traditional media, like tablets and phones were doing back then. Where is the real intelligence in that? Or smart, as in smartphone?

It only smarts to use them, trying to replicate old-world paradigms like pages, tabs and files in a modern environment. We don’t have to limit ourselves to something traditional on new digital platforms anymore, so why cling to old ways of thinking?

We could create something purpose-built from the ground up, optimized for the use case. Instead, many just imitate already-known analogue user interfaces, transferring them into the digital world, which to me doesn’t make much sense, except for giving people familiar terms to work with.

But those files, folders and floppies, and all the user interface paradigms we were so used to back in the day in the physical world, are now dead. I mean, who uses folders anymore?


When was the last time you saw these? This used to be the universal icon for saving a document

They have gone the way of the floppy disk as the save icon: young people don’t even know what a floppy disk is, and the same goes for many UI paradigms we still stick to.

Fractal User Interface

Working together as a company, we had a problem, a problem all teams and companies have: how do you share information efficiently? How do you see at one glance how the project is doing and how the team is completing its tasks? No software we looked at back then could provide that view, in a way that let you see from a single page, at a single glance, what was going on with your team and your project.

I started wondering why so much team collaboration software (definitely back in 2010 at least) was designed around the concept of books and pages and so on, which fits poorly in a web environment.

The optimal situation would be an immediate visual overview of what is happening in your project, without having to go through different pages, tabs and issues to see what is going on. I wanted an immediate view of how the project was doing, without having to dig any deeper into the UI.

I really wondered: why stick to old paradigms when thinking about how things should be done?

SuperYber konsepti.002

So I started to think outside the box, designing from the get-go what an intuitive user interface for seeing how a project was doing would look like. It would naturally be built around the people connected to a project, completing tasks for the common denominator in the center: the project everyone was committing work to.

SuperYber konsepti.003

I was starting to imagine this as a tree, taking something from nature and building around that concept in my mind. A traditional list- or box-based UI didn’t feel like a natural choice, so we began from the very simple roots (ha!) of what it is to build a user interface and what would feel natural for the user.

The main UI was built around one usage paradigm:

Everything is Fractal.

Here we can see a design based on that concept, made for a discussion forum app. It would use a fractal tree-node-based interface for displaying data:


Design for a discussion forum app (click for full version)

Branch detail from discussions

The idea was that all data displayed in our UI would be based around fractal tree nodes, and that same structure would be repeated for different data all over the system. In the picture above, we show a design for a discussion forum, but this same model could be used for contacts, messages, files and all the data you have on your phone.


This would mean that only one UI paradigm would have to be designed, and that same UI model could be reused everywhere. No more redesigning every part of the UI: just design one part really well and repeat it!

No more multiple types of menus, drop-downs, windows and whatever you need in a traditional UI to visualize data, but one set of really well-working interface elements replicated all over the place.
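To make the idea concrete, here is a rough JavaScript sketch of what one reusable tree-node structure could mean in code. The names here (FractalNode, render) are my illustration for this post, not anything from our actual designs:

```javascript
// One generic tree node type, reused for every kind of data in the system.
function FractalNode(label, children) {
  this.label = label;
  this.children = children || [];
}

// The same rendering walk works for forum threads, contacts, files, ...
// here it just produces indented text lines for simplicity.
FractalNode.prototype.render = function (depth) {
  var indent = new Array(depth + 1).join("  ");
  var lines = [indent + this.label];
  for (var i = 0; i < this.children.length; i++) {
    lines = lines.concat(this.children[i].render(depth + 1));
  }
  return lines;
};

// A forum uses the exact same structure and code a contact list would:
var forum = new FractalNode("Forums", [
  new FractalNode("Hardware", [new FractalNode("Which GPU?")]),
  new FractalNode("Software")
]);
console.log(forum.render(0).join("\n"));
```

The point is that one walk over one node type drives every view, so only one set of interaction code ever needs to be designed.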

Fractality of Usage Patterns

We wanted to go further with the fractality of the design, and not just represent it in the visuals of the system. We wanted the usage of the system to be fractal in nature.

This means that you only have to learn a certain set of hand motions to perform any action, and you can use those same actions in every context in the UI. This is sorely missing from modern UIs, where you constantly have to change your muscle actions, so you can never program your muscle memory efficiently, or learn and remember different parts of the UI instantly.

For example, when you learn a sport, or play a game, first you think about it, but the main goal is to be in a state of not thinking about the action being performed, just do it, flow with it.

With modern computer UIs this is very difficult if you are not using the keyboard: with the mouse and other position-based UIs, you constantly have to keep your attention on the screen and figure out what does what. This is not optimal for storing actions in your muscle memory, as the actions keep moving to different places, and you cannot just forget about the context and perform similar actions without thinking.


Our UI was designed to be used primarily by hand motions, with touch screens in mind back then, so that you would only have to learn a certain set of actions and repeat those same actions in different contexts, all over the system.

This, I believe, would result in a UI where you can flow through the usage, without thinking about it too much.

Currently, for example, if I want to save this document, I have to go through this thinking pattern:

  • What key is used to save this document? Okay, Ctrl+S in this app. Press that.
  • What if I don’t know or remember that key?
  • Look through the menus. File? Document? Okay, it’s under Document in this app.
  • Go through the menus, which are always different for each app, so I have to move my focus to the menu, position my mouse exactly on ‘Save’ and click there.
  • Look for a message from the app: was my document saved or not? Where is the application telling me this? Oh, in the upper right corner in this context. Ok.

So many specific, precise steps for such a simple action, if you don’t remember the keyboard shortcut. Keyboard shortcuts are perfectly fine, but on a mobile phone or a tablet you don’t have a hardware keyboard to rely on.

So I have to constantly keep my focus on a simple action I repeat maybe hundreds of times a day, switching from app to app and figuring out again where exactly to position my mouse cursor or my finger, and how to do that.

Why can’t there be a single paradigm and action to take care of this?

The main problem is the changing paradigm, and the fact that I have to focus on the action to perform pixel-perfect movements. This takes away from the main focus of doing whatever you are doing in a creative flow. It can break the flow, moving you from a state of creativity to a state of bafflement, and possibly even making you forget what you were doing in the first place.

Universal Controller

In the center of our UI would be a controller, termed the Chakana controller, modeled after the Inca cross formation that contains within it the Golden Ratio and Pi:


I learned about this cross from a South American friend back then; we studied it and found that it contains the ratios for at least Phi and Pi, so in a way it is information stored in visual form. In traditional South American culture this formation was the basis for much architecture, music and design, and each direction was thought to represent a basic element of life.

So we took this design and applied the idea that you could control the UI with four major directions, each direction representing a universal concept like earth, air, fire or water, and then worked out how those concepts would map onto actions when working with documents, sharing contacts, or whatever context you were working in.


For example, here the Chakana compass would appear over a contact in your Today list, letting you share when selecting the upwards direction (Sharing, Air, putting it out there, No-Time), modify when selecting down (Modify, Earth, physical action, the Now moment), see your call history (History, Past, Water), and maybe also see your upcoming calendar contacts when selecting the right direction (Future, Fire).

Not all of these would map directly to these concepts, of course, but the idea was that if you remembered the general idea behind each major direction, you could apply it to each operation in some sense, and you would always have an idea of which direction to take to achieve an action related to that concept.


Here you can see some ideas of what these major directions could represent. The main idea was to map each direction into your muscle memory, so you would know what that specific direction generally does in different contexts.

Hopefully this would result in a more natural user interface, where you don’t have to constantly read and figure out what you are operating on, creating a flow state through the usage. We never got to the phase of testing this in action, apart from some prototypes I implemented later for GeoKone (now OmniGeometry), which I never put into production.


Solu Computers Implementation

I was kinda happy to see that Solu Computers were working on a similar user interface later. I actually worked there in late 2016 for a couple of weeks, so I got to work on their UI for a bit too, but it just wasn’t my place to be at the moment, although their solution implements this fractal idea pretty much as I had envisioned back then. So, hopefully Solu will get their computer and UI out :)

Future plans

I still have plans to implement parts of this UI, maybe for OmniGeometry or Geometrify in the future. Hopefully I will find somebody to help me implement it; that is part of why I want to share this information and show people some of our plans.

Anyway, that was enough of past plans made in 2010; hopefully I will soon be able to implement these designs :)

This was inDigiNeous, aka Sakari, signing off!

GeoKone.NET 0.98.13 Released :: Save PNG Images with Transparent Background

New version of http://GeoKone.NET is out today!

GeoKone.NET :: Create Sacred Geometry!


Release Notes

I’ve released a couple of maintenance releases since 0.98.10 was announced; here are the changes:

  • Saving of PNG images with Transparent Backgrounds. Finally! :)
  • Fixed the Scene Saving Overwrite check to work
  • Only activate touch input mode when clicking on the canvas area, not on the borders, hopefully preventing accidental modification of formations
  • Removed the layer container numbers; they didn’t really bring any new information and probably slowed down drawing
  • Show a spinning cursor when loading & saving scenes; big scenes can take a while, so show some feedback
  • Improved and fixed the Randomize feature to generate formations with more parameter changes
  • Minor optimizations

Saving PNG Images with Transparent Background

GeoKone.NET finally supports saving PNG images with a transparent background, saving the alpha information correctly. This means you no longer have to manually cut out the transparent areas or figure out different blending options to make composited images look good in Photoshop, for example. GeoKone can now draw high quality, high resolution, beautifully antialiased formations with alpha transparency!

The image above was made with the new version. This is a really welcome addition suggested by many users; it makes transferring art from GeoKone to other programs much easier and better looking.

To save images with a transparent background, choose the ‘Transparent Background’ option in the Save Image dialog, and make sure you save the image to your disk from the browser.

At least with Firefox, just copying the image doesn’t preserve the transparency correctly for some reason; I have to save the image to my disk first and then continue editing from there.

Start Creating Geometry!

Now go to http://www.GeoKone.NET and start creating Sacred Geometry!

Development Issues & Early Design Drafts with GeoKone.NET

I felt like writing about the design process and some implementation details I have been going through since I started working on GeoKone.NET. I will talk about performance issues and early designs I worked on for GeoKone, show some screenshots of different versions along the development process, and finally look at what is coming in the next version of GeoKone.NET.

I was originally thinking of waiting until version 1.0 to write about some of this stuff, but I feel it’s better to split it into a couple of parts and just show you what kind of things I am facing when developing GeoKone.NET.

Performance & Design Issues

One of the issues I have been constantly struggling with in GeoKone is performance.

Processing.js + Javascript + HTML5 techniques have not been developing at the pace I would have wished for. When I started implementing GeoKone about 1.5 years ago, I thought that WebGL would already be widely supported, or that browsers would have found a unified way of handling the canvas element + input + large numbers of objects, but I guess I was overly optimistic about this.

The first version of GeoKone used a simple model: Processing.js running in noLoop() mode, with the canvas redrawn only when the user changed the input. This worked pretty well, as GeoKone was still really simple.

Early Beta Version of GeoKone


But this noLoop() model was too simple for presenting visual feedback when the user interacted with the PolyForms (the formations on the screen, based on a number of points around a circle). I needed a way to run logic & drawing even when the user was not doing anything, so I could present cool animations and transition effects that would keep running for a while after the user stopped interacting, or before stuff happened on the screen.

So I decided to take the game engine approach, where a collection of state machines runs at 30 FPS, rendering all polyforms on each frame. This model was used before version 0.96, and it proved too slow to really be usable without hardware acceleration.

This design was very responsive and allowed some nice transition effects and other realtime animations, when juggling polyforms for example, but it would almost immediately raise CPU usage to 100% or even more across multiple cores, depending on the browser.
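In simplified JavaScript (illustrative, not the actual GeoKone code), the fixed-rate model looked roughly like this; the key problem is that every formation is redrawn on every tick, whether anything changed or not:

```javascript
// Simplified model of the pre-0.96 "game engine" approach.
var FPS = 30;
var polyforms = [{ draws: 0, draw: function () { this.draws++; } }];
var frames = 0;

function drawAll() {
  for (var i = 0; i < polyforms.length; i++) {
    polyforms[i].draw();   // full redraw each frame, changed or not
  }
  frames++;
}

// In the browser this ran on a timer: setInterval(drawAll, 1000 / FPS).
// Here we simulate one second of ticks deterministically:
for (var t = 0; t < FPS; t++) {
  drawAll();
}
console.log("frames drawn in one second:", frames); // 30, with zero user input
```

Multiply that by dozens of polyforms with hundreds of points each, and it’s easy to see where the 100% CPU usage came from.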

I also designed and implemented this cool Hyper Chakana Controller for modifying and interacting with objects on the screen. Here you can see an early design image I had in mind for GeoKone running in fullscreen:

Early Design Of Fullscreen GeoKone, with the 12 -operation Hyper Chakana Controller


The Hyper Chakana Controller is the compass-looking controller with 4 actions in each direction, allowing context-specific actions to be mapped to each of those directions. So if you select a PolyForm, the Chakana can Rotate, Scale, Move etc. that polyform based on the natural direction the user is touching.

Developing The Chakana Controller, Running at 30 FPS


The name and design for this were based on the South American Sacred Geometry cross, the Chakana, a 2D version of which you can see here:

Chakana - It Is Said that all South American Culture, Art, Design is based on the ratios of this design


I even went so far as to implement this HyperChakana controller, as you can see in this early preview video I made:

But after testing this model for a while, I realized that I cannot run this at 30 FPS all the time; making the CPU fan scream was not an option, so I had to figure out something else.

I looked into WebGL, but since back then it was still experimental (and still is; Safari does not even officially support it yet, you have to toggle it on via the developer options), I decided to stick with Processing.js + the basic 2D canvas.

GeoKone eating 98% of CPU


I also decided to get rid of the Chakana Controller for now, although I put a lot of work into designing and implementing it. Hopefully I will be able to use this design in upcoming native versions of GeoKone.NET, as I believe it could be a very natural way to interact with objects on the screen, especially with touch screens.

So I had to find a middle road: not running the logic & drawing at 30 FPS, but still being able to animate transitions between polyforms. I decided to run the logic for 50 milliseconds after the user has stopped interacting, and after this call noLoop() to stop Processing.js from calling the main draw() method. This way I could still animate stuff and run logic, and it wouldn’t take as much CPU as before.

This model worked pretty well, and it is the one still in use in the current live version (0.97). But it required unnecessary logic for handling the stopping and starting via the loop() and noLoop() methods, creating some pretty ugly state handling code.

For the next version, GeoKone.NET 0.98, I have cleaned up the code and gotten rid of this convoluted method of determining when to run the loop and when not to: I just tell Processing.js not to run the loop at all in the beginning, and call redraw() manually whenever the user interacts with the polyforms. This seems to be the only acceptable model in which GeoKone is responsive and does not hog the CPU.
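Stripped of the Processing.js specifics, the event-driven model can be sketched like this. The real code calls the Processing.js noLoop() and redraw() functions; this standalone sketch just simulates the idea:

```javascript
// Event-driven sketch of the 0.98 model: drawing happens only when the
// user interacts, never on a timer.
var drawCount = 0;

function draw() {
  drawCount++;             // render the whole scene once
}

function onUserInput(event) {
  // ...update the affected polyform from `event` here...
  draw();                  // the equivalent of calling redraw()
}

// No interaction means no drawing, so no CPU is burned while idle:
console.log(drawCount);    // 0

// Three interactions trigger exactly three redraws:
onUserInput("drag");
onUserInput("drag");
onUserInput("release");
console.log(drawCount);    // 3
```

The CPU cost now scales with how much the user actually does, instead of with wall-clock time.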

Premature Optimization

Also, I had foolishly pre-optimized the code, using precalculated sine and cosine tables inside the PolyForm class. These were not really even used, because any time any parameter of the polyform changed, the instance was re-created completely. So even when the user just moved the polyform around, it was re-created, re-creating the sine and cosine tables too and preventing their re-use. Doh. For the next version I have removed all these kinds of “optimizations” and just draw and calculate everything on the fly.

Premature optimization truly creates a lot of problems: the logic of the program changes so much during development that the optimizations end up not affecting the program in any way, while making it more difficult to adapt to changes in the architecture.

I actually profiled my code and found that creating these sin/cos tables was causing a major slowdown, as I used the new keyword every time the PolyForm was re-created. For debugging I use Firefox and the excellent Firebug extension, and I could see that the more I removed uses of new in loops, the faster the initialization & drawing got. This is kind of obvious: creating objects in performance-critical loops takes time for allocation, instead of just changing parameters of existing objects on the fly.
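Here is a minimal before/after sketch of that fix (illustrative only, not the real PolyForm class):

```javascript
// Slow pattern: every parameter change allocates a fresh object,
// including brand new sin/cos tables.
function makePolyformWithTables(points) {
  var sinTable = [], cosTable = [];
  for (var i = 0; i < points; i++) {
    var a = (2 * Math.PI * i) / points;
    sinTable.push(Math.sin(a));
    cosTable.push(Math.cos(a));
  }
  return { points: points, sinTable: sinTable, cosTable: cosTable };
}
var slow = makePolyformWithTables(6); // re-run on *every* change: slow

// Faster pattern: one long-lived object, parameters mutated in place,
// trigonometry computed on the fly while drawing.
var polyform = { points: 6, x: 0, y: 0 };

function vertexAt(form, i, radius) {
  var a = (2 * Math.PI * i) / form.points;
  return { x: form.x + radius * Math.cos(a),
           y: form.y + radius * Math.sin(a) };
}

polyform.points = 8;                  // no re-allocation on change
var v = vertexAt(polyform, 0, 100);
console.log(v.x, v.y);                // 100 0
```

The inline math is cheap compared to allocating new arrays and objects in a hot loop, which is exactly what the profiler showed.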

It’s really easy to start optimizing early and run into problems afterwards. This also bit me in the ass when trying to optimize the drawing so that all the inactive polyforms (those not currently being edited) are drawn into a separate back buffer, while the active polyforms are drawn into a front buffer, and the two are then combined to make up what the user sees on the screen.

Debugging Back Buffer Optimization - Backbuffer on left, frontbuffer on right


This enabled me to draw more complex scenes than before, as I could copy very complex formations into the back buffer and just move the stuff in the front buffer around.

But this created problems with the z-ordering of polyforms: whenever I chose polyforms to modify in the front buffer, they would rise on top of the polyforms in the back buffer, even when logically they were behind them.

This was caused by the back buffer always being drawn first and the front buffer always on top of it, completely ignoring the z-ordering of the polyforms and changing the way the scene looked when editing and when disabling Mod All.

I have enabled this back buffer/front buffer optimization at least three times now, and yet again I have to disable it because it causes problems with the drawing logic. Better to just stick with implementing the functionality first, and worry about optimization later :) It’s also kind of difficult to let go of these optimizations, as I know I could be drawing much more complex scenes even with the current version, but there would be some minor drawing bugs which I find unacceptable. Maybe I will find a good way to do it after the program logic is finished.
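The z-ordering problem can be shown with a simplified model of the compositing, with plain arrays standing in for the two canvas buffers (the scene below is a made-up example, not real GeoKone data):

```javascript
// Simplified model of the back/front buffer scheme, showing why it broke
// z-ordering: compositing always paints the front buffer last, regardless
// of each polyform's logical z position.
var scene = [
  { name: "background star", z: 0, active: false },
  { name: "edited flower",   z: 1, active: true  }, // logically in the middle
  { name: "foreground rose", z: 2, active: false }
];

function composite(forms) {
  var back  = forms.filter(function (f) { return !f.active; });
  var front = forms.filter(function (f) { return  f.active; });
  // Back buffer first, front buffer on top; the z values are ignored:
  return back.concat(front).map(function (f) { return f.name; });
}

console.log(composite(scene));
// -> ["background star", "foreground rose", "edited flower"]
// The edited flower is painted last (on top), even though its z says it
// should sit behind the foreground rose.
```

Fixing this properly would mean splitting the back buffer at the z position of every active polyform, which is where the extra drawing logic started to get out of hand.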

Next Version

Here are a couple of screenshots of the next version in action. I’m not going to write anything more now, as I’m really close to releasing this and I have to write it all in the release notes anyway :) The major new improvement is the Layer Style Polyform Selector, which you can see on the left side of the screenshots. Also, you can now move the PolyForms up and down in their z-ordering, which makes it easier to edit your scenes.

Testing the PolyForm Selector


Testing Irregular sized Scenes with the Selector


It is easy now to move polyforms higher and lower in the order which they are drawn


That’s it for now! Continuing to finish the last tweaks on the next version, and if you want to try it out early, you can check out the master-optimization branch on GitHub:

Showcasing artists using GeoKone.NET

First I want to thank the many people who have given feedback about GeoKone.NET, many telling me that they love this software and what it does; these comments really have made my heart fill with joy. Thank you all! I will choose some of those quotes and put them on the front page of GeoKone.NET soon.

During the last couple of months I have discovered a couple of artists using GeoKone.NET to assist them in creating some really cool Sacred Geometry based art. I would like to show a couple of their works here, and urge you to check out their Facebook pages for more awesome images. I am really happy to see that artists other than me have found GeoKone useful; this gives me a lot of motivation to continue working on it.

First, I want to showcase a couple of organic Sacred Geometry figures made by Dusty Goods:

How Cool is this Tiger?


This Lion shows some Real Power


Love the colors and Depth in this skull


“When I first came across GeoKone I was thoroughly blown away. I began by just clicking the random generator and a few caught my attention as mimicking nature.

The owl, butterfly, and clover designs, were the first images I created just clicking the random generator and trying to make out what I could see. Then as I played with it more, I started having a plan and manipulating the patterns to be what I wanted them to be.”

Dusty Goods

I actually found his page by “accident”, and I was really surprised and happy to see what images he had been able to create with the assistance of GeoKone.NET. Make sure to check out his Facebook page for some more awesome images!

The second artist I want to feature here is Bill Brouard, or Visual Alchemy. I stumbled upon Bill’s work when a friend of mine suggested checking out his page, and boy, was I surprised to see familiar-looking shapes in his works :) Here are a couple of images that resonate with me:

Love the Colors on this one, brings a calming effect

A Really Nice 3D effect on this one!


This speaks to me on many levels


“Can I start by saying a big thank you for having created GeoKone.NET. It is a tool that I have been using daily for a good few months now and it has increased my creative output massively.”

“The Randomize feature is one of my favourites as I quickly started to see the massive potential of your great program and watching some of your youtube videos helped also.”

– Visual Alchemy aka Bill Brouard

These are only a couple from the huge catalog of images he has created; be sure to check out his Facebook page too!

Are you creating art with assistance of GeoKone.NET ?

Let me know! Leave a comment anywhere on this page, or mail me at Sakari AT PSI Triangle D0T NET :) I will gladly link to your page and feature your work here, and perhaps on the GeoKone.NET front page when I get around to updating a “Featured Artists” page there. Thank you! :)

Video: Islamic Patterns and Sacred Geometry

Here is a nice little video showing some nice Sacred Geometry design and explaining briefly what Sacred Geometry is about.

I like the sentence at the end, where they said roughly: “Through repetition of the same form, it is like remembering all the time that All Is One”. Truly, Everything Is Fractal. With Sacred Geometry we can explore this nature of our Cosmos through repetition of the same forms, everywhere around us.

Manifesting Fraktal Forums

I’ve been working for about two weeks on visualizing, conceptualizing and sorting out the big picture for Fraktal Forums, the project I’m currently working on. I want to show you a sneak peek of the vision I have for the system, with some points on the actual implementation and usage too.

see the fraktal view! click for bigger version

You are in the center, with topics that interest you surrounding you, floating like fractals, connected to other topics.

Fraktal Forums

The simplest example application of this way of communication is a text-based forum, and that is what I’m targeting as the first milestone for Fraktal Forums. Currently most Internet forums display text page after page, boringly trying to imitate a book for some silly reason. Computers are not books, and web pages acting as books isn’t the most natural way to do things. We should take the best side of a book and combine it with the best of new technology. This is what I want to achieve: to make communication and collaboration easier. It is essential for us to communicate, share and remix ideas to achieve global consciousness.

The user is in the center of things and can focus on following the discussions that are relevant to them, creating as easily as possible while others collaborate on that creation process. Realtime flow of data, with rich visualization of the discussions in different contexts.

branch detail


Each branch contains discussions related to that topic. The user can zoom in on different topics and actually see the discussion happening there, in real time. Everything can be connected; here we can see ‘Games’ linked to ‘software’ and ‘hardware’ for example purposes. This way interesting relations can be made to better understand our communication.

Actual discussion happens in a different view when the user zooms in on or clicks the topic. I haven’t yet thought in much detail about how that detailed view will look; that’s something that still has to be designed.

One key point of this kind of representation is how easy it makes following your discussions. The branches and leafs (let’s call nodes leafs from now on) you have been participating in can be highlighted easily, showing if somebody has replied to you. Different branches of the discussion could also be visualized differently based on their content: for example, content rich with images would be green, while text-rich content would be blue. This would let the user easily know what is going on and where. Leafs could be weighted differently based on content, hanging from the branches with real physics simulation. Visualization of discussions could be taken to a whole new level. These are of course only ideas for now; for the first milestone only the most basic features are needed.

What do you think? Is this an easier way to grasp information? :)

Much work to do

Ultimately I just want to make sharing information easier. It’s too cumbersome in most cases; it needs to be more natural. Humans have amazing capabilities that don’t rely on logical thinking, but on intuition and natural action. This means that we can visualize more than two pages of text at the same time :) And wtf, people?! This is 2010 and quoting text is still so hard on most, if not all, forums. Easy quoting will definitely be part of the natural user interface.

For the server side implementation I’m probably going with Python + Django, hosted on either Linux or Mac OS X. Client side: pure HTML5, Javascript (Dojo) and CSS3. No legacy support, no Flash; only standards-based stuff that should work for anybody who cares to support standards. Probably canvas-based drawing, though I’m not sure about that yet.

Anyway, this is only a sneak peek; I still have much to do before I can present this idea on a more conceivable level. Let me know what you think! And I’m not doing this alone; I’m going to need help in the future :) Everything will be kept open source, and I will try to document the development process as I go, hopefully generating discussion and sparking new ideas :)