Muscle Memory Optimized Fractal UI, designs I did back in 2010

I want to share some old designs I worked on back in 2010.

Back then I was designing a completely new type of UI based on the fractality of everything, with our company adHD Oy Finland, together with Jyri Hovila, Taina Peltola, Jani Salminen and Aki Tåg.

Our company didn’t really take off. We had ambitious plans, but we never got further than the planning stage and implementing some simple UI prototypes.

Our idea was to build a Linux-based operating system with a completely new UI paradigm. Jyri had the idea for the Linux-based operating system, and I was thinking about the UI side.

SuperYber konsepti.001

We were pushing away from the notion that computers have to imitate books or other traditional media, as tablets and phones were doing back then. Where is the real intelligence in that? Where is the “smart” in smartphone?

It only smarts to use, trying to replicate old-world paradigms like pages, tabs and files in a modern environment. We no longer have to limit ourselves to something traditional on new digital platforms, so why cling to old ways of thinking?

We could create something purpose-built from the ground up, optimized for the use case. Instead, many just imitate already familiar analogue user interfaces and transfer them into the digital world, which to me doesn’t make much sense, except that familiar terms help bring people in.

But those files, folders, floppies and all the user interface paradigms we were so used to in the physical world are now dead. I mean, who uses folders anymore?

floppy-disk-save-button-icon-65887

When was the last time you saw these? This used to be the universal icon for saving a document

They have gone the way of the famous floppy disk as the save icon: young people don’t even know what a floppy disk is, and the same goes for many UI paradigms we still stick to.

Fractal User Interface

Working together as a company, we had a problem all teams and companies have: how to share information efficiently? How to see at one glance how the project was doing and how the team was completing their tasks? No software we looked at back then was able to provide that view: a single page, a single glance, showing what was going on with your team and your project.

I started wondering why so much team collaboration software (at least back in 2010) was designed around the concepts of books and pages, which fit poorly in a web environment.

The optimal situation would be an immediate visual overview of what is happening in your project, without having to go through different pages, tabs and issues to see what is going on. I wanted an immediate view of how the project was doing, without having to dig any deeper into the UI.

I really wondered: why stick to old paradigms when thinking about how things should be done?

SuperYber konsepti.002

So I started to think outside the box, designing from scratch what an intuitive user interface for seeing how a project was doing would look like. It would naturally be built around the people connected to a project, completing tasks for the common denominator in the center: the project everyone was committing work to.

SuperYber konsepti.003

I was starting to imagine this as a tree, taking something from nature and building around that concept in my mind. A traditional list or box-based UI didn’t feel like a natural choice, so we began from the very simple roots (ha!) of what it is to build a user interface and what would feel natural for the user.

The main UI was built around one usage paradigm:

Everything is Fractal.

Here we can see a design based around that concept, made for a discussion forum app. It would use a fractal tree-node based interface for displaying data:

fraktal_preview_01_full

Design for a discussion forum app (click for full version)

branch detail

Branch detail from discussions

The idea was that all data displayed in our UI would be based around fractal tree nodes, and that same structure would be repeated for different data all over the system. In the picture above, we show a design for a discussion forum, but this same model could be used for contacts, messages, files and all the data you have on your phone.

 

This would mean that only one UI paradigm would have to be designed, and that same UI model could be reused all over. No more redesigning every part of the UI: just design one part really well and repeat it!

No more multiple types of menus, drop-downs, windows, or whatever you need in a traditional UI to visualize data; just one really well-working set of interface elements, replicated all over the place.
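As a rough sketch of what “one UI paradigm, reused everywhere” could mean in code (the node shape here is hypothetical, not our actual design):

```javascript
// Hypothetical sketch: one recursive node type reused for any kind of data.
// The field names (label, children) are illustrative assumptions.
function renderNode(node, depth = 0) {
  const lines = ["  ".repeat(depth) + node.label];
  for (const child of node.children || []) {
    lines.push(...renderNode(child, depth + 1)); // same renderer at every level
  }
  return lines;
}

// The same function draws a discussion thread...
const thread = {
  label: "Topic: Fractal UI",
  children: [{ label: "Reply A", children: [{ label: "Reply A.1", children: [] }] }],
};
// ...or a contact list, with no UI code changed.
const contacts = { label: "Contacts", children: [{ label: "Jyri", children: [] }] };

console.log(renderNode(thread).join("\n"));
console.log(renderNode(contacts).join("\n"));
```

The point is that the renderer never knows whether it is drawing discussions, contacts or files; only the data changes.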

Fractality of Usage Patterns

We wanted to go further with the fractality of the design, and not just represent it in the visuals of the system. We wanted the usage of the system to be fractal in nature.

This means that you only have to learn a certain set of hand motions to do any action, and you could use those same actions in every context in the UI. This is sorely missing from modern UIs, where you constantly have to change your muscle actions, so you can never program your muscle memory efficiently or learn and remember different parts of the UI instantly.

For example, when you learn a sport or play a game, at first you think about it, but the main goal is to reach a state of not thinking about the action being performed: just do it, flow with it.

With modern computer UIs this is very difficult if you are not using the keyboard. With a mouse and other position-based UIs, you constantly have to keep your attention on the screen and figure out what does what. This is not optimal for storing actions into muscle memory, as the actions keep moving to different places, and you cannot just forget about the context and perform similar actions without thinking.

fingerpos

Our UI was designed to be used primarily through hand motions, with touch screens in mind back then. You would only have to learn a certain set of actions, and then repeat those same actions in different contexts all over the system.

This, I believe, would result in a UI where you can flow through the usage, without thinking about it too much.

Currently, for example, if I want to save this document, I have to go through this thinking pattern:

  • What key is used to save this document? Okay, Ctrl+S in this app. Press that.
  • What if I don’t know or remember that key?
  • Look through the menus. File? Document? Okay, it’s under Document in this app.
  • Go through the menus, which are different for every app, so I have to move my focus into the menu, position my mouse exactly on ‘Save’ and click there.
  • Look at a message from the app: was my document saved or not? Where is the application telling me this? Oh, in the upper right corner in this context. OK.

So many specific and accurate actions needed for such a simple task, if you don’t remember the keyboard shortcut. Keyboard shortcuts are perfectly fine, but on a mobile phone or a tablet you don’t have a hardware keyboard to rely on.

So I have to constantly keep my focus on a simple action I repeat maybe hundreds of times a day, switching from app to app and figuring out again exactly where to position my mouse cursor or my finger, and how.

Why can’t there be a single paradigm and action to take care of this?

The main problem is the changing paradigm, and the fact that I have to focus on performing pixel-perfect movements. This takes away from the main focus of doing whatever you are doing in a creative flow. It can break the flow, moving you from a state of creativity to a state of bafflement, possibly even making you forget what you were doing in the first place.

Universal Controller

At the center of our UI would be a controller, termed the Chakana controller, modeled after the Inca cross formation, which contains within it the Golden Ratio and Pi:

 

This cross came to me from a South American friend back then. We studied it and found that it contains within it at least the ratios of Phi and Pi, so in a way it was information stored in visual form. In traditional South American culture this formation was the basis for much architecture, music and design, and each direction was held to represent some basic element of life.

So we took this design and applied the idea that you could control the UI with four major directions, each direction representing a universal concept, like earth, air, fire and water, and then worked out how those concepts would map onto working with documents, sharing contacts, or other actions depending on the context you were working in.

vlcsnap-2017-07-19-17h21m23s546vlcsnap-2017-07-19-17h21m07s998

For example, here you would see the Chakana compass appear over a contact in your Today list, enabling you to share when selecting the upwards direction (Sharing, Air, putting it out there, No-Time), modify when selecting down (Modify, Earth, physical action, the Now moment), see your call history (History, Past, Water), and maybe see your future calendar contacts when selecting the right direction (Future, Fire).

Not all of these would map directly to these concepts, of course, but the idea was that if you remembered the general idea behind each major direction, you could apply it to each operation in some sense, and you would always have an idea of which direction to take to achieve an action related to that concept.
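The direction-to-concept mapping could be sketched like this (the contexts and action names are my illustration, not the original design documents):

```javascript
// Each of the four gestures keeps one thematic meaning everywhere...
const DIRECTION_THEME = {
  up: "Air / Sharing",
  down: "Earth / Modify",
  left: "Water / Past",
  right: "Fire / Future",
};

// ...while each context maps those same gestures onto concrete actions.
// These action names are hypothetical examples.
const ACTIONS = {
  contact: { up: "share contact", down: "edit contact", left: "call history", right: "upcoming meetings" },
  document: { up: "publish", down: "edit", left: "version history", right: "schedule review" },
};

function resolve(context, direction) {
  return { theme: DIRECTION_THEME[direction], action: ACTIONS[context][direction] };
}

console.log(resolve("contact", "left"));  // same gesture...
console.log(resolve("document", "left")); // ...context-specific meaning
```

The muscle memory stores only the four directions; the context supplies the rest.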

vlcsnap-2017-07-19-17h21m31s008vlcsnap-2017-07-19-17h21m35s952vlcsnap-2017-07-19-17h21m40s002

Here you can see some ideas of what these major directions could represent. The main idea was to map each direction into your muscle memory, so that you would know what that specific direction generally does in different contexts.

Hopefully this would result in a more natural user interface, where you don’t have to constantly read and understand what you are operating on, creating a flow state through use. We never got to the phase of testing this in action, apart from some prototypes I implemented later for GeoKone (now OmniGeometry), which I never put into production.

vlcsnap-2017-07-19-17h21m53s868

Solu Computers Implementation

I was kinda happy to see that Solu Computers was working on a similar user interface later. I actually worked there for a couple of weeks in late 2016, so I got to work on their UI a bit too, but it just wasn’t my place to be at the time, although their solution implements this fractal idea pretty much as I had envisioned back then. So, hopefully Solu will get their computer and UI out :)

Future plans

I still have plans to implement parts of this UI, maybe for OmniGeometry or Geometrify in the future. Hopefully I will find somebody to help me implement it; that is part of why I want to share this information, so I can show people some of our plans.

Anyway, that’s enough about past plans made in 2010; hopefully I will soon be able to implement these designs :)

This was inDigiNeous, aka Sakari, signing off!


GeoKone.NET Interactive Geometry Projection Installation @ Kosmos Festival 2015

For the last three weeks, I had been working on creating an interactive GeoKone.NET powered geometry installation for Kosmos 2015 Festival. The idea was to create an interactive installation that people could control themselves and synchronize the visuals to the music playing on the Spacetime Stage at the festival.

Designing the controller interface

The Google Chrome browser has Web MIDI support built in, so connecting a Korg Kaosspad R3 controller to my laptop and reading MIDI signals was pretty easy. I just had to figure out how to make the controls somewhat intelligent and intuitive.
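Web MIDI hands you raw bytes, so the decoding side is small. Here is a minimal sketch of turning a Control Change message into a normalized parameter value (the controller numbers are illustrative, not the R3’s actual mapping):

```javascript
// A Web MIDI "midimessage" event carries raw bytes: [status, data1, data2].
// Control Change messages have status 0xB0..0xBF (high nibble 0xB, low nibble = channel).
function parseControlChange(bytes) {
  const [status, controller, value] = bytes;
  if ((status & 0xf0) !== 0xb0) return null; // not a CC message
  return {
    channel: status & 0x0f,
    controller,          // which knob/slider (0-127)
    value: value / 127,  // normalize 0..127 to 0..1 for scene parameters
  };
}

// e.g. knob 16 on channel 0 turned to its maximum:
console.log(parseControlChange([0xb0, 16, 127]));
```

In the browser this would be wired up via navigator.requestMIDIAccess(), setting input.onmidimessage = e => parseControlChange(e.data) on each MIDI input.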

The idea was also not to break the geometry or the visuals, because with GeoKone you can pretty easily crash the browser, due to the exponential growth in processing and memory when increasing the recursion depth or the number of points in the scene.
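A back-of-envelope model shows why. Assuming, for illustration, that each recursion level copies the formation at every vertex, the number of drawn formations grows as points^depth:

```javascript
// Rough cost model (an assumption for illustration, not GeoKone's exact
// drawing logic): a formation with `points` vertices copied at each vertex
// on every recursion level yields points^level formations at each level.
function formationCount(points, depth) {
  let total = 0;
  for (let level = 0, n = 1; level <= depth; level++, n *= points) {
    total += n; // formations at this recursion level
  }
  return total;
}

console.log(formationCount(6, 2)); // 1 + 6 + 36 = 43
console.log(formationCount(6, 4)); // 1555: two more levels, ~36x the work
```

A couple of extra recursion levels or points is all it takes to go from smooth to browser-killing, hence the need for hard limits on the controller.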

So I had to design some limitations too, so that people could not mess up the beautiful scenes. Creating new scenes with the controller was really too difficult, so I made a bunch of nice scenes that people could browse, select and modify from there.

I’m pretty happy with how the installation turned out! Here are a couple of videos from the event (thanks to Suvi Suvereeni for filming and sharing):

 

People really seemed to enjoy this, as I observed people staying on the controller for long periods of time, some even as long as 30 minutes. Visitors also got the hang of synchronizing the visuals to the music, which was nice to see! :) Here is a quote from a user:

That was awesome! I hope I will get to try that again some time, one of the best moments of this summer for sure!

So! This means that in the future, GeoKone will have MIDI support! You can plug in any controller and control the scenes with your favorite hardware .. how about that :) Can’t wait to implement this properly.

Also, big thanks to Miika Kuisma and Samuel Aarnio for helping me on site, and for everybody who collaborated on the stage and made this possible.

Until then, work continues on Geometrify, about which I will post more info later.

GeoKone.NET updated with PDF Support

http://GeoKone.NET has been updated! Now with support for Downloading PDF versions of your scene!

GeoKone.NET Blasting Geometry :: Now With PDF Export Support!

Actually, it has already been over a month since the release; it has been pretty hectic getting http://Geometrify.net up to speed. I will write more about that in a separate post later.

Release Notes

  • PDF Downloading: download Scalable Vector Graphics versions of your scenes in PDF format. Requires you to register a user account, save the scene on our server, and purchase PDF downloads. You can currently purchase PDF downloads with PayPal; I’m considering adding Bitcoin support in upcoming releases.
  • Nudge Sliders: new UI elements for adjusting poly parameters with the mouse. These enable much more precise control of parameter values. Slide left or right to decrease or increase the current value progressively.
  • Hide User Interface: the option ‘Scene/Hide User Interface’ hides all user interface elements from the screen. Combine this with fullscreen to get a fullscreen canvas for now.
  • User Interface Improvements: show the currently active formation and the number of formations in the scene; added Golden Ratio scene-size presets; the currently modified parameter value is now shown in the bottom right corner.

For full list of changes, see the ChangeLog.

PDF Exporting

This is one of the most requested features since the beginning of GeoKone, and finally it is here. Now you can export your scenes to Adobe Illustrator, Sketch or other vector based programs and continue editing your scene there. This means you can now easily make print quality versions of your scenes, for example for t-shirts, art and other physical objects you might want to decorate with GeoKone art.

The previous version of the downloaded PDF will always be stored on the server, so you can re-download already exported scenes without having to purchase them again. This is a paid feature because running the servers costs money, which so far I have paid out of my own pocket. All PDF purchases and donations help keep GeoKone running! ^_^

Here are a couple of examples of PDF files exported with GeoKone (click on image to open PDF version)

GeoKone PDF Example 1

 

GeoKone PDF Example 2

GeoKone PDF Example 3

 

Create Geometry! Be Creative!

Now go to http://www.GeoKone.NET, register yourself an account, Create A Scene, Save It and check out the ‘Scene/Download PDF’ option :)

Development Issues & Early Design Drafts with GeoKone.NET

I felt like writing about the design process and some implementation details I have been going through since I started working on GeoKone.NET. I will talk about performance issues and early designs I worked on for GeoKone, show some screenshots from different stages of development, and finally look at what is coming in the next version of GeoKone.NET.

I was originally thinking of waiting until version 1.0 to write about some of this stuff, but I feel it’s better to split it into a couple of parts and just show you what kinds of things I am facing when developing GeoKone.NET.

Performance & Design Issues

One of the issues I have been constantly struggling with in GeoKone is performance.

Processing.js + JavaScript + HTML5 techniques have not been developing at the pace I would have wished for. When I started implementing GeoKone about 1.5 years ago, I thought WebGL would already be widely supported, or that browsers would have found a unifying way of handling the canvas element, input, and large numbers of objects, but I guess I was overly optimistic.

The first version of GeoKone used a simple model: Processing.js ran in noLoop() mode, and the canvas was only redrawn when the user gave input. This worked pretty well while GeoKone was still really simple.

Early Beta Version of GeoKone

But this noLoop() model was too simple for presenting visual feedback to the user when interacting with the PolyForms (the formations on the screen, based on a number of points around a circle). I needed a way to run logic & drawing even when the user was not doing anything, so I could present cool animations and transition effects that would keep running for a while after the user stopped interacting, or before things happened on the screen.

So I decided to take the game-engine approach, where a collection of state machines runs at 30 FPS, rendering all polyforms on each frame. This model was used before version 0.96, and it proved too slow to be really usable without hardware acceleration.

This design was very responsive and allowed some nice transition effects and other realtime animations when joggling polyforms, for example, but it would almost immediately raise CPU usage to 100% or even more across multiple cores, depending on the browser.

I also designed and implemented this cool Hyper Chakana Controller for modifying and interacting with objects on the screen. Here you can see an early design image that I had in mind for GeoKone running in fullscreen:

Early Design Of Fullscreen GeoKone, with the 12-operation Hyper Chakana Controller

The Hyper Chakana Controller is the compass-looking controller, with 4 actions in each direction, allowing Context Specific Actions to be mapped to each of these directions, so that if you selected a PolyForm, the Chakana would be able to Rotate, Scale, Move etc. the polyform based on the Natural Direction the user was touching.

Developing The Chakana Controller, Running at 30 FPS

The name and design for this were based on the South American sacred geometry cross, the Chakana, a 2D version of which you can see here:

Chakana – It Is Said that all South American Culture, Art & Design is based on the ratios of this design

I even went so far as to implement this HyperChakana controller, as you can see in this early preview video I made:

But after testing this model for a while, I realized I could not run it at 30 FPS all the time, as making the CPU fan scream was not an option, so I had to figure out something else.

I looked into WebGL, but since back then it was still experimental (and it still is: Safari does not even officially support it yet, you have to toggle it via the developer options), I decided to stick with Processing.js + the basic 2D canvas.

GeoKone eating 98% of CPU

I also decided to get rid of the Chakana controller for now, although I had put a lot of work into designing and implementing it. Hopefully I will be able to use this design in upcoming native versions of GeoKone.NET, as I believe it could be a very natural way to interact with objects on the screen, especially with touch screens.

So I had to find a middle road: not running the logic & drawing at 30 FPS, but still being able to animate transitions between polyforms. I decided to run the logic for 50 milliseconds after the user stopped interacting, and after that call noLoop() to stop Processing.js from calling the main draw() method. This way I could still animate things and run logic, and it wouldn’t take as much CPU as before.
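That model can be sketched framework-agnostically (this is my reconstruction for illustration, not the original GeoKone code). In Processing.js the shouldKeepLooping check would live in draw(), calling noLoop() once it turns false:

```javascript
// Sketch of the "run for a short window after input, then stop" model.
// The clock is passed in explicitly so the logic is testable.
const ANIMATION_WINDOW_MS = 50;

function makeLoopController(now) {
  let lastInteraction = -Infinity;
  return {
    onInteraction() { lastInteraction = now(); },
    // While true, keep the draw loop running to finish transition animations;
    // once false, the sketch would call noLoop() and go idle.
    shouldKeepLooping() { return now() - lastInteraction <= ANIMATION_WINDOW_MS; },
  };
}

// Simulated clock for demonstration:
let t = 0;
const loop = makeLoopController(() => t);
loop.onInteraction();                  // user touches a polyform at t=0
t = 30;
console.log(loop.shouldKeepLooping()); // true: still animating the transition
t = 100;
console.log(loop.shouldKeepLooping()); // false: stop looping, CPU goes idle
```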

This model worked pretty well, and it is the one still in use in the current live version (0.97). But it required unnecessary logic for handling the stopping and starting of the loop() and noLoop() methods, creating some pretty ugly state-handling code.

For the next version of GeoKone.NET, 0.98, I have cleaned up the code and gotten rid of this convoluted method of determining when to run the loop and when not to: I just tell Processing.js not to run the loop at all in the beginning, and call redraw() manually whenever the user interacts with the polyforms. This seems to be the only acceptable model in which GeoKone is responsive and does not hog the CPU.

Premature Optimization

Also, I had foolishly pre-optimized the code, using precalculated sine and cosine tables for the polyforms inside the PolyForm class. These were not really even used, because any time any parameter of the polyform changed, the object was recreated completely. So even when the user moved the polyform around, it was recreated, thus recreating the sine and cosine tables as well and preventing their reuse. Doh. For the next version I have removed all these kinds of “optimizations” and just calculate and draw everything on the fly.

Premature optimization truly creates a lot of problems: the logic of the program changes so much during development that the optimizations end up not affecting the program at all, while making it harder to adapt to changes in the architecture.

I actually profiled my code and found that creating these sin/cos tables was causing a major slowdown, as I used the new keyword to create the tables every time the PolyForm was recreated. For debugging I use Firefox and the excellent Firebug extension, and I could see that the more uses of new I removed from loops, the faster the initialization & drawing got. This is kind of obvious: creating objects in performance-critical loops takes time to allocate, compared to just changing the parameters of existing objects on the fly.
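The fix can be illustrated like this (the names are hypothetical, not GeoKone’s actual code): instead of recreating the whole object, and its tables, on every parameter change, mutate the existing object in place.

```javascript
// Wasteful pattern: every parameter change allocates a new object + new tables.
function makePolyForm(points) {
  const sinTable = new Array(points);
  const cosTable = new Array(points);
  for (let i = 0; i < points; i++) {
    const a = (2 * Math.PI * i) / points;
    sinTable[i] = Math.sin(a);
    cosTable[i] = Math.cos(a);
  }
  return { points, x: 0, y: 0, sinTable, cosTable };
}

// Cheap pattern: moving a form only touches two numbers; the tables survive.
function movePolyForm(form, dx, dy) {
  form.x += dx;
  form.y += dy;
  return form; // same object, no allocation
}

const form = makePolyForm(6);
const before = form.sinTable;
movePolyForm(form, 10, 5);
console.log(form.sinTable === before); // true: tables were reused, not rebuilt
```

Of course, the tables only pay off if they are actually reused; if every change rebuilds them anyway, they are pure overhead, which was exactly the bug.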

It’s really easy to start optimizing early and run into problems afterwards. This also bit me in the ass when optimizing the drawing so that all the inactive polyforms (that is, those not currently being edited) were drawn into a separate back buffer, the active polyforms were drawn into a front buffer, and the two were then combined to make up what the user sees on the screen.

Debugging Back Buffer Optimization – Backbuffer on left, frontbuffer on right

This enabled me to draw more complex scenes than before, as I could render very complex formations into the background buffer once and just move the contents of the front buffer around.

But this created problems with the z-ordering of polyforms: whenever I selected polyforms to be modified in the front buffer, they would rise on top of the polyforms in the back buffer, even if logically they were behind them.

This was caused by the back buffer always being drawn behind the front buffer, completely ignoring the z-ordering of the polyforms and changing the way the scene looked when editing and when disabling Mod All.
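A toy model of why the two buffers break z-ordering (labels stand in for polyforms; this is a simplification of the real drawing code):

```javascript
// Scene with three polyforms; B is being edited, so it goes to the front buffer.
const scene = [
  { id: "A", z: 0, active: false },
  { id: "B", z: 1, active: true },  // being edited -> front buffer
  { id: "C", z: 2, active: false }, // logically ABOVE B
];

// Two-buffer draw order: all inactive forms first, active forms always on top.
const buffered = [...scene.filter(f => !f.active), ...scene.filter(f => f.active)]
  .map(f => f.id);

// Correct draw order: strictly by z.
const correct = [...scene].sort((a, b) => a.z - b.z).map(f => f.id);

console.log(buffered); // [ 'A', 'C', 'B' ]: B wrongly ends up on top of C
console.log(correct);  // [ 'A', 'B', 'C' ]
```

The compositing step can only stack whole buffers, so any active form below an inactive one gets pulled above it, which is exactly the visual glitch described above.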

I have enabled this back buffer/front buffer optimization at least three times now, and yet again I have to disable it because it causes problems with the drawing logic. Better to just stick with implementing the functionality first, and worry about optimization later :) It’s also kind of difficult to let go of these optimizations, as I know I could be drawing much more complex scenes even with the current version, but there would be some minor drawing bugs, which I find unacceptable. Maybe I will find a good way to do it after the program logic is finished.

Next Version

Here are a couple of screenshots of the next version in action. I’m not going to write anything more now, as I’m really close to releasing this and I’ll have to write it all in the release notes anyway :) The major new improvement is the layer-style PolyForm Selector, which you can see on the left side of the screenshots. You can also now move PolyForms up and down in their z-ordering, which makes it easier to edit your scenes.

Testing the PolyForm Selector

Testing Irregular sized Scenes with the Selector

It is easy now to move polyforms higher and lower in the order in which they are drawn

That’s it for now! I’m finishing the last tweaks on the next version, and if you want to try it out early yourself, you can check out the master-optimization branch on GitHub: https://github.com/inDigiNeous/GeoKone/tree/master-optimization.

GeoKone 0.96 Beta Released! Create Sacred Geometry Live!

GeoKone.NET has been Updated To Version 0.96.03 Beta.

What is GeoKone ?

GeoKone is an Interactive Sacred Geometry Generator that runs in your browser. With GeoKone you can Generate, Copy and Modify different Recursive Geometrical Formations that are based on Natural Constants like the Golden Ratio. All parameters of the geometry are adjustable by the user. You can save your creations as PNG Images or Export them as JSON/Text Parameter Data, suitable for Importing back into GeoKone and editing later. GeoKone uses Processing.JS.

Visit http://GeoKone.NET to check it out Live In Your Browser! Visit also the GeoKone Gallery to see Unique Art made possible by GeoKone.

This update brings Improved Usability, More Stability and New Features:

Undo: Last 3 Steps Are Undoable.

  • Keyboard shortcut: ‘v’
  • Very basic functionality for now, still need to tweak and optimize

Copy Formation along vertex points of currently selected Formation:

  • Keyboard shortcut ‘.’ (dot)
  • This allows you to copy the currently selected Polyform and place it along the outer points of the current polyform, allowing much easier copying of formations and creation of totally new formations.
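The vertex-point copy could be sketched like this (parameter and field names are illustrative, not GeoKone’s actual API):

```javascript
// Compute the vertex points of a regular formation around its center.
function vertexPoints(cx, cy, radius, points) {
  const out = [];
  for (let i = 0; i < points; i++) {
    const a = (2 * Math.PI * i) / points;
    out.push({ x: cx + radius * Math.cos(a), y: cy + radius * Math.sin(a) });
  }
  return out;
}

// Place one copy of the formation at each outer vertex of the original.
function copyAlongVertices(form) {
  return vertexPoints(form.x, form.y, form.radius, form.points)
    .map(p => ({ ...form, x: p.x, y: p.y }));
}

// Four copies at roughly (100,0), (0,100), (-100,0), (0,-100):
const copies = copyAlongVertices({ x: 0, y: 0, radius: 100, points: 4 });
console.log(copies.map(c => [c.x, c.y]));
```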

Exporting & Importing Scenes as JSON/Text

  • Copy & Paste the current Scene as Textual Data. This allows you to save your scenes locally to your computer and Import later to continue editing.
  • Importing & Exporting currently works by you Copying and Pasting the scene text into/from GeoKone.
  • Expect more improvements on this in upcoming versions. In the future it will be possible to Integrate these GeoKone scenes into other programs, through a small library that will draw GeoKone scenes with OpenGL! :) This could bring a whole new world of possibilities.
GeoKone 0.96 Screenshot!

Improved Usability

  • Copying, joggling and moving Formations around now works better and is more accurate.
  • Performance might be a bit slower though, but I will continue optimizing the code and release more often.

More Stylished Looks

  • More polished look for the parameter page (although the new Safari has some issues drawing this; it seems to affect the Mac OS X Lion version only)

Shortcut Buttons

  • Shortcut buttons for most used features, currently: Randomize, Save Image, Export Scene

Keyboard Shortcut Map

  • Help/Keyboard Shortcuts:  See more easily what different keyboard buttons do!
Keyboard Shortcuts!

Note: Login and Saving & Loading Scenes to the server are still disabled, as the combination of Python+Django+mod_wsgi+Apache is proving a bit of a challenging setup to push live. I am seriously considering rewriting the whole server side with Node.JS, but this will take some time. Hopefully I will get the current server implementation working in the coming days.

And as with previous releases, GeoKone is still in Beta, so everything doesn’t always work as expected.

Also, I decided to postpone the Chakana HyperController feature to an OpenGL version of GeoKone, as its usability is not yet up to the standard I want, and JavaScript/HTML5 performance is still an issue; running at 30 FPS is not really possible without OpenGL.

Happy Geometry Forming with GeoKone! 

Visit http://GeoKone.NET Now! :)

Preview Video of GeoKone, Sacred Geometry Generator

Hello guys & girls, I put together a very short video showing one main feature of the new user interface in GeoKone, the HyperChakana controller, which is designed for easy use with touch screens. Check out a quick preview here:

[youtube http://www.youtube.com/watch?v=xyxUh17rVH4]

I decided to test simply recording my screen with my camera (as I don’t have any video capture software yet), and it turned out I liked the end result, so I put together this little piece. Working hard on the new version: 15 days to go, aiming for a public 11.11.11 release :) That’s going to be one powerful day anyway.

Thanks to all the people who have shown interest and become testers. I will hopefully put the new version into beta testing soon; I still have to complete a couple of critical features for it to be more usable. But it’s already much faster and easier to use than the current beta version.

During this project I have really realized that usability is hard to get right. It sounds easy on a theoretical level: you tap here, this happens; you hover here, this highlights; and so on. But actually implementing the logic is much harder than it sounds, because there are a lot of things to take into account when translating these rules into binary logic, especially if we want to write understandable, expandable code that accounts for the different situations and concurrent events happening all the time.

Today I was optimizing the drawing to enable more complex scenes to be edited at a usable speed, something I had been waiting to do for a while. I have noticed that anything complex I want to understand, I have to draw out and see how it works. Like today, I drew this:

Optimizing Drawing


Basically, for everything that needs to happen on the screen, I have some kind of drawings/plans done. I have noticed this is the optimal way for me to implement these things, or actually the only way. I need to visualize what happens in order to understand it. Writing it down many times just doesn’t cut it, as it’s much harder to conceptualize blocks of text than to look at a picture.

Maybe I will post some of those images later, just for fun. My design goals are to keep it simple and take the human into account when designing user interfaces. I have noticed that it’s a step-by-step recursive task, there are no easy roads to implementing perfect interfaces, as the rules change depending on the context. There is no one wonder interface that works everywhere (or is there .. that would be another story for another time ;)).

Here are also a couple of screenshots from different stages of development, to show how it has come along:

First version of the Chakana

First version of the chakana controller, only four directions.

NanNanNan!

NanNanNan! The program is clearly messing with my head, yelling at me, looking at me .. This is the HyperChakana controller: 4 × 3, 12 operations in total.

Optimizing drawing

Optimizing the drawing, implementing a simple backbuffer system.
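The backbuffer idea can be sketched in a few lines. This is a hedged, minimal illustration of the general dirty-flag caching technique, not GeoKone’s actual implementation, and all names here (`Backbuffer`, `invalidate`, `get`) are hypothetical: the expensive scene render runs only when the parameters have changed, otherwise the cached buffer is reused.

```javascript
// Minimal dirty-flag backbuffer cache (illustrative names, not the
// real GeoKone code). renderFn is the expensive draw of the whole
// scene; get() reuses its cached result until invalidate() is called.
class Backbuffer {
  constructor(renderFn) {
    this.renderFn = renderFn; // expensive render, returns a "buffer"
    this.buffer = null;
    this.dirty = true;        // needs one render before first use
    this.renders = 0;         // counts actual expensive renders
  }
  invalidate() { this.dirty = true; } // call when scene params change
  get() {
    if (this.dirty) {
      this.buffer = this.renderFn();
      this.renders++;
      this.dirty = false;
    }
    return this.buffer;       // cheap path: reuse the cached pixels
  }
}

// Three frames drawn, but only the first triggers a real render.
const bb = new Backbuffer(() => ({ pixels: 'rendered scene' }));
bb.get(); bb.get(); bb.get();
```

In a real canvas setup, `renderFn` would draw into an offscreen canvas and `get()` would hand that canvas to a single `drawImage` blit per frame.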

 

Buddha on a lake.

Surprise visitor, Buddha on a lake. This image came up when I started testing the thumbnail view .. good sign :)

Testing out new doublebuffer

Testing out the new doublebuffer code, a much smoother experience when editing complex formations.

 

That’s it for now. I hope you enjoy this stuff once it’s released, I’ve been very excited putting this project together :)

Sacred Geometry Generator :: Private Beta Time!

NOTE: Public version of GeoKone has been released! Check out http://GeoKone.NET !

Finally, all the necessary stuff is implemented for the first public version of my Sacred Geometry Generator. It is a program that allows the user to create graphics based on the principles behind Sacred Geometry, The Universal laws of Creation.

What this means in practice is that you can create formations based on the circle: you place points on a circle and draw lines between them, and new patterns emerge from the interference of those lines. Basically, that’s the simplified version of how it operates :)
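The basic construction described above can be sketched as follows. This is only an illustration of the geometric idea, assuming evenly spaced points and a line between every pair; the function names (`circlePoints`, `chords`) are my own, not GeoKone’s API.

```javascript
// Place n points evenly on a circle of the given radius.
function circlePoints(n, radius, cx = 0, cy = 0) {
  const pts = [];
  for (let i = 0; i < n; i++) {
    const angle = (2 * Math.PI * i) / n; // even angular spacing
    pts.push({
      x: cx + radius * Math.cos(angle),
      y: cy + radius * Math.sin(angle),
    });
  }
  return pts;
}

// Every unique pair of points -> the chords whose interference
// produces the patterns.
function chords(pts) {
  const lines = [];
  for (let i = 0; i < pts.length; i++)
    for (let j = i + 1; j < pts.length; j++)
      lines.push([pts[i], pts[j]]);
  return lines;
}

// 6 points on a circle give 6 * 5 / 2 = 15 chords to draw.
const sixPointFormation = chords(circlePoints(6, 100));
```

A renderer would then just stroke each `[a, b]` pair as a line segment; recursion (circles placed on the points of other circles) builds up the more complex formations.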

These formations are stored in the program as parameter data, not actual pixel data, so everything is drawn programmatically from a saved set of parameters. This makes it easy to expand in the future: add new options, or even visualize the whole thing differently, like in 3D.
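A parameter-based scene might look something like this. The field names below are assumptions for illustration, not GeoKone’s actual save format; the point is that because only parameters are stored, a change can be applied globally and the scene simply re-rendered, which is also how the “dimension switch” from two points to three works.

```javascript
// A scene stored purely as parameters (hypothetical field names).
const scene = [
  { points: 2, radius: 120, rotation: 0,  depth: 3 },
  { points: 2, radius: 80,  rotation: 45, depth: 2 },
];

// Apply one parameter change to every formation in the scene,
// returning a new scene (the original stays untouched).
function setGlobal(scene, key, value) {
  return scene.map(shape => ({ ...shape, [key]: value }));
}

// Bump every shape from two points to three, then re-render.
const withThreePoints = setGlobal(scene, 'points', 3);
```

Rendering then just walks the array and draws each formation from its parameters, so the same saved scene can be redrawn at any resolution, or even in 3D later.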

The private beta is now open, if you are interested, please send me an e-mail to sakari AT psitriangle DOT net, or leave a comment here and I will tell you the details :)

Sacred Geometry Generator Screenshot

One cool thing I’ve noticed in this program is that it allows you to switch between “dimensions” when editing scenes. This lets you look at the scene from a whole different viewpoint: first edit the scene with multiple shapes based on e.g. two points, then increase the number of points for all those shapes to three, adding a whole new dimension to the scene:

Scene with shapes having two points

Same scene, shapes now have three points

Here are a couple of images generated from the exact same formations, just with some of the parameters changed globally:

DNA Strands

This is the original formation I created

From the same formation as above, parameters changed

Another formation from the same configuration

It’s fun to play with these formations and see what kinds of patterns they create. Mind you, it’s still not easy to create good-looking stuff; art always needs balance and attention to detail from the creator. This tool just lets you focus on the big picture instead of the tiny lines.

Now I am taking a little break from this thing, but I’ve already got plans for totally re-implementing the user interface of this program. The current version is more of a development/engineering version than an actual production version intended for end users. It’s usable, but difficult.

I have some ideas about a fullscreen interface with a sort of compass tool for adjusting all the parameters of individual formations, allowing easy manipulation and visual feedback for every operation, and it would also work on touch screen devices. But that’s for another version .. hopefully I will be releasing the public version of this generator soon, there is still some work to do on that part.