Progress update on my 3D Sacred Geometry Engine

Fully focused

For the past two months I’ve been blessed to focus fully on my 3D Sacred Geometry engine, called PsiTriangle Engine. This engine is going to be the basis for our upcoming 3D Sacred Geometry Creation program, called Geometrify: Creator.

At the beginning of the year I was working at Vizor (http://vizor.io) for a while, but that only lasted two months, as it was not really something I truly loved doing.

But there I got to learn ThreeJS (a JavaScript 3D framework) and more about how to build a 3D engine, which gave me good insight into how to continue with my own engine.

Getting closer to the metal

Since then I’ve been focusing on improving my engine, getting more performance out of the GPU and calculating as little as possible on the CPU.

The CPU -> GPU bottleneck is a real issue when working with dynamic geometry: optimally you keep everything in GPU memory and avoid transferring it from main RAM to the GPU every frame, as that transfer is a big performance killer.
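
To make this concrete, here is a minimal sketch of the idea in raw OpenGL/C++ terms. This is not the actual PsiTriangle code: it assumes GLEW as the loader and a VAO with attribute 0 already pointing at the buffer. The slow path pushes the vertex data over the bus on every frame; the fast path uploads once and keeps the data resident in GPU memory.

```cpp
#include <vector>
#include <GL/glew.h>   // any GL loader works; GLEW is just an example

struct Vertex { float x, y, z; };

// Slow path: re-uploading the vertices from main RAM to the GPU on every frame.
void drawReuploadingEveryFrame(GLuint vbo, const std::vector<Vertex>& verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STREAM_DRAW);        // CPU -> GPU transfer, every frame
    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)verts.size());
}

// Fast path: upload once, keep the data in GPU memory ...
void uploadOnce(GLuint vbo, const std::vector<Vertex>& verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STATIC_DRAW);
}

// ... and every frame just draw from the buffer that already lives on the GPU.
void drawFromGpuMemory(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glDrawArrays(GL_LINE_STRIP, 0, vertexCount);
}
```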

Learning the tricks of GPU programming and getting to really feel the power of the GPU has been a marathon, but I’m finally approaching the performance I’ve been looking for.

Doing complex things is easy, but just narrowing down to the simple essentials, the least amount of calculations needed, is difficult.

Putting my engineering skills to use

I’m an automation engineer, and working with 3D equations and math is really an area where I’m starting to see the value of that education. It has given me the confidence that I can understand and dissect any problem, if I just keep drawing and calculating on paper long enough.

Just draw it out

I’m pretty happy now that I got that education, as without it I probably wouldn’t have the system in place to work like this (*thx math teacher, Pirkka Peltola).

Fast line drawing

Over the last few weeks, I’ve been completely rewriting my line drawing algorithm to utilize the GPU as much as possible.

Previously I had ported an algorithm by Nicolas P. Rougier from Python code to C++ (based on his paper here: http://jcgt.org/published/0002/02/08/).

But that approach was too general for my case and did too many calculations, and it took a long time to upload the vertexes from the CPU to the GPU, which really killed performance.

So I decided to just rewrite it from the ground up. A good tool for prototyping graphics drawing is http://processing.org, so I first implemented the algorithm with Processing, and then, once it worked and I understood the process, started porting it to GLSL shader code.

Tessellating Circular Polylines with Processing
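
The actual prototype was a Processing sketch, but the core of the tessellation is simple enough to show here as a small C++ sketch (the names are mine, just for illustration): generate the vertices of a circular polyline from an origin point, a radius and a point count.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Tessellate a circular polyline: num_points vertices evenly spaced around an origin.
std::vector<Vec2> tessellateCircle(Vec2 origin, float radius, int num_points)
{
    std::vector<Vec2> points;
    points.reserve(num_points + 1);
    const float two_pi = 6.28318530718f;
    for (int i = 0; i <= num_points; ++i) {            // <= closes the polyline
        float angle = two_pi * static_cast<float>(i) / num_points;
        points.push_back({origin.x + std::cos(angle) * radius,
                          origin.y + std::sin(angle) * radius});
    }
    return points;
}
```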

Getting to know the Geometry Shader

Modern GPUs have geometry shaders. With these, one can calculate vertexes completely on the GPU, utilizing its massive parallelism.

I started my line drawing re-implementation using only the geometry shader. Here you can see the results:

Here all the lines, segments and origin points for the circles are calculated on the GPU; nothing is done on the CPU except sending the origin point to the shader.
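
To give an idea of what that means in practice, here is a minimal geometry shader sketch, stored as a C++ raw string so it can be handed to glShaderSource(). This is not the actual PsiTriangle shader: the uniform names and the segment limit are assumptions, and it assumes the vertex shader passes the origin position through untransformed. It expands a single input point into a circular line strip entirely on the GPU.

```cpp
// Illustrative only: expands one input point (the origin) into a circle on the GPU.
const char* kCircleGeometryShader = R"glsl(
#version 330 core
layout(points) in;
layout(line_strip, max_vertices = 65) out;

uniform mat4  u_mvp;       // model-view-projection matrix
uniform float u_radius;    // circle radius
uniform int   u_segments;  // number of line segments (up to 64 here)

void main() {
    vec4 origin = gl_in[0].gl_Position;
    for (int i = 0; i <= u_segments; ++i) {
        float a = 6.28318530718 * float(i) / float(u_segments);
        gl_Position = u_mvp * (origin + vec4(cos(a) * u_radius,
                                             sin(a) * u_radius, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}
)glsl";
```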

This is pretty great, but there are limitations. First, the geometry shader has to redraw all the shapes and recalculate all the sines and cosines for each line segment on every single frame. This is slow.

Second, the geometry shader can only output a limited number of vertexes. With my GPU, that limit is 256 vec4 vertexes, so it’s not really much; you can’t do deep recursion with that.

Bringing in the Transform Buffers

There is also a thing called a ‘Transform Feedback Buffer’, which basically means you transform (calculate geometry) and put the results into a feedback buffer (store), which you then use to actually draw (read the buffer).

These buffers are then only updated when changes occur, not at the beginning of each frame like with the pure geometry shader approach.
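
In OpenGL terms the idea looks roughly like this. A simplified sketch, not the engine code: it assumes 'genProgram' is a vertex-shader-only program whose output varying has been registered for capture, that 'feedbackBuffer' already has enough storage allocated, and that a VAO with the input points is bound.

```cpp
#include <GL/glew.h>

// Run the geometry-generating program once, capture its output vertices into a
// GPU buffer, and afterwards draw from that buffer every frame without recalculating.
void captureGeometry(GLuint genProgram, GLuint feedbackBuffer, GLsizei numInputPoints)
{
    // Before linking genProgram, the captured output was declared with e.g.:
    //   const char* varyings[] = { "outPos" };
    //   glTransformFeedbackVaryings(genProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);

    glUseProgram(genProgram);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer);

    glEnable(GL_RASTERIZER_DISCARD);          // we only want vertices, no pixels
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, numInputPoints);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
}

// Every frame afterwards, the captured buffer is simply bound as a vertex buffer
// and drawn (glBindBuffer + glVertexAttribPointer + glDrawArrays), with no
// recalculation until the geometry actually changes.
```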

This already got me much better performance:

Much better, but I was still calculating stuff recursively, storing each circular formation as a separate copy of my base class.

This worked well with http://GeoKone.NET, as with software rendering all the data stays in main memory. But with GPU rendering, we really want to minimize the amount of calculation and data transfer.

Drawing as little as possible

At this point, I decided that I know what I want to achieve, and to get there, I really need all the performance I can get, to make it as smooth as possible.

To do that, the current recursive model, i.e. where a class instance stores num_points child instances and visits each of them to draw their data, continuing down the path recursively in a parent-child fashion, really didn’t work anymore with the GPU.

With GPUs, what seems to work best is doing things in a linear buffer. We want all the data in one contiguous layout, so we can just loop through it when calculating and drawing, with a minimum amount of branching and buffer switching.

Basically we just want to blast the data to the shaders, so they can work on it in as parallel a fashion as possible, because that’s the strength of GPUs.
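
As a simplified illustration of that change (made-up types, not the actual engine classes): instead of visiting a tree of parent/child formations every frame, the tree is flattened once into a single contiguous vertex array, which can then be uploaded and drawn in one call, as in the buffer sketch earlier.

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// A recursive formation tree, the old CPU-side model.
struct Formation {
    std::vector<Vertex>    points;     // this formation's own vertices
    std::vector<Formation> children;   // recursive child formations
};

// Flatten the whole tree into one linear buffer. The recursion happens once,
// on the CPU, when the geometry changes; drawing then needs no tree walking.
void flatten(const Formation& node, std::vector<Vertex>& out)
{
    out.insert(out.end(), node.points.begin(), node.points.end());
    for (const Formation& child : node.children)
        flatten(child, out);
}
```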

I’m still seeking the best way to do this, but with this model I could finally reach dynamic geometry in 3D space with performance similar to GeoKone.NET. This is my latest update, showcasing dynamic manipulation of 2D-plane sacred geometry in 3D space, which will be the basis for Geometrify: Creator.

Getting there  :)

I’m developing this engine on laptop GPUs: my faster MacBook Pro has an Nvidia GT750M with 2 GB of memory, and my home computer has an ancient Nvidia GT330M with 512 MB.

So I really have to figure out how to make this fast just to be able to develop, which is a good thing :) But I can’t wait to test this out on modern beasts of GPUs, which are easily 30x faster than the one in my older laptop.

Anyway, development continues, if you are interested in more updates, follow me on Twitter: https://twitter.com/inDigiNeous, I’ll be updating there more frequently.

Now peace out, and bom! ^_^

Playing the startup game with Geometrify, probably going mad

Pushing the limits of my mental health

The past 3 weeks have been crazy. Crazy depressing and stressful, trying to push Geometrify forwards in a rushed, madman craze of trying to reach the goal of releasing our new demo and crowdfunding campaign on 11.11.15, to match the release of GeoKone on 11.11.11 (ha, that post has 11 comments).

And then realizing all this was just some imaginary deadline we had set up for ourselves, and that our direction is not exactly aligned with what is possible to implement right now. Gah.

I have been driving myself to the limits, drinking too much black tea and coffee, then needing something to calm me down after that, and continuing this vicious cycle until I was completely exhausted and didn’t even remember who I am anymore. This completely drained me physically and mentally, and I was ready to give up the whole thing.

But then suddenly, I saw something in the midst of it all ..

A diamond shines in the darkness

In this self-developed chaos shitstorm, a diamond of perfection shone in the midst of it all:

Diamond in the Center

Looking at it, just observing, and using GeoKone while being in the darkness, I could see it more clearly. GeoKone has helped to suppress depression and to channel myself in the darkest moments so many times that I really wish I could somehow bring this program to the masses better. There I was again, creating art with GeoKone, feeling like shit, when suddenly I could see beauty coming out of darkness, forming slowly, reminding me that there is always a solution, even if I’m not seeing it in the middle of all the crap flying around.

Not sure if this can only be seen by travelling to the centre of it all, going through all that chaos around it first, or if there is maybe an easier way? Who knows. I don’t know. I can read a thousand books, hear a thousand stories, and none of them is the one I am travelling. There are no ready answers or solutions for those walking their own path.

Startup mantras

I hope I never have to do this again. And that I might learn when to just keep calm and focus on the task at hand, and not project into the future to fulfill some deadline that might as well be total bullshit in 2 weeks.

Fail fast, fail often. If you fail at first, try again. These are the mantras repeated in the startup world, and I can now see why. This has been a total surrender trip since December 2012, when I left my day job and pursued GeoKone full time.

My psyche has been put to the test, still working alone as the programmer on Geometrify. Not having anybody to talk to daily about the technical side, not having anybody waiting for me at a workplace, or even anybody expecting me to be there on time. It can get very depressing.

And still, the comments from people who have tried our first demo (which will hopefully be released at some point for the Oculus DK1), like "most interesting usage of VR", "most deep experience of geometry", or the mind-baffling comment after a private demo, "I realized what happens after I die", keep me going.

Glimpses to another world

Yeah, somebody saw glimpses of what happens when he dies by just trying our demo.

Think about that for a while. Let it sink in.

The possibilities with this project are endless. And yet, communicating what we are building seems impossible. There are no words to describe it. It slips through the rationality and structure of words. Like a spiritual experience, it is really difficult to share.

And this has been our challenge for the whole time we have been developing the Geometrify Experience. How do you communicate this to people, when it is really personal: it can be meditative and immersive, but it can also not be, it can just be annoying for a minority of the people trying it out, or it can be something completely different for each person.

Catching small fish

I think we have been trying to catch too big of a fish; we now really need to narrow down our nets and focus on something that can actually be implemented, and explained, while still utilizing our top-of-the-line geometry engine, which, might I say, is getting really nice.

I’m starting to get the hang of C++11 finally, and OpenGL too, so things are starting to look good on the technical side.

Still no funding and no coding partners, although I have help from many people now, and we are officially a registered co-op in Finland, so a lot of progress is happening too!

I am hopeful and will continue working. Fail fast, fail often, get up, grind up, fuck that shit up and show them that by following our hearts something truly extraordinary can be achieved!

Ready to get up again!

Now this is inDigiNeous, signing off again!

Doing what I Love :: Development of Geometrify continues

I just want to give a quick update on what’s going on with me and the software I am developing.

My Life’s Work Currently

Developing Geometrify

Geometrify is the name of our company and also the name of the software we are developing. Geometrify will be an amazing VR experience, something completely different from what you are used to seeing.

Geometrify will use the same tech as GeoKone.NET, but supercharged to 10x the performance, visual style and animation. Geometrify is being developed primarily for the Oculus Rift VR headset, and our goal is to launch the first version of our software in Q1 2016.

Geometrify is being developed using modern OpenGL, C++11 and Qt 5/Qt Quick.
For more information about Geometrify, check out these pages:

Progressing irritatingly slowly, but steadily

Things have been going a little slower than I originally thought, and we have no funding yet for Geometrify to get off the ground. We have a team of 4 top-notch professionals ready to go, but funding only for myself.

Which I am very grateful for; I am now being paid for doing what I love. It was a leap of faith in December 2012 when I left my day job, relationship, apartment and everything else to pursue developing GeoKone.NET full time.

Finally it is really starting to pay off, and I am already seeing more clearly which direction to take this in order to create software that people are also willing to pay money for.

Creating the content in parallel with the tech has proven very challenging, and with no funding to drive the team, I, as the only programmer, must choose carefully what direction to take.

Video :: Dive Into Recursive Geometry!

Here is my presentation from Assembly 2014, talking about Natural Geometry and the effects creating it can have on our level of awareness, and introducing our upcoming Virtual Reality Experience, Geometrify.


Big thanks to Olli Sinerma for arranging this VR track, and to all the other speakers, Paavo S Ho, Kaomas Turmakallio and Samuli Jääskeläinen; I really enjoyed all of your sets. Be sure to check out their presentations on the same YouTube channel if you are interested in Virtual Reality, where it has been coming from and what is happening right now.

About 15-20 people tried our Geometrify tech demo; some of the comments included "pure joy", "being in a flow state", "damn nice" and many others. Thank you to everyone who tested it out and gave feedback!

PS. We’re looking to hire a graphics programmer here in Helsinki to continue work on Geometrify. Please contact us if you are interested! :)

Development Issues & Early Design Drafts with GeoKone.NET

I felt like writing about the design process and some implementation details that I have been going through since I started working on GeoKone.NET. I will talk about performance issues, show early designs that I worked on for GeoKone, show some screenshots of different versions along the development process, and finally look at what is coming up in the next version of GeoKone.NET.

I was originally thinking about waiting until the 1.0 version to write about some of this stuff, but I feel it’s maybe better to split it into a couple of parts and just show you what kind of things I am facing when developing GeoKone.NET.

Performance & Design Issues

One of the issues that I have been constantly struggling with in GeoKone is performance.

Processing.js + JavaScript + HTML5 techniques have not been developing at the pace I would have wished for. When I started implementing GeoKone about 1.5 years ago, I thought that WebGL would already be widely supported, or that browsers would have found a unified way of handling the canvas element + input + large amounts of classes, but I guess I was overly optimistic about this.

The first version of GeoKone used a simple model: Processing.js running in noLoop() mode, with the canvas only redrawn when the user changed the input. This worked pretty well, as GeoKone was still really simple.

Early Beta Version of GeoKone

But this noLoop() model was too simple to present visual feedback to the user when interacting with the PolyForms (the formations on the screen, based on a number of points around a circle). I needed a way to run logic & drawing even when the user was not doing anything, so I could present cool animations and transition effects that would run for a while after the user stopped interacting, or before stuff happened on the screen.

So I decided to take the game engine approach, where a collection of state machines runs at 30 FPS, rendering all polyforms on each frame. This model was used in versions before 0.96, and it proved too slow to really be usable without hardware acceleration.

This design was very responsive and allowed me to make some nice transition effects and other realtime animations, when juggling polyforms for example, but it would almost immediately raise the CPU usage to 100% or even more across multiple cores, depending on the browser.

I also designed and implemented this cool Hyper Chakana Controller for modifying and interacting with objects on the screen. Here you can see an early design image that I had in mind for GeoKone running in fullscreen:

Early Design of Fullscreen GeoKone, with the 12-operation Hyper Chakana Controller

The Hyper Chakana Controller is the compass-looking controller, with 4 actions in each direction, allowing context-specific actions to be mapped to each of these directions, so that if you select a PolyForm, the Chakana can Rotate, Scale, Move etc. the polyform based on the natural direction the user is touching.

Developing The Chakana Controller, Running at 30 FPS

The name and design for this were based on the South American Sacred Geometry Cube, The Chakana, a 2D version of which you can see here:

Chakana – It Is Said that all South American Culture, Art & Design is based on the ratios of this image

I even went so far as to implement this HyperChakana controller, as you can see in this early preview video I made:

But after testing this model for a while, I realized that I cannot run this at 30 FPS all the time, as making the CPU fan scream was not an option, so I had to figure out something else.

I looked into WebGL, but since back then it was still experimental (and still is; Safari does not even officially support it yet, you have to toggle it on via the developer options), I decided to stick with Processing.js + the basic 2D canvas.

GeoKone eating 98% of CPU

I also decided to get rid of the Chakana Controller for now, although I put a lot of work into designing and implementing it. Hopefully I will be able to use this design in upcoming, native versions of GeoKone.NET, as I believe this could be a very natural way to interact with objects on the screen, especially with touch screens.

So I had to find a middle road: not running the logic & drawing at 30 FPS, but still being able to animate transitions between polyforms. I decided to keep running the logic for 50 milliseconds after the user has stopped interacting, and after that call noLoop() to stop Processing.js from calling the main draw() method. This way I could still animate stuff and run logic, and it wouldn’t take as much CPU as before.

This model worked pretty well, and it is the one still in use in the current live version (0.97). But it required extra logic for handling the stopping and starting of the loop() and noLoop() calls, creating some pretty ugly state handling code that is just unnecessary.

For the next version of GeoKone.NET, 0.98, I have cleaned up the code and gotten rid of this tricky logic for deciding when to run the loop and when not to: I just tell Processing.js not to run the loop at all from the beginning, and call redraw() manually whenever the user interacts with the polyforms. This seems to be the only acceptable model in which GeoKone stays responsive and does not hog the CPU.
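
The real code is Processing.js calling noLoop() once and redraw() on input, but the underlying pattern, sketched here in C++ with made-up names purely for illustration, is render-on-demand: nothing runs on a timer, and drawing only happens when user input has actually changed the scene.

```cpp
#include <functional>
#include <vector>

// Illustrative sketch of the render-on-demand idea (the real GeoKone code is
// Processing.js using noLoop()/redraw(); these names are invented).
struct Scene {
    bool dirty = false;                    // set when user input changes something
    void applyUserInput() { /* modify polyforms */ dirty = true; }
    void draw()           { /* render all polyforms */ dirty = false; }
};

// Instead of a fixed 30 FPS loop, the event handler only draws when needed.
void handleEvents(Scene& scene, const std::vector<std::function<void(Scene&)>>& events)
{
    for (const auto& event : events)
        event(scene);                      // e.g. a mouse drag calls applyUserInput()

    if (scene.dirty)
        scene.draw();                      // the equivalent of calling redraw()
}
```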

Premature Optimization

I had also foolishly pre-optimized the code, using precalculated sine and cosine tables for the polyforms, inside the PolyForm class. These were not really even used, as any time any parameter of the polyform was changed, the instance was completely re-created. So even when the user just moved the polyform around, it was re-created, which also re-created the sine and cosine tables and prevented them from ever being reused. Doh. For the next version I have removed all these kinds of "optimizations" and just draw and calculate everything on the fly.
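
To illustrate the trap with a made-up C++ sketch (the real PolyForm is JavaScript): precalculated tables only pay off if the same instance survives parameter changes. If every change constructs a new object, the "cache" gets rebuilt each time and only adds allocation cost.

```cpp
#include <cmath>
#include <vector>

// Made-up illustration, not the actual GeoKone PolyForm: the sin/cos tables are
// built in the constructor, so they only help if the instance is kept alive.
class PolyForm {
public:
    PolyForm(int numPoints, float radius) : radius_(radius) {
        sinTable_.reserve(numPoints);
        cosTable_.reserve(numPoints);
        for (int i = 0; i < numPoints; ++i) {
            float a = 6.28318530718f * i / numPoints;
            sinTable_.push_back(std::sin(a));   // the "optimization": precompute once
            cosTable_.push_back(std::cos(a));
        }
    }
    void setRadius(float r) { radius_ = r; }    // cheap: the tables stay valid

private:
    float radius_;
    std::vector<float> sinTable_, cosTable_;
};

// The trap: re-creating the object on every user interaction throws the tables
// away and rebuilds them, so the "cache" is pure overhead:
//   form = PolyForm(numPoints, newRadius);   // on every change -> slow
//   form.setRadius(newRadius);               // mutate in place -> tables reused
```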

Premature optimization truly creates a lot of problems, as the logic of the program changes so much during development that the optimizations end up not affecting the program in any way, while still making it more difficult to adapt to changes in the architecture.

I actually profiled my code and found out that creating these sin/cos tables was causing a major slowdown, as I used the new keyword to build the tables every time the PolyForm was re-created. For debugging I use Firefox and the excellent Firebug extension, and I could see that the more uses of new I removed from loops, the faster the initialization & drawing got. This is kind of obvious: creating objects in performance-critical loops of course takes time for allocation, instead of just changing parameters of existing objects on the fly.

It’s really easy to start optimizing early and run into problems afterwards. This also bit me in the ass when trying to optimize the drawing so that all the inactive polyforms, that is, those not currently being edited, are drawn into a separate back buffer and the active polyforms are drawn into a front buffer, and these are then combined to make up what the user sees on the screen.

Debugging Back Buffer Optimization – Backbuffer on left, frontbuffer on right

This enabled me to draw more complex scenes than before, as I could copy very complex formations into the background buffer and just move the stuff in the front buffer around.

But this created problems with the z-ordering of polyforms: whenever I selected polyforms to be modified in the front buffer, they would rise on top of the polyforms in the back buffer, even though logically they were behind them.

This happened because the back buffer was always drawn behind and the front buffer always on top of it, completely ignoring the z-ordering of the polyforms and changing the way the scene looked when editing and when disabling Mod All.

I have enabled this back buffer/front buffer optimization at least three times by now, and yet again I have to disable it, as it causes problems with the drawing logic. Better to just stick with implementing the functionality first, and worry about optimization later :) It’s also kind of difficult to let go of these optimizations, as I know I could be drawing much more complex scenes faster even with the current versions, but there would be some minor drawing bugs which I find unacceptable. Maybe I will find a good way to do it after the program logic is finished.

Next Version

Here are a couple of screenshots of the next version in action. I’m not going to write anything more now, as I’m really close to releasing this and I have to write it all in the release notes anyway :) The major new improvement is the layer-style PolyForm Selector, which you can see on the left side of the screenshots. Also, you can now move the PolyForms up and down in their z-ordering, which makes it easier to edit your scenes.

Testing the PolyForm Selector

Testing Irregular sized Scenes with the Selector

It is now easy to move polyforms higher and lower in the order in which they are drawn

That’s it for now! I’m finishing the last tweaks on the next version, and if you want to try it out early yourself, you can check out the master-optimization branch on GitHub: https://github.com/inDigiNeous/GeoKone/tree/master-optimization.