Muscle Memory Optimized Fractal UI, designs I did back in 2010

I want to share some old designs I worked on back in 2010.

Back then I was designing a completely new type of UI based on the fractality of everything, at our company adHD Oy Finland, together with Jyri Hovila, Taina Peltola, Jani Salminen and Aki Tåg.

Our company didn't really take off. We had ambitious plans and did a lot of design work, but we never got further than the planning stage and some simple UI prototypes.

Our idea was to build a Linux-based operating system with a completely new UI paradigm. Jyri had the idea of the Linux-based operating system, and I was thinking about the UI side.

[Image: SuperYber concept slide 1]

We were pushing away from the notion of computers having to imitate books or other traditional media, like tablets and phones were doing back then. Where is the real intelligence in that? Or the "smart" in smartphone?

It only smarts to use: replicating old-world paradigms like pages, tabs and files in a modern environment. With new digital platforms we no longer have to limit ourselves to the traditional, so why cling to old ways of thinking?

We could create something purpose-built from the ground up, optimized for the use case. Instead, many just imitate already familiar analogue user interfaces and transfer them into the digital world, which to me doesn't make much sense, except for giving people familiar terms to ease them in.

But those files, folders, floppies and all the user interface paradigms we were so used to back in the day in the physical world are now dead. I mean, who uses folders anymore?

[Image: floppy disk save icon]

When was the last time you saw these? This used to be the universal icon for saving a document.

They have gone the way of the famous floppy disk save icon: young people don't even know what a floppy disk is, and the same is true of many UI paradigms we still stick to.

Fractal User Interface

Working together as a company, we had a problem, a problem all teams and companies have: how to share information efficiently? How to see at one glance how the project was doing and how the team was completing their tasks? No software we looked at back then was able to provide that view in a way that let you see from a single page, at a single glance, what was going on with your team and your project.

I started wondering: why were so many team collaboration tools (definitely back in 2010, at least) designed around concepts like books and pages that fit poorly in a web environment?

The optimal situation would be to have an immediate visual overview of what is happening in your project, without having to go through different pages, tabs and issues to see what is going on. I wanted an immediate view of how the project was doing, without having to dig any deeper into the UI.

I really wondered: why stick to old paradigms when thinking about how things should be done?

[Image: SuperYber concept slide 2]

So I started to think outside the box, designing from the get-go what an intuitive user interface for seeing how a project was doing would look like. It would naturally be built around the people connected to a project, completing tasks for the common denominator in the center: the project everyone was committing work to.

[Image: SuperYber concept slide 3]

I was starting to imagine this as a tree, taking something from nature and building around that concept in my mind. A traditional list- or box-based UI didn't feel like a natural choice, so we began from the very simple roots (ha!) of what it is to build a user interface and what would feel natural for the user.

The main UI was built around one usage paradigm:

Everything is Fractal.

Here we can see a design based on that concept, made for a discussion forum app. It would use a fractal tree node based interface for displaying data:


Design for a discussion forum app (click for full version)


Branch detail from discussions

The idea was that all data displayed in our UI would be based around fractal tree nodes, and that same structure would be repeated for different data all over the system. In the picture above, we show a design for a discussion forum, but this same model could be used for contacts, messages, files and all the data you have on your phone.


This would mean that only one UI paradigm would have to be designed, and that same UI model could be re-used all over. No more re-designing every part of the UI; just design one part really well and repeat it!

No more multiple types of menus, dropdowns, windows and whatever else a traditional UI needs to visualize data, but one really well-working set of interface elements replicated all over the place.
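
To make the idea concrete, here is a minimal sketch (hypothetical names, not our original design code) of what such a reusable fractal node could look like: one node type and one render routine, recursively repeated for forum threads, contacts or files alike:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// One node type reused for every kind of data in the system.
struct FractalNode {
    std::string label;
    std::vector<std::unique_ptr<FractalNode>> children;
};

// One well-designed render routine, repeated at every level.
// Here it just prints an indented tree; in the real UI it would
// draw one branch of the fractal tree.
void render(const FractalNode& node, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << node.label << '\n';
    for (const auto& child : node.children)
        render(*child, depth + 1);
}

int main() {
    FractalNode thread{"Topic: Fractal UI", {}};
    thread.children.push_back(
        std::unique_ptr<FractalNode>(new FractalNode{"Reply: love it", {}}));
    render(thread);   // same routine would render contacts, files, ...
}
```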

Fractality of Usage Patterns

We wanted to go further with the fractality of the design, and not just represent it in the visuals of the system. We wanted the usage of the system to be fractal in nature.

This means that you would only have to learn a certain set of hand motions to do any action, and you could use those same actions in every context in the UI. This is sorely missing from modern UIs, where you constantly have to change your muscle actions, which means you cannot program your muscle memory efficiently or learn to operate different parts of the UI instantly.

For example, when you learn a sport or play a game, at first you think about it, but the main goal is to reach a state where you are not thinking about the action being performed: you just do it, flow with it.

With modern computer UIs this is very difficult if you are not using the keyboard. With the mouse and other position-based UIs you constantly have to keep your attention on the screen and figure out what does what. This is not optimal for storing actions into your muscle memory, as the actions keep moving to different places, and you cannot just forget about the context and perform similar actions without thinking.

[Image: finger positions]

Our UI was designed to be used primarily by hand motions, with touch screens in mind back then. You would only have to learn a certain set of actions, and repeat those same actions in different contexts, all over the system.

This, I believe, would result in a UI where you can flow through the usage, without thinking about it too much.

Currently, for example, if I want to save this document, I have to go through this thinking pattern:

  • What key is used to save this document? Okay, Ctrl+S in this app. Press that.
  • What if I don't know or remember that key?
  • Look through the menus. File? Document? Okay, it's under Document in this app.
  • Go through the menus, which are always different for each app, so I have to move my focus to the menu, position my mouse exactly on 'Save' and click there.
  • Look at a message from the app: was my document saved or not? Where is the application telling me this? Oh, in the upper right corner in this context. Ok.

So many specific and accurate actions needed for such a simple operation, if you don't remember the keyboard shortcut. And keyboard shortcuts are perfectly fine, but on a mobile phone or a tablet you don't have a hardware keyboard to rely on.

So I have to constantly keep my focus on a simple action I repeat maybe hundreds of times a day, switching from app to app, then figuring out again where exactly to position my mouse cursor or my finger, and how to do that.

Why can't there be a single paradigm and action to take care of this?

The main problem is the changing paradigm, and the fact that I have to focus on the action and perform pixel-perfect movements. This takes away from the main focus: performing whatever you are doing in a creative flow. It can break the flow and move you from a state of creativity to a state of bafflement, possibly even making you forget what you were doing in the first place.

Universal Controller

In the center of our UI would be a controller, termed the Chakana controller, as it was modeled after the Inca cross formation called the Chakana, which contains within it the golden ratio and pi:


This cross came to me from a South American friend back then. We studied it and found that it contains within it the ratios of at least phi and pi, so in a way it is information stored in visual form. In traditional South American culture this formation was the basis for much architecture, music and design, and each direction was thought to represent some basic element of life.

So we took this design and applied the idea that you could control the UI with 4 major directions, each direction representing a universal concept, like earth, air, fire and water, and then worked out how those concepts would map onto actions when working with documents, sharing contacts, or whatever context you were in.

[Video stills: the Chakana compass appearing over a contact]

For example, here you would see the Chakana compass appear over a contact in your Today list, enabling you to share by selecting the upwards direction (Sharing, Air, putting it out there, No-Time), modify by selecting down (Modify, Earth, physical action, the Now moment), see your call history by selecting left (History, Past, Water), and maybe see your future calendar contacts by selecting right (Future, Fire).

Of course, not all of these would map directly to these concepts, but the idea was that if you remembered the general idea behind every major direction, you could apply it to each operation in some sense, and you would always have an idea of which direction to take in order to achieve an action related to that concept.
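
As a rough illustration of the idea (all names hypothetical, this is not our original design code): the four directions stay fixed, while each context binds its own concrete actions to them, so the gesture the user performs never changes:

```cpp
#include <functional>
#include <iostream>
#include <map>

// Four fixed directions, each tied to a universal concept.
enum class Direction { Up, Down, Left, Right };   // Air, Earth, Water, Fire

using ActionMap = std::map<Direction, std::function<void()>>;

// Each context (contacts, documents, ...) supplies its own actions,
// but the motion itself is the same everywhere in the system.
ActionMap contactActions() {
    return {
        { Direction::Up,    []{ std::cout << "Share contact (Air)\n"; } },
        { Direction::Down,  []{ std::cout << "Modify contact (Earth)\n"; } },
        { Direction::Left,  []{ std::cout << "Call history (Water)\n"; } },
        { Direction::Right, []{ std::cout << "Upcoming calendar (Fire)\n"; } },
    };
}

int main() {
    auto actions = contactActions();
    actions[Direction::Up]();   // same motion, context-dependent meaning
}
```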

[Video stills: ideas for what each major direction could represent]

Here you can see some ideas of what these major directions could represent. The main idea was to bind each direction to your muscle memory, so that you would know what that specific direction generally does in different contexts.

Hopefully this would result in a more natural user interface, where you don't have to constantly read and figure out what you are operating on, creating a flow state through the usage. We never got to the phase of testing this in action, apart from some prototypes I implemented later for GeoKone (now OmniGeometry), which I have never put into production.


Solu Computers Implementation

I was kinda happy to see that Solu Computers were working on a similar user interface later. I actually worked there in late 2016 for a couple of weeks, so I got to work on their UI for a bit too, but it just wasn't my place to be at the moment, although their solution implements this fractal idea pretty much as I had envisioned back then. So, hopefully Solu will get their computer and UI out :)

Future plans

I still have plans to implement parts of this UI, maybe for OmniGeometry or Geometrify in the future. Hopefully I will find somebody to help me implement it; that is part of why I want to share this information, to show people some of our plans.

Anyway, that was enough of past plans made in 2010. Hopefully I will soon be able to implement these designs :)

This was inDigiNeous, aka Sakari, signing off!


Launched OmniGeometry :: Going native with GeoKone

Reached GeoKone version 1.0! ^_^

I realized I hadn't written anything on my blog about our new sacred geometry software website: https://OmniGeometry.com.

[Image: OmniGeometry banner]

OmniGeometry is the result of 6 years of work, culminating in what we believe to be the ultimate sacred geometry software on this planet.
GeoKone 1.0.2

OmniGeometry is a package of GeoKone 1.0 plus a complete set of tutorial videos to help you get up to speed on creating your own sacred geometry.

We've been working on this together with my business partner, Gustavo Castañer. Gustavo found me last year and asked me to teach him how to use GeoKone. We did some private lessons, and after some time he convinced me to work on more tutorial videos to really make a product out of GeoKone. We decided to start working on this together, as Gustavo saw the potential in the software and wanted to help create a commercial product with me, and also as a means of supporting us to continue working on this and future projects through income from selling the software.

Together with Gustavo I created the website, the set of tutorial videos and the new base for what is to be the future of sacred geometry software. It was a real pain in the ass to combine GeoKone with a WordPress installation, but in the end the work was worth it. Go check out our website now at https://OmniGeometry.com to learn more!

Towards a downloadable desktop application

After the launch, the #1 request from users and potential users was: is there a downloadable version?

Currently GeoKone is implemented with JavaScript + HTML5, with Node.js on the server side. This has many benefits, like being able to run on any operating system with a modern browser, anywhere, ubiquitously. No need to install anything, updates are automatic and so on. But it has downsides too: you have to have an online connection to access the application, and we're not getting all the performance we could, with the browser sitting between the hardware and the application.

I pondered whether we should just package the HTML5 application with Electron, but decided that would take at least 2 months to implement, and I wouldn't personally be happy with the results, as I'm already pretty tired of working with web technologies. Ultimately I decided it's finally time to start working on a native, downloadable desktop version of GeoKone.

Turbocharged, slick, beautiful native application

Since the beginning of GeoKone, I always had in mind to implement a proper desktop version. I had been looking at Qt 5 for a really long time, even learning it and doing some simple UI prototyping, but it just didn't feel like the right thing to do earlier. Now the time finally feels right. Most of the problems have been solved with GeoKone 1.0, so it's "just" a matter of porting to Qt 5. Happy and inspired to finally start work on a native, downloadable desktop version of GeoKone! Target platforms are going to be Mac/Win/Linux desktops.

I also now have the experience to undertake this, after working over 2 years on the Geometrify 3D geometry engine. This native version will still be fully 2D; Geometrify will be the 3D version once it's done, but that's another story completely.

This is going to be one slick, beautiful, turbocharged version of GeoKone! I hope to release at least a beta version for early users to test out within 6 months.

Anyway, until then, work continues! We are committed to this, and will provide people with the best experience of creating sacred geometry.

My Mission :: Awakening to Sacred Geometry

[Video]

Hello, time for something more personal.

I wanted to share something more personal: some of the motivation that has led me to work with Sacred Geometry, and the motivation behind my mission to continue bringing people direct experience of what I believe is the creation language of our Universe.

From now on, I want to share more, be more truthful, more direct, not think so much about what I put out. Bom shaka bom!

Progress update on my 3D Sacred Geometry Engine

Fully focused

For the past two months I’ve been blessed to focus fully on my 3D Sacred Geometry engine, called PsiTriangle Engine. This engine is going to be the basis for our upcoming 3D Sacred Geometry Creation program, called Geometrify: Creator.

At the beginning of the year I was working at Vizor (http://vizor.io) for a while, but that lasted only 2 months, as it was not really something I truly loved doing.

But there I got to learn ThreeJS (a JavaScript 3D framework) and more about how to build a 3D engine, which gave me good insight into how to continue with my own engine.

Getting closer to the metal

Since then I've been focusing on improving my engine, getting more performance out of the GPU and calculating as little as possible on the CPU.

The CPU -> GPU bottleneck is a real issue when working with dynamic geometry: optimally you keep everything in GPU memory and avoid transferring data from main RAM to the GPU, as that transfer is a big performance bottleneck.
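
As a minimal sketch of what this means in plain OpenGL (assuming a GL context is current; names are illustrative, not my engine code): upload the vertex data to a GPU buffer once, then draw from it every frame instead of streaming it from main RAM:

```cpp
#include <vector>
#include <GL/glew.h>   // or your GL loader of choice

struct Vertex { float x, y, z; };

// Upload once, draw many times.
GLuint createStaticVbo(const std::vector<Vertex>& vertices) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW hints: written once, read by the GPU many times.
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);
    return vbo;
}
// Per frame: just bind the buffer and draw, no CPU -> GPU copy.
```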

Learning the tricks of GPU programming and getting to really feel the power of the GPU has been a marathon run, but I’m finally approaching the performance I’ve been looking for.

Doing complex things is easy, but just narrowing down to the simple essentials, the least amount of calculations needed, is difficult.

Putting my engineering skills to use

I'm an automation engineer, and working with 3D equations and math is really an area where I'm starting to see use for that education. It has given me the confidence that I can understand and dissect any problem, if I just keep drawing and calculating on paper long enough.

Just draw it out

Just draw it out

I'm pretty happy now that I got that education, as without it I probably wouldn't have the system in place to work like this (thx math teacher, Pirkka Peltola).

Fast line drawing

In the last weeks I've been completely re-writing my line drawing algorithm to utilize the GPU as much as possible.

Previously I had ported an algorithm by Nicolas P. Rougier from Python code to C++ (based on his paper here: http://jcgt.org/published/0002/02/08/).

But that implementation was too general, did too many calculations, and took a long time to upload the vertices from the CPU to the GPU, which really killed performance.

So I decided to just rewrite it from the ground up. A good tool for prototyping graphics drawing is http://processing.org, so I first implemented the algorithm in Processing; then, when it worked and I understood the process, I started porting it to GLSL shader code.

Tessellating circular polylines with Processing
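
The core trick of this kind of thick-line tessellation, stripped of joins, caps and antialiasing, is extruding each line segment into a quad along its normal. A simplified sketch (my own illustration here, not Rougier's actual code):

```cpp
#include <array>
#include <cmath>

struct Vec2 { float x, y; };

// Extrude one segment (a..b) into a quad by offsetting both endpoints
// along the segment normal. Drawn as a triangle strip, these four
// corners form the thick line.
std::array<Vec2, 4> segmentToQuad(Vec2 a, Vec2 b, float width) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    Vec2 n { -dy / len, dx / len };   // unit normal of the segment
    float h = width * 0.5f;
    return {{
        { a.x + n.x * h, a.y + n.y * h },
        { a.x - n.x * h, a.y - n.y * h },
        { b.x + n.x * h, b.y + n.y * h },
        { b.x - n.x * h, b.y - n.y * h },
    }};
}
```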

Getting to know the Geometry Shader

Modern GPUs have geometry shaders. With these, one can calculate vertices completely on the GPU, utilizing its massive parallelism.

I started my line drawing re-implementation using only the geometry shader. Here you can see the results:

Here the lines, segments and origin points for the circles are all calculated on the GPU; nothing is done on the CPU except sending the origin points to the shader.

This is pretty great, but there are limitations. First, the geometry shader has to re-draw all the shapes and calculate all the sines and cosines for each line segment all the time, every time, on each frame. This is slow.

Second, the geometry shader can only output a limited number of vertices. With my GPU, that limit is 256 vertices of vec4 components. That's not really much; you can't do deep recursion with that.
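
If you want to know what your own GPU allows, you can query these limits directly. A small sketch (assuming a current GL context):

```cpp
#include <cstdio>
#include <GL/glew.h>

// Query how much a geometry shader may emit on this driver/GPU.
void printGeometryShaderLimits() {
    GLint maxVertices = 0, maxComponents = 0;
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxVertices);
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxComponents);
    std::printf("geometry shader: max %d vertices, %d total components\n",
                maxVertices, maxComponents);
}
```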

Bringing in the Transform Buffers

There also exists a thing called the 'Transform Feedback Buffer', which basically means you Transform (calculate geometry) and put the results in a Feedback Buffer (store), which you then use to actually draw (read the buffer).

These buffers then only need updating when changes occur, not at the beginning of each frame as with the pure geometry shader approach.
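
In OpenGL terms the flow looks roughly like this (a simplified sketch with illustrative names; vertex attribute setup is omitted, and the program is assumed to have registered its captured varyings via glTransformFeedbackVaryings before linking):

```cpp
#include <GL/glew.h>

// Run the vertex stage once, capture the transformed vertices into
// dstVbo, then draw from dstVbo on later frames without recomputing.
void captureGeometry(GLuint program, GLuint srcVbo, GLuint dstVbo,
                     GLsizei count) {
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, srcVbo);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, dstVbo);

    glEnable(GL_RASTERIZER_DISCARD);      // we want vertices, not pixels
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, count);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
}
// Later frames: bind dstVbo as a normal vertex buffer and draw it.
```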

This already got me much better performance:

Much better, but I was still calculating stuff recursively, storing each circular formation as a separate copy of my base class.

This worked well with http://GeoKone.NET, as with software rendering all the data stays in main memory. But with GPU rendering, we really want to minimize the amount of recalculation.

Drawing as little as possible

At this point I decided that I knew what I wanted to achieve, and to get there I really needed all the performance I could get, to make it as smooth as possible.

To do that, the recursive model, i.e. where a class instance stores num_points child instances and visits each of them to draw their data, continuing down the path recursively in a parent-child model, really didn't work anymore with the GPU.

With GPUs, what seems to work best is doing things in a linear buffer. We want all data in a contiguous layout, so we can just loop through it when calculating and drawing, with a minimal amount of branching and buffer switching.

Basically we just want to blast the data to the shaders, so they can work on it in parallel as much as possible, because that's the strength of GPUs.
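
A sketch of what this flattening could look like (a hypothetical layout, not my actual engine code): the recursion happens once, on the CPU, while filling one contiguous buffer, so the GPU only ever sees a flat stream of vertices:

```cpp
#include <cmath>
#include <vector>

struct Circle { float cx, cy, radius; int numPoints; };

// Append one circle's points to the shared linear buffer.
void appendCircle(const Circle& c, std::vector<float>& buffer) {
    const float step = 2.0f * 3.14159265f / c.numPoints;
    for (int i = 0; i < c.numPoints; ++i) {
        buffer.push_back(c.cx + c.radius * std::cos(i * step));
        buffer.push_back(c.cy + c.radius * std::sin(i * step));
    }
}

// Recurse over the formation while filling the buffer; no object
// tree survives to draw time, just one contiguous vertex array.
void flatten(const Circle& c, int depth, std::vector<float>& buffer) {
    appendCircle(c, buffer);
    if (depth == 0) return;
    const float step = 2.0f * 3.14159265f / c.numPoints;
    for (int i = 0; i < c.numPoints; ++i) {
        Circle child { c.cx + c.radius * std::cos(i * step),
                       c.cy + c.radius * std::sin(i * step),
                       c.radius * 0.5f, c.numPoints };
        flatten(child, depth - 1, buffer);
    }
}

int main() {
    std::vector<float> buffer;
    flatten({0.0f, 0.0f, 100.0f, 6}, 2, buffer);  // 6 points, 2 levels
    // buffer now holds the whole formation, ready for one GPU upload
}
```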

I'm still seeking the best way to do this, but with this model I could finally reach dynamic geometry in 3D space with performance similar to GeoKone.NET. This is my latest update, showcasing dynamic manipulation of 2D-plane sacred geometry in 3D space, which will be the basis for Geometrify: Creator.

Getting there  :)

I'm developing this engine on laptop GPUs, my faster MacBook Pro having an Nvidia GT 750M with 2 GB, and my home computer an ancient Nvidia GT 330M with 512 MB.

So I really have to figure out how to make this fast just to be able to develop it, which is a good thing :) But I can't wait to test this out on modern beasts of GPUs, which are easily 30x faster than the one in my older laptop.

Anyway, development continues. If you are interested in more updates, follow me on Twitter: https://twitter.com/inDigiNeous, I'll be updating there more frequently.

Now peace out, and bom! ^_^

GeoKone 0.99.66 released :: Set Background Image & Continuous Tracing

GeoKone.NET was just updated to version 0.99.66.

Background Image Setting

This version (actually the previous version already) brings support for drawing over a background image, as demonstrated here:

Remixing geometry over an image from www.luminaya.com

Using a gradient as the background

Saving the background image is not yet supported, but you can always load it again after loading the scene from a local file. The background image setting is smart: you can import a file of any size as the background and GeoKone will automatically scale it for the best fit, and you can select how to fit the image to your scene.

When exporting PNG images with a background, the background image's original resolution is preserved, so you can set a high-resolution image as the background, draw over it, and get the same quality output when exporting a higher-resolution PNG image.

One cool thing to try is exporting your current scene as PNG, then setting that image as your background and continuing to draw over it :)

Continuous Tracing Mode

When the ‘tracing’ mode is set, toggling the animation will not clear the trace buffer anymore. This means you can pause the animation, modify your formations and resume animation to continue tracing over your previously traced buffer.

This makes it easier to create this kind of beautiful image:

This was created by tracing the original formation, pausing animation, re-scaling, continuing animation and repeating this pattern


Updated Help Page

Be sure to also check out the updated Help Page to get a better idea of how to use GeoKone.

Changelog

Complete changelog for versions 0.99.65 & 0.99.66:

  • Fix bug with topmost buffer not always clearing when switching layers and toggling mod all on and off
  • Update PDF download prices to be slightly more supportive toward development of GeoKone ^_^
  • Update help page
  • New dialog-style login page
  • Update styling of signup and forgot password dialogs
  • Clean up the code a lot
  • Add buttons for animation and tracing to the top menu to make them easier to find
  • Remove the toolbar bottom indicators, replace with the LED indicators
  • Style minor things to be more compatible with Chrome
  • Continuous trace drawing. Traces are now drawn continuously by default; toggling animation will not clear the trace buffer anymore, only toggling tracing clears it
  • Add 'Help/Send Feedback' for sending direct feedback to me
  • Fix the right menu appear animation
  • Remove the stupid blue outline appearing around the canvas on Chrome and Safari
  • Fix keyboard focus icon not showing with Safari
  • Fix export image to show proper preview when tracing is off

A little bit closer to 1.0 again :) Go to http://GeoKone.NET and get creative!

Notes from the path of creating a 3D geometry engine

Hello, Sakari here, aka inDigiNeous. For the past 2 years I have been building my own 3D engine, from the ground up, using C++11 and modern OpenGL.

In this post I go through the motivation for building my own 3D engine, technical details about the process, and what is driving me to do this work. Work that is ultimately leading to the next version of GeoKone ("GeoKone 3D") and later also to http://Geometrify.net.

The desire to see how things work

First some background on why I decided to create my own 3D engine, how I got here, and why I ultimately didn't just use a ready-made solution like Unity or Unreal Engine, or any of the more lightweight engines out there.

My personal goal has always been to understand things deeply, and to see behind things, how they operate on their core levels. I get satisfaction from understanding things thoroughly. I like to take apart stuff, build, re-build and see how the pieces fit together in different ways. This is where I get my kicks from.

To understand, to learn, to see what makes things tick.

My graphics background

When I started developing http://GeoKone.NET, I had some experience in programming graphics, but nothing on a really deep level. I first learned graphics programming back in the 90s in the MS-DOS days, doing some simple Turbo Pascal, assembler and C based demo effects, but I never released anything or did anything really cool.

Over the years I tinkered with 2D graphics programming, mostly on the JavaScript side of things, so I had some experience with that. But nothing on a really deep level.

During the development of GeoKone I noticed that many of the visions I wanted to implement just couldn't be done with JavaScript and the 2D canvas, because of performance reasons and the limitations of the platform.

Back in 2011 when I started working on GeoKone, and for a couple of years after that, there was no viable option for hardware-accelerated graphics in the browser. WebGL wasn't really an option, and in a way it still isn't; although it is gaining popularity, stability and support now, you still need a new, powerful machine to get smooth frame rates.

So I stuck with JavaScript and 2D canvas programming for a while there, and it worked perfectly fine for what GeoKone is .. but I knew I needed more close-to-the-metal programming in order to get the visions I wanted to see out of my head.

Enter Virtual Reality

In December 2013, when the Oculus Rift DK1 became available for any developer to purchase, I could hear my intuition proclaiming loudly: “Order!” “Do it!”

I still remembered how in the early 2000s my intuition told me, as an engineering student, “purchase Apple stock” .. and I didn't, so this time I decided to listen ;)

I ordered the first development kit, and the moment I first tested it, I knew that this was the environment where GeoKone 3D had to be made.

Hello Androids :: Wearing the Oculus DK1

I decided to dedicate my time to learning more C++ and OpenGL and to getting something showing on the DK1.

I had already prototyped core parts of GeoKone with C++, Java and even Objective-C, just to test out what would fit best for the case, so I had some code already to work on, but not much.

Development of libgeokone, very early stages, C++ & OpenGL

Back then there was no integrated support in Unity or Unreal Engine like there is now, so the only way to see something in stereo 3D was to implement it using Oculus's own C/C++ SDK.

Learning 3D basics

I had no real experience in 3D programming, no understanding of how the math and the principles behind it worked. But I just decided, screw this, I'm gonna learn this as I go, and so I began.

Implementing basic 3D math, translations, projections and such took a while to understand, and matrix operations are still a bit difficult. But the more I program this stuff and teach it to myself, the more I start to see this geometry everywhere, in how things are built, and to understand the math and physics of space and light while learning it, which is really interesting to notice.

Using a grid to verify 3D functionalities

I had one experience after really grasping the 3D fundamentals and getting the code working: I walked outside and for a while could see vector versions of the physical objects projected on top of them, like green Matrix-style wireframes, my mind mapping them out with the rules I had just programmed. Woah! As Keanu Reeves would say.

Programming the Oculus Rift

It took me 4 months of work to get anything even showing properly on the DK1, mainly due to learning the basics of 3D programming: vectors, matrices, projections, the whole third dimension and the complexity that comes with it. Luckily, for vector math I could use the great GLM library, which is really nice for working with OpenGL.
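
For a taste of what GLM takes care of, here is a small sketch of building a model-view-projection matrix for a perspective camera (the values are arbitrary examples, not from my engine):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build an MVP matrix: the core of getting anything on screen in 3D.
glm::mat4 buildMvp(float aspect) {
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), // vertical FOV
                                            aspect, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),    // camera position
                                 glm::vec3(0.0f),                // look at origin
                                 glm::vec3(0.0f, 1.0f, 0.0f));   // up vector
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                                  glm::vec3(0.0f, 0.0f, 1.0f));  // spin the object
    return projection * view * model;   // applied right to left: model first
}
```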

I had done simple 3D programming before this, but nothing that I substantially understood, and this time I wanted to understand each thing I was doing, in order to really grasp what was going on.

In these months of learning modern C++, OpenGL and GLSL, browsing the Oculus Forums to understand the SDK and just taking in all of this new information, I finally got stereo rendering working properly, and man, it felt good at that point :)

One of the first stereo rendered scenes I did

Finally, in the beginning of May 2014, I had most of my geometry core from GeoKone ported to C++, with very simple GLSL shaders. This meant I could now draw recursive geometry in 3D space, with support for stereo rendering.

It produced images like these:

Early stereo images

One of the very first scenes I did for verifying stereo rendering

Another stereo image

Developing the first Demo

I also got the DK1 rendering properly, so my program now produced this kind of imagery, animated at 60 fps, zooming through the geometry:

GeoKone geometry engine rendered in stereo with C++ & OpenGL

I continued from there to develop this demo, tested it with different music, and finally added music from my good friend Miika Kuisma. This led to the first demo of Geometrify, which we demoed together with Tommi Ullgren, Samuel Aarnio and many others helping us, ultimately showcasing the potential of sacred geometry in VR at over 20 events and offering probably over a thousand people their first VR experience ..

But that is a story for another blog post :)

From demo to engine

The first demo was only a technical demo, made in a way that really didn't support building flexible software around it, so I had to spend a lot of time figuring out how to build a proper C++11 & OpenGL framework that allowed configuring a 3D world designed to render recursive geometry in 3D space.

C++11 and OpenGL were chosen to support cross-platform development from the get-go. No other language really offers bindings to all the libraries on all three major platforms, Mac OS X, Windows & Linux, plus mobile devices, with the rock-solid performance that this kind of experience requires to feel smooth as silk.

Modern C++ practices

I spent a lot of time just learning OpenGL and C++11 best practices. C++11 is a very nice language, but to understand how to use it, you have to wade through maybe 15-20 years of legacy practices in order to find out how things are done currently, and how to apply modern techniques.
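
A few of the modern idioms I mean, as a tiny illustrative example: smart pointers instead of raw new/delete, auto, and range-based for loops:

```cpp
#include <memory>
#include <vector>

struct Mesh { std::vector<float> vertices; };

int main() {
    // Explicit, exception-safe ownership: no manual delete anywhere.
    std::unique_ptr<Mesh> mesh(new Mesh());   // std::make_unique arrived in C++14
    mesh->vertices = {0.0f, 1.0f, 2.0f};      // initializer lists

    float sum = 0.0f;
    for (auto v : mesh->vertices)             // range-based for + auto
        sum += v;
    return static_cast<int>(sum);
}
```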

Implementing beautiful line rendering

One of the key aspects of rendering recursive geometry is how to get beautiful lines rendered. Turns out this is not as simple as just using a ready-made library, as such a thing doesn't really exist.

I waded through different methods, read through many papers and found that people had implemented something already, like the VASE renderer. Ultimately I decided to implement the line rendering method researched and implemented in Python by Nicolas P. Rougier, described in his paper here: http://jcgt.org/published/0002/02/08/paper.pdf

Getting this to work, I now had lines like this rendered in OpenGL:

First version of the antialiased polygon lines I implemented

Showing how these lines are tessellated from triangles

A more recent example of the line rendering method

Need for a scripting language

One of the really annoying things about C++ is the need to compile the source code all the time, even if you just change one number. Trying to implement program logic with this limitation was really getting on my nerves, as it usually took at least 2-3 seconds to compile the classes I was working on, and doing this hundreds of times really takes a toll on iteration times.

I reached the same conclusion that many others before me have, and saw that I needed to add scripting language support to my engine to really get anywhere in a reasonable time, and not lose my mind while doing it.

I prototyped different approaches, first considering Qt 5 with its built-in Google V8 JavaScript engine, but came to the conclusion that I don't need such a beast of software just to add scripting.

I also looked at several other scripting options, but really the only other one that can be taken seriously is Lua, so I bit the bullet, learned yet another new language and bound my C++ engine to the Lua side with the assistance of the great lua-intf interface.

Controlling scenes with Lua
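
lua-intf wraps the raw Lua C API; just to illustrate what embedding looks like underneath (a minimal sketch, not my actual engine bindings), here is a C++ function exposed to a script:

```cpp
#include <cstdio>
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}

// A C++ function callable from Lua scripts (hypothetical example).
static int setRadius(lua_State* L) {
    double r = luaL_checknumber(L, 1);          // first argument from Lua
    std::printf("engine: radius set to %f\n", r);
    return 0;                                   // number of return values
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);                           // standard Lua libraries
    lua_register(L, "set_radius", setRadius);   // expose C++ to scripts
    luaL_dostring(L, "set_radius(128.0)");      // script-side call
    lua_close(L);
}
```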

Being able to write programs that have the iteration speed of JavaScript, combined with the raw power of C++11 and modern OpenGL, felt super good. Unlimited power!! ;)

Base work almost done

So I've finally reached a state of development in my engine, now named PSITriangle Engine™, where I can run program logic in Lua scripts and keep the parts requiring performance on the C++ side.

It's been a long way to this state: implementing a very specific use-case 3D geometry engine, using modern C++11 and modern OpenGL + GLSL techniques, all the while learning how to do this during development of the actual engine.

The feeling of getting here, controlling all the pixels on the screen, using the power of math and geometry and the skills I had accumulated over the years to produce this virtual space, felt like pure creation coming to life.

I'm glad I didn't know how much work implementing a 3D engine properly is, because I might not have chosen this path if I had known :D

Finally I can just script and prototype ideas quickly. This speeds up development significantly and makes testing and development massively easier from now on.

From now on my plan is to finish this base of the engine and actually start implementing the program logic for what is going to become "GeoKone 3D" (the project name for now)! :)

My Custom 3D Engine Rendering Geometry

[Video]

A short clip I wanted to share, showing a recent example of what the custom 3D engine I am developing, PsiTriangle Engine, is capable of rendering.

This is just one test example, rendering circular shapes, but the engine can render GeoKone-style recursive geometry. Check out http://GeoKone.NET for examples.