Hello guys & girls! I put together a very short video showing one main feature of the new user interface in GeoKone: the HyperChakana controller, which is designed for easy use with touch screens. Check out a quick preview here: [youtube http://www.youtube.com/watch?v=xyxUh17rVH4]
I decided to test recording my screen with my camera (as I don't have any video capture software yet), and it turned out I liked the end result, so I put together this little piece. I'm working hard on the new version, 15 days to go, aiming for a public 11.11.11 release :) That's going to be one powerful day anyway.
Thanks to all the people who have shown interest and become testers. I will hopefully get the new version into beta testing soon; I still have to complete a couple of critical features to make it more usable. But it's already much faster and easier to use than the current beta version.
During this project I have really realized that usability is hard to get right. It sounds easy on a theoretical level: you tap here, this happens; you hover here, this highlights, and so on. But actually implementing the logic is much harder than it sounds, because there are a lot of things to take into account when translating these rules into binary logic, especially if we want to write understandable, expandable code that handles the different situations and concurrent interactions that are happening all the time.
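To give a feel for why "you hover here, this highlights" gets tricky, here is a minimal sketch of one such rule interacting with a concurrent one (hover while a drag is in progress). All names here are hypothetical illustrations, not GeoKone's actual code:

```javascript
// Sketch: interaction rules become explicit state. The "hover highlights"
// rule must be suppressed while a drag is active, or the UI flickers.
// Controller, pointerMove, etc. are invented names for illustration.
class Controller {
  constructor() {
    this.hovered = null;   // element currently highlighted by hover
    this.dragging = false; // concurrent state that modifies the hover rule
  }

  // Rule: hovering highlights an element, but only when not dragging.
  pointerMove(element) {
    if (this.dragging) return this.hovered; // drag wins over hover
    this.hovered = element;
    return this.hovered;
  }

  pointerDown() { this.dragging = true; }
  pointerUp()   { this.dragging = false; }
}

const c = new Controller();
c.pointerMove("ringA");              // hover highlights ringA
c.pointerDown();                     // user starts dragging
console.log(c.pointerMove("ringB")); // still "ringA": drag suppresses hover
```

Even this toy version needs one extra state flag and one guard; with real touch input, multi-touch, and animation running at the same time, the number of such interactions grows quickly.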
Today I was optimizing the drawing to enable more complex scenes to be edited at usable speed, something I had been waiting to do for a while. I have noticed that anything complex that I want to understand, I have to draw out and see how it works; today, for example, I drew this:
Basically, for everything that needs to happen on the screen, I have some kind of drawing or plan done. I have noticed this is the optimal way for me to implement these things, or actually the only way: I need to visualize what happens in order to understand it. Writing it down in words often just doesn't cut it, as it's so much harder to conceptualize blocks of text than to look at a picture.
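As an aside on the drawing optimization mentioned above: one common way to make complex scenes editable at usable speed is to cache each form's rendered result and only re-render the forms that actually changed. The sketch below illustrates that general technique with invented names; it is not GeoKone's actual renderer:

```javascript
// Sketch of dirty-flag caching: skip the expensive render step for any
// form that hasn't changed since it was last drawn. SceneCache and the
// form objects are hypothetical, for illustration only.
class SceneCache {
  constructor() {
    this.cache = new Map(); // form id -> cached rendering
    this.renders = 0;       // counts expensive render calls
  }

  draw(form) {
    // Reuse the cached result when the form hasn't changed.
    if (!form.dirty && this.cache.has(form.id)) {
      return this.cache.get(form.id);
    }
    this.renders++; // expensive path: re-render this form
    const rendered = `rendered:${form.id}`;
    this.cache.set(form.id, rendered);
    form.dirty = false;
    return rendered;
  }
}

const scene = new SceneCache();
const form = { id: "flower", dirty: true };
scene.draw(form);           // renders once
scene.draw(form);           // cache hit, no re-render
console.log(scene.renders); // 1
```

With this kind of scheme, editing one form in a scene of hundreds only pays the rendering cost for that one form, which is exactly the difference between an editor that crawls and one that feels usable.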
Maybe I will post some of those images later, just for fun. My design goals are to keep it simple and to take the human into account when designing user interfaces. I have noticed that it's a step-by-step, iterative task; there are no easy roads to a perfect interface, as the rules change depending on the context. There is no one wonder interface that works everywhere (or is there .. that would be another story for another time ;)).
Here are also a couple of screenshots from different stages of development, just to show how it has come along:
That's it for now. I hope you enjoy this stuff once it's released; I've been very excited putting this project together :)