Project Rubik (Kinetic UI) explored the intersection of the mobile GUI and new physical input methods. The project led to a series of experiments that sought to answer: "What comes after multi-touch?"
During a brainstorming session in which concentric, Saturn-like rings were conceived for the GUI, I came up with the concept of a touch-enabled depression on the back of the device to initiate and control the on-screen interface. This later evolved into a touchpad with stratified rings, each touch-sensitive and each enabling a different interaction in context with the task at hand. This allowed for interactions that were not restricted to a single app, but worked across the OS.
This concept has a few implications for single-handed use cases. First, it means you can interact with whatever is on your phone's screen without obscuring your view of it with a finger or hand, as every touch-screen phone today requires. Second, it helps if you have limited use of a second hand due to an injury or disability. Third, it could offer finer control over small, detailed interactions, such as precise scrolling or selecting text for copy and paste—no more obscuring the very thing you're trying to select.