In this video we present Squama, a programmable physical window or wall that can independently control the visibility of each of its small square tiles. This is an example of programmable physical architecture, our vision of future architecture in which the physical features of architectural elements and facades can be dynamically changed and reprogrammed according to people's needs. When Squama is used as a wall, it dynamically controls the transparency of its surface, simultaneously satisfying the needs for openness and privacy. It can also control the amount of incoming sunlight and create "programmable shadows" that preserve indoor comfort without completely blocking the outer view.
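The programmable-shadow idea can be illustrated with a small geometric sketch (not Squama's actual implementation; the window plane, tile size, and coordinate conventions below are all assumptions): given the sun direction and a set of points to shade inside the room, trace each point back along the sun ray to the window plane and mark the tiles it hits as opaque.

```python
from math import floor

TILE = 0.1  # assumed tile edge length in meters

def tiles_to_opaque(points, sun_dir, tile=TILE):
    """For each target point (x, y, z) inside the room (window plane at y=0,
    room on the +y side), trace a ray back along the sun direction to the
    window plane and return the set of (col, row) tile indices that must be
    made opaque to shade that point."""
    dx, dy, dz = sun_dir
    assert dy > 0, "sun must shine into the room (+y direction)"
    opaque = set()
    for (px, py, pz) in points:
        t = py / dy              # ray parameter where y reaches 0
        hx = px - t * dx         # intersection with the window plane
        hz = pz - t * dz
        opaque.add((floor(hx / tile), floor(hz / tile)))
    return opaque
```

All tiles not in the returned set stay transparent, which is how the outer view is preserved while the chosen region is shaded.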
This research introduces a mechanism for inducing a virtual force based on a human illusory sensation. An asymmetric signal is applied to a tactile actuator so that the user feels the device being pulled or pushed in a particular direction, even though it has no mechanical connection to other objects or to the ground. The proposed device is smaller and lighter than previous force-feedback devices. This small form factor allows it to be used in several interactive applications, such as pedestrian navigation or an untethered input device with virtual force (ACM UIST 2013 paper). http://lab.rekimoto.org/projects/traxion
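The key ingredient is the asymmetry of the drive signal. A minimal sketch (the waveform shape, amplitudes, and duty cycle are illustrative assumptions, not the paper's actual drive parameters) is a brief strong stroke in one direction followed by a longer, weaker return stroke; the signal integrates to zero, but the peak accelerations differ, which the skin perceives as a net directional force:

```python
def asymmetric_pulse(n_samples, fast_fraction=0.2):
    """One period of an asymmetric drive signal: a short, strong push in one
    direction followed by a longer, weaker return stroke. The samples sum to
    zero (the actuator returns to rest each cycle), but the two strokes have
    different peak amplitudes, producing the illusory directional force."""
    n_fast = max(1, int(n_samples * fast_fraction))
    n_slow = n_samples - n_fast
    fast_amp = 1.0
    slow_amp = -fast_amp * n_fast / n_slow  # chosen so the period sums to zero
    return [fast_amp] * n_fast + [slow_amp] * n_slow
```

Reversing the sign of the waveform would flip the perceived pull direction, which is how such a device could be steered for pedestrian navigation.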
LiveSphere is an immersive experience transmission architecture built around a wearable omnidirectional camera for "human-to-human telepresence". In LiveSphere, one person wears the LiveSphere gear with an omnidirectional camera; the omnidirectional video is stabilized to decouple the wearer's ego-motion and is then transmitted to others. Other people can then look around this immersive visual experience with their own head motion using an HMD.
Immersive experience transmission with LiveSphere has broad applications, including entertainment and content distribution (e.g., an audience sharing the view of a professional athlete), communication such as virtually traveling or shopping together, and education or professional training.
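To give a sense of what decoupling ego-motion means, here is a deliberately simplified sketch (not LiveSphere's actual pipeline, which would need a full 3D rotation for pitch and roll): for an equirectangular panorama, compensating the wearer's head yaw amounts to circularly shifting each pixel row so the transmitted frame stays world-aligned regardless of where the wearer is looking.

```python
def stabilize_yaw(row, yaw_deg):
    """Counter-rotate one row of an equirectangular panorama by the wearer's
    head yaw (in degrees), so the frame stays aligned with the world rather
    than with the wearer's head. A full frame is stabilized by applying the
    same shift to every row; this sketch handles the yaw axis only."""
    width = len(row)
    shift = round(yaw_deg / 360.0 * width) % width
    return row[shift:] + row[:shift]
```

Because the omnidirectional frame covers the full 360 degrees, this compensation loses no content; the viewer's own head motion then selects which part of the stabilized sphere to display in the HMD.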
Presented at Augmented Human 2014 International Conference:
Shunichi Kasahara and Jun Rekimoto, JackIn: Integrating First-Person View with Out-of-Body Vision Generation for Human-Human Augmentation (AH2014)
JackIn is a new human-human communication framework for connecting two or more people. Through first-person video streamed from a person (called the Body) wearing a see-through head-mounted display and a head-mounted camera, another person (called the Ghost) participates in the shared first-person view. With JackIn, people's activities can be shared, and assistance or guidance can be given by drawing on other people's expertise. This can be applied to daily activities such as cooking lessons, shopping navigation, education in craftwork or electrical work, and sharing the experience of sporting and live events. To improve the first-person viewing experience, we developed an out-of-body view in which first-person images are integrated to reconstruct the scene around the Body, so that the Ghost can virtually control the viewpoint and look around the space surrounding the Body. We also developed a tele-pointing gesture interface. We conducted an experiment to evaluate the effectiveness of this framework and found that Ghosts can understand the spatial situation of the Body.
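One simple way to think about the Ghost's independent viewpoint (a crude stand-in for the paper's spatially registered out-of-body view, with entirely hypothetical data structures) is to keep a set of past Body camera frames tagged with their view directions and serve whichever frame best matches the direction the Ghost is currently looking:

```python
def pick_keyframe(ghost_dir, keyframe_dirs):
    """Return the index of the stored Body keyframe whose camera direction
    best matches the Ghost's requested view direction, measured by dot
    product (greatest cosine similarity for unit vectors)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(keyframe_dirs)),
               key=lambda i: dot(ghost_dir, keyframe_dirs[i]))
```

A real system would blend and spatially register the frames rather than snap between them, but the selection step above captures why the Ghost can look where the Body is not currently looking.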