Playspace Environment

When pico projectors are integrated with Kinect-like interfaces, we’ll have a fully embedded “Minority Report”-style interface, in which every surface is a display and we interact with our surrounding environment using our entire body, instead of just our fingers.

When the playroom is the computer – [physorg.com]

The prototype of the Playtime Computing system consists mainly of three door-high panels with projectors behind them; a set of ceiling-mounted projectors that cast images onto the floor; and a cube-shaped, remote-controlled robot, called the Alphabot, with infrared emitters at its corners that are tracked by cameras mounted on the ceiling.

But the system is designed to make the distinctions between its technical components disappear. The three panels together offer a window on a virtual world that, courtesy of the overhead projectors, appears to spill into the space in front of it. And most remarkably, when the Alphabot heads toward the screen, it slips into a box, some robotic foliage closes behind it, and it seems to simply continue rolling, at the same speed, up the side of a virtual hill.
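The ceiling cameras track the Alphabot by the infrared emitters at its corners, which is enough to recover both its floor position and which way it is facing. As an illustrative sketch only (the function, coordinate conventions, and marker ordering here are assumptions, not the actual Playtime Computing code), a pose estimate from four tracked corners might look like:

```javascript
// Hypothetical sketch: recover the Alphabot's floor position and heading
// from the four ceiling-tracked infrared corner markers.
// Assumed corner order: front-left, front-right, rear-right, rear-left,
// in floor coordinates as reported by the overhead cameras.
function poseFromCorners(corners) {
  // Position: centroid of the four corner markers.
  const cx = corners.reduce((s, p) => s + p.x, 0) / corners.length;
  const cy = corners.reduce((s, p) => s + p.y, 0) / corners.length;

  // Heading: direction from the rear-edge midpoint to the front-edge midpoint.
  const front = { x: (corners[0].x + corners[1].x) / 2,
                  y: (corners[0].y + corners[1].y) / 2 };
  const rear  = { x: (corners[2].x + corners[3].x) / 2,
                  y: (corners[2].y + corners[3].y) / 2 };
  const heading = Math.atan2(front.y - rear.y, front.x - rear.x);

  return { x: cx, y: cy, heading };
}
```

A pose like this is what lets the system hand the robot off seamlessly: when the tracked position reaches the screen, the projectors can pick up a virtual Alphabot at the same spot and speed.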

Windows 7 gets the Minority Report treatment using Kinect – [gizmag.com]

In the 2002 movie Minority Report, part of the “way out there” 2054 technology was a computer system that Tom Cruise navigated his way through via arm and hand gestures. That technology – minus the holograms – has now officially arrived 44 years ahead of schedule, thanks to the design team at tech firm Evoluce. With support from Microsoft, the company has created prototype software which allows Microsoft’s Kinect gesture-based video gaming platform to control Windows 7 applications. PC users will likely soon be able to “swim” through Google Earth images, write on-screen messages in the air, and surf the Internet without cramping their mousing hand.

The new software, which acts as a bridge between Kinect and Windows 7, is based on Evoluce’s Multitouch Input Management driver. The team altered the driver to include a multi-gesture control mode for applications running under Windows 7, including those using Flash and Java. Video and photos of the system have just been posted by the company, which says it could revolutionize many aspects of the computer-using experience. Possible applications for the technology include office, education, point of sale, medical and (naturally) gaming systems.
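At its core, a bridge like this has to translate hand positions tracked by the Kinect (in meters, in sensor space) into the screen coordinates that Windows input events expect. Evoluce’s driver internals aren’t public here, so the following is only a minimal sketch of that translation step, with an assumed interaction zone and made-up function names:

```javascript
// Minimal sketch of the coordinate translation a Kinect-to-desktop bridge
// performs: map a tracked hand position (meters, sensor space) to a pixel
// coordinate. The ±0.5 m interaction zone is an assumption for illustration.
function handToScreen(hand, screen) {
  const zone = { xMin: -0.5, xMax: 0.5, yMin: -0.5, yMax: 0.5 };
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));

  // Normalize to [0, 1]; Kinect y grows upward, screen y grows downward.
  const nx = (clamp(hand.x, zone.xMin, zone.xMax) - zone.xMin) /
             (zone.xMax - zone.xMin);
  const ny = (zone.yMax - clamp(hand.y, zone.yMin, zone.yMax)) /
             (zone.yMax - zone.yMin);

  return { x: Math.round(nx * (screen.width - 1)),
           y: Math.round(ny * (screen.height - 1)) };
}
```

A hand centered in the zone lands at the middle of the screen; from there, a driver would synthesize the corresponding multitouch or pointer events for Windows 7.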

DepthJS – [vimeo.com]

Kinect + Computer Vision + Javascript

DepthJS is a web browser extension that allows any web page to interact with the Microsoft Kinect via Javascript.

Navigating the web is only one application of the framework we built – that is, we envision all sorts of applications that run in the browser, from games to utilities tailored to particular sites. The great part is that now web developers who specialize in Javascript can work with the Kinect without having to learn any special languages or code. We believe this will allow a new set of interactions beyond what we first developed.
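The appeal for web developers is that gesture input arrives as ordinary JavaScript events, subscribed to like clicks or keypresses. The event names and dispatcher below are hypothetical – this is a sketch of the idea, not the actual DepthJS API:

```javascript
// Illustrative sketch of the DepthJS idea: Kinect gestures surface as
// plain JavaScript events that page scripts subscribe to.
// Event names and the GestureBus class are invented for this example.
class GestureBus {
  constructor() { this.handlers = {}; }
  on(type, fn) { (this.handlers[type] ||= []).push(fn); }
  emit(type, detail) { (this.handlers[type] || []).forEach(fn => fn(detail)); }
}

const kinect = new GestureBus();
const log = [];

// A page script reacts to gestures just like any other input event.
kinect.on('swipeLeft',  () => log.push('previous page'));
kinect.on('swipeRight', () => log.push('next page'));
kinect.on('handMove',   ({ x, y }) => log.push(`pointer at ${x},${y}`));

// The extension's native side would emit these from Kinect tracking data.
kinect.emit('handMove', { x: 120, y: 80 });
kinect.emit('swipeRight');
```

No depth-camera code, no special languages – the page only sees events, which is what lets ordinary Javascript developers build on the Kinect.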

MIT Media Lab – Fluid Interfaces Group – [media.mit.edu]

Why do we still use a keyboard and mouse to interact with digital information? This mode of human-computer interaction, invented more than 40 years ago, severely constrains our ability to access and interact naturally with digital content. Computer systems lack the contextual knowledge to offer relevant information when and where we need it. Further, traditional screen-based interfaces divert our attention in mobile and social situations. They are designed for a single user, and not well suited to accommodate collaborative activities.

These are the problems that motivate our research. Our group designs new interfaces that integrate digital content in people’s lives in more fluid and seamless ways. Our aim is to make it easier and more intuitive to benefit from the wealth of useful digital information and services. Our work is focused in the following areas:

* Augmented Experiences: We augment a person’s experience of their surroundings with relevant digital information. We try to make the experience as seamless as possible, blending the digital information into the physical environment and making interaction with that information natural and fluid.

* Responsive Objects: We alter everyday objects by embedding sensors, actuators and displays so that the objects can respond to people using them in meaningful, noticeable ways.

* Collaborative Interactions: We experiment with novel interfaces that are designed from the start for use by multiple people. The projects support collaborations ranging from a few people to very large groups, and further differ in whether they support collocated versus remote collaboration, and synchronous versus asynchronous collaboration.

* Programmable Materials: We invent interfaces and machines for control and manipulation of materials such as paper, fabric, wood and food. Our goal is to endow materials and manufacturing with some of the advantages associated with the digital world such as modifiability, programmability, responsiveness and personalization.

SEE ALSO:
Luminous Room
Pico Projectors
AR Haptic Floor Tiles
Singing Optical Sensor Fibers
Sixth Sense
…other Augmented Reality articles
