
Kinect hack allows potential 3D teleconferencing


A new hack of Microsoft's popular Kinect gaming accessory by a university professor and a graduate student may soon make "affordable" 3D videoconferencing possible. University of North Carolina at Chapel Hill graduate student Andrew Maimone said his setup combines the Kinect's depth cameras with a data-merging algorithm and filters.

"Our system is affordable and reproducible, offering the opportunity to easily deliver 3D telepresence beyond the researcher's lab," he said in his paper abstract.

He said his proof-of-concept telepresence system offers "fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus."

The algorithm merges data from multiple depth cameras and works together with techniques for automatic color calibration and for preserving stereo quality even at low rendering rates. The system also employs a fully GPU (Graphics Processing Unit)-accelerated data processing and rendering pipeline that applies hole filling, smoothing, data merging, surface generation, and color correction at rates of up to 100 million triangles per second on a single PC and graphics board.

Maimone's concept also includes a Kinect-based markerless tracking system that combines 2D eye recognition with depth information, allowing head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display.

An article on PC World said the paper, overseen by Professor Henry Fuchs, uses four Kinect sensors to capture the same scene from different angles. "Although the hack still looks a little rough, it certainly paves the way for a whole different aspect of creating cool Kinect hacks without the need for adding reference points," it said.

Earlier this year, a group of students from the Massachusetts Institute of Technology hacked the Kinect to "enhance" distance-based Internet communication.

"The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to commonly engage in videoconference. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. We present a system for enhancing distance-based communication by augmenting the traditional video conferencing system with additional attributes beyond two-dimensional video," Lining Yao, Anthony DeVincenzi, Ramesh Raskar, and Hiroshi Ishii said in their paper.

"With Kinect camera and sound sensors, we explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location," they added.

Using a Kinect camera and sound sensors, the students described at least four features that can enhance a videoconference:

Talking to Focus, where the system focuses on those currently speaking and can blur those who are not. The system can also display vital information about the speaker, including name and speaking time. (A minimal sketch of this effect appears after the list.)

Freezing Former Frames, where people who do not want to be noticed by the other side can freeze themselves into a still image for a short time – handy for someone who wants to appear to sit and listen while actually checking email or holding a short side conversation.

Privacy Zone, where users can render themselves, or a specified area, invisible with a gestural command. The effect does not interrupt objects moving in the foreground.

Spatial Augmenting Reality, where people can click on certain objects on the screen and see augmented information about them remotely.
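The "Talking to Focus" effect above can be approximated with little more than a depth map aligned to the color image. The sketch below is a minimal illustration, not the MIT team's actual code: the function name and the idea of feeding it a speaker depth estimated elsewhere (say, by audio localization or a skeleton tracker) are assumptions made for the example.

    import cv2
    import numpy as np

    def talking_to_focus(frame, depth_mm, speaker_depth_mm,
                         tolerance_mm=300, ksize=31):
        """Blur everything except pixels near the active speaker's depth.

        frame: HxWx3 BGR image; depth_mm: HxW depth map in millimeters,
        assumed pixel-aligned with the color image. speaker_depth_mm is a
        hypothetical input from audio localization or skeleton tracking.
        """
        # Blur the whole frame once, then paste the sharp speaker back in.
        blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
        in_focus = np.abs(depth_mm.astype(np.int32) - speaker_depth_mm) < tolerance_mm
        out = blurred.copy()
        out[in_focus] = frame[in_focus]  # keep the speaker's depth band sharp
        return out

Everything outside a depth band around the speaker gets a Gaussian blur; this is a simple stand-in for whatever segmentation the real system uses.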
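The data-merging step in Maimone's UNC pipeline, described earlier, is at heart geometric: each Kinect's depth map is back-projected into 3D and transformed into a shared coordinate frame. Below is a minimal CPU sketch of just that merging step, assuming pinhole intrinsics and a one-time extrinsic calibration per camera; the hole filling, smoothing, surface generation, and color correction stages of the real GPU pipeline are omitted.

    import numpy as np

    def depth_to_points(depth_mm, fx, fy, cx, cy):
        """Back-project a depth map (millimeters) into 3D points in the
        camera's own coordinate frame, using the pinhole model."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm / 1000.0                      # meters
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                  # drop invalid (zero) depth

    def merge_clouds(depth_maps, intrinsics, extrinsics):
        """Transform each camera's points into a shared world frame and
        concatenate. extrinsics[i] is a 4x4 camera-to-world matrix from a
        one-time calibration step (not shown)."""
        clouds = []
        for depth, (fx, fy, cx, cy), T in zip(depth_maps, intrinsics, extrinsics):
            pts = depth_to_points(depth, fx, fy, cx, cy)
            homo = np.hstack([pts, np.ones((len(pts), 1))])
            clouds.append((homo @ T.T)[:, :3])
        return np.vstack(clouds)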
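Likewise, the markerless head tracking Maimone describes pairs 2D eye recognition with depth. The following rough sketch shows the same idea, using OpenCV's stock Haar eye detector as a stand-in for the paper's own eye recognizer; function names and parameters are illustrative assumptions, not the published method.

    import cv2
    import numpy as np

    # Stock OpenCV eye detector, used here only as an analogous stand-in
    # for the paper's 2D eye recognition step.
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def track_head_3d(gray, depth_mm, fx, fy, cx, cy):
        """Return an approximate 3D eye position, or None if not found.

        gray: grayscale color frame; depth_mm: depth map aligned to it.
        The resulting 3D point could then drive the rendered viewpoint
        of a parallax barrier autostereoscopic display."""
        eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
        if len(eyes) == 0:
            return None
        x, y, w, h = eyes[0]                  # take the first detection
        u, v = x + w // 2, y + h // 2         # eye center in pixels
        z = depth_mm[v, u] / 1000.0           # meters; 0 means no reading
        if z == 0:
            return None
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])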
The MIT setup includes two networked locations, each with a video screen for viewing the opposite space; a standard RGB digital web camera enhanced by a depth-sensing 3D camera such as the Kinect; and calibrated microphones. — LBG, GMA News
