Research
Research Labs and Groups
Name of Lab or Group | About | Contact |
---|---|---|
ILLIXR (Illinois Extended Reality) Lab | The ILLIXR lab works on hardware and software systems for extended reality as part of a broader architecture and systems effort for the post-Moore era. The group developed and maintains the first-of-its-kind end-to-end open-source XR system testbed (ILLIXR) to democratize XR systems research, development, and benchmarking. Collaborating with researchers in networking, programming systems, and security and privacy, the group is developing hardware and software technologies for scalable, multiuser, distributed XR systems. | Sarita Adve |
Earable Computing | The group is defining a new vision of earable computing, the next significant milestone in wearable computing, envisioning a transition from today’s earphones to "earables" analogous to the transformation from basic phones to smartphones. The group works on convergent sensing, processing, and communications to enable such devices. | Romit Roy Choudhury |
MONET (Multimedia Operating Systems and Networking) Group | The MONET group conducts research across many areas of distributed multimedia systems, including distributed XR and 360-degree video streaming systems. | Klara Nahrstedt |
iSENS (Wireless, Sensing, and Embedded Networked Systems) Lab | iSENS is creating the next generation of intelligent wireless systems for communication and sensing with applications in autonomous systems, mixed reality, and IoT. | Elahe Soltanaghai |
Wang Computer Vision Group | Wang's research group focuses on the intersection of 3D computer vision, immersive computing, and robotics, particularly in two areas: (a) developing computational algorithms to sense and perceive the 3D environment, and (b) creating accurate and practical digital replicas of objects and scenes. | Shenlong Wang |
Intelligent Motion Lab | The Intelligent Motion Lab studies planning and control for dynamic, high-dimensional systems in complex environments. Applications include intelligent vehicles, robot manipulation, legged locomotion, human-robot interaction, and robot- and computer-assisted medicine. In its Tele-robotic Intelligent Nursing Assistant (TRINA) project, the lab investigates immersive telepresence techniques for robot avatars that allow human operators to sense, communicate, and act in a remote environment; this work was demonstrated in the ANA Avatar XPRIZE competition as Team AVATRINA. | Kris Hauser |
ARCANA (Architects Reimagining Computing Around New Applications) Research Group | The group explores several aspects of designing data-centric computer architectures and systems. | Saugata Ghose |
LLVM Research Group | The group develops new programming systems and compilation technologies. Recent and ongoing projects include compilers and IRs for domain-specific heterogeneous systems and distributed systems such as for immersive computing. | Vikram Adve |
Chris Fletcher Research Group | The group works on security and privacy (from applied cryptography to hardware-based attacks and defenses) and domain-specific acceleration (from algorithm to hardware design), with ongoing projects for immersive systems. | Chris Fletcher |
SpAA (Speech Accommodation to Acoustics) Lab | The SpAA Lab works on speech intelligibility, room acoustics, and musical acoustics, as well as voice production and perception. | |
The Statistical Speech Technology Group | The group develops theory, software, and databases in support of universally accessible spoken language user interfaces. It includes the Speech Accessibility Project to make speech recognition more inclusive of diverse speech patterns. | Mark Hasegawa-Johnson |
NCSA Visualization Lab | The advanced visualization group closes the gap between what we learn and how, turning raw data into powerful visual displays that resonate with meaning. From the inner workings of the human cell to complex events unfolding at the outer edge of space, visualizations help researchers gain new insights on their work and share them with the world. The team's award-winning visualizations of data from large, complex computer models enhance scientific research. Using a breadth of technologies, including custom software, machine learning, and much more, the team brings science to life across a wide range of domains, from astronomy to earth sciences to genomics. | Kalina Borkiewicz |
John Hart Graphics Group | The group works on computer graphics, data visualization, and virtual reality. | John Hart |
Magnetic Resonance Functional Imaging Lab | MRFIL develops MRI acquisition, reconstruction, and image interpretation approaches for understanding the brain. It develops neuroimaging approaches, dynamic imaging, and visualizations that include virtual reality for presurgical planning. | Brad Sutton |
EmIT (Embodied and Immersive Technologies) Group | The group examines theories and designs for learning within emerging media platforms; e.g., simulations, virtual environments, mobile devices, video games, and augmented and virtual reality. | Robb Lindgren |
Immersive Data Lab | The lab is centered on three interrelated areas: data science, data-driven design, and STEM education. It explores the potential for immersive technologies to help learners understand complex and abstract scientific concepts, and works to advance the understanding of learner behaviors using the massive, unstructured data generated by immersive technologies. | Jina Kang |
DiTA (Digital Technologies in Architecture) | The lab explores ideas at the intersection of computational design, advanced manufacturing, and materiality, and is investigating design, fabrication, and construction using immersive technologies. | Niloufar Emami |
Siebel Center for Design | The center works on human-centered design, including incorporating design thinking into immersive technologies and experiences. | Rachel Switzky |
Media Technology and Social Behavior Lab | The lab brings together researchers to study how information, communication, and media technologies shape our social world through interdisciplinary and empirical research. The lab is devoted to better understanding people’s thoughts, feelings, and behaviors in relation to emerging technologies and exploring ways these advancements can open up opportunities or present unforeseen challenges. | Mike Yao |
RIVET (Research for Immersive Virtual Educational Technology) Lab | The lab is devoted to increasing access to quality education through virtual reality. | Laura Shackelford |
Cromley Lab | The lab performs research on the effectiveness of multimedia technologies for STEM learning, including VR and haptic interfaces. | Jennifer Cromley |
Michael Twidale Research Lab | The group looks at issues at the intersection of computer-supported cooperative work, computer-supported collaborative learning, human-computer interaction, and sociotechnical systems design. The group is interested in how people learn technologies, how they do this individually, together, and online, and how they tailor technologies to their needs, as well as in metaphors, mental models, misconceptions, and metadata. | Michael Twidale |
Laboratory for Audience Interactive Technologies (LAIT) | LAIT researches and develops applications for personal mobile devices for use in live performance, including theater, music, dance, and performance art. | John Toenjes |
Steven LaValle Lab | The group focuses mainly on problems that involve continuous spaces, complicated geometric constraints, differential constraints, and/or sensing uncertainties. Such problems are fundamental in areas such as robotics, computer graphics, architectural design, and computational biology. The planning techniques developed in the group build on literature from robotics, algorithms, computational geometry, artificial intelligence, and control theory. | Steven LaValle |
More information coming soon