State-of-the-Art Equipment & Software

The Oregon Reality Lab is home to next-generation technologies available to all UO students, faculty, and staff who complete the necessary requirements for their use. Through our lab you can explore social virtual worlds and 360 virtual environments simulating everything from historical wonders, outer space, and ocean reefs to anatomical structures. We encourage a hands-on approach, giving students access to an abundance of technology: 360 cameras, audio and lighting kits, drones, Oculus Quest (1, 2, and 3) and HTC Vive VR headsets, HoloLens and Magic Leap mixed reality headsets, mobile devices, GoPros, and more. You can work with industry-standard software like Unity, Unreal Engine, Maya, ZBrush, Substance Painter, and Zapworks to build your own projects and bring your ideas to life on any of the lab's high-end gaming workstations. The lab also has dedicated studio spaces for immersive media research and photogrammetry, the process of digitally stitching together overlapping photographs to create 3D models and maps.


Whether you are seeking to learn, play, or create in virtual reality, the OR Lab invites you to explore everything it has to offer, and to request additional content or equipment as needed to enhance your enjoyment and production within the space.


The OR Lab offers limited equipment available for checkout, as well as equipment restricted to use within the lab. Documentation and learning materials, including instruction manuals, software-specific guides, and video tutorials, are available below.


In order to use the lab and/or its equipment, you must:

  • Be a UO student or employee who is either:

    • Enrolled in a class with scheduled OR Lab use, or

    • Engaged in research related to immersive media in coordination with OR Lab faculty and staff

  • Complete the necessary safety and training requirements, with lab staff sign-off

  • Treat equipment and materials appropriately

  • Review Lab Policies and Procedures

Immersive Media Glossary


Virtual reality (VR):

Virtual reality technologies replace the user's view and hearing of the physical world with a new digital one, transporting them through their audio and visual senses to worlds and experiences previously only imagined. VR can take place on a screen but is most commonly experienced with VR headsets. Examples include screen-based virtual worlds like Second Life, VR games like Beat Saber and Lone Echo, and social environments like Meta Horizon Worlds and Sansar.


Augmented reality (AR):

Augmented reality adds to a user's vision by inserting digital information and imagery into their natural view. For instance, a person using AR may see an image, animation, or video layered on top of the room in front of them. This technology is used in popular games such as Pokémon Go.


Mixed reality (MR):

Also called hybrid reality, mixed reality combines physical reality and digital content in a way that enables interaction with, and among, real-world and virtual objects.


Extended reality (XR):

A catch-all term for technologies that enhance or replace our view of the world, often by overlaying or immersing computer-generated text and graphics in real-world environments, virtual environments, or a combination of both.


Virtual Environment/World:

A computer-simulated place or environment in which users can interact with the interface and with each other. These interactions can be driven by keyboards, mice, head-mounted displays (HMDs), mobile devices, and more. Think of Second Life as an example.


Digital Games:

An interactive program for one or more players that involves interaction with a user interface or input device, meant to entertain, educate, and let players socialize and share experiences.


The Metaverse:

The metaverse is a hypothesized iteration of the Internet, supporting persistent online 3D virtual environments through conventional personal computing as well as virtual and augmented reality headsets. Metaverses, in some limited form, have already been implemented in virtual worlds such as Second Life. (Wikipedia)

[Figure: the XR spectrum]

LiDAR (Light Detection and Ranging): 

LiDAR uses eye-safe laser beams to "see" the world in 3D, accurately measuring the distances of surrounding objects.
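
At its core, most pulsed LiDAR works on a time-of-flight principle: emit a laser pulse, time the reflection, and convert the round trip into a distance. A minimal sketch of that arithmetic (generic math, not any particular sensor's API):

```python
# Time-of-flight range calculation: a minimal sketch of the math most
# pulsed LiDAR sensors rely on (not any specific device's API).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """The pulse travels out and back, so the one-way distance
    to the reflecting surface is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 200 nanoseconds after the pulse fires
# corresponds to a surface roughly 30 meters away.
print(f"{range_from_round_trip(200e-9):.2f} m")  # ~29.98 m
```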


360-degree video:

To make these immersive videos, videographers record a view in every direction at the same time with an omnidirectional camera or a collection of cameras. Watch a video about multimedia journalism master’s student and OPB reporter Cassandra Profita’s production of a 360-degree video for the Eagle Creek Fire project.
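
Behind the scenes, the stitched footage is typically stored as an equirectangular frame in which every pixel corresponds to a viewing direction. A small sketch of that mapping (the function name here is ours, for illustration only):

```python
import math

def direction_to_equirect_pixel(x: float, y: float, z: float,
                                width: int, height: int) -> tuple[int, int]:
    """Map a unit viewing direction to a (column, row) pixel in an
    equirectangular 360 frame: longitude spans the image width,
    latitude spans the image height."""
    longitude = math.atan2(x, z)                   # -pi .. pi around the viewer
    latitude = math.asin(max(-1.0, min(1.0, y)))   # -pi/2 .. pi/2 up/down
    col = (longitude / (2 * math.pi) + 0.5) * (width - 1)
    row = (0.5 - latitude / math.pi) * (height - 1)
    return round(col), round(row)

# Looking straight ahead (+z) lands at the center of a 4096x2048 frame.
print(direction_to_equirect_pixel(0.0, 0.0, 1.0, 4096, 2048))  # (2048, 1024)
```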


Volumetric Capture:

"Volumetric capture" (also known as "volumetric video") refers to recording a physical place, object, person, or even event in a way that makes it appear to take up three-dimensional space, which differs from 360-degree video that lacks that depth. This allows a viewer to rotate or move around the end experience. Volumetric capture works by using a series of cameras from different angles that are all filming at the same time. Computer algorithms stitch the views from these angles together to create volumetric images.


Gaussian Splatting:

3D Gaussian Splatting is a recent volume-rendering method for capturing real-life scenes as 3D data and rendering them in real time. The end results are similar to those of Radiance Field methods (NeRFs), but it is quicker to set up, renders faster, and delivers the same or better quality.
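
The "splatting" part of the name refers to compositing depth-sorted Gaussians front to back with standard alpha blending. A toy sketch of that per-pixel accumulation (just the blending step, not the full projection pipeline):

```python
import numpy as np

def composite_splats(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing for a single pixel:
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j).
    `colors` is N x 3 (RGB) and `alphas` holds N opacities, both
    already sorted nearest-first, as 3D Gaussian Splatting requires."""
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light still passing through
    for color, alpha in zip(colors, alphas):
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early exit once effectively opaque
            break
    return pixel

# Two overlapping splats: a half-opaque red one in front of a green one.
print(composite_splats(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                       np.array([0.5, 0.8])))  # [0.5, 0.4, 0.0]
```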


Photogrammetry:

This process digitally stitches together overlapping photographs, using a software program like RealityCapture, to create a 3D model or map. Watch a video explaining how multimedia journalism students used photogrammetry to map a virtual world.
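
The first step of that stitching is finding the same features in overlapping photos. A minimal sketch of feature matching using OpenCV's ORB detector (the image filenames are placeholders):

```python
import cv2

# Placeholder filenames: substitute any two overlapping photos of a scene.
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute descriptors in each photo.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the photos; these matched pairs mark the
# overlap that photogrammetry software triangulates into 3D points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two photos")
```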


Spatial audio (ambisonics):

This surround-sound technique mimics the way we hear in real life by capturing the characteristics of sound, such as its direction, as it travels through space.
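
In first-order ambisonics, for example, a mono source at a known direction is encoded into four channels (W, X, Y, Z) with fixed trigonometric gains. A small sketch of that encoding (the function name is ours; the gains follow the common FuMa convention):

```python
import numpy as np

def encode_first_order(mono: np.ndarray, azimuth: float,
                       elevation: float) -> np.ndarray:
    """Encode a mono signal into first-order B-format (W, X, Y, Z)
    for a source at the given angles in radians. W carries the
    omnidirectional part; X, Y, Z carry the direction."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front-back
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = mono * np.sin(elevation)                    # up-down
    return np.stack([w, x, y, z])

# A one-second 1 kHz test tone placed 90 degrees to the listener's left.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
bformat = encode_first_order(tone, np.pi / 2, 0.0)
print(bformat.shape)  # (4, 48000): the W, X, Y, Z channels
```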


Artificial Intelligence:

Artificial intelligence leverages computers, machines, and datasets to mimic the problem-solving and decision-making capabilities of the human mind. It encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines use AI algorithms to create expert systems that make predictions or classifications based on input data.
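
As a toy illustration of "predictions or classifications based on input data," a few lines of scikit-learn (assuming it is installed) train a classifier on a small bundled dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small bundled dataset: flower measurements mapped to species labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple model, then score its predictions on held-out data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```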

Rendering:

Rendering is the process of producing an image from three-dimensional data stored on your computer. It can take anywhere from seconds to days to render a single image or frame. There are two major types of 3D rendering, distinguished mainly by the speed at which images are calculated and processed: real-time rendering and offline (pre-)rendering.


Real-Time Rendering:

In real-time rendering, most common in video games and interactive graphics, 3D images are calculated at very high speed, so that scenes, which consist of many individual frames, appear to unfold in real time as players interact with the game or application. Video games, VR, AR, and MR all render in real time.
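
Concretely, a 60 frames-per-second experience gives the renderer roughly 16.7 milliseconds to simulate and draw each frame. A generic sketch of that loop (engine specifics vary; the function here is illustrative):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms to simulate and draw one frame

def run_frames(total_frames: int = 5) -> None:
    """Skeleton of a real-time loop: update, render, then sleep off
    whatever is left of the frame budget. Engines such as Unity and
    Unreal wrap this same pattern."""
    previous = time.perf_counter()
    for frame in range(total_frames):
        now = time.perf_counter()
        dt = now - previous  # seconds since the last frame
        previous = now
        # update_scene(dt) and render_scene() would run here; both must
        # finish within FRAME_BUDGET or the frame rate drops.
        elapsed = time.perf_counter() - now
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))
        print(f"frame {frame}: dt = {dt * 1000:.1f} ms")

run_frames()
```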

