What Is The R1 CPU Found In Apple's Vision Pro Headset?

At WWDC 2023, Cupertino-based tech giant Apple unveiled its first mixed-reality headset. Apple Vision Pro, or the company's "first spatial computer," costs $3,499 and will be available in early 2024. The high-end headset doesn't just compete with the likes of Meta's Quest Pro; it takes the augmented-reality game to the next level by overlaying virtual elements on the user's environment with ultra-high precision. However, to do that, the "most advanced personal electronics device" needs exceptional computational power, which is where the company introduced us to its dual-chip design.

Unlike any other headset, the Vision Pro features two chipsets under its sophisticated metal and glass design. First, there's the Apple M2 chip. Known for handling complex workflows on MacBooks while staying energy efficient, the M2 plays the same role in the headset. It takes care of all the staple operations, such as opening apps, multitasking, and browsing the web on visionOS, the "world's first spatial operating system."

However, to provide users with a lag-free three-dimensional experience that stays in sync with their movements, the company has designed the all-new R1 chip. Working in conjunction with the M2, the R1 ensures that the headset processes all spatial inputs in real time. Here's a deeper dive into the R1 chip in the Apple Vision Pro and how it helps the headset stand out.

Introducing Apple's R1 Chip For Processing Inputs

In the official press release, the company mentions that Vision Pro features an "entirely new input system controlled by a person's eyes, hands, and voice." However, to interpret these movements as inputs, the headset must capture and process them in real time. Otherwise, users would experience a significant lag, which could result in motion sickness. Fortunately, Apple gave this fair thought, and to provide the best possible mixed-reality experience, it designed the new R1 chip.

The R1 chip "processes inputs from 12 cameras, five sensors, and six microphones." These are not regular cameras or sensors. Apple mentions that it has equipped the device with the TrueDepth camera system and a LiDAR sensor to understand the three-dimensional space around users. Along with M2, the R1 chip also powers features like Immersive Environments that let users control the level of immersion in AR with the Digital Crown, spatial FaceTime, and new app experiences.

If that isn't impressive enough, the R1 chip does it all within 12 milliseconds. For reference, an average blink of an eye lasts about 100 milliseconds, meaning the R1 chip receives inputs, processes them, and streams new images to the displays roughly eight times faster than the blink of an eye. In other words, the Vision Pro should be able to follow users' head and eye movements with negligible lag, reducing the chances of discomfort.
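The comparison above boils down to simple arithmetic on the two figures the article cites. As a quick illustrative sketch (the 100 ms blink duration is a rough average, not an Apple specification):

```python
# Rough latency comparison using the figures cited in the article.
BLINK_MS = 100        # approximate duration of an average eye blink
R1_PIPELINE_MS = 12   # R1's stated time to process inputs and stream new images

speedup = BLINK_MS / R1_PIPELINE_MS
print(f"The R1 pipeline runs ~{speedup:.1f}x faster than a blink")
```

Dividing 100 ms by 12 ms gives roughly 8.3, which is where the "about eight times faster" figure comes from.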

Possible downsides of the R1 chip

To summarize, the R1 chip ensures "every experience feels like it's taking place in front of the user's eyes in real-time." But are there any hidden downsides to this approach? Well, the Vision Pro relies on two chipsets to run its operating system and process inputs, and naturally, it has to power both of them, which increases overall power consumption.

Although we don't know the R1's power consumption figures yet, Apple promises up to two hours of usage with the external battery pack, which seems a tad light for a device that is supposed to stream movies and live sports events. Further, the dual-chip approach likely added design complications, too, as accommodating two chipsets in a headset and managing their thermal dissipation while maintaining comfortable temperatures is tricky. Nonetheless, the dual-chip design, comprising the M2 and the R1, is unlike anything else on the market, and given Apple's experience with chipsets, it should deliver a seamless experience on the Vision Pro.