While Sony themselves weren't present in any significant capacity at the Game Developers Conference (GDC) 2022, that didn't stop exciting new PSVR2 secrets trickling out from other sources.
Announcing support for PSVR2 across their offerings, Unity – the makers of cross-platform game development tools – reminded us of the upcoming hardware's innate powers while hinting at some of its previously unseen capabilities and the possibilities open to bright-minded developers.
PSVR2 tech specs
In brief, PSVR2 features 4K OLED displays, a 110-degree field of view, a resolution of 2000 x 2040 per eye and 90 to 120Hz refresh rates. Meanwhile, improved motion tracking (over the previous PSVR hardware) promises a more immersive experience without a requirement for external tracking hardware.
Ahead of the system's launch, Unity's presentation at GDC focused on two of PSVR2's most exciting new features:
Firstly, Unity will feature extensive support for PSVR2's on-board eye tracking. This high-level tech is built into every headset and Unity see it as a major part of every offering on the platform. Not only is eye tracking a useful input into the game – such as knowing where the player is looking in order to highlight in-game choices and eliminate needless 'gorilla arm' waving – but the Unity team were keen to spell out the exciting possibilities behind the scenes when it's used in combination with foveated rendering.
Foveated rendering takes advantage of the inherent way the eye and brain work together to perceive an image. While it’s easy to believe that everything within your full field of vision is perfectly in focus and given equal prominence in your perception, in reality only a tiny part of your view – your ‘gaze’ (aka the part being focused on by your fovea) – actually gets your brain’s full attention.
Foveated rendering prioritises this gaze point above the rest of the frame, reducing detail in areas outside the fovea’s range. Now your true point of focus – revealed through the PSVR2’s eye tracking – can be given maximum care and attention with all other parts of the image taking a backseat. The result is a huge saving in render power with imperceptible impact for the user. After all, if you can’t ‘see’ it, then why render it?
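To make the idea concrete, here's a minimal, platform-agnostic sketch of gaze-based foveated rendering in Python. The function names, tile grid and shading-rate tiers are all illustrative assumptions, not Unity's or Sony's actual API – real engines apply this inside the GPU pipeline.

```python
# Illustrative sketch of gaze-based foveated rendering.
# All names and thresholds here are hypothetical, not a real SDK.

def shading_rate(tile_center, gaze, inner=0.15, outer=0.40):
    """Full detail near the gaze point, progressively coarser outside.

    tile_center and gaze are (x, y) in normalised screen coords [0, 1].
    Returns the fraction of full shading work spent on the tile.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < inner:
        return 1.0    # foveal region: full resolution
    if dist < outer:
        return 0.5    # near periphery: half the shading work
    return 0.25       # far periphery: quarter the shading work

def estimated_cost(gaze, tiles_per_axis=10):
    """Average shading cost over a tile grid, relative to uniform rendering."""
    step = 1.0 / tiles_per_axis
    total = 0.0
    for i in range(tiles_per_axis):
        for j in range(tiles_per_axis):
            center = ((i + 0.5) * step, (j + 0.5) * step)
            total += shading_rate(center, gaze)
    return total / (tiles_per_axis ** 2)
```

With the gaze in the centre of the frame, the average cost comes out well below 1.0 – that gap is the render power saved versus shading every tile at full detail.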
And new single-pass rendering further reduces CPU usage by traversing the scene only once when rendering for both eyes.
It all means that, using Unity's system, the team were able to achieve a 2.5x GPU performance gain when implementing fixed foveated rendering (based on an assumed fixed focal point), and this figure increased to a 3.6x boost when combined with eye tracking to focus the GPU's power.
In addition to knowing where the player is looking, the hardware (and Unity's software) can also detect pupil diameter – which could be used to deliver optimum display settings – and 'blink states', with the game knowing if the player had their eyes closed and potentially missed a vital part of the action.
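A blink state could be derived from per-frame eye-openness samples along these lines – a hedged sketch only, with made-up names and thresholds rather than the headset's real SDK, which reports this directly:

```python
# Hypothetical blink-state tracking from per-frame eye-openness samples
# (0.0 = fully closed, 1.0 = fully open). Illustrative only.

class BlinkDetector:
    def __init__(self, closed_threshold=0.2, min_closed_frames=2):
        self.closed_threshold = closed_threshold
        self.min_closed_frames = min_closed_frames
        self._closed_run = 0
        self.eyes_closed = False

    def update(self, openness):
        """Feed one frame's openness sample; returns True while eyes are closed."""
        if openness < self.closed_threshold:
            self._closed_run += 1
        else:
            self._closed_run = 0
        # Require consecutive closed frames to avoid flicker from sensor noise.
        self.eyes_closed = self._closed_run >= self.min_closed_frames
        return self.eyes_closed
```

A game could, for instance, hold back a jump-scare or replay a key moment while `eyes_closed` is true.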
More realistic avatars
Similarly, eye tracking allows for the creation of more realistic avatars for social apps and the like, animating more lifelike faces and expressions and bringing digital likenesses to life. Unity even offers PSVR2 'wink detection' for use in social apps, which could also be implemented in game to trigger an interaction.
And the benefits don’t end there. Eye tracking can also produce metrics showing where the player is directing their attention, and so could be used to trigger in-game clues and nudges for gamers walking in circles, missing the point or generally barking up the wrong tree.
And alongside eye tracking, PSVR2’s enhanced haptics similarly push the envelope and deliver exciting new possibilities.
Players can ‘feel’ the 3D world around them
In their demo, the Unity team suggested multiple uses for the haptic motor inside the headset. This could be triggered to represent the rush of an object passing close to the head or to let the player know when they’ve reached a boundary in a play area or come in contact with a surface.
Most interestingly the team offered up the possibilities of ‘3D haptics’ where the vibration units in the left controller, headset and right controller could be triggered at different strengths and timings. By triggering the left controller at full strength, the headset at medium (a few milliseconds later) and the right at low strength milliseconds after that, the player will mentally build a 3D picture and perceive an impact or hit to their left.
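The staggered-pulse idea above can be sketched as a simple schedule builder. Everything here – device names, the strength and delay formulas – is an assumption for illustration, not Sony's or Unity's actual haptics API:

```python
# Sketch of the '3D haptics' idea: stagger pulse strength and timing
# across left controller, headset and right controller so the player
# perceives a directional impact. Hypothetical names and formulas.

def directional_impact(direction, base_delay_ms=10):
    """Build a pulse schedule for an impact from `direction` (-1.0 left .. 1.0 right).

    Returns (device, strength, delay_ms) tuples: strongest and earliest
    on the side the impact comes from.
    """
    # How close each device is to the impact direction, 0..1.
    closeness = {
        "left_controller": (1.0 - direction) / 2.0,
        "headset": 0.5,
        "right_controller": (1.0 + direction) / 2.0,
    }
    schedule = []
    for device, c in closeness.items():
        strength = 0.25 + 0.75 * c          # never fully silent
        delay = base_delay_ms * (1.0 - c)   # nearest device fires first
        schedule.append((device, round(strength, 2), round(delay, 1)))
    return sorted(schedule, key=lambda s: s[2])
```

For a fully left-side impact (`direction=-1.0`) this yields the pattern described above: left controller at full strength immediately, headset at medium a few milliseconds later, right controller at low strength after that.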
And it’s worth noting that the hand controllers not only feature vibration motors but speakers too, allowing developers to further build a sense of a 3D space around the player through the combined use of sound and haptics. Also clever controller features such as finger touch detection lets the game know where your finger is resting on a button without actually pressing it, potentially allowing more realistic rendering of in-game hands and finger positions.
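One way touch states could feed a rendered hand is a simple mapping from per-button touch to finger curl. This is a toy sketch with invented button names; the real controller reports touch through the platform's input API:

```python
# Illustrative mapping from per-button touch states to a hand pose.
# Button names and the curl convention are made up for this sketch.

def finger_pose(touching):
    """touching: dict of button name -> bool (finger resting on it).

    Returns a curl value per finger: 0.0 extended, 1.0 resting on a control.
    """
    return {
        "thumb": 1.0 if touching.get("thumbstick") or touching.get("face_buttons") else 0.0,
        "index": 1.0 if touching.get("trigger") else 0.0,
        "middle": 1.0 if touching.get("grip") else 0.0,
    }
```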
And with six degrees of freedom (6DoF) tracking and the adaptive triggers lifted from the PS5's DualSense controller (which can be made stiffer or looser depending on what you're pushing, lifting or moving in game), it all adds up to a new experience that the player can feel and hear in their hands.
Multiple display modes put you in the game
Finally, the team outlined the hardware's different output modes, which allow an external screen either to mirror the player's view – letting passive viewers see what the headset wearer is enjoying – or to provide an alternative, interactive view so others can play along and be part of the action from a different viewpoint.
It’s certainly an exciting package of features and we can’t wait to see what smart developers will be able to do with it. Sign up at Unity to find out more.