UX design for Apple’s Vision Pro AR headset, by Kamil Kołodziejczyk
With UI for AR already out of the way, I spent more time designing 3D apps. And here I am with another Medium post about my findings. This time around I want to focus on the UX part.
We can figure out HOW to design for AR, but first: WHAT should we design?
I’ve been working on AR interfaces for a year. Coming from flat design, one thing became immediately clear to me: these apps should be treated as a workspace, not a screen. Huh.
To complicate things even more, VR concepts might not apply to AR. VR’s strength is a distraction-free environment. This, in turn, can make you feel detached and, well… lonely.
AR solves that, although here the hardware still leaves a lot to be desired. I’ve been working mainly on Tilt Five, but it’s the same with HoloLens or Magic Leap: image quality, tracking, and FOV quickly become an issue.
That’s why trying Vision Pro feels so refreshing: Apple wants to show us the benefits of AR, even though the technology is not ready yet.
Vision Pro is a VR headset pretending to be AR… and it’s great at it.
Vision Pro shows how AR can improve our workflow. Arranging apps and remembering their locations, as if we were interacting with real objects. Apps are no longer hiding off-screen.
Now we can rely on our spatial memory. In AR, that’s the real deal. It’s easy to recall where you put something when your room is the point of reference.
This changes the way we perceive software. To design for it, you should consider the purpose of your app:
- Consuming content vs. creating content. Is your whole app something you’d want to keep in a specific place, as one object? Or is its content something you would like to move around?
- Sitting down vs. moving around. How much time will we spend in the app? Is it okay to move around, or do we need to focus and stay in one place? (See the sketch after this list.)
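On visionOS, these two questions roughly map onto the scene types Apple exposes to developers. Below is a minimal SwiftUI sketch, not something from the original post: the identifiers and view contents are placeholders. It contrasts a flat window for seated consumption, a volumetric window for an app that lives as one object in the room, and a mixed immersive space for content you move around in.

```swift
import SwiftUI

// Minimal visionOS sketch mapping the questions above onto scene types.
// Identifiers and view contents are placeholders for illustration.
@main
struct SpatialSketchApp: App {
    // Mixed immersion keeps the real room visible instead of replacing it.
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // A flat window: content you consume while staying in one place.
        WindowGroup(id: "reader") {
            Text("Reading view")
        }

        // A volumetric window: the app is a single object you place
        // somewhere in the room and find again by spatial memory.
        WindowGroup(id: "object") {
            Text("3D object view")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)

        // An immersive space: content you create and move around in,
        // blended with your surroundings rather than covering them.
        ImmersiveSpace(id: "workspace") {
            Text("Workspace content")
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed)
    }
}
```

The point of the sketch is only that the “one object vs. spread-out content” and “sit vs. move” decisions are not abstract: they decide which scene type your app opens with.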
“But!” you might say, “this is not that different from VR”. Well… sure. We can do better.
How about enhancing the real world, instead of trying to cover it?
Vision Pro does some of it already. For example, it detects your Mac, and you can connect to it just by looking at it. What if those kinds of interactions were the norm? What if our glasses could, say, check what we are writing down?