RealityKit 2024: Create Head-Following UI & Sticky Buttons
Learn to build intuitive head-following UI and sticky buttons in RealityKit for visionOS. Master FollowTarget and TargetAlignment for engaging spatial apps.
Alejandro Vargas
Senior iOS and visionOS developer passionate about creating intuitive spatial computing experiences.
Ever felt like you were chasing menus in a virtual world? In the new frontier of spatial computing with visionOS, user interface (UI) design isn't just about looking good—it's about feeling right. A static menu floating in the distance is often a frustrating experience. Users need critical controls to be accessible, predictable, and unobtrusive. This is where head-following UI comes in.
Today, we're diving into one of RealityKit's most powerful features for creating intuitive spatial interfaces: the `FollowTarget` component. We'll learn how to build UI panels that gracefully follow the user's gaze and create "sticky" buttons that feel responsive and natural. Let's get started!
Why Head-Following UI is Crucial in visionOS
In the physical world, you have a personal space. Tools you need often, like your phone or a notepad, are kept within arm's reach. Spatial computing is no different. Head-following UI creates a persistent, personal workspace for the user.
- Accessibility: By keeping controls within the user's immediate field of view, you ensure they can always access important functions without having to physically turn or move to find a lost menu.
- Context: It's perfect for displaying contextual information, like system status, notifications, or tool palettes, that needs to be readily available but shouldn't permanently obstruct the main view of the immersive world.
- Comfort: A well-designed following UI feels like a natural extension of the user, rather than a rigid, bolted-on element. It moves with you, not at you.
The key is to strike a balance. A UI that's too rigidly locked to the head can cause discomfort and feel jarring. The goal is a soft, predictable follow, which is exactly what RealityKit enables.
The Core Concept: `FollowTarget` Component
At the heart of this functionality is a simple yet powerful RealityKit component: `FollowTarget`. As the name suggests, it makes one entity follow another. For our purposes, the most important target is the user's head.
When you add this component to an entity, you're essentially telling RealityKit, "Hey, I want this object to maintain a specific position and orientation relative to something else." That "something else" can be the user's head, hands, or any other entity in your scene.
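For instance, here's what attaching the component to a small status orb that trails the user's left hand might look like. This is a minimal sketch using the same `FollowTarget` API we'll configure step by step below; `statusOrb` is a hypothetical entity invented for this example:

// Hypothetical entity that trails the user's left hand
let statusOrb = ModelEntity(
    mesh: .generateSphere(radius: 0.02),
    materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
)
statusOrb.components.set(FollowTarget(
    target: .hand(.left),      // follow the left hand instead of the head
    targetAlignment: .camera,  // keep it facing the user
    offset: [0, 0.05, 0]       // hover 5cm above the hand
))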
Implementing a Basic Head-Following UI
Let's build a simple UI panel that stays in front of the user. We'll do this within a SwiftUI `RealityView`.
Step 1: Setting up the `RealityView`
First, we need a place to host our RealityKit content. A `RealityView` is the perfect bridge between SwiftUI and RealityKit.
import SwiftUI
import RealityKit

struct FollowerDemoView: View {
    var body: some View {
        RealityView { content in
            // We'll add our content here
        }
    }
}
Step 2: Creating the UI Entity
Next, let's create a simple visual element to act as our UI panel. A flat plane with a simple material will work perfectly. We'll make it a ModelEntity
.
// Inside the RealityView's make closure
// 1. Create the panel entity
let uiPanel = ModelEntity(
mesh: .generatePlane(width: 0.4, height: 0.2, cornerRadius: 0.04),
materials: [SimpleMaterial(color: .darkGray.withAlphaComponent(0.8), isMetallic: false)]
)
// 2. Add it to the scene
content.add(uiPanel)
Step 3: Applying the `FollowTarget` Component
This is where the magic happens. We'll create a FollowTarget
component and configure it to follow the user's head. The target
parameter is an enum, and .head
is exactly what we need.
// 3. Make the panel follow the user's head
let followComponent = FollowTarget(
target: .head,
targetAlignment: .camera,
offset: [0, -0.1, -0.6] // X, Y, Z - A bit below center, 60cm in front
)
uiPanel.components.set(followComponent)
Let's break down the parameters:
- `target: .head`: This tells the component to use the user's head as the anchor point.
- `targetAlignment: .camera`: This ensures the panel always faces the user, which is crucial for a readable UI.
- `offset: [0, -0.1, -0.6]`: This `SIMD3<Float>` vector positions the entity relative to the target. Here, we're placing it 10cm below the center of the user's gaze and 60cm in front of them. The Z-axis is negative for "in front."
Creating "Sticky" Buttons with `loose` Tracking
A UI that moves instantly with every tiny head movement can feel rigid and distracting. To create a more organic, "sticky" feel, we can adjust the trackingMode
.
The `FollowTarget` component offers two main tracking modes:
- `.tight`: The default mode. The entity is rigidly locked to the target's position and orientation. It moves instantly.
- `.loose`: This is the key to our sticky UI. The entity follows the target with a dampened, spring-like motion. It lags slightly behind and gracefully catches up, feeling much more natural and less intrusive.
Updating Our Component for a Sticky Feel
Let's modify our component to use .loose
tracking.
let followComponent = FollowTarget(
target: .head,
trackingMode: .loose, // <-- The magic is here!
targetAlignment: .camera,
offset: [0, -0.1, -0.6]
)
uiPanel.components.set(followComponent)
Just by changing that one parameter, the entire feel of the UI is transformed. It now feels less like a fixed overlay and more like an object with its own subtle physics, tethered to your view.
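If you're curious what that dampened motion looks like mechanically, here's a rough, hand-rolled approximation using RealityKit's standard custom System API. This is not `FollowTarget`'s actual internals, just an illustration of the easing idea; `LazyFollowComponent` and `LazyFollowSystem` are names invented for this sketch:

import RealityKit

// Hypothetical marker component: entities with this lazily chase a target.
struct LazyFollowComponent: Component {
    var target: Entity
    var offset: SIMD3<Float>
    var smoothing: Float = 0.1  // fraction of the remaining gap closed per frame
}

// A custom System that eases each follower toward its goal position,
// approximating the spring-like lag of .loose tracking.
struct LazyFollowSystem: System {
    static let query = EntityQuery(where: .has(LazyFollowComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let follow = entity.components[LazyFollowComponent.self] else { continue }
            let goal = follow.target.position(relativeTo: nil) + follow.offset
            let current = entity.position(relativeTo: nil)
            // Exponential ease: cover a fixed fraction of the remaining distance.
            entity.setPosition(current + (goal - current) * follow.smoothing, relativeTo: nil)
        }
    }
}

// Register once, e.g. at app launch:
// LazyFollowComponent.registerComponent()
// LazyFollowSystem.registerSystem()

Smaller smoothing values make the follower lazier; values near 1.0 approach rigid, `.tight`-style tracking.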
Making It Interactive
A UI panel isn't very useful without interactive elements. To make our entity tappable, we need to add two more components: InputTargetComponent
and CollisionComponent
. Then, we can use SwiftUI's gesture system to detect taps.
// Add components to make the panel interactive
uiPanel.components.set(InputTargetComponent())
// Generate a collision shape matching the panel's dimensions
// (a simple box is cheap and fits our flat plane well)
let shape = ShapeResource.generateBox(width: 0.4, height: 0.2, depth: 0.01)
uiPanel.components.set(CollisionComponent(shapes: [shape]))
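As a nice touch, visionOS can also highlight the panel when the user looks at it. Adding a `HoverEffectComponent` opts the entity into the standard system hover effect:

// Optional: standard system highlight when the user gazes at the panel
uiPanel.components.set(HoverEffectComponent())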
Now, back in our SwiftUI view, we can attach a gesture.
struct FollowerDemoView: View {
    @State private var panelColor: UIColor = .darkGray

    var body: some View {
        RealityView { content in
            // ... (setup code from before) ...
            let uiPanel = ModelEntity(
                // ...
                materials: [SimpleMaterial(color: panelColor.withAlphaComponent(0.8), isMetallic: false)]
            )
            // ... (component setup from before) ...
            content.add(uiPanel)
        } update: { content in
            // Update the material when the state changes
            if let panel = content.entities.first as? ModelEntity {
                panel.model?.materials = [SimpleMaterial(color: panelColor.withAlphaComponent(0.8), isMetallic: false)]
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    // Change color on tap
                    panelColor = (panelColor == .darkGray) ? .blue : .darkGray
                    print("Panel Tapped!")
                }
        )
    }
}
We've now connected a SwiftUI gesture to our RealityKit entity. The `.targetedToAnyEntity()` modifier directs the tap to entities with an `InputTargetComponent`. We also use the `update` closure of the `RealityView` to efficiently update the panel's material when our `@State` variable changes.
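If your scene grows to include several interactive entities, the targeted gesture's value reports exactly which entity received the tap, so a single gesture can serve them all. A small sketch using the same gesture API:

.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // value.entity is the RealityKit entity that was tapped
            print("Tapped entity:", value.entity.name)
        }
)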
Conclusion: Your Turn to Build
You now have the fundamental building blocks for creating user-friendly, responsive spatial UI in visionOS. By mastering the `FollowTarget` component, you can ensure your application's controls are always right where the user expects them to be, without feeling rigid or obtrusive.
We've covered:
- Why head-following UI is essential for a good user experience.
- How to use `FollowTarget` with `target: .head`.
- Fine-tuning position with `offset` and orientation with `targetAlignment`.
- Creating a natural, "sticky" feel with `trackingMode: .loose`.
- Making your following UI interactive with gestures.
Experiment with different offsets, tracking modes, and even other targets like `.hand(.left)` or `.hand(.right)`. The possibilities for intuitive spatial interfaces are vast. Go build something amazing!