Voyager is a game about science. Not just in the “you’re launching a space probe” sense, but in that the core loop is about experimentation. Make a hypothesis (I need to launch at about 45°), perform the test (3, 2, 1… LIFTOFF), observe the result (missed wide right!), refine the hypothesis (let’s try 30° next time). It took a long time to figure out how to make this process feel natural and intuitive.
In order for it to work, the game needed three things:
- Feedback: I need to be able to tell how my current aim differs from my last aim.
- Determinism: If I do the same thing twice, I should get the same result both times.
- Precision: I should be able to execute the move as I intended. If I try to launch at an angle of 19°, I should not accidentally launch at 20° or 18° instead.
Solving feedback was the easiest part. The aiming mechanism is very explicit, with an arrow showing the direction and relative power the probe will launch in. Previous launches are shown as dimmer arrows, so the player can easily line up a similar shot and tell how their new one differs. It also straight up tells the user the angle and power of their shot in text, so the values can be written down and reused later. And finally, in-flight probes leave a trail of breadcrumbs, so it’s easy to tell when and how a trajectory diverges from an older one.
All of that was in place for the very first version of Voyager’s aiming controls.
1. Float-angle aiming
Voyager for Windows Phone shipped with the same slingshot-style controls that power the game today. That input mechanic was born of the desire to be accessible and mobile-friendly, something any budding rocket scientist could pick up and play. But in this early version, the backing data type was just a normalized vector in the aim direction (opposite the “draw” or “pull” direction). That vector was used to calculate the probe’s launch trajectory. The “angle” shown in the UI was calculated on the fly from the vector, and shown to the available (1 decimal place) precision.
Unfortunately, this approach didn’t mesh well with my intended play-style for the game (or any play-style that elicited adjectives like “fun” or “strategic”). The main issue? It was basically impossible to launch two probes on the same trajectory. It wasn’t deterministic. Why is that?
In short, imprecise touchscreen controls messed with the game’s determinism. (Warning, math interlude.) On a 480×800 (WVGA) screen, the aiming circle is at most 480px wide. This means the circle of possible angles is 480 × π ≈ 1508px around, so each degree has about 4 pixels of separation. On a 60mm-wide WVGA phone, 4 pixels of movement is half a millimeter. And remember, the UI displays angles to 0.1° precision – reproducing a shot that exactly would require less than half a pixel of accuracy! There’s no way a player could reasonably expect (or be expected) to be able to reproduce a previous shot exactly. The game needed to become more deterministic.
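The arithmetic above is easy to check directly. A quick sketch (the 480px circle diameter and 60mm screen width are the WVGA figures from the paragraph above):

```python
import math

# WVGA portrait screen: 480px wide, roughly 60mm of physical width
screen_px = 480
screen_mm = 60.0

# the aiming circle is at most 480px in diameter
circumference_px = screen_px * math.pi       # ~1508px around
px_per_degree = circumference_px / 360.0     # ~4.2px per degree

# physical size of one degree of aim on the touchscreen
mm_per_px = screen_mm / screen_px            # 0.125mm per pixel
mm_per_degree = px_per_degree * mm_per_px    # ~0.52mm per degree

# 0.1 degrees of precision is less than half a pixel of finger movement
px_per_tenth_degree = px_per_degree / 10.0   # ~0.42px

print(round(circumference_px), round(px_per_degree, 1),
      round(mm_per_degree, 2), round(px_per_tenth_degree, 2))
# → 1508 4.2 0.52 0.42
```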
2. Whole-angle aiming
So I added a second control path, “whole-degree” angle controls that clamped the aim vector to one-degree “ticks.” The original control scheme stuck around as “expert controls.” I didn’t change the underlying data structure, just adjusted the vector to stick to directions associated with whole-number angles.
```csharp
public Vector3 AimDirection
{
    get
    {
        if (this.aimDirection.IsNaN() || this.aimDistance < this.InnerRadius)
        {
            return Vector3.Zero;
        }

        if (SettingManager.ExpertControls)
        {
            return this.aimDirection;
        }

        var angleDegrees = (int)MathHelper.ToDegrees(this.aimDirection.Angle());
        var angleRadians = MathHelper.ToRadians(angleDegrees);
        return Vector3.Transform(
            Vector3.Up,
            Matrix.CreateFromYawPitchRoll(0f, 0f, -angleRadians));
    }
}
```
By happy coincidence, 1 degree turned out to be a pretty good and readable separation between different trajectories at in-game scales. Trying to make the game easier to control, I had accidentally improved the feedback as well. Additionally, now that I knew that the player could launch at 5° or 6°, but not 5.5174°, I could design levels around specific “correct” angles. The set of possible solutions suddenly became possible to simulate, which gave me the tools I needed to create better and more interesting level designs.
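To give a sense of why whole numbers helped level design: once inputs are quantized, the entire launch space can be enumerated and simulated offline. A minimal sketch in Python – the `is_solution` check and the parameter ranges here are hypothetical stand-ins, not Voyager’s actual trajectory simulation:

```python
# brute-force the quantized launch space: every whole-degree angle
# crossed with every whole-percent power level
def enumerate_solutions(is_solution):
    solutions = []
    for angle in range(360):          # whole-degree angles only
        for power in range(1, 101):   # whole-percent power levels
            if is_solution(angle, power):
                solutions.append((angle, power))
    return solutions

# toy stand-in for a trajectory simulation: pretend any shot
# within 1 degree of 45, at 70-80% power, reaches the target
toy = enumerate_solutions(
    lambda angle, power: abs(angle - 45) <= 1 and 70 <= power <= 80)
print(len(toy))  # → 33
```

With only 360 × 100 candidate launches per stage, checking every one is cheap, so a designer can know exactly how many “right answers” a level has.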
When I started work on the Unity version of Voyager, I got rid of “expert controls” and made whole-degree controls the standard.
3. The Algorithm
Voyager: Grand Tour built on the foundation of the original Voyager. Launch parameters were saved as an angle and power level, no longer a vector. Whole numbers ruled the day. Unfortunately, aiming still didn’t feel exactly, perfectly right. It turns out that a degree spanning half a millimeter is just hard to hit, especially when the very act of lifting a finger off the screen (to launch the probe) can make the touch position change by entire millimeters. I could have continued chunking the angles into broader and broader sets (5° ticks? 10°?), but that would have dramatically reduced the number of possible “right answers” for each stage, and I didn’t want to sacrifice elegance, nuance, or player autonomy.
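That trade-off is easy to quantify – each coarser tick size carves the aim circle into fewer distinct launch angles:

```python
# distinct launch angles around the full aim circle for a given tick size
def angle_count(tick_degrees):
    return 360 // tick_degrees

# 1-degree ticks keep 360 possible angles; coarser ticks
# cut the space of possible "right answers" dramatically
print(angle_count(1), angle_count(5), angle_count(10))  # → 360 72 36
```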
So I considered other input mechanisms. Buttons to tick the angle a degree at a time. Typing the launch parameters in by hand. None were fast enough (especially on levels that require tracking a moving object) or mobile-friendly enough. My best option was to find a way to fix the problems with slingshot-style aiming.
By carefully analyzing the debug information from tons of play sessions, I started to uncover a pattern of failure cases. With the right algorithm, I should be able to recognize and combat those failures.
Here is that algorithm (scrubbed of debug logs and such for readability). It’s a little messy, but it worked remarkably well on the iOS platforms I targeted first.

```csharp
// change significantly close to one degree
const float angleDelta = 0.9f;
// change significantly close to one percentage point
const float powerDelta = 0.009f;

var past0 = this.aimQueue[2]; // most recent aim
var past1 = this.aimQueue[1]; // middle aim
var past2 = this.aimQueue[0]; // oldest aim

// if current time minus last aim update is less than delta time
// (meaning very sudden movement), and the change in angle or
// power is beyond thresholds, investigate
if (((Time.realtimeSinceStartup - past0.time) < Time.deltaTime) &&
    (Mathf.Abs(past1.angle - past0.angle) >= angleDelta ||
     Mathf.Abs(past1.power - past0.power) >= powerDelta))
{
    // rule out intentional rapid movement by checking old->middle aim delta
    if (Mathf.Abs(past2.angle - past1.angle) < (3f * angleDelta) &&
        Mathf.Abs(past2.power - past1.power) < (3f * powerDelta))
    {
        // if middle aim was more than two frames ago, use it
        // (significant evidence of a last-second twitch)
        if ((Time.realtimeSinceStartup - past1.time) > (2f * Time.deltaTime))
        {
            this.aimAngle = past1.angle;
            this.aimPower = past1.power;
        }
        else if ((Time.realtimeSinceStartup - past2.time) > (3f * Time.deltaTime))
        {
            // if the middle aim also showed evidence of twitch,
            // and the oldest does not, use that instead
            this.aimAngle = past2.angle;
            this.aimPower = past2.power;
        }
        else
        {
            // if the oldest shows evidence of twitch, but not enough to prove
            // intentional rapid movement, we can't reach any further conclusions
        }
    }
}
```
Then I ran the game on Android. And Kindle, and Windows Phone. Different operating systems, different devices… turns out they all react to touch input in slightly different ways. One of the perils of targeting many different platforms. I could either special case every divergent input profile the game ran into, from now until the end of time, or find a different approach.
4. Smoothing algorithm / Movement dampening
I found a different approach. My eventual solution was to implement a simple smoothing algorithm. By averaging values over time, jitters would be dampened and dramatic last-minute changes would generally work correctly: too fast (machine error) = no change, pretty fast (human input) = some change.
```csharp
// apply changes to rolling averages (used to smooth out errors)
this.rollingAimAngle = GameHelpers.AbsMod(
    ((this.rollingAimAngle * (1f - this.biasForAction)) +
     ((this.aimAngle + aimAngleChange) * this.biasForAction)),
    360f);
this.rollingAimPower =
    (this.rollingAimPower * (1f - this.biasForAction)) +
    ((this.aimPower + aimPowerChange) * this.biasForAction);

// convert rolling averages to actual aim values
this.aimAngle = Mathf.RoundToInt(this.rollingAimAngle);
this.aimPower = Mathf.RoundToInt(this.rollingAimPower);
```
The biasForAction value governs how much newer values are weighted in the rolling average. Too low, and the aiming arrow feels sluggish. Too high, and the averaging function provides too little smoothing to be useful. The best-feeling value in this case ended up being 0.5f. Finally, precision was coming into focus.
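This kind of rolling average is an exponentially weighted one. A small Python sketch of the same idea – assuming the 0.5 bias, using plain floats, and ignoring the 360° wrap-around that AbsMod handles in the real code – shows why it feels right: a one-frame twitch is halved, while a sustained, intentional movement converges within a few frames:

```python
def smooth(values, bias=0.5, start=0.0):
    """Exponentially weighted rolling average of per-frame aim values."""
    rolling = start
    out = []
    for v in values:
        # newer values are weighted by bias, history by (1 - bias)
        rolling = rolling * (1.0 - bias) + v * bias
        out.append(round(rolling))  # snap to whole degrees, like RoundToInt
    return out

# a last-instant 10-degree twitch on the release frame is halved...
print(smooth([20, 20, 20, 30], start=20.0))  # → [20, 20, 20, 25]

# ...while a deliberate, sustained 10-degree move converges quickly
print(smooth([30, 30, 30, 30], start=20.0))  # → [25, 28, 29, 29]
```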
(Notice the other subtle tweak in this version. Text always moved according to which quadrant the player aimed toward, but the rules governing how it moved changed: when aiming to the top-left, the text is now displayed in the top-right rather than the bottom-left, so it’s visible to both right- and left-handed players.)
Dampening has provided the best-feeling controls to date across the most platforms, and with the least special-casing (a serious concern for a one-person studio). Not that it’s perfect, but it covers most scenarios: large, intentional movements move the needle as intended, little tweaks still work, but unintentional misfires are uncommon. Someday, there might be yet more improvements. Perhaps the aimPowerChange can be adjusted logarithmically or exponentially, so sweeping changes have priority over minor adjustments.
For now, this is where Voyager’s aiming will stay. But the lessons learned, all the playtests and hours spent poring over debug data, have been instrumental, not just in making Voyager the game it is, but in preparing me to tackle new and even more exciting UX challenges in the future.