Testing early and often is key to iterative software development. Rather than waiting until the end of a project to validate that everything was done correctly, we frequently produce functional versions of the end product, even when not all of the required features are finished.
This approach will be familiar to anyone who has worked in an environment inspired by the principles of Agile. While there are countless flavours of this popular framework, one of its central tenets is to shorten feedback cycles and avoid the huge inefficiencies of waiting for reviews or handoffs.
When developing for the web browser, this workflow can be as easy as hitting refresh. We can still benefit from these short feedback loops in the world of voice apps.
The Actions console simulator is the best place to review early work
Anyone who’s built a mobile app will likely be familiar with using a simulator. As the name implies, it’s not a 1:1 reproduction of using an app on a real device, but it’s often a good enough facsimile to spot obvious issues. The console may be designed for developers, but anyone on the team should be able to use the simulator to test an action during development.
Head straight to the left-hand menu and click Simulator. Early in the project, we suggest selecting the speaker configuration and using voice input to simulate a voice-first setup. Much as designing for mobile first revolutionised responsive web design, we believe audio-only, voice-based interactions are still the backbone of the Assistant experience, so start there before moving on to other multimodal surfaces.
Having said that, one of the key advantages of the simulator is being able to watch the transcription of user inputs in real time. With a solid understanding of the user flow, you can see whether certain expected responses cause problems.
For example, we found the Assistant often mis-transcribed homophones that were clear to the human ear but confusing to the machine. Perhaps counterintuitively, the Assistant seems better at capturing complex inputs than simple ones. It often struggles to transcribe one-letter multiple-choice answers, misinterpreting ‘A’ as ‘hey’ or ‘C’ as ‘sea’, yet spells ‘Jamal Khashoggi’ correctly every time – something we on the team wouldn’t be able to do!
These confident mis-transcriptions lead to intense user frustration, as the meaning is obvious to the speaker but the reason for failure is not. Finding these pitfalls in the console allows you to add more robust error handling based on evidence that comes directly from the Assistant.
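If your action depends on short answers like these, one pragmatic fix is to map the mis-transcriptions you observe back to the input the user almost certainly meant. Here is a minimal sketch using the actions-on-google Node library with Dialogflow; the intent name, parameter and homophone list are illustrative, not part of any official API.

```ts
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// Mis-transcriptions observed in the simulator, mapped back to the
// one-letter answer the user almost certainly meant (illustrative list).
const HOMOPHONES: Record<string, string> = {
  hey: 'a',
  eh: 'a',
  bee: 'b',
  be: 'b',
  sea: 'c',
  see: 'c',
};

// Hypothetical quiz intent with an 'answer' parameter.
app.intent<{ answer?: string }>('quiz.answer', (conv, params) => {
  const raw = (params.answer || conv.query).trim().toLowerCase();
  const answer = HOMOPHONES[raw] || raw;
  if (!['a', 'b', 'c'].includes(answer)) {
    // Reprompt with explicit guidance instead of failing opaquely.
    conv.ask("Sorry, I didn't catch that. Was your answer A, B or C?");
    return;
  }
  conv.ask(`You answered ${answer.toUpperCase()}.`);
});
```

The point is not this particular map, but that it is built from transcription failures you have actually seen in the console rather than guessed in advance.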
Setting up an alpha release
Once the team is satisfied with the experience in the console, the next stop is the Assistant on a smartphone, which you reach by promoting your action to an alpha release. But wait, you might say, isn’t that contrary to a voice-first approach? Well, we’ve found a few hurdles in moving directly to testing on audio-only devices. Annoyingly, smart speakers are incompatible with many corporate wifi setups, so ensure you have alternative connectivity options.
The smartphone provides a robust alternative. The phone surface is actually very similar to the console, in that you can still see the transcriptions in real time while using voice-only interactions. It also adds a few more wrinkles, such as what happens when the Assistant is backgrounded or interrupted mid-conversation.
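It pays to handle those interruptions deliberately rather than letting the session time out. As a sketch, assuming a Dialogflow intent (ours is hypothetically named 'app.cancel') attached to the actions_intent_CANCEL event, which fires when a user says “cancel” or “stop” mid-conversation:

```ts
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// Handler for a Dialogflow intent mapped to the actions_intent_CANCEL event.
app.intent('app.cancel', conv => {
  // conv.close() speaks one final response and ends the conversation
  // cleanly instead of leaving the user hanging.
  conv.close('No problem, goodbye for now.');
});
```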
Another thing to keep in mind when testing on the Assistant is which accounts you have active. Many of us have more than one Google account set up on our devices, and we’ve found this can create headaches. Managing your account on a smartphone is much easier than on a smart speaker, helping you avoid panic moments where nothing seems to be working.
Moving to the phone also allows you to better test the efficacy of invocation phrases. Ensuring these phrases have a high hit rate is crucial. Using the phone also lets you see clearly when and how an invocation fails, sometimes with surprising results. Being in the live Assistant environment, as opposed to the controlled configuration of the console, gives you a more intuitive understanding of how the Assistant might misinterpret your invocation and what happens when it does. Perhaps your invocation is too similar to a first-party action? Or maybe even a competitor’s? Homing in on the right ways for users to invoke your action is essential to discovery on the platform.
Chips, cards, and everything
Once you’ve nailed the core voice-first experience, it’s time to go multimodal. Remember, the Actions console does not allow you to explicitly choose which surfaces your action will be available on. Depending on which capabilities you require, it’s very likely you will need to support screens in at least a minimal capacity, even for primarily audio experiences. This means adding display text, chips and potentially other rich responses such as cards or images.
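For a sense of what that looks like in fulfilment code, here is a rough sketch of a single intent that combines spoken audio, shorter display text, a card and suggestion chips, again using the actions-on-google library. The intent name, copy and image URL are placeholders.

```ts
import {
  dialogflow,
  SimpleResponse,
  Suggestions,
  BasicCard,
  Image,
} from 'actions-on-google';

const app = dialogflow();

app.intent('story.headline', conv => {
  // SimpleResponse lets the display text diverge from the spoken audio.
  conv.ask(new SimpleResponse({
    speech: "Here's today's top story. Want to hear more, or skip ahead?",
    text: "Today's top story", // shorter copy for screens
  }));
  // A BasicCard adds a visual element on devices with a display.
  conv.ask(new BasicCard({
    title: "Today's top story",
    text: 'A one-line summary for devices with a screen.',
    image: new Image({
      url: 'https://example.com/story.png', // placeholder
      alt: 'Story illustration',
    }),
  }));
  // Suggestions render as tappable chips beneath the response.
  conv.ask(new Suggestions('Hear more', 'Skip'));
});
```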
You will want to test on a variety of devices, including phones, speakers and smart displays. Just as you might test a website on different form factors, you’ll want to ensure the visual and aural aspects work in harmony on each.
We found balancing the length of visuals and display text between phones and the Home Hub particularly challenging. Our approach was to paraphrase as much as possible rather than display a verbatim quote. Even then, the differing amounts of text and context across form factors produced surprisingly varied results when it came to optimising the visuals.
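If you end up needing different copy per surface, the library’s capability checks are one way to branch. The sketch below uses the common heuristic that smart displays report screen output but no web browser, which distinguishes them from phones; the summary strings are placeholders.

```ts
import { dialogflow, SimpleResponse } from 'actions-on-google';

const app = dialogflow();

app.intent('story.summary', conv => {
  const caps = conv.surface.capabilities;
  const hasScreen = caps.has('actions.capability.SCREEN_OUTPUT');
  // Heuristic: smart displays have a screen but no web browser.
  const isSmartDisplay =
    hasScreen && !caps.has('actions.capability.WEB_BROWSER');

  conv.ask(new SimpleResponse({
    speech: 'Here is the story in full, read aloud.',
    // Paraphrase harder for the glanceable smart-display surface.
    text: isSmartDisplay
      ? 'A short, glanceable paraphrase.'
      : 'A longer paraphrase that a phone screen can comfortably hold.',
  }));
});
```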
Another tip is to test in different acoustic environments. Gather friends and family at home to ensure the combination of audio and visuals functions equally well in a noisy group setting and in the relative quiet of an office conference room.
Worthwhile investment
Ensuring quality throughout the lifecycle of a project provides valuable feedback, allowing a team to course-correct during development. Waiting until your first beta submission to test across the full spectrum of devices will likely lead to a swift rejection and leave the team scrambling to accommodate the variety of surfaces the Assistant supports. Testing early and often is just as helpful for voice-first experiences as for any other type of software.
Find out more about the Voice Lab’s mission or get in touch at voicelab@theguardian.com.