How might we enable users to create powerful user segmentation with ease?
Appcues is a user experience layer that lets customers build onboarding, feedback, and announcement content on top of their products, and target that content to specific end-users. Audience targeting conditions are the set of user properties and events, and their values, that qualify end-users to view particular content. Setting up conditions easily, and having confidence that content will show to the right end-users, is a core customer need. Yet the interface for setting those conditions was frustrating and confusing, leaving users unsure whether they had done it right.
The goal of this project was to make the process of setting conditions easier, to reduce user error in setting illogical or erroneous conditions, and to give users greater confidence that their content would display to the intended audience.
Senior Product Designer | Appcues
Research, wireframing, prototyping, testing, visual and interaction design
Jan - July 2019
—— the problem
Targeting content to the right end-users is what gives that content value, but the interface made setting up conditions error-prone, frustrating to navigate, and ultimately unpredictable.
Parsing through research insights and support tickets, talking with customer success managers, and struggling with the interface myself, I noted common challenges.
AND vs OR clauses were barely distinguishable, and their wording and presentation made them difficult for the average user unfamiliar with boolean logic
Conditions with absolute values accepted free-text input, so users could break targeting by entering erroneous values or misspelling true/false
Dropdowns hid frequently used and important options above or below the menu fold
Content was sometimes listed in dropdowns by a long ID number rather than by name
Advanced operator types, like Regex, were not explained well enough for users to easily leverage their power
Dropdown items sometimes broke lines, creating confusion when scanning options
Visual noise and uneven alignment reduced overall readability of the conditions themselves
We offered no preview of the number or type of end-users who would be targeted by the conditions that were set
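The boolean-logic confusion in particular is easy to underestimate. A minimal sketch (the property names and values here are hypothetical, not Appcues' actual data model) shows how the same three conditions qualify different audiences depending on how AND and OR are grouped:

```python
# Illustrative only: hypothetical end-user properties, not Appcues' data model.
user = {"plan": "free", "signed_up_days_ago": 3, "language": "en"}

# "plan is pro AND (signed up this week OR language is en)"
grouped = user["plan"] == "pro" and (
    user["signed_up_days_ago"] < 7 or user["language"] == "en"
)

# "(plan is pro AND signed up this week) OR language is en"
flat = (
    user["plan"] == "pro" and user["signed_up_days_ago"] < 7
) or user["language"] == "en"

print(grouped, flat)  # False True — same three conditions, two different audiences
```

If the interface does not make that grouping visually obvious, a user who has never thought about operator precedence has no way to tell which of these two audiences they just targeted.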
Many of the problems stemmed from poor visual and organizational design—I explored visual tweaks, different interaction mechanisms, and menus.
Many of the initial concepts, including using colored dropdowns for AND vs OR clauses, grouping conditions with borders, having actions appear on hover, and including a type-ahead, made it into the final design.
But some ideas were scrapped:
Displaying all menu options together in a single dropdown, for User Property, Flow, Segment, Event, and Language. Initially, we thought a single dropdown would be easier to manage, but it posed problems: 1) the menu could load slowly because there were so many options, and 2) users wouldn't see the categories after User Property, since that category's options would push the others down the list. Instead, we opted for an initial dropdown displaying the types, followed by a second menu with all of that type's options.
Naming groups and saving them as segments. We thought being able to name condition groups would remind users of each group's intended purpose. We had also thought that saving a group of conditions as its own segment would be a nice way to encourage saved-segment use. However, users didn't find these particularly necessary, and looking at the average condition tree, we found that most users had zero or one group to begin with. We didn't want to over-optimize for an uncommon use case.
Displaying the condition, operator, and value all in one condensed row. We had tried condensing the three selection areas for condition type, operator, and value into a single row. However, the interaction model was tricky: how would a user change the value if, upon click, the first option to change was always the condition type? Keeping them as discrete sections let users change each independently with ease.
—— simple ui fixes
Many of the solutions were UI changes based on usability heuristics and clean, modern design practices:
Differentiate AND vs OR clauses visually, and make the wording of conditions and nested groups read like a natural sentence
Float commonly used options to the top of dropdowns, make scrolling more obvious, display long options on hover instead of breaking lines, and provide a type-ahead to find desired choices
Offer help text or links to explain advanced operator types like Regex, and provide a reference of common Regex patterns
Offer true/false as dropdown options when only absolute values are valid
Use category labels and iconography to differentiate condition, operator and value types within dropdowns
Display recently seen values in dropdowns for easy reference and selection
Fix alignment and clean up visual noise by having certain actions only appear on hover
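To give a sense of the kind of Regex reference described above, here are a few hypothetical examples of common targeting patterns (illustrative only, not the actual reference content we shipped):

```python
import re

# Hypothetical patterns of the sort a common-Regex reference might include;
# the names and values are illustrative, not Appcues' documentation.
patterns = {
    "company email domain": r".*@acme\.com$",
    "any checkout page":    r"^/checkout(/.*)?$",
    "version 2.x":          r"^2\.\d+(\.\d+)?$",
}

assert re.match(patterns["company email domain"], "sam@acme.com")
assert re.match(patterns["any checkout page"], "/checkout/payment")
assert not re.match(patterns["version 2.x"], "3.0.1")
```

A reference like this lets users copy a working pattern and tweak it, rather than learning Regex syntax from scratch just to target a page or an email domain.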
Updating dropdown styling alone went a long way towards simplifying the UI
Labels and icons helped separate and identify selections. For conditions like user properties, segments, or flows, a descriptive menu would appear on hover to help users be sure they were selecting the right condition. For operator and value types that could be confusing, help tooltips were provided.
Leveraging hover states for interactions enabled the condition tree at rest to remain legible and reordering gave users freedom when building complex condition trees.
Actions like ungroup, delete and adding conditions could be displayed on hover. Additionally, we explored having the ability to reorder conditions—without it, users needed to know the exact order of their condition tree in advance of building, and making an error anywhere meant painful rework.
—— customer example
Complex condition trees became easier to set and read
—— give the people what they want
Besides cleaning the UI, we wanted to include a high-impact and oft-requested feature—user estimation.
We weren’t done. Even though the interface was visually cleaner and easier to use, a key user need we hadn’t solved for was providing some sort of verification that the conditions were “right”. An oft-requested feature was to display a count of users who would qualify for content based on the set conditions. Though our tech limited us to an estimation rather than an exact count, this would still be enough to give users more insight into the effect of their targeting.
Even better than a count though would be a sample list of users who would qualify—this was something we added to our backlog.
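For readers curious how an estimate like this can work, one common approach is to evaluate the conditions against a random sample of users and extrapolate. This is a generic sketch under that assumption, not Appcues' actual implementation:

```python
import random

def estimate_qualifying(users, condition, sample_size=1000, seed=0):
    """Estimate how many users satisfy `condition` by sampling.

    Generic sampling sketch (an assumption for illustration, not
    Appcues' actual estimation method).
    """
    rng = random.Random(seed)
    sample = rng.sample(users, min(sample_size, len(users)))
    hits = sum(1 for u in sample if condition(u))
    # Extrapolate the sample hit rate to the full population.
    return round(hits / len(sample) * len(users))

# Hypothetical population: roughly a quarter of users are on the "pro" plan.
users = [{"plan": "pro" if i % 4 == 0 else "free"} for i in range(10_000)]
estimate = estimate_qualifying(users, lambda u: u["plan"] == "pro")
```

The trade-off is exactly the one described above: the number is approximate, but it arrives quickly enough to show live as the user edits conditions.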
—— mapping the experience to common use
Next, we explored a “simple” vs “advanced” mode, since our hunch that most users set up only a handful of conditions proved correct.
There was another piece to improving the UX—simplifying the experience even further for the average user.
We of course would still need to support those users who built complex condition trees—and bless them, those users did exist—but the data showed that the vast majority would set up only a handful of conditions. All that complexity in the current UI was perhaps too much for what they needed to accomplish.
So, we considered a Simple vs Advanced mode. In Simple, users would target one or a few Segments (user groups with pre-saved conditions), and then could add a handful of additional conditions if they wished to refine those groups down. This mapped to how users thought about their targeting to begin with: “I have a group of users who I want to filter down by these few things.”
And if users realized they needed something more flexible, say a group of OR’d conditions, they could tab over to “Advanced”. If they did so after having made some selections in Simple, we would carry those selections over and expose the underlying logic. This not only preserved the users’ work, but taught them a bit about the logic, so that they could go on to build more conditions with just a bit more understanding of how it all worked.
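That Simple-to-Advanced carry-over can be sketched as a small transformation. The schema here is hypothetical, not Appcues' actual condition format; the idea is that selected segments are OR'd together and any refinements are AND'd onto the result:

```python
# Hypothetical condition schema for illustration only.
def expand_simple(segments, refinements):
    """Expand a Simple-mode selection into an explicit condition tree.

    Segments are OR'd together (a user in any selected segment matches);
    refinements are AND'd onto that result to narrow it down.
    """
    segment_clause = {"or": [{"segment": s} for s in segments]}
    return {"and": [segment_clause] + refinements}

tree = expand_simple(
    segments=["Trial Users", "New Signups"],
    refinements=[{"property": "language", "equals": "en"}],
)
print(tree)
```

Showing users this expanded tree in Advanced mode is precisely what exposes the AND/OR structure their Simple choices implied.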