Role
Product Designer Intern
Worked with a senior designer mentor, 1 PM, 1 FE engineer, and 2 BE engineers
Timeline
Q3 2024 (July - August)
Tasks
Hi-fi Prototyping
User Research
Design Iterations
Dev & PM Collaboration
Tools
Figma
Enterprise Tools
/ CONTEXT
Untangling the Complexity
Understanding Kwai's A/B Test Platform
An internal A/B testing platform used across Kwai’s product, content, and operations teams to evaluate and optimize features through controlled experiments.
It supports thousands of daily experiments, helping teams make data-informed decisions by comparing different versions of user experiences, content strategies, and recommendation algorithms.
Two distinct user flows across different experimentation roles
Experiment owners, typically product managers, engineers, or operations teams, are responsible for configuring all aspects of the experiment, including defining goals, selecting metrics, allocating traffic, and specifying variant logic. Their primary focus is on ensuring that tests are properly set up and that live data is aligned with the intended experimentation objectives.
Data analysts, on the other hand, are responsible for validating the experimental design, monitoring data quality, and interpreting test results. They provide statistical insights that help determine whether the outcomes are reliable, significant, and actionable.
/ PROBLEM SPACE
Uncovering Pain Points
What We Learned from Last Quarter
To better understand user friction across the experimentation lifecycle, the product and design teams launched a platform-wide survey last quarter, collecting qualitative feedback that gave us in-depth insight. The goal was to identify which phases of the A/B testing process caused the most confusion, inefficiency, or frustration.
The responses pointed to pain points concentrated in two experimentation stages: the Experiment Metric Configuration and Experiment Data Analysis phases.
Experiment Metric Configuration
"The metrics involved in experiment analysis are quite complex."
456
average subscribed metrics per test
20%
of tests have 1,800+ subscribed metrics
44%
of survey participants were unable to distinguish between goal and observation metrics
Experiment Data Analysis
"I had to click into each metric one by one, and I still couldn’t tell how far along the experiment was. It felt so frustrating."
20%
of tests missing SRM (sample ratio mismatch) checks, with risk of invalid traffic allocation
40%
of survey participants were unsure whether metrics were statistically significant
70%
of survey participants took 20+ minutes to analyze the data each time they entered the results interface
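(For context on the SRM figure above: a sample ratio mismatch check verifies that the observed traffic split matches the configured allocation, typically with a chi-square goodness-of-fit test. Below is a minimal sketch of such a check, not the platform's actual implementation; the function name and alpha threshold are illustrative.)

```python
from scipy.stats import chisquare

def srm_check(observed_counts, expected_ratios, alpha=0.001):
    """Chi-square goodness-of-fit test for sample ratio mismatch (SRM).

    observed_counts: users actually bucketed into each variant
    expected_ratios: configured traffic split (should sum to 1.0)
    Returns True when an SRM is detected, i.e. the observed split
    deviates from the configured allocation more than chance allows.
    """
    total = sum(observed_counts)
    expected = [total * r for r in expected_ratios]
    _, p_value = chisquare(observed_counts, f_exp=expected)
    return p_value < alpha

# Example: a 50/50 test that actually bucketed 50,700 vs. 49,300 users
print(srm_check([50_700, 49_300], [0.5, 0.5]))  # True -> allocation is suspect
```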
Where Users Struggle the Most
After conducting preliminary research through 20+ stakeholder interviews, platform usage logs, and survey data, I uncovered a series of interconnected pain points that impacted both experiment owners and data analysts. These issues didn’t exist in isolation: one problem often triggered another, compounding into workflow breakdowns that delayed decisions, compromised data quality, and reduced overall trust in the experimentation system.
To better understand where and why these breakdowns occurred, I mapped critical moments across the experiment lifecycle and grouped them into two experimentation phases.
Metric Configuration (Setup) Phase
where owners struggled with fragmented responsibilities and high cognitive load. We also found that these pain points were intertwined in a vicious cycle, which meant we needed to eliminate the root cause to fix the problem.
Data Analysis Phase
where analysts (primarily) and owners alike faced challenges interpreting overwhelming, unstructured data.
PROBLEM STATEMENT
How Might We help users subscribe to the right metrics with goal clarity and cost awareness, so they avoid over-complication and focus on meaningful results in the Experiment Metric Configuration Phase?
How Might We provide users with clear experiment progress and success signals, so they can confidently interpret results and take action faster in the Experiment Data Analysis Phase?
/ GOAL
A New Direction for A/B Testing
Product Goals
Pain Points to Opportunities
Transforming user pain points into design opportunities grounded in existing platform functionality, and exploring possibilities within constraints after discussions with the product and dev teams.
Changing the Vicious Cycle
Restructuring Site Map
Metric Configuration - Metric Dashboard
Goal Metrics
- primary indicators that allow users to measure the effectiveness and success of an experiment
Observation Metrics
- secondary indicators that allow users to monitor potential negative side effects during an experiment.
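To make the goal vs. observation split concrete, here is a minimal sketch of how a metric subscription could be modeled. The class and field names, and where the 50-subscription cap lives, are illustrative assumptions rather than the platform's actual schema; the cap and the goal-metrics-only re-bucketing rule come from the UI designs below.

```python
from dataclasses import dataclass, field
from enum import Enum

class MetricType(Enum):
    GOAL = "goal"                # primary: measures experiment success
    OBSERVATION = "observation"  # secondary: monitors negative side effects

@dataclass
class MetricSubscription:
    name: str
    metric_type: MetricType
    re_bucketing: bool = False   # re-randomization flag; goal metrics only

@dataclass
class ExperimentConfig:
    metrics: list = field(default_factory=list)
    MAX_SUBSCRIPTIONS = 50       # hypothetical cap, matching the UI reminder

    def subscribe(self, metric: MetricSubscription) -> None:
        if len(self.metrics) >= self.MAX_SUBSCRIPTIONS:
            raise ValueError("Reached 50 maximum subscriptions")
        if metric.re_bucketing and metric.metric_type is not MetricType.GOAL:
            raise ValueError("Only goal metrics can be re-bucketing metrics")
        self.metrics.append(metric)
```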
/ SOLUTION - Experiment Design Phase
Metric Configuration - Primary Setup
Hover State Checkboxes
Reached 50 maximum subscriptions; reminders of subscription cost
Search for Metrics
Search for Metric Category
Metric Categories
- subscribe metrics based on client nodes, modes, and subscription history
Tooltip Hover
50 maximum subscriptions
2. Add / Edit Metrics
3. Edit Metric Template
1. Goal Metric Re-bucketing Toggle
Tooltip Hover
The system recommends setting no more than five re-randomization metrics.
Active State Checkboxes
Cost Reminder
It is estimated that the department budget will be ¥ xx per day.
3. Edit Metric Template
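A note on the re-bucketing toggle above: a standard reading of "re-randomization metrics" is covariates that the platform balances by re-drawing bucket assignments until the experiment groups match on those metrics. Under that reading, every extra metric makes a balanced draw harder to find, which would explain the five-metric recommendation. The sketch below illustrates the idea under that assumption; it is not Kwai's documented behavior, and all names are hypothetical.

```python
import numpy as np

def rerandomize(covariates, threshold=0.05, max_tries=1000):
    """Draw 50/50 assignments until every re-randomization metric is
    balanced (standardized mean difference below `threshold`).

    covariates: array of shape (n_users, n_metrics) holding each user's
    pre-experiment metric values. Returns a boolean treatment mask.
    """
    rng = np.random.default_rng(0)
    scale = covariates.std(axis=0) + 1e-12   # guard against zero variance
    for _ in range(max_tries):
        assign = rng.random(len(covariates)) < 0.5
        diff = covariates[assign].mean(0) - covariates[~assign].mean(0)
        if (np.abs(diff) / scale < threshold).all():
            return assign
    raise RuntimeError("No balanced draw found; reduce the number of "
                       "re-randomization metrics or relax the threshold")

# With 5 metrics a balanced 50/50 draw is found almost immediately;
# requiring balance on dozens of metrics takes many more attempts.
users = np.random.default_rng(1).normal(size=(10_000, 5))
print(rerandomize(users).sum())  # roughly 5,000 users in treatment
```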
/ FINAL SOLUTION PREVIEW - Result Analysis Phase
Result Interpretation
Experiment Prompt Guidance Design
/ ITERATION
Metric Dashboard
My design iteration preserved the visual style and core interaction patterns of the original version to ensure continuity for existing users. While keeping the primary user flows intact, I focused on enhancing the overall user experience by streamlining the path to view goal metric details and estimated costs on the metric dashboard.
These changes made key information more immediately accessible and improved the clarity of data presentation.
Showed a thumbnail of the subscribed metrics
No intuitive indication of goal vs. observation metrics, nor of re-bucketing (goal) metrics
New subscriptions were placed in a separate section, taking up plenty of space
Before: A/B Test Platform 1.0 Design
In the first two iterations, I combined the metric thumbnails with subscription actions to create a more seamless user experience. To enhance clarity and focus, I also highlighted several key goal metrics for easier viewing and comparison.
Initial Explorations
Iterations based on user feedback
💡 I want to understand cost impact earlier, before committing to configuration.
💡 I need to choose which goal metric to re-bucket, not all of them at once.
Toggles on each metric
Showing the cost reminder and budget while selecting streamlined the user experience
Final Design Revised Based on User Feedback
/ REFLECTION
My Design Reflections at Work
In this internship project, I learned: