
HOW WE GOT HERE
🤳 Onboarding flow user testing
I began by modeling each setup step independently (Account, Taste Tags, Avatar, Location, Gamification guidance), then recomposed them into end-to-end flows.

Flow 1/ All-Upfront (Baseline)
Complete setup required before reaching the home page.

✅ What Went Well
- Consistent initial dataset; fewer interruptions later.
- A few customization-minded users enjoyed the avatar step up front.
- Clear sense of Bytey's scope and features.
- Valuable as a control.
⌠What Went Wrong
- Long timeâtoâhome; fatigue; dropâoffs during Avatar.
- Perceived as "too much before trying anything."
📊 Indicative metrics
- Median time to Home: 2m05s
- Flow completion: 54%
- "Too long" feedback: ~75%
(n≈26)
Flow 2/ Defer Avatar
Shorten onboarding by deferring non-critical steps while keeping strong recommendations.

✅ What Went Well
- Shorter flow; perceived "lighter" onboarding; better focus on recommendations.
- Avatar completion improved when framed as a reward.
❌ What Went Wrong
- Still felt long to some users.
- Account upfront still created early abandonment.
📊 Indicative metrics
- Median time to Home: 1m22s
- Flow completion: 68%
(n≈28)
Flow 3/ Taste-First
Enable a progressive account setup to allow users to access the app sooner.

✅ What Went Well
- Strong early relevance; better first-session engagement.
❌ What Went Wrong
- A subset was unsure what to pick; they wanted a skip option.
📊 Indicative metrics
- Median time to Home: 0m52s
- Flow completion: 82%
(n≈32)
Flow 4/ Progressive Disclosure
Push friction as late as possible. Introduce setup only when a feature requires it.
- All skippable initially; surface steps contextually (banners, overlays, first-touch triggers).
- Account at checkout / save / deeper actions; Avatar via tasks/rewards; Taste Tags initially skippable.

✅ What Went Well
- Shortest path to value; very strong preference in testing for overall simplicity.
❌ What Went Wrong
- Taste Tag prompts were missed or treated like ads.
- Some users later felt the app lacked "personality" without early tags; initial recommendations lost some sparkle.
📊 Indicative metrics
- Median time to Home: 0m34s
- First-session activation into search/categories: +21–27% vs Flow 2
- Preference in qualitative ranking: ≈68% chose Flow 4
- "Banner looked like ads" feedback: ~23%
(n≈34)
User Testing Summary
General feedback (majority)
- Flow 4 preferred for brevity and clarity.
- Flow 1 felt too long; Flow 2 split opinions; Flow 3 largely acceptable.
- Avatar is more reasonable after trying the core experience; a minority loved it early.
- Some wanted Taste Tags skippable at first because they were unsure what to pick.
- OTP sign-in was universally well received.
Specific feedback (minority)
- A subset found Avatar fun/engaging up front, but we assessed higher drop-off risk.
- Taste Tags created choice overload at first; better once they knew the app.
Final Flow (Best of Flow 3 + Flow 4)

Decisions
- Keep progressive account and progressive avatar from Flow 4.
- Keep OTP (No passwords) for account sign-up.
- Make Taste Tags mandatory (shortened and clarified) to lift recommendation quality and session satisfaction from day 0.
- Location stays at first Main entry.
- Keep avatar as a quest with small rewards and multiple entry points.
Why this trade-off works
- A small upfront step (Taste Tags) substantially improves personalization and reduces bounce from "generic" results.
- Users still get speed: 3 short screens with clear helper text and modern micro-interactions.
- Since Taste Tag setup is non-skippable, the Taste Tag UI needs to be rewritten to reduce ambiguity and indecision (updated tags* that better reflect user needs, short descriptions, icon labels).
🚩 Invite-Only Variant (Public Test Phase)
Before the open public launch, Bytey is currently using an Invite Gate to control test cohorts, monitor onboarding metrics, and maintain community quality.
Interface & flow
- Invite gate: enter a code → proceed, or take the Join Waitlist path.
- Account setup immediately after the gate via OTP (code to email/phone).
- Taste Tags flow next (same 3 screens).
- Avatar still progressive postâentry.
- Branded illustrations + light animation to convey warmth and core features; clear dual CTA states (have a code / need a code).




Variants tested: different animations, speeds, and code input affordances.
Accessibility: numeric keypad on mobile, obvious error/empty states, clear waitlist confirmation feedback.
🍽 Taste Tag System Iterations
Taste Tags are not just onboarding UI. They are core product infrastructure. As the product designer, I worked closely with engineers to shape the taxonomy behind the system, bridging user behavior and system architecture.
🔍 Influences
- Search relevance and restaurant/food discovery (core)
- Data organization and cleansing (foundation)
- Recommendation ranking
- Community content (posts and feed relevance)
- Gamified task and card system
🚫 Key constraints
- Dataset limitations: Early database does not contain every cuisine, dish, or restaurant type.
- Tags must prioritize what exists today while preparing for future expansion as new cuisines, dishes, and restaurants are added.
🎯 Goal
- A structure that is clear for users and usable by algorithms.
🤔 Design considerations
- User cognition: how people actually think about food.
- Product needs: fast onboarding and strong personalization.
- Technical constraints: data schema, embeddings, and ranking logic.
v1/ Regional (1–3) → Flavor (≥1 or skippable) → Diet (opt)
This early version established the initial Taste Tag structure using the first dataset, focused on the most common cuisines around the testing area.



✅ What Went Well
- Worked with the initial dataset, focusing on popular cuisines in the testing area to set a clear baseline for early testing.
❌ What Went Wrong
- The Flavor step was unclear for users with low selection confidence because:
  - Flavor preferences change frequently.
  - Flavor perception is subjective.
  - Flavor is hard to structure as data, and many dishes are customizable.
v2/ Regional (1–3) → Diet (opt) → Priority (ranking)
Flavor was replaced with a ranked Priority step, as ordering priorities proved to be a clearer and more actionable signal than flavor.

✅ What Went Well
- Priority reflects real ordering behavior.
❌ What Went Wrong
- The Diet taxonomy needed refinement due to data constraints and potential AI hallucination:
  - Some diet types (e.g., vegan) were well labeled in restaurant data, while others (religious diets) were inconsistent.
  - Religious diet labels carried higher risk due to less reliable menu data and AI hallucination.
  - Diet restrictions and preference-based goals were still mixed together, making the category less clear and less aligned with user expectations.
v3/ Regional (1–3) → Diet (opt) → Eating Style (opt) → Priority (ranking)
Separated Diet from Eating Style to keep Diet focused on dietary restrictions, while moving preference-based goals like Low-calorie and High-protein into Eating Style and combining some flavor ideas with food types there.


✅ What Went Well
- Clearer system logic and better match with user expectations.
❌ What Went Wrong
- Four screens felt long and added onboarding complexity.
- Richer taxonomy, but limited payoff relative to the time required.
Final/ Updated Cuisine (required) → Diet (opt) → Priority (ranking)
Main changes: updated Cuisine and removed Eating Style; expanded Diet and Priority options to accommodate user needs.
The final version uses a more organic structure (Popular Picks plus a general Regional sheet) to keep onboarding short while leaving room to scale cuisine organization over time.
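The final structure can be sketched as a small data model: a required cuisine selection, optional diet restrictions, and a ranked priority list. Field names here are illustrative assumptions, not the real schema.

```python
from dataclasses import dataclass, field

# Hypothetical data sketch of the final Taste Tag structure: Cuisine is
# required (Popular Picks plus the regional sheet), Diet is optional, and
# Priority is an ordered list where position encodes rank.

@dataclass
class TasteProfile:
    cuisines: list[str]                                   # required, >= 1 selection
    diets: list[str] = field(default_factory=list)        # optional restrictions
    priorities: list[str] = field(default_factory=list)   # ranked; order matters

    def __post_init__(self):
        if not self.cuisines:
            raise ValueError("at least one cuisine tag is required")
```

Making `cuisines` the only required field mirrors the trade-off above: one small mandatory step for personalization, with everything else deferrable.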

✅ What Went Well
- More flexible and organic than a fully fixed cuisine taxonomy.
- Better fit for current data coverage and onboarding speed.
- Easier to expand later with broader regional models or more detailed subcategories*.
📊 Indicative metrics
- Median completion time: < 35s
- Abandon rate: < 5%
- "Not sure what to pick" feedback: ↓ to ~12–15%




