Meta | Reality Labs
Evaluating Cross-Surface Builder Journeys
A systems-focused example of objectively evaluating end-to-end journeys that span tools and lifecycle phases—enabling clearer prioritization and alignment across teams.
Context
Within Meta Reality Labs Developer Platform Products (DPP), UX researchers were responsible for evaluating builder tools that spanned multiple surfaces and phases of the developer journey, primarily across Unity and Android Studio. These tools were critical to developer onboarding and productivity, but their cross-surface nature made them difficult to evaluate in a meaningful and consistent way.
UXRs were unsure how to objectively measure usability across these end-to-end journeys, or how to determine success and failure when work spanned multiple tools and lifecycle stages. I was initially brought in to advise on how this problem could be approached.
Without a shared evaluation framework, builder journeys were assessed inconsistently across teams. This made it difficult to compare findings, understand where friction accumulated across tools, or prioritize fixes with confidence. The risk was slow onboarding, fragmented developer experiences, and misaligned decision-making across DPP teams.
Leadership
I led the research strategy for Metaverse Canonical Builder Journeys, initially joining in an advisory capacity and then assuming end-to-end ownership as gaps in evaluation became clear.
I owned research strategy, analysis, and oversight of execution, and worked hands-on with the tools themselves—building test projects in Unity and Android Studio, validating task accuracy, and defining clear standards for what “successful” and “failed” task completion looked like across the journey.
I defined UXR Quality Scorecards to evaluate two canonical builder journeys spanning three products—Unity, Android Studio, and the Meta Spatial SDK—establishing repeatable measures across five evaluation areas. I guided UX researchers on task definition and application of the scorecards, oversaw vendor-led testing sessions to ensure environments were set up correctly, and reviewed outputs to ensure findings were accurate, comparable, and actionable.
Findings
The research identified three recurring breakdowns that consistently slowed developer progress across builder journeys:
- Onboarding: Developers struggled to understand where to start and how tools fit together across different phases of the journey.
- Documentation: Fragmented and inconsistent documentation increased context switching and reliance on external sources.
- In-product feedback: Limited feedback during key steps made it difficult for developers to confirm progress or diagnose issues.
Impact
The research gave DPP teams a shared, objective view of where builder journeys were breaking down across tools. Findings were translated into actionable engineering tasks and tracked through existing workflows, shifting discussions from subjective feedback to evidence-based prioritization.
More importantly, the work established a repeatable way to evaluate cross-surface builder experiences, enabling more confident and coordinated decision-making across teams responsible for onboarding and developer productivity.