Meta | Reality Labs

Building a Shared Approach to Usability

A program-level case study showing how I created a repeatable approach to usability evaluation, surfaced systemic gaps, and improved decision quality across complex developer workflows.


Context

Within Meta Reality Labs, Developer Platform Products (DPP) were being evaluated inconsistently due to limited UXR resources and uneven experience with usability testing. As a result, developer-facing features across DPP teams often shipped without a reliable, comparable understanding of usability risk, creating blind spots in product quality and decision-making.

There was no shared standard within DPP for how usability should be evaluated, how results should be interpreted, or how findings should inform release decisions.


Problem

How can we create a scalable, repeatable way to evaluate developer-facing features so DPP teams can make consistent, confidence-driven product decisions?

Without a standardized evaluation approach, usability findings varied by team and researcher, making it difficult to compare results, prioritize fixes, or assess readiness for release. This increased the risk of DPP teams shipping developer features with avoidable usability gaps, slowing adoption and undermining developer trust.


Leadership

I served as program owner, designing and launching the DPP Usability Program from the ground up. The initiative began as a bottom-up effort, informed by persistent gaps I had observed in how usability testing was conducted across DPP teams.

I personally defined the UXR Quality Scorecards for DPP, incorporating input and existing frameworks from partner teams to ensure the methodology and calculations were sound and applicable across DPP products. I then formalized the program with leadership support and guided UX researchers in applying the framework consistently, including task creation, session monitoring, and result evaluation.


Findings

Rather than surfacing isolated feature issues, the program identified three recurring usability gaps across platforms, revealing structural problems in how developer features were designed and implemented.

  • Breakdowns in foundational tasks within core developer workflows
  • Inconsistent onboarding experiences that slowed developer progress
  • Missing documentation and feedback that increased friction during setup and iteration

Impact

As the program took hold, I trained other researchers in task-based usability testing and consistent evaluation criteria, improving the quality and comparability of findings across teams. The resulting insights informed improvements to onboarding and documentation workflows later in the year, while quality issues identified through the program were tracked and prioritized as part of ongoing product work.

Overall, the program closed a foundational gap in how developer experiences were assessed, shifting usability evaluation from a one-off activity to a shared decision-support framework that teams could rely on when making product and prioritization decisions.