Overview

Understanding UserTrace’s testing framework requires grasping four key components that work together to create realistic agent evaluations.

Scenario

A Scenario defines the testing situation you want to evaluate. Think of it as the “what if” question you want answered. Example: “A 30-year-old migrant delivery worker based in Bengaluru, under financial stress.” This simple description gets expanded into a comprehensive test case that includes:
  • Contextual background and situation details
  • Expected user goals and pain points
  • Linked persona for behavioral modeling
  • Connected evaluation criteria for success measurement

Persona

A Persona brings your scenario to life by defining how the simulated user thinks, behaves, and communicates. Taking our delivery worker example, the persona would detail:
  • Communication style: Direct, time-conscious, uses local language mix
  • Daily goals: Maximize deliveries, minimize downtime, manage expenses
  • Emotional state: Stressed about finances, frustrated with app issues
  • Knowledge level: Tech-savvy but impatient with complex processes
  • Decision-making patterns: Quick decisions, price-sensitive choices
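The persona attributes above can be pictured as a simple record. Again, this `Persona` class is a hypothetical sketch for illustration only.

```python
from dataclasses import dataclass

# Hypothetical persona record mirroring the attributes listed above.
@dataclass
class Persona:
    communication_style: str
    daily_goals: list[str]
    emotional_state: str
    knowledge_level: str
    decision_making: str

delivery_worker = Persona(
    communication_style="Direct, time-conscious, uses local language mix",
    daily_goals=["Maximize deliveries", "Minimize downtime", "Manage expenses"],
    emotional_state="Stressed about finances, frustrated with app issues",
    knowledge_level="Tech-savvy but impatient with complex processes",
    decision_making="Quick decisions, price-sensitive choices",
)
```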

Evals (Evaluations)

Evals define your pass/fail criteria: the specific metrics that determine whether your agent handled the scenario successfully. Examples include:
  • Did the agent resolve the delivery issue within 2 minutes?
  • Was the tone empathetic when discussing financial concerns?
  • Did the agent offer appropriate solutions for the user’s location?
  • Was sensitive information handled securely?
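One way to think about evals is as predicate checks run over a finished conversation. The transcript shape, the 0-to-1 tone score, and the thresholds below are assumptions made for this sketch, not UserTrace’s actual evaluation interface.

```python
# Hypothetical eval checks over a finished conversation transcript.
# The transcript dict shape and thresholds are illustrative assumptions.
def resolved_in_time(transcript: dict, limit_seconds: int = 120) -> bool:
    """Pass if the delivery issue was resolved within the time limit."""
    return transcript["resolved"] and transcript["duration_seconds"] <= limit_seconds

def tone_was_empathetic(transcript: dict, threshold: float = 0.7) -> bool:
    """Pass if an (assumed) 0-1 empathy score clears the threshold."""
    return transcript["tone_score"] >= threshold

transcript = {"resolved": True, "duration_seconds": 95, "tone_score": 0.82}
results = {
    "resolved_within_2_min": resolved_in_time(transcript),
    "empathetic_tone": tone_was_empathetic(transcript),
}
```

Each check maps directly to one bullet above, and a simulation run reports which checks passed or failed.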

Simulation

A Simulation executes your test by running the scenario with its linked persona and evaluations.

Scaling example:
  • 1 Scenario: “Financial stress delivery worker”
  • 4 Personas: Different ages, locations, stress levels, communication styles
  • 2 Sessions each: Morning rush vs evening fatigue states
  • Result: 8 total conversations automatically tested against your evaluation criteria
Each conversation provides detailed insights into how your agent performs across different user types and situations.

How It Works

The simple truth: You only need to select the scenarios you want to test. When you choose a scenario in UserTrace, the appropriate personas and evaluations are automatically selected and linked for you. This means:
  • No manual persona creation: we’ve already built realistic user personas that match your scenarios
  • No evaluation setup: relevant success criteria are pre-configured for each scenario type
  • No complex configuration: just pick your scenarios and run your tests

Workflow

  1. Browse available scenarios in your UserTrace dashboard
  2. Select the scenarios that match your testing needs
  3. Run simulations - personas and evals are automatically applied
  4. Review results with detailed conversation analysis
This automated approach ensures you get comprehensive testing without the complexity of manual setup, while still providing the depth and realism your agent testing requires.