Automating Persona‑Driven Usability Testing with Playwright MCP
• UX, Testing, Automation
You can simulate realistic user behavior with personas and Playwright to catch obvious UX issues before recruiting real participants. This doesn’t replace moderated studies, but it’s great for fast feedback and regression‑testing known behaviors.
What We’ll Use
- Playwright via an MCP‑compatible AI client (e.g., Claude Desktop + Playwright MCP)
- A small demo repo with personas and a testing framework: github.com/storbeck/user_persona_tests
The repo includes CLAUDE.md (system instructions) and personas/ (ready-to-use persona prompts). Running a session generates report.md and screenshots locally.
Setup

- Install the Playwright MCP server:

```
claude mcp add playwright npx '@playwright/mcp@latest'
```

- Clone the example repo:

```
git clone https://github.com/storbeck/user_persona_tests
cd user_persona_tests
```

- Open your MCP client and ensure Chromium is available to Playwright.

Tip: Install the browser binaries with:

```
npx playwright install chromium
```
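If you want to confirm the binaries are in place before involving the agent, a tiny standalone Playwright script will do it. The sketch below is my own addition, not part of the repo; it assumes Playwright is installed locally (npm i -D playwright) and the file name is arbitrary:

```
// check-chromium.ts: sanity check that Playwright can launch the Chromium
// binary installed above. A hypothetical helper, not part of the demo repo.
import { chromium } from 'playwright';

async function main() {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log('Chromium OK, page title:', await page.title());
  await browser.close();
}

main().catch((err) => {
  console.error('Chromium check failed:', err);
  process.exit(1);
});
```

Run it with npx tsx check-chromium.ts (or compile it with tsc first); a printed title means the Chromium install is working.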
Run a Persona Session

- Load CLAUDE.md as the system instruction.
- Pick a persona in personas/ (e.g., Efficient Shopper) and paste it as the user prompt.
- Start the run. The agent will (roughly the Playwright operations sketched below):
  - Launch a browser with tracing/video (as supported)
  - Follow the persona's behavior and mission (we sign up first with demo data)
  - Capture screenshots of key steps
  - Generate a Markdown UX report (report.md)
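You don't write any of this by hand; the agent drives the browser over MCP. To make the steps concrete, though, here is roughly what they correspond to in plain Playwright. It's an illustrative sketch rather than the agent's actual code, and the URL, directory, and file names are placeholders:

```
// persona-session-sketch.ts: an approximation of the browser work the agent
// does during a run. It launches Chromium, records video, starts a trace,
// visits the target site, and screenshots a key step. Names are placeholders.
import { chromium } from 'playwright';

async function run() {
  const browser = await chromium.launch();
  const context = await browser.newContext({
    recordVideo: { dir: 'playwright-mcp/videos' }, // session video
  });
  await context.tracing.start({ screenshots: true, snapshots: true }); // trace

  const page = await context.newPage();
  await page.goto('https://demo-shop.example.com'); // placeholder target site
  await page.screenshot({ path: 'playwright-mcp/01-landing.png', fullPage: true });

  // ...the agent continues from here: signing up with demo data, following the
  // persona's mission, and capturing a screenshot at each key step.

  await context.tracing.stop({ path: 'playwright-mcp/trace.zip' });
  await context.close(); // closing the context flushes the video to disk
  await browser.close();
}

run().catch(console.error);
```

The report itself is written by the agent from what it observed; the code above only covers the browser side.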
Note: The included demo personas target a clothing site built for automation practice.
Customize for Your Product
Duplicate a file in personas/ and tweak:
- Mindset & behavior (search‑first vs. browse‑first, cautious vs. fast)
- Mission (what to accomplish and how to judge success)
- Device (mobile viewport or desktop)
Keep instructions concise and literal; the agent follows them exactly.
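The Device line is worth a closer look: "mobile viewport" should mean real emulation (viewport, user agent, touch, pixel ratio), not just a narrow window. In plain Playwright terms that looks roughly like the sketch below; the device name and URL are illustrative, and this is not what the MCP server runs verbatim:

```
// mobile-context-sketch.ts: what a "mobile viewport" persona translates to in
// Playwright terms, i.e. a browser context built from a device descriptor.
// Device name and URL are illustrative only.
import { chromium, devices } from 'playwright';

async function run() {
  const browser = await chromium.launch();
  const context = await browser.newContext({ ...devices['iPhone 13'] }); // viewport, UA, touch, DPR
  const page = await context.newPage();
  await page.goto('https://demo-shop.example.com'); // placeholder
  await page.screenshot({ path: 'mobile-landing.png' });
  await browser.close();
}

run().catch(console.error);
```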
Interpreting Outputs
- report.md — Step log, observations, pain points, strengths, and recommendations
- playwright-mcp/ — Screenshots/traces to attach to tickets or PRs
Use these artifacts to gate UI changes with a quick “persona pass,” compare behavior before/after design tweaks, and share reproducible UX issues with designers and engineers.
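When a persona run surfaces a flow you want to keep stable, one option is to freeze that path as an ordinary Playwright test and run it in CI on every change, alongside occasional exploratory persona sessions. The spec below is a hypothetical example distilled from an "Efficient Shopper" run; it assumes @playwright/test is installed, and the URL and selectors are made up and would need to match your app:

```
// efficient-shopper.spec.ts: a hand-written regression check distilled from a
// persona run. Not the agent itself, just the critical path it exercised,
// frozen as a Playwright test. URL and selectors are hypothetical.
import { test, expect } from '@playwright/test';

test('efficient shopper can search and reach checkout', async ({ page }) => {
  await page.goto('https://demo-shop.example.com');
  await page.getByRole('searchbox').fill('black t-shirt');
  await page.keyboard.press('Enter');
  await page.getByRole('link', { name: /t-shirt/i }).first().click();
  await page.getByRole('button', { name: /add to cart/i }).click();
  await page.getByRole('link', { name: /checkout/i }).click();
  await expect(page).toHaveURL(/checkout/);
});
```

This does not replace the persona sessions; it just guards a path you have already judged important.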
Where This Helps (and Where It Doesn’t)
Great for: early discovery of obvious issues, consistent regression checks, and exercising flows under different mindsets.
Not a replacement for: moderated tests, accessibility audits, or research with representative users. Treat this as a fast feedback layer before deeper studies.
Repo Link
Start here and adapt to your product: github.com/storbeck/user_persona_tests