Fleet Personas & LLM-Driven Customer Research

AI-Driven UX Research
Client
Osprey EV Charging Network
Project type
AI-Driven UX Research
Project year
July 2025 - Jan 2026

Problem Statement

We were building a B2B fleet product with no validated understanding of who we were designing for. The commercial team held knowledge in silos: instincts about fleet managers, assumptions about driver behaviour. None of it had been structured, validated, or made accessible to inform product decisions. We had proto personas, but no system to evolve them. Without a reliable picture of our users, every design decision rested on assumption.

Outcomes

I designed and ran an end-to-end LLM-assisted research programme that turned raw interview data into a structured, evidence-backed understanding of fleet customers. The result was two distinct outputs: fleet-level profiles and individual-level personas, each grounded in interview evidence and tagged by how many participants supported each insight.
Fleet Segmentation: Delivered distinct fleet profiles segmenting customers by operational model, EV mix, charging maturity, and decision-making structure, giving product and strategy a clear view of who to prioritise and why.
Validated Personas: Produced fleet manager personas grounded in real interview evidence, replacing assumption-based proto personas with a reliable, living source of truth for design and product decisions.
Replicable System: Built a versioned, replicable research workflow with structured prompts, human validation checkpoints, and a research repository, so future interviews can be folded into a re-run rather than starting from scratch.

Context

As we expanded into B2B fleet, the product needed to serve a new and distinct set of users: fleet managers, drivers, and administrative stakeholders, each with different goals, authorities, and frustrations. Our prior consumer-facing work gave us limited foundation to build on. The business had knowledge about these users, but it was fragmented across teams and had never been formally structured or validated against real customer input.

Eight 45-minute interviews with fleet managers gave us the depth to reach thematic saturation — enough to validate assumptions and change product decisions with confidence.

This project set out to close that gap, not just by producing personas, but by building the research infrastructure to validate and evolve them over time.

Constraints

The research was initiated before any fleet-specific data was available, which meant we couldn't rely on analytics or prior user research as a baseline. The project also ran in parallel with live product delivery, so the research system had to be lightweight enough to run without slowing the team down. Collaboration with the commercial team was essential, as they held the closest existing knowledge of fleet customers and were key to framing the right questions.

Primary Challenge

Fleet customers aren't a monolith. Managers overseeing large, mixed-energy fleets operate differently to those running small, fully-EV sets. Drivers with one-to-one vehicle relationships have fundamentally different tasks and friction points than those managing multiple vehicles. Understanding how roles, fleet scale, and operational models shaped user needs was the core challenge, and getting it wrong would mean designing features that worked for an assumption, not a real customer.

Discovery

Research planning

Defined objectives, question areas, and participant criteria in collaboration with the commercial team.

Participant identification & recruitment

Identified fleet manager participants with support from commercial, selecting for range across fleet size and operational model.

Interview design

Structured interviews to surface goals, frustrations, daily workflows, decision-making authority, and attitudes toward EV adoption.

Transcript preparation

Reviewed and corrected raw transcripts for accuracy before synthesis, ensuring data quality at the point of entry.


Design Goals

Build a validated, shared source of truth.

Replace siloed commercial instinct with structured, evidence-backed customer understanding accessible to the whole team.

Make qualitative data quantifiable.

Move beyond impressionistic findings: every insight was tagged by how many interviews supported it, so the team could distinguish established patterns from emerging signals.

Design a system, not just a deliverable.

The personas and fleet profiles needed to be reusable and evolvable, built on a versioned, replicable workflow rather than a one-off research sprint.

Principles

Qualify the qualitative.

Qualitative research is only as useful as the rigour behind it. Every insight was tagged by how many interviews supported it, turning "people said" into "5 of 7 participants said." That distinction changed how confidently the team could act on findings.
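The tagging described above can be sketched in a few lines. This is a hypothetical illustration, not the actual tooling used on the project; the interview IDs and insight labels are made up, and the real pipeline ran over LLM extractions rather than hand-built sets.

```python
from collections import defaultdict

def tag_evidence_strength(extractions):
    """Turn per-interview insight sets into 'N of M participants' tags.

    extractions: dict mapping interview_id -> set of insight labels
    surfaced in that interview.
    """
    support = defaultdict(set)
    for interview_id, insights in extractions.items():
        for insight in insights:
            support[insight].add(interview_id)
    total = len(extractions)
    # Each insight carries the count of interviews that evidence it.
    return {
        insight: f"{len(ids)} of {total} participants"
        for insight, ids in support.items()
    }

# Illustrative data only.
extractions = {
    "P1": {"charging anxiety", "wants bulk actions"},
    "P2": {"charging anxiety"},
    "P3": {"wants bulk actions", "charging anxiety"},
}
print(tag_evidence_strength(extractions)["charging anxiety"])  # 3 of 3 participants
```

The point of the counter is exactly the distinction named above: an insight supported by most interviews is an established pattern, while a "1 of 8" tag flags an emerging signal to probe in the next round.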

Human judgment at every checkpoint.

The LLM handled synthesis at scale, but a human reviewed every extraction before it moved to the next stage. The tool accelerated the work; it didn't replace the critical thinking.
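One way to make that checkpoint enforceable is to gate synthesis on an explicit review flag. The sketch below is an assumption about how such a gate could look, not the project's actual code; the `Extraction` shape and `synthesise` merge are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Extraction:
    interview_id: str
    insights: list
    reviewed: bool = False  # flipped only after a human has checked the LLM output

def synthesise(extractions):
    """Cross-interview merge that refuses to run on unreviewed extractions."""
    unreviewed = [e.interview_id for e in extractions if not e.reviewed]
    if unreviewed:
        raise ValueError(f"blocked: unreviewed extractions {unreviewed}")
    # Naive merge for illustration: deduplicated union of all insights.
    return sorted({i for e in extractions for i in e.insights})
```

The gate means a skipped review is a hard stop, not a silent pass, which is what kept errors from compounding into the later synthesis stages.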

Build for reuse.

Versioned prompts, structured transcripts, and a filed research repository meant every interview added to a growing asset rather than a one-off report. The system was designed to get stronger over time.
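The filing-for-reuse idea can be sketched as a small repository helper: each interview lands in a structured folder together with a record of which prompt version processed it, so a future synthesis re-runs over everything on file. The folder names, manifest format, and version scheme here are illustrative assumptions, not the project's actual repository layout.

```python
from pathlib import Path

def file_interview(repo: Path, interview_id: str, transcript: str, prompt_version: str):
    """File a corrected transcript and log the prompt version used on it."""
    transcripts = repo / "transcripts"
    transcripts.mkdir(parents=True, exist_ok=True)
    (transcripts / f"{interview_id}.txt").write_text(transcript)
    # The manifest makes every run replicable: transcript + prompt version.
    with (repo / "manifest.csv").open("a") as manifest:
        manifest.write(f"{interview_id},{prompt_version}\n")

def rerun_inputs(repo: Path):
    """Everything on file feeds the next synthesis run."""
    return sorted(p.stem for p in (repo / "transcripts").glob("*.txt"))
```

Because new interviews are folded into the same store, a re-run is one operation over the full set rather than a fresh project, which is what lets the asset grow stronger over time.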

Personas as a living product, going beyond the deliverable.

Proto personas were starting hypotheses. Validated personas were the output of evidence. The principle throughout was that the work should evolve with the product, not sit in a deck and go stale.

Design Decisions

Evolving Archetypes

With research learnings in hand, the proto personas evolved into detailed archetypes grounded in real user insights, recognising core differences in user tasks, needs, and frustrations. These archetypes actively informed key journeys in the product, such as bulk actions in an actions panel to reduce repeat tasks.

Initial Proto Personas

I began with proto personas built from the reasonable assumptions we could draw from commercial knowledge. This working foundation let us design for the core audience from an informed stance, ready to be reshaped by data.

Validation Research

We reached out to users, conducting email surveys and in-depth interviews to validate the proto personas and shape them into reliable, accurate compasses for the product. These steps gave us insight into users' mental models of a day in their life, ultimately enabling a more user-centric approach.

The Solution

A replicable, LLM-assisted research system that transformed raw qualitative interviews into structured, evidence-backed fleet intelligence. Outputs included distinct fleet profiles for product and strategy, validated fleet manager personas for design, a prioritised action list tagged by evidence strength, and a versioned research repository so future interviews extend the work rather than restart it. The system was shared with stakeholders via a structured playback session, with findings directly informing product direction.

  • A replicable, versioned LLM-assisted research workflow with human validation checkpoints at every stage
  • Per-interview rich data extractions and executive summaries for internal and stakeholder use
  • A full cross-interview synthesis report with evidence-strength tagging throughout
  • A prioritised action list derived from the synthesis, tagged by how many interviews supported each recommendation
  • Fleet profiles segmenting customers by fleet size, EV mix, charging maturity, and operational model
  • Validated fleet manager personas capturing goals, frustrations, daily workflow, and decision-making authority
  • A structured research repository with versioned prompts, corrected transcripts, and all deliverables filed for future re-runs

Learnings

Encoding research structure upfront, with clear objectives, consistent transcript format, and versioned prompts, made the difference between a synthesis that held up to scrutiny and one that didn't. The LLM was only as good as the inputs and the human judgment applied at every checkpoint. Running validation alongside extraction, rather than at the end, caught errors before they compounded through the analysis. The most valuable shift was treating the system as a persistent research asset, one that gets stronger with each new interview, rather than a project with a fixed endpoint.

Let's work together

For enquiries about product design roles or collaborations, feel free to get in touch.

Some work is subject to confidentiality and can’t be shared publicly, but I’m happy to discuss further examples on request. I aim to respond within one business day.
