Static analysis – When QA prevents bugs before they even exist

A candid QA story about questions, criteria, Figma designs, and untangling 40 “glued” features
Why we had to change our QA approach
We were staring at roughly 40 “interrelated” critical features. In practice, they were entangled so tightly that progressing on one quietly required half-building others. Acceptance criteria were often vague or contradictory. Figma designs frequently drifted from what the stories described. Traditional downstream QA (writing test cases after dev hands over something) would only validate ambiguity, not resolve it.
So we flipped the model to a quality-first approach, which meant:
- Interrogating acceptance criteria before code exists.
- Reconciling every Figma element with an explicit, testable statement.
- Decoupling (“de-linking”) features so they could be finished and accepted independently.
- Turning persistent clarifying questions into shared artifacts (updated AC, design notes, interface contracts).
Static analysis for us began with static thinking: freezing ambiguity early.
Deep acceptance criteria review (before a single test case)
Instead of drafting test cases from whatever criteria were written, we ran a “Criteria Integrity Pass” on every story.
Checklist we used:
- Is each user-visible behavior expressed in observable terms? (e.g., “Should support archive” → “User can click Archive; record moves to Archived list within <2s.”)
- Are success and failure states both stated?
- Are edge states (empty, loading, permission denied, partial data) enumerated?
- Are domain terms defined once (glossary), not inferred differently across tickets?
- Does each criterion map to a single outcome we can assert without inspecting internal code?
If an answer was “no,” we didn’t log a bug—we asked questions immediately and held the story in refinement status.
Result: Fewer speculative test cases; far more stable acceptance at first implementation handoff.
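Parts of this pass can even be scripted so it stays repeatable. Below is a minimal sketch, assuming stories are available as plain-text criteria; the `VAGUE_PHRASES` and `REQUIRED_STATES` lists, the `lint_story` function, and the `STORY-123` example are illustrative stand-ins, not our actual tooling.

```python
# Hypothetical "Criteria Integrity Pass" linter: flags AC lines that fail
# the checklist heuristics above. All names and word lists are illustrative.

VAGUE_PHRASES = ["should support", "handle properly", "as expected", "etc."]
REQUIRED_STATES = ["success", "failure", "empty", "loading", "permission"]

def lint_story(title: str, criteria: list[str]) -> list[str]:
    findings = []
    joined = " ".join(criteria).lower()

    for i, criterion in enumerate(criteria, start=1):
        # Observable-terms check: vague verbs are restated as outcomes.
        for phrase in VAGUE_PHRASES:
            if phrase in criterion.lower():
                findings.append(f"{title} AC{i}: vague wording '{phrase}' - restate as an observable outcome")
        # Rough proxy for "single assertable outcome": one criterion, one clause.
        if criterion.count(";") > 1 or " and " in criterion.lower():
            findings.append(f"{title} AC{i}: possibly bundles multiple outcomes - consider splitting")

    # Edge/failure states must be mentioned somewhere in the story.
    for state in REQUIRED_STATES:
        if state not in joined:
            findings.append(f"{title}: no criterion mentions the '{state}' state")

    return findings

if __name__ == "__main__":
    story = [
        "Should support archive",
        "User can click Archive; record moves to Archived list within 2s",
    ]
    for finding in lint_story("STORY-123", story):
        print(finding)
```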
Figma vs acceptance criteria: systematic crosswalk
We stopped trusting that design and written criteria matched. For each feature, we created a “Design Crosswalk”.
Process:
- Export or snapshot the Figma frame(s).
- Create a two-column table: (UI element / state) ↔ (Referenced in AC? Yes/No).
- Anything in Figma not in criteria → ask: “Is this behavior in scope now?”
- Anything in criteria not present visually → request explicit design addition or removal from scope.
- Flag inconsistent terminology (e.g., “Remove” button in UI but “Deactivate” in story).
Outcome: Designers and product owners adjusted wording or visuals before dev started. This eliminated a category of late-cycle “small tweaks” that previously cascaded into rework.
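When the crosswalk gets tedious by hand, the same check can be approximated in code. Here is a rough sketch, assuming we have the element names from the exported frames and the story's AC text; the naive word matching, the term list, and the sample data are assumptions made for illustration.

```python
# Illustrative "Design Crosswalk" as a script instead of a hand-built table.
# Element names, criteria, and the term list are example data only.

def mentioned(name: str, text: str) -> bool:
    # Naive match: any word of the element's name appears in the AC text.
    return any(word in text for word in name.lower().split())

def crosswalk(figma_elements: list[str], criteria: list[str]) -> None:
    ac_text = " ".join(criteria).lower()

    print(f"{'UI element / state':<32}Referenced in AC?")
    for element in figma_elements:
        status = "Yes" if mentioned(element, ac_text) else "No -> ask: in scope now?"
        print(f"{element:<32}{status}")

    # Reverse direction: terminology used in the story but absent from the design.
    design_text = " ".join(figma_elements).lower()
    for term in ("remove", "deactivate", "archive"):
        if term in ac_text and term not in design_text:
            print(f"AC term '{term}' has no matching Figma element -> align naming or scope")

if __name__ == "__main__":
    crosswalk(
        figma_elements=["Archive button", "Archived list (empty state)", "Remove button"],
        criteria=[
            "User can click Archive; record moves to Archived list",
            "User can Deactivate a record",
        ],
    )
```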
Relentless questioning (our primary QA tool)
We deliberately treated the number of clarifying questions asked pre-dev as a success indicator, not a sign of friction.
Typical question themes:
- Sequencing: “Can Feature B ship without Feature A? If yes, what default does B assume?”
- Data ownership: “Which service is authoritative for status X?”
- Error semantics: “What message do we show when the external system times out vs rejects?”
- Permissions: “Is ‘read-only’ truly view-only, or can they still export?”
- State transitions: “What are the disallowed transitions? (List them explicitly).”
We compiled recurring answers into a living “Behavior Reference” so the same question wasn’t reopened per feature.
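A minimal sketch of what that reference can look like as structured, queryable data rather than a chat scroll; the `BehaviorEntry` schema, the keys, and the sample entry are assumptions for illustration, not our actual document format.

```python
# Hypothetical "Behavior Reference": decided answers keyed by topic so the
# same clarifying question is looked up instead of reopened per feature.
from dataclasses import dataclass

@dataclass
class BehaviorEntry:
    topic: str        # e.g. "error semantics"
    question: str     # the clarifying question, verbatim
    answer: str       # the agreed behavior
    decided_in: str   # story/meeting reference for traceability

REFERENCE = {
    "timeout-message": BehaviorEntry(
        topic="error semantics",
        question="What message do we show when the external system times out vs rejects?",
        answer="Timeout -> retry banner; rejection -> inline validation error.",
        decided_in="STORY-123 refinement",
    ),
}

def lookup(key: str) -> str:
    entry = REFERENCE.get(key)
    return entry.answer if entry else "Not decided yet - ask before dev starts."

print(lookup("timeout-message"))
```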
Delinking (decoupling) features for independent completion
Originally, finishing one story implicitly meant half-implementing several dependencies (and leaving partial hidden states). That made acceptance impossible.
What we changed:
- Identified natural seams (API endpoint boundaries, domain aggregates, UI vertical slices).
- Defined the “Minimal Acceptable Unit” per feature: the smallest user outcome testable without latent placeholders.
- Introduced stub responses or feature toggles where upstream functionality was not ready (sketched below).
- Adjusted acceptance criteria to exclude cross-feature workflows unless all constituent parts were explicitly in scope.
- Added a “Dependency Statement” to each story: either “None (self-contained)” or an explicit list with readiness signals.
Result: We could mark features done—and tested—without waiting for the whole cluster. Cycle time for individually scoped features dropped because we weren’t reopening accepted stories when adjacent work landed.
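As a concrete illustration of the stub-plus-toggle idea from the list above: the snippet below assumes a hypothetical `FEATURE_A_READY` flag and `fetch_status` helper, and shows how Feature B can be accepted against a documented default before Feature A exists. It is a sketch under those assumptions, not our production code.

```python
# Hedged sketch: a stub behind a toggle lets Feature B be finished and
# accepted before its upstream dependency (Feature A) ships.
import os

FEATURE_A_READY = os.getenv("FEATURE_A_READY", "false") == "true"

def fetch_status(record_id: str) -> str:
    if not FEATURE_A_READY:
        # Stubbed default documented in Feature B's Dependency Statement:
        # until the upstream status service exists, every record is "active".
        return "active"
    return real_status_service(record_id)  # wired in once Feature A lands

def real_status_service(record_id: str) -> str:
    raise NotImplementedError("Feature A not delivered yet")

# Feature B's acceptance check asserts against the stubbed default, so the
# story can be closed without waiting for the whole cluster.
assert fetch_status("rec-42") == "active"
```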
How we knew this was working (qualitative signals)
Instead of deep metrics dashboards, we watched for:
- Fewer mid-sprint design rework pings.
- Stories entering dev with zero open “definition” comments.
- Test cases written once, not rewritten after dev demo.
- PR discussions focused on implementation details—not what the feature should do.
- Designers proactively clarifying states we hadn’t yet asked about (a cultural shift).
Lessons (QA perspective)
- The volume and precision of early questions are directly proportional to late-cycle calm.
- Acceptance criteria are living documents—until they’ve been interrogated against actual design states.
- Delinking is not just architectural—it’s a quality tactic to enable definitive acceptance.
- “Asking a lot” is not antagonistic; it is quality creation performed upstream.
Where we landed
We still test. But by the time we execute tests, behavior is already locked in—in text, design, and scope. The energy spent is verification, not interpretation.
The largest shift: fewer surprises.
Ambiguity no longer sneaks in disguised as “minor follow-up.”
I’m an automation tester who professionally breaks things so users don’t—then stitches quality back together with scripts, sneaky assertions, and an irresponsible amount of coffee. Off the clock, I’m a family-first logistics coordinator and an amateur hockey puck magnet, still chasing preseason glory while timing how fast I can lace skates before someone needs a beer.