Tech corner - 27. August 2025

Exploratory testing and bug bashes for better software


Quality in software isn’t just “no bugs” or “all tests passed.” Real quality means people can trust it, it feels smooth, and it doesn’t surprise them (in a bad way). Even with powerful automation and AI generating tests, datasets, and coverage insights, there’s still a layer that only human eyes, curiosity, and empathy can catch: awkward flows, trust gaps, emotional friction, and subtle inconsistencies.

This guide is a practical playbook for anyone shaping a product who cares about how it actually feels in real hands. Two low-overhead, people-powered habits lift quality fast and complement (not replace) automation and AI tooling: Exploratory Testing and Testing Parties (a.k.a. Bug Bashes). Let’s unpack them without jargon.

Why “just run the tests” isn’t enough

Automated tests are great—they catch issues we already expect might go wrong. But many real problems slip through because:

  1. Users behave in ways no script imagined
  2. Designs change late
  3. Features interact in odd ways
  4. Different devices, languages, or network speeds expose hidden issues
  5. Emotional/trust questions (“Did that save? Can I undo?”) aren’t asserted

That’s where curious humans shine.

Exploratory Testing (think: smart poking around)

Exploratory Testing = Learning about the feature while trying it, jotting down what you notice, and following interesting threads.

It’s NOT random clicking. It’s more like asking:

  1. “What happens if I do this twice?”
  2. “What if my internet is slow?” (see the sketch after this list)
  3. “What if I’m not logged in?”
  4. “Can I do things out of the expected order?”
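
The “slow internet” question, by the way, doesn’t require hunting for bad hotel Wi-Fi. If a browser-automation tool is already lying around, you can script the conditions and then explore by hand. Here’s a minimal sketch assuming Playwright with Chromium; the URL and throughput numbers are placeholders, not recommendations:

```ts
import { test } from '@playwright/test';

// Sketch: throttle the network via the Chrome DevTools Protocol, then hand
// control back to a human for exploratory poking. Assumes Chromium.
test('explore the checkout on a slow connection', async ({ page }) => {
  const cdp = await page.context().newCDPSession(page);
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                  // extra round-trip time in ms
    downloadThroughput: 50 * 1024, // roughly 50 kB/s down
    uploadThroughput: 20 * 1024,   // roughly 20 kB/s up
  });
  await page.goto('https://example.com/checkout'); // placeholder URL
  await page.pause(); // run headed so the Inspector opens and you can explore by hand
});
```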

Why it’s powerful:

  1. Finds surprises early
  2. Builds shared understanding (great for new team members)
  3. Generates ideas for what SHOULD later become automated tests
  4. Improves the “feel” of the product

Simple recipe:

Pick a small goal (a “charter”):

Example: “Try cancelling an order after it’s paid and see if anything weird happens.”

Then:

  1. Timebox it (e.g. 45 minutes).
  2. Take quick notes (What did you try? What felt off? What broke?).
  3. Share what you found (even “I expected X but got Y” is useful).
  4. Suggest: “This scenario should be automated later.”

You don’t need fancy tools. A shared doc or note template like this works:

  1. Goal:
  2. What I tried:
  3. What felt good:
  4. What felt confusing:
  5. Problems found (with steps):
  6. Ideas for improvement:
  7. Should we automate this? (Yes/No)
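
When the answer to that last question is “yes”, the notes often translate almost line by line into a scripted check. Here’s a hedged Playwright sketch for the cancel-after-payment charter above; the URL, button labels, and messages are invented for illustration, not taken from a real product:

```ts
import { test, expect } from '@playwright/test';

// Sketch: promote an exploratory finding into an automated check.
// Everything below (route, labels, messages) is hypothetical.
test('a paid order can be cancelled and the user sees clear feedback', async ({ page }) => {
  await page.goto('https://example.com/orders/12345'); // hypothetical paid order

  await page.getByRole('button', { name: 'Cancel order' }).click();
  await page.getByRole('button', { name: 'Confirm cancellation' }).click();

  // The exploratory session cared about trust, so assert on what the
  // user actually sees, not just on the API response.
  await expect(page.getByText('Your order was cancelled')).toBeVisible();
  await expect(page.getByText('Your refund is on its way')).toBeVisible();
});
```

The exact selectors don’t matter; what matters is that the assertion captures what the exploratory session actually cared about: the feedback the user sees.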

Testing Parties (bug bashes): group exploration

A Testing Party is a short team event (usually 1–2 hours) where people from different roles try the product together (developers, designers, support folks, product managers, maybe even marketing). Fresh perspectives compress weeks of scattered feedback into a single energetic session.

Why bother?

  1. Fresh eyes spot confusing flows
  2. You uncover “oh wow, that’s not obvious” moments
  3. You get many perspectives at once instead of waiting weeks
  4. People feel more ownership of quality

When they’re helpful

  1. Right before a big launch
  2. After a major redesign
  3. When integrating a new payment/service
  4. When onboarding new team members

How to run one

Before:
  1. Decide what areas to focus on (don’t try the whole product).
  2. Provide test accounts / sample data.
  3. Share how to report issues (Slack thread, form, or ticket template).

Kickoff (10–15 min):
  1. Show what’s new.
  2. Explain what “a good issue” looks like (clear steps, expected vs actual; see the example after this list).

The session:
  1. Encourage themes: mobile use, first-time experience, breaking form inputs, acting like a new user, etc.

Wrap up:
  1. Highlight top issues.
  2. Thank participants (maybe shout out the most helpful find).

After:
  1. Decide what to fix now vs later.
  2. Convert repeatable issues into automated tests.
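
To make “a good issue” concrete, here’s what one might look like, in the same spirit as the exploratory note template (every detail below is invented for illustration):

  Title: Cancel button stays disabled after a failed payment
  Steps: add any item to the cart, pay with the declined test card, return to the order page
  Expected: I can retry the payment or cancel the order
  Actual: both buttons stay greyed out until I reload the page
  Environment: Chrome on Android, staging test account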

Folding these into everyday work (without overhead)

Instead of treating this as “extra work,” bake it in:

  1. Do a tiny exploratory session before merging a risky feature.
  2. Schedule one Testing Party per major milestone (e.g., once per sprint or before marketing launch).
  3. After an exploratory session, pick the top 2–3 scenarios to automate next.

What to measure (keep it simple)

Helpful:

  1. How soon we spot serious issues (earlier is better)
  2. How many problems “escape” to users vs how many we catch ourselves
  3. Whether repeated problems turn into automated checks

Not Helpful:

  1. Raw bug counts (reward quality, not volume)
  2. Bragging about “95% test coverage” if users still struggle

Final takeaway

Exploratory Testing and Testing Parties:

  1. Are simple to start
  2. Build shared understanding
  3. Catch the kinds of issues scripts miss
  4. Turn “quality” into a team habit

Start small this week. One charter. One shared session. Then grow from there.

Author
Matus Hajdu

I’m an automation tester who professionally breaks things so users don’t—then stitches quality back together with scripts, sneaky assertions, and an irresponsible amount of coffee. Off the clock, I’m a family-first logistics coordinator and an amateur hockey puck magnet, still chasing preseason glory while timing how fast I can lace skates before someone needs a beer.
