Judging Criteria and Event Procedure


Event Duration

  • There will be an initial (optional) 30-minute meet and greet and orientation
  • This is part of the 3 hours of competition time, which also includes sending in the deliverables

That’s a Long Time!

Not really. You’ll spend the first few minutes reading the information, then interacting with the “customer”, deciding what to test, coming up with a strategy, doing the actual testing, and filing bugs.

About halfway through the exercise, you’ll need to start thinking about the formal test report that is due at the end of the contest, and how to make sure you have it turned in before the time runs out.

We suspect you’ll be pretty busy.

Software under Test (SUT)

The testing period is three hours (3h). The actual SUT will be announced at the start of the period to guarantee fair play for all teams.

Neither in Unicorn land nor in the real world will it be possible to test everything.

Your team will have to choose how to invest its time: what to test, what not to test, and how to approach it.

The final deliverables will be the issues you’ve found (make sure you log them as you find them) and a test report focusing on major issues, designed to inform the “customer” of the status of the SUT and to suggest fixes and their importance (bug advocacy).

Hold on! You’re probably asking, “What is important? What do my customers care about?”

Interacting with Customers

For three hours, Matt Heusser, our lead judge, will act as the “customer” on audio and video; you can ask questions using the comment and chat features on YouTube.

For countries where this is not possible, we will provide an alternative. That alternative will then apply to the entire region to guarantee a level playing field for all teams in that region.

Use the event’s YouTube site to ask questions and receive answers about priority/risk, consequences of failure, project details, and so on. We suggest that after reading the initial information, your team spend a couple of minutes coming up with questions, then jump onto the stream to interact with the “customer”. The quality of that conversation will be part of your score!

Deliverables

Teams will produce two major deliverables: bug reports and a test report. Since this is an international event and the judges come from all continents, we will need the deliverables in English.

The bug reports should be straightforward; each team will have directions on how to file bugs emailed to them in advance.

The content and format of the report is up to each team, but it should help the decision maker figure out whether the product is ready to “ship”, whether it needs more fixes, what to invest in next, and so on. Ideally, the report would also include how you decided what to test and how your team spent its time.

For example, your report might start by describing the state of the SUT, then cover what you felt was most important to test, followed by a few details on your strategy and the major issues you found.

At the end of the functional competition, you will need to email your report with the subject line “Functional Test Report for [Team_Name]”. (The email address will be provided to the teams beforehand.)

We expect to receive it “shortly after” the official end of the competition. Ideally, teams will allocate some time for this within their time box. We are willing to give a grace period of 5 minutes to receive it.

Judging Criteria

The total regular score possible for the entire competition is 100 points. Functional test categories include the importance of the bugs filed, the quality of the bug reports, how reproducible those reports are, and the quality of the test report. A small number of bonus points is available for how the teams interact with the judges, and for teamwork, if visible.

  • Up to 20 points for being on mission – did you file the right kind of bugs, bugs we care about?
  • Up to 20 points for the quality of bug reports – writing style, are they compelling, do they make sense, are they reproducible?
  • Up to 20 points for the quality of your test report
  • Up to 20 points for the accuracy of your test report – did you warn us about the right issues?
  • Up to 20 points for non-functional testing, should teams decide to do so
  • Bonus of up to 10 points for interacting with judges and the “customer”

This lets us identify winners in multiple categories and gives us an objective way to measure each team.

You

You should now have everything you need to get out there and test! If not, leave a comment. See you on YouTube, in the comments, and in emails.

But wherever we see you, we hope to see you … testing.

If you can’t play, check back over the next few weeks for the scores, awards, and after-action reports; we think it’s going to be quite a show.

We are going to put up a separate info page for the final round at the Agile Testing Days in Potsdam, Germany.

Sign up for the latest news!