[Scummvm-devel] Bring testers to the team?
Max Horn
max at quendi.de
Thu Jul 23 14:11:24 CEST 2009
On 23.07.2009, at 09:56, Pierre-Yves Gérardy wrote:
> Some thoughts regarding the testing process.
>
> There are at the moment 130 games or game paths (counting 5 for MM
> and 3 for Indy4) to test, twice, and it's going to explode once more
> when SCI becomes supported.
>
> The number of testers required for this is probably larger than what
> you can recruit through the ScummVM forums.
That's true. In fact, the number would have to be multiplied by N > 8,
where N is the number of ports we would want to test each game on :).
However, luckily we do not need to test all these games, nor all of
them on each platform. Rather, a well-chosen subset of each during each
release would already give much better coverage than our current
semi-random testing yields. In fact, I dare say that a truly random
set of games being tested would already be better; as it is, I think
some games are tested much more frequently than others, simply because
they are more popular... ;)
Testing games serves at least two purposes:
1) Finding regressions in the code of that game/engine
2) Finding regressions in the port.
For example, a change in the SCUMM code may cause a bug in the credit
sequence of Sam&Max; playtesting is required to find that.
OTOH, the simple fact that we keep adding code to our common code
infrastructure may cause problems on the Nintendo DS, due to the
increased code size alone. (In particular, an engine that was not
touched at all might regress, but only on a few platforms.)
Ideally, we'd test all games on all platforms, each release. But
that's impossible for us, and even if we had lots of money to pay
professional testers, it would be an arduous task. Since we cannot
solve this the brute-force way (lack of resources), we need to be
clever about it. To me that means selecting good combinations of
games & platforms and getting these tested, to maximize our coverage.
By testing a well-chosen sample of all game & platform combinations,
we should be able to catch a majority of issues in advance --
hopefully more than we do currently. Of course, this would still be a
big task; and we would still benefit greatly from being assisted by
automated playtesting.
Anyway, here are some criteria by which we should choose
games & platforms for testing (if we had the liberty to do so, at
least); a rough sketch of one possible selection heuristic follows the
list:
* Try to test at least one game from each engine each release, no
matter on which platform
* Focus on primary variants of games first. Yes, we also want the
uber-rare special versions so loved by geeks to work; but first we
should make sure that the versions most people have and play work fine
* Test at least one game on each *port*
* Give priority to games from engines that changed a lot. Next, games
from engines that only changed a bit. Last, games from engines which
didn't change at all.
* ...
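Just to make that concrete, here is a rough sketch (Python, with a
made-up handful of games, ports, and per-engine "churn" numbers; none
of this reflects the real game list or any existing tooling) of how
such a selection heuristic could work: cover every engine once, cover
every port once, then spend the remaining slots on games from the
engines that changed the most.

    import random

    # A tiny, made-up subset of (game, engine) pairs and ports -- purely
    # illustrative, not the actual supported-games list.
    GAMES = [
        ("Monkey Island 2",            "scumm"),
        ("Sam & Max",                  "scumm"),
        ("Beneath a Steel Sky",        "sky"),
        ("Broken Sword 1",             "sword1"),
        ("Flight of the Amazon Queen", "queen"),
    ]
    PORTS = ["win32", "linux", "nds", "psp", "wii"]

    # Rough measure of how much each engine changed since the last
    # release (e.g. commits touching it); again, invented numbers.
    ENGINE_CHURN = {"scumm": 120, "sky": 3, "sword1": 0, "queen": 15}

    def pick_assignments(n_extra=5, rng=random):
        picks = []
        # 1) At least one game from every engine, on some port.
        for engine in sorted({e for _, e in GAMES}):
            game = rng.choice([g for g, e in GAMES if e == engine])
            picks.append((game, rng.choice(PORTS)))
        # 2) At least one game on every port not yet covered.
        covered = {port for _, port in picks}
        for port in PORTS:
            if port not in covered:
                picks.append((rng.choice(GAMES)[0], port))
        # 3) Extra picks, biased towards engines that changed the most.
        weights = [ENGINE_CHURN.get(e, 0) + 1 for _, e in GAMES]
        for _ in range(n_extra):
            game, _ = rng.choices(GAMES, weights=weights)[0]
            picks.append((game, rng.choice(PORTS)))
        return picks

    if __name__ == "__main__":
        for game, port in pick_assignments():
            print("%-30s on %s" % (game, port))

A real version would of course have to read the game and port lists
from the compatibility data and the churn from the repository history;
the point is simply that a small, deliberately chosen sample beats
whatever games happen to be popular with testers.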
Also, during all this game testing, I think we keep forgetting to
mention that the ScummVM GUI, launcher and backend UI (e.g. hotkeys)
need to be tested, too ;). Ideally on many different ports.
>
> Obviously, the solution to minimize the testing load would be to
> organize the development process so as to prevent the introduction
> of regressions in games whose code didn't change since the last
> release; in other words, to freeze the ScummVM core as much as
> possible, and build on that until it's no longer possible to do so.
>
> Would it be feasible?
I don't think so. APIs change and engines have to be adapted; and bugs
in engines need to be fixed -- and every bug fix is a change, and
hence has the potential to introduce new regressions.
I am also doubtful whether it would help that much anyway. I mean,
even an engine that isn't changed at all can regress due to changes
in the internal code it relies on.
Bye,
Max