[Scummvm-devel] ScummVM 0.12.0: Missing builds

Max Horn max at quendi.de
Wed Sep 3 10:44:51 CEST 2008


On 02.09.2008 at 20:24, Bertrand Augereau wrote:

> I guess the main problem is the lack of QA.
>
> We have k platforms to test, for m different intermediate revisions
> (beta1, 2, 3) of p games. And few people to test them.
>
> So testing is more of a stochastic process sampling this
> 3-dimensional matrix, hoping to find all significant bugs, and that
> different triplets (bug, platform, engine) are correlated.
>
> I don't really see a good solution though :(
>

Same here :/.
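
For what it's worth, the sampling process you describe is easy to
make concrete. A toy sketch in Python (the platform/revision/game
names are made up, and the real matrix is of course much bigger):

    import itertools
    import random

    # Hypothetical lists; the real sets of ports, betas and games
    # are much larger.
    platforms = ["win32", "linux", "macosx", "nds", "psp", "wii"]
    revisions = ["0.12.0-beta1", "0.12.0-beta2", "0.12.0-beta3"]
    games     = ["monkey1", "sky", "queen", "kyra1"]

    # The full test matrix: every (platform, revision, game) triple.
    matrix = list(itertools.product(platforms, revisions, games))

    # With few testers we can only sample the matrix, hoping that a
    # bug visible in one triple also shows up in correlated ones.
    budget = 10  # number of test runs we can actually afford
    for platform, revision, game in random.sample(matrix, budget):
        print(f"please test {game} with {revision} on {platform}")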

However, let me point out that porters do not have to do all this QA
work alone -- we have plenty of folks in the forums who are willing
to help. Of course, that's not the same as a real QA team, as a
willing tester may not have the right hardware or games for testing.
But I still think that resource could be used a bit more efficiently
-- specifically, by telling people what to test.

Also, we have a Wiki page which represents a stripped-down version of
the testing matrix you mentioned (see
<http://wiki.scummvm.org/index.php/Release_Testing/0.12.0>), but
sadly it's underused -- we used to have a full matrix with status
info for many ports, but somebody has to maintain it. Still, every
porter is welcome to make use of it, to record user feedback & their
own testing results. I am *not* intending to put extra burden on
porters here; I just think it might be a helpful tool to keep track
of what works / needs testing / needs fixing. Use it or don't, your
call.



>
> A robot tester might be nice to catch a certain class of crashes.
> Inputs would be recorded on a PC version and replayed
> non-interactively against the running game.
>
> If we record a video, we can detect graphic or sound glitches.
>
> If we log the replays, we can find when a regression was introduced.
>
>
> Lotsa work though -- we have this kind of tool for our game at
> work, and it was a major but invaluable development effort.
>

Fuzz testing, replay testing, etc. are all great tools, fully agreed
(a rough sketch of the replay idea follows below). The main obstacles
in my eyes are that we'd need
(a) to get hardware to run this on, automated, and
(b) to set up the tools to do this.
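
To make the record/replay idea a bit more concrete, here is a minimal
sketch. Note that poll_events() and inject_event() are hypothetical
backend hooks -- ScummVM exposes no such API today, this only
illustrates the mechanism:

    import json
    import time

    def record(backend, path):
        # Log every input event with a timestamp while a human plays.
        start = time.time()
        with open(path, "w") as log:
            for event in backend.poll_events():  # hypothetical hook
                json.dump({"t": time.time() - start, "event": event},
                          log)
                log.write("\n")

    def replay(backend, path):
        # Feed the logged events back at the recorded times, with no
        # human in the loop; a crash during replay is a caught bug.
        start = time.time()
        with open(path) as log:
            for line in log:
                entry = json.loads(line)
                delay = entry["t"] - (time.time() - start)
                if delay > 0:
                    time.sleep(delay)
                backend.inject_event(entry["event"])  # hypothetical

If the logs are kept around, re-running old logs against new builds
would also give us the regression detection Bertrand mentions.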

I am aware of free solutions for this on Windows and Linux, maybe
also Mac OS X; but I am not aware of any affordable solution for our
custom ports. If you know of any, I am eager to hear about it!

The "big three", already get quite good test coverage by our many  
users, who actively participate during the testing phase. I think it's  
the many custom ports (esp. those with custom backends, like DS, DC,  
PSP, PS2, Wii, PalmOS, WinCE, Symbian, ...) which would profit from  
automated testing the most. Sadly, it seems they are the hardest to be  
tested.

The only thing I can think of is the idea of an automated compile
server, which I raised several months ago. We now have more money
than we had back then; we could definitely afford a full server for
this. Though *I* do not have the resources (read: spare time) to set
it up and maintain it, even with help from porters for setting up the
various compile chains. Anyway, I think that automated build tests
will by themselves only solve a tiny fraction of the problem.
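
Just to illustrate, the core loop of such a server could be as simple
as the following sketch (the port names and configure flags here are
invented placeholders, not the real options of our configure script;
each cross toolchain would of course need real setup work):

    import subprocess

    # Hypothetical port list with placeholder configure invocations.
    ports = {
        "linux": ["./configure"],
        "nds":   ["./configure", "--host=ds"],
        "psp":   ["./configure", "--host=psp"],
    }

    for name, configure in ports.items():
        subprocess.run(["make", "distclean"], check=False)
        ok = (subprocess.run(configure).returncode == 0 and
              subprocess.run(["make"]).returncode == 0)
        print(name, "build OK" if ok else "BROKEN")
        # A real server would mail the log to the list on breakage.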

A potentially good side effect of automated builds would be that we
could get more testing between releases, but that would require
porters to keep their builds working between releases, yet many
porters only update their ports just before a release.


Bye,
Max




