[Scummvm-devel] Thoughts on new save/load system

Max Horn max at quendi.de
Sun Sep 22 07:14:02 CEST 2002


So we have been discussing recently that ScummVM really should 
get a new, more flexible savegame system (the Scumm part, that is; 
this whole mail only talks about Scumm games, not Simon, though in 
theory Simon could benefit from this, too).

Here is my suggestion on a) how it could look from the API side and 
b) how it could be implemented. This is just an idea; I would 
welcome other, better ideas :-)

I am designing this top-down: first I think about how I would like 
the API to be used, then I'll discuss how to implement it in detail.

Some properties I think the system should have:

* EXTENSIBLE: it must be possible to add/remove fields w/o breaking 
compatibility with older save games (that is, newer Scumm versions 
should be able to load older save games; the other way around would 
be nice to have but is not a strict requirement)

* OBJECT ORIENTED: instead of bundling the whole save/load code in 
one big function (as it is now, more or less), every class/object 
should know how to stream/unstream itself (Actor, ObjectData, Scumm)


My suggestion for the first property would be to use a key based 
approach. That is, data is stored using a key that uniquely 
identifies it. Keys only have to be unique within the context of one 
object/class, so if you have two classes, they can both contain a 
data element with key "x". Keys can be ASCII strings, but they could 
be ints just as well.


How the API could look like
===========================
There would be two classes, Archiver and Unarchiver (we could also 
group this into one class, whatever). Also, we would add an interface 
/ mixin class, "Serializable" or "Archivable". Then e.g. class Actor 
would inherit from that class. It would implement at least the 
following two methods:

void Actor::readFromArchiver(Archiver &archiver)
{
   x = archiver.readInt("x");
   y = archiver.readInt("y");
   ...
   width = archiver.readUnsignedInt("width");
   ...
   needRedraw = archiver.readBool("needRedraw");
   ...
   archiver.readObject(cost, "cost");
   ...
}

void Actor::writeToArchiver(Archiver &archiver) const
{
   archiver.writeInt(x, "x");
   archiver.writeInt(y, "y");
   ...
   archiver.writeBool(needRedraw, "needRedraw");
   ...
   archiver.writeObject(cost, "cost");
   ...
}

That is easy to read and understand, I think. The master save method 
then just has to create an archiver object and perform a couple of 
writeObject calls on it. It could use a writeInt/writeString call at 
the start to store a version number, for those cases where despite 
all efforts we have to break compatibility again.
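
By the way, the Archivable mixin itself could be tiny. A minimal 
sketch, assuming the names used above (the exact const placement is 
just my guess):

class Archivable {
public:
   // unique per class; set by the subclass, used when unarchiving
   // to verify that the stored object has the expected type
   int classTypeID;

   virtual void readFromArchiver(Archiver &archiver) = 0;
   virtual void writeToArchiver(Archiver &archiver) const = 0;
   virtual ~Archivable() {}
};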


Note that we never call the readFromArchiver() and writeToArchiver() 
methods directly. Only the Archiver object will do that.

So what happens behind the scenes? Well, the archiver is always in a 
certain "context" - that is the context of the current object (the 
one most recently passed to writeObject; we need a stack for that). 
For that context, we maintain a Map of keys to values (where values 
are instances of a simple class/struct that stores size, type and 
data of each value; e.g. size=4, type=sleInt, data=pointer to an int).
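
To make that concrete, here is a sketch of what Value and Map might 
look like. std::map is used purely for illustration, and proper 
copy/destructor handling of the data pointer is omitted for brevity:

#include <cstdlib>
#include <cstring>
#include <map>
#include <string>

// type tags for plain values; object values use a classTypeID instead
enum ValueType { sleInt, sleBool, sleString };

struct Value {
   int size;     // size of the data in bytes
   int type;     // sleInt, sleBool, ... or a classTypeID
   void *data;   // a private copy of the raw bytes

   Value() : size(0), type(0), data(0) {}
   Value(int s, int t, const void *d) : size(s), type(t) {
      data = malloc(s);
      memcpy(data, d, s);   // copy the bytes, so callers may pass locals
   }
};

// a tiny wrapper providing the interface used in this mail
struct Map {
   std::map<std::string, Value> entries;

   bool contains(const char *key) const {
      return entries.find(key) != entries.end();
   }
   Value &operator[](const char *key) { return entries[key]; }
};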

The writeInt/writeBool/... methods all simply insert a new value 
entry into that map. At the end of writeObject, the map and its 
entries are written out.

OK, let's get a bit more concrete. This is how writeInt and 
writeString might look:

void Archiver::writeInt(int value, const char *key)
{
   value = TO_LE_32(value);
   currentMap[key] = Value(4, sleInt, &value);   // Value copies the data
}

void Archiver::writeString(const char *str, const char *key)
{
   // store the terminating 0 as well, so reading back is trivial
   currentMap[key] = Value(strlen(str) + 1, sleString, str);
}

And reading is done in a similar fashion:

int Archiver::readInt(const char *key, int defaultValue = 0)
{
   if (currentMap.contains(key)) {
      const Value &value = currentMap[key];
      assert(value.type == sleInt);
      assert(value.size == 4);
      return FROM_LE_32(*(int *)(value.data));
   }
   return defaultValue;
}
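
A string counterpart might look much the same (a sketch; it relies 
on writeString having stored the terminating 0 along with the 
characters):

const char *Archiver::readString(const char *key, const char *defaultValue = "")
{
   if (currentMap.contains(key)) {
      const Value &value = currentMap[key];
      assert(value.type == sleString);
      return (const char *)value.data;
   }
   return defaultValue;
}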

...

Simple, huh? Now writeObject is a bit more complex:


void Archiver::writeObject(const Archivable &object, const char *key)
{
   // Save the parent's key map and switch to a fresh one
   Map oldMap = currentMap;
   currentMap = Map();

   // Now tell the object to archive itself
   object.writeToArchiver(*this);

   // currentMap now contains all the data we need to stream out.
   // We have to convert it to a Value. For this we add a constructor
   // to class Value that does that. We could also put that into a
   // function, of course, I just want to hide the implementation
   // details here. To be able to recreate the object later on, we
   // have to assign a unique classTypeID to each class we need to
   // stream.
   Value value(object.classTypeID, currentMap);

   // Back to the old key map
   currentMap = oldMap;

   // Finally store it in the parent's key map
   currentMap[key] = value;
}
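
The "magic" constructor used above could e.g. flatten the nested map 
into the value's data bytes. A sketch, assuming a hypothetical 
in-memory stream plus the writeMapToStream helper sketched further 
below:

Value::Value(int classTypeID, const Map &map)
{
   type = classTypeID;

   // serialize the nested map into a memory buffer, then keep a
   // copy of those bytes as this value's data
   MemoryStream buf;               // hypothetical in-memory stream
   writeMapToStream(buf, map);
   size = buf.size();
   data = malloc(size);
   memcpy(data, buf.ptr(), size);
}

The matching Map(value) constructor used below just parses those 
bytes back into key/value pairs.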


And reading an object is done like this:

void Archiver::readObject(Archivable &object, const char *key)
{
   // For now no default values for objects - could be changed
   assert(currentMap.contains(key));
   const Value &value = currentMap[key];

   // Make sure this object is of the same class
   assert(value.type == object.classTypeID);

   // Save the parent's key map, then switch to the one stored in the
   // value. Yet another magic constructor that converts a value back
   // into a map; could also be a function instead, doesn't matter.
   Map oldMap = currentMap;
   currentMap = Map(value);

   // Now tell the object to unarchive itself
   object.readFromArchiver(*this);

   // Back to the old key map
   currentMap = oldMap;
}


I don't cover here how to convert a Map into a binary stream and 
back. However, that isn't really a big problem: you simply stream out 
the key/value pairs in order; to recreate the map, you read each key 
& value and insert them into the map you are building. Voilà.
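
To illustrate, here is a sketch of such a conversion in terms of the 
Map/Value sketch from above. The Stream class with its 
writeUint32/writeBytes (and matching read) methods is hypothetical, 
not an existing ScummVM API:

void writeMapToStream(Stream &out, const Map &map)
{
   out.writeUint32(map.entries.size());   // number of key/value pairs
   std::map<std::string, Value>::const_iterator i;
   for (i = map.entries.begin(); i != map.entries.end(); ++i) {
      const std::string &key = i->first;
      const Value &value = i->second;
      out.writeUint32(key.size());        // key, length-prefixed
      out.writeBytes(key.data(), key.size());
      out.writeUint32(value.type);        // value: type, size, raw bytes
      out.writeUint32(value.size);
      out.writeBytes(value.data, value.size);
   }
}

void readMapFromStream(Stream &in, Map &map)
{
   uint32 count = in.readUint32();
   for (uint32 j = 0; j < count; j++) {
      std::string key(in.readUint32(), '\0');   // key length, then key
      in.readBytes(&key[0], key.size());
      int type = in.readUint32();               // then type, size, data
      int size = in.readUint32();
      char *tmp = new char[size];
      in.readBytes(tmp, size);
      map[key.c_str()] = Value(size, type, tmp);   // Value copies tmp
      delete[] tmp;
   }
}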


The only thing still missing is the topmost level of the code, 
namely where the data is written to / read from the file.


void mySaveGame()
{
   Archiver archiver("path/to/my/file");
   archiver.open();
   archiver.writeInt(1, "version");
   archiver.writeObject(g_scumm, "Scumm");
   archiver.close();
}

void myLoadGame()
{
   Archiver archiver("path/to/my/file");
   archiver.open();
   int version = archiver.readInt("version");
   archiver.readObject(g_scumm, "Scumm");
   archiver.close();
}


The data is written out by the close() method in mySaveGame. 
Essentially, only the very topmost map has to be converted to a 
binary stream (we already have the ability to do that), which is 
then written to the data file. And for loading, open() does the work 
of reading in the topmost map.
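
In terms of the sketches above, that might boil down to something 
like this (_file, _path and _writing are made-up members, and the 
Stream helpers are the hypothetical ones from before):

void Archiver::close()
{
   if (_writing)
      writeMapToStream(_file, currentMap);   // stream out the topmost map
   _file.close();
}

void Archiver::open()
{
   _file.open(_path, _writing);
   if (!_writing)
      readMapFromStream(_file, currentMap);  // read in the topmost map
}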


I think that should cover it. Please feel free to point out all the 
obvious flaws I missed :-) Also feel free to ask questions if you are 
not sure how a certain detail is supposed to work.



OK, so now there was some concern about save game sizes. Currently 
we are at 35-100KB per save game. This new approach will definitely 
increase the size; depending on how long we choose the key names, it 
could easily double or triple. However, that would still be OK IMHO. 
And it would be trivial to use zlib (which is open source, very 
small, easy to use and available on nearly every system) to compress 
our data - given the structure of the data it should compress well 
and might in the end be smaller than what we have currently.
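
For illustration, zlib's one-shot compress() is about all it would 
take; a sketch (actually writing the result to the save file is 
left out):

#include <zlib.h>

bool compressSaveData(const unsigned char *src, unsigned long srcLen)
{
   // per the zlib docs, the output buffer must be slightly larger
   // than the input: 0.1% plus 12 bytes
   unsigned long dstLen = srcLen + srcLen / 1000 + 12;
   unsigned char *dst = new unsigned char[dstLen];

   int result = compress(dst, &dstLen, src, srcLen);
   // ... on success, write dstLen and then dst[0..dstLen-1] ...

   delete[] dst;
   return result == Z_OK;
}

Loading would use uncompress() the same way, provided we store the 
uncompressed size along with the compressed data.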



Cheers,

Max
-- 
-----------------------------------------------
Max Horn
Software Developer

email: <mailto:max at quendi.de>
phone: (+49) 6151-494890



