More serialization this week, but there is a new twist this time.

Generic glitch

Last time I was implementing a proof of concept for automatically generating SaveObject types from any structs/classes. This was going fine until I hit generics and collections. The duality of "simple type, no need for SaveObject" and "complex type, needs SaveObject" ended up being quite problematic, especially for cases where e.g. I have a Dictionary<simple, complex>, a Dictionary<complex, simple> or a Dictionary<complex, complex>. C# generics leave something to be desired, and I'm no expert in them either, so this posed quite a problem. Now what?
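To make the problem concrete, here's a hypothetical sketch (the type names are made up for illustration) of why the simple/complex split doesn't compose well with collections:

```csharp
using System.Collections.Generic;

// Hypothetical types, for illustration only.
public class Creature                   // "complex": needs a generated SaveObject
{
    public string Name = "";
    public List<Creature> Minions = new();
}

public class CreatureSaveObject         // what the generator would emit
{
    public string Name = "";
    public List<CreatureSaveObject> Minions = new();
}

public class GameState
{
    public Dictionary<int, Creature> ById = new();   // simple key, complex value
    public Dictionary<Creature, int> Ranks = new();  // complex key, simple value
}

// The generated counterpart needs a different mapping per combination,
// and generic code can't easily express "replace T with TSaveObject,
// but only when T is complex" in C#.
public class GameStateSaveObject
{
    public Dictionary<int, CreatureSaveObject> ById = new();
    public Dictionary<CreatureSaveObject, int> Ranks = new();
}
```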

Take two: Automatic cloning into different namespace

Well, because of my faffing with the code analysis libraries, I realised there was another way: I can parse the codebase, detect all types that I want versioned, and re-create them in a different, versioned namespace (e.g. from foo.bar.SomeType to v001.foo.bar.SomeType). This is possible, and after several hours I made a proof of concept that runs on the codebase. By adding an additional attribute, [VersionedSaveable], I could move a type to this new namespace. There's a lot of associated work though, as I only want to preserve serialized data. This means killing off constructors, method implementations, interface derivations (as no methods are implemented) etc. A sketch of what the generated clone might look like:
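The types and attribute below are hypothetical stand-ins, just to show the before/after of the transformation:

```csharp
using System;

// Minimal stand-ins so the sketch is self-contained (both are hypothetical).
public interface IUpdatable { void Update(float dt); }
public class VersionedSaveableAttribute : Attribute { }

// Original type, marked for versioned cloning.
namespace foo.bar
{
    [VersionedSaveable]
    public class SomeType : IUpdatable
    {
        public int Health;
        public string Name = "";
        public SomeType(int health) { Health = health; }
        public void Update(float dt) { /* game logic */ }
    }
}

// Generated clone: data only. Constructors, method implementations and
// interface derivations are stripped, since only serialized state matters.
namespace v001.foo.bar
{
    public class SomeType
    {
        public int Health;
        public string Name = "";
    }
}
```

But I stopped somewhere there, as there's more nontrivial work to be done: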

  • What if I change a [MemoryPackable] type? How do I detect that? Such a change would trigger a major version change and incompatibility, because such changes are not versioned.
  • Another complication is code migration: I still have to automatically generate code that migrates from one version to another.
  • Another bit of code that's a bit complicated to write: I need to be able to detect a "type signature" in terms of serialized data, so that I can tell which types have changed and which haven't (a sketch of such a signature follows this list). This creates the following nightmare: say version v001 has been generated for types A, B and C, and A contains a class D that hasn't changed. But later on, types C and D change, which results in a new version v002 and an associated set of types. Now all occurrences of D in namespace v001 would need to change to v001.D, because the D type has changed. This sounds a bit like dependency hell.
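For the "type signature" part, here's a minimal sketch of the idea, assuming reflection over public fields is a good-enough proxy for serialized data (it isn't quite, but it shows the shape of the problem): a type's signature recursively includes the signatures of the types it contains, which is exactly why a change in D ripples up into A.

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Security.Cryptography;
using System.Text;

public static class TypeSignature
{
    // Hash the "serialized shape" of a type: field names and types, recursively.
    public static string Compute(Type type)
    {
        var sb = new StringBuilder();
        Append(type, sb, depth: 0);
        byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes(sb.ToString()));
        return Convert.ToHexString(hash);
    }

    static void Append(Type type, StringBuilder sb, int depth)
    {
        if (depth > 8) { sb.Append("..."); return; } // crude guard against cycles
        sb.Append(type.FullName).Append('{');
        foreach (var field in type.GetFields(BindingFlags.Instance | BindingFlags.Public)
                                  .OrderBy(f => f.Name))
        {
            sb.Append(field.Name).Append(':');
            if (field.FieldType.IsPrimitive || field.FieldType == typeof(string))
                sb.Append(field.FieldType.Name);
            else
                Append(field.FieldType, sb, depth + 1); // nested types pull in their dependencies
        }
        sb.Append('}');
    }
}
```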

So, this namespace-types idea starts to show lots of rabbit holes that I really, really don't want to descend into. How do I avoid all that?

Take three: KISS and versioned MemoryPackable

Ok, let's backtrack a bit and see what the main issues are with using MemoryPack as-is:

  • No support for weak references. I have to deal with them in a custom way no matter what approach I choose.
  • Limited version support. Assuming that my approaches will not be ultra-optimised anyway, what if I use the "full versioning" support of MemoryPack? Its characteristics are (illustrated in the sketch after this list):

    • an unmanaged struct can't change any more. This is fine, as I use my own unmanaged structs very infrequently, for things that don't really change much.
    • all members must add [MemoryPackOrder] explicitly (except when annotated with SerializeLayout.Sequential). This is tedium, but better tedium than code maintenance.
    • members can be added or deleted, but an order can't be reused (a missing order can be used). That's fine.
    • member names can change. That's convenient.
    • member order can't change. That's fine, no reason to mess with order.
    • member types can't change. That's fine, as it's rather unlikely.

  • Bespoke version migration. So, what about migrating from a previous version? If I've added a new member, I can just set an appropriate default value. If the value depends on other serialized data, MemoryPack provides callbacks before/after serialization and deserialization; for example, I can implement a post-deserialization callback that sets any new members based on other, loaded members (see the sketch after this list).

  • Can't handle polymorphism with non-abstract base classes. Well, I'll need to refactor my code to solve this issue, but it's not huge; I think I have fewer than 10 class hierarchies like that.
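Putting the versioning and migration points together, here's a minimal sketch of a version-tolerant type with a post-deserialization callback filling in a newly added member. The type and its members are made up; [MemoryPackable], [MemoryPackOrder] and [MemoryPackOnDeserialized] are real MemoryPack attributes.

```csharp
using MemoryPack;

// Hypothetical game type using MemoryPack's "full versioning" mode.
[MemoryPackable(GenerateType.VersionTolerant)]
public partial class Enchantment
{
    [MemoryPackOrder(0)]
    public int Power { get; set; }

    [MemoryPackOrder(1)]
    public float Duration { get; set; }

    // Added in a later version; older saves simply won't contain it.
    [MemoryPackOrder(2)]
    public int StackCount { get; set; }

    // Post-deserialization callback: derive a sensible value for the
    // new member from data that was actually loaded.
    [MemoryPackOnDeserialized]
    void OnDeserialized()
    {
        if (StackCount == 0 && Power > 0)
            StackCount = 1;
    }
}
```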

Also, thanks to threads like the following for making me aware of the existence of Steam branches and how others handle this; I think that's a good approach for major version changes, and it puts a bit less pressure on me to develop a custom monster-system. I love that community and its structured discussions; I hope it doesn't all go down the drain due to the IPO's knock-on effects.

The refactor boogeyman: weak reference type

Alright, after a bit of reverting, I need to resolve some limitations of MemoryPack. One of them is that it doesn't handle classes in classes; that's easy. Another one, which unfortunately ended up being quite a bit more problematic, is the lack of support for WeakReference.

In my code, in the state that is saved, I sometimes store things like effect objects (that are dynamically created), but I also store weak references to such objects in collections (e.g. enchantment collections). An example is the player, who has equipped an item that provides an enchantment, but has also consumed a potion that provides yet another enchantment. These currently active enchantments are stored in a collection of weak (non-owning) references, as the original objects might live in the configuration database, on an item, etc. If I didn't use weak references and I serialized the state, upon loading I'd have different objects for the same thing: one in the collection and one in the original source.

Weak references worked with BinaryFormatter (deserialisation resulted in a single object plus strong/weak references to it), but now they are not supported. Oops, I have to refactor. Trouble is, this has become quite tricky to refactor, so ... I'm working on it! Effects are used a lot in the code, and due to some occasional code smell and a limitation that I realised recently, there might be another refactor there. One possible direction is sketched below.
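A common way out, and roughly the direction I'm considering (a hypothetical sketch, not the final design), is to replace WeakReference with a stable id that gets resolved back to the shared instance after loading:

```csharp
using System.Collections.Generic;
using MemoryPack;

[MemoryPackable(GenerateType.VersionTolerant)]
public partial class Effect
{
    [MemoryPackOrder(0)] public int Id { get; set; }
    [MemoryPackOrder(1)] public string Name { get; set; } = "";
}

[MemoryPackable(GenerateType.VersionTolerant)]
public partial class Player
{
    // Owning storage: serialized normally.
    [MemoryPackOrder(0)] public List<Effect> OwnedEffects { get; set; } = new();

    // Non-owning "weak" references: serialized as ids, not object references.
    [MemoryPackOrder(1)] public List<int> ActiveEffectIds { get; set; } = new();

    // Rebuilt after loading; never serialized.
    [MemoryPackIgnore] public List<Effect> ActiveEffects { get; } = new();

    // Called once after deserialization, with a registry built from all
    // owning collections (configuration database, items, etc.).
    public void ResolveReferences(IReadOnlyDictionary<int, Effect> registry)
    {
        ActiveEffects.Clear();
        foreach (var id in ActiveEffectIds)
            if (registry.TryGetValue(id, out var effect))
                ActiveEffects.Add(effect);
    }
}
```

The point is that both the owner and the "weak" referrer end up pointing at the same instance after load, which is what BinaryFormatter used to guarantee for free.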