Last time I gave a brief description of how messaging (and my dirt-simple implementation of it) can help with decoupling. Of course, that was just scratching the surface. So, in this post, a bit more information on how the whole system is put together.
Messages can now also store an explicit message handler. In terms of the example I used last time, the new message would be as follows:
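The original snippet isn't reproduced here, so here's a minimal sketch of what such a message might look like. All the names (`TilesChangedMessage`, `MessageHandler`, `Tile`) are my assumptions based on the surrounding discussion, not the actual code:

```cpp
#include <vector>

// Hypothetical sketch -- type names are assumptions, not the actual code.
struct MessageHandler;  // anything that can receive messages (e.g. a Renderable)

struct Tile { int x = 0, y = 0; };

struct TilesChangedMessage {
    std::vector<Tile> tiles;
    // The new, optional part: an explicit target handler.
    // nullptr means "broadcast to every registered listener".
    MessageHandler* handler = nullptr;
};
```

The important bit is that the handler pointer defaults to null, so existing broadcast-style usage keeps working unchanged.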
So, a slight change allows us to target a message at a particular handler. This is useful where we want to directly affect something from another part of the code without coupling to it, and without introducing abstraction layers. Example:
My test rendering app needs to modify a renderable directly, by setting a bunch of tiles. One option is to introduce a new message, TilesChangedInRenderable(tiles, renderable), but then we'd have both a TilesChanged(tiles) message AND a TilesChangedInRenderable(tiles, renderable) message, and we'd end up repeating that pattern for classes other than Renderables. Since the Renderable is a MessageHandler anyway, I decided to make the adjustment above: we can always optionally provide an explicit handler. If one is provided, the message is handled only by message propagators (e.g. a System) and by the handler in question; otherwise, it is handled by everybody registered to listen for that type of message.
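The dispatch rule above can be sketched in a few lines. Again, this is a minimal illustration of the rule as described, with hypothetical names, not the actual implementation:

```cpp
#include <vector>

// Minimal sketch of the dispatch rule described above; all names here are
// hypothetical, not taken from the actual codebase.
struct Message;

struct MessageHandler {
    virtual ~MessageHandler() = default;
    virtual void Handle(const Message& msg) = 0;
    // Propagators (e.g. a System) receive targeted messages too, so they can
    // forward them onwards.
    virtual bool IsPropagator() const { return false; }
};

struct Message {
    MessageHandler* handler = nullptr;  // nullptr => broadcast to everyone
};

void Dispatch(const Message& msg, const std::vector<MessageHandler*>& listeners) {
    for (MessageHandler* h : listeners) {
        // Targeted message: only propagators and the explicit handler see it.
        // Untargeted message: every registered listener sees it.
        if (msg.handler == nullptr || h->IsPropagator() || h == msg.handler)
            h->Handle(msg);
    }
}
```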
Disclaimer: rendering is always in flux -- I'm trying to get something generic, extensible and easily editable working together, and that's no easy feat.
Summary of rendering so far:
- The currently running application state renders its root widget
- Each widget contains two renderables: one for the body, one for the margin
- Each widget can contain a modal widget, or if it's a container widget, other widgets
- Some widgets add more renderables: e.g. textbox also has a text renderable
- Renderables are pretty much rendering configurations, and store a reference to a renderer and to their widget owner
- Renderers use shaders and contain rendering logic
- A renderer renders a single renderable type, a renderable can be rendered by several renderer types
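The relationships in the summary above can be sketched as a handful of structs. These names and fields are my guesses at the shape, purely illustrative:

```cpp
#include <vector>

// Illustrative sketch of the relationships in the summary above; all names
// and fields are assumptions, not the actual code.
struct Widget;

struct Renderer {
    // Owns shaders and the rendering logic for one renderable type.
};

// A renderable is essentially a rendering configuration plus two references.
struct Renderable {
    Renderer* renderer = nullptr;  // which renderer draws this
    Widget*   owner    = nullptr;  // which widget it belongs to
};

struct Widget {
    Renderable body;
    Renderable margin;
    Widget* modal = nullptr;        // optional modal child widget
    std::vector<Widget*> children;  // non-empty only for container widgets
    // Specialized widgets (e.g. a textbox) add extra renderables of their own.
};
```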
Previously, the configuration was done via explicit parameters in an inheritance chain. While that's explicit, adding parameters is a PAIN, as everything is fixed at compile time. So I ditched that approach for a far more generic one. Now every renderable stores, among other things:
- A list of references to textures
- A list of dynamic textures, along with a message updater for each
- A list of texture buffers, along with a message updater for each
- A reference to a blending configuration
- A list of shader variables, organized as:
- a vector of (name, value) pairs, for every common shader type (int, float, int2, float4, etc)
- a vector of (name, texture_buffer_index)
- a vector of (name, texture_index)
- a vector of (name, dynamic_texture_index)
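The list above maps fairly directly onto a plain data layout. Here's a hypothetical sketch of it, with the real types (textures, blending configs, message updaters) reduced to placeholders:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical layout mirroring the list above; placeholder types throughout.
struct Texture {};
struct TextureBuffer {};
struct DynamicTexture {};
struct BlendConfig {};
struct MessageUpdater {};  // stands in for the per-resource message updater

struct ShaderVariables {
    // One (name, value) vector per common shader type; only two shown here.
    std::vector<std::pair<std::string, int>>   ints;
    std::vector<std::pair<std::string, float>> floats;
    // (name, index) pairs referring into the resource lists of the renderable.
    std::vector<std::pair<std::string, std::size_t>> texture_buffer_refs;
    std::vector<std::pair<std::string, std::size_t>> texture_refs;
    std::vector<std::pair<std::string, std::size_t>> dynamic_texture_refs;
};

struct RenderableConfig {
    std::vector<Texture*> textures;
    std::vector<std::pair<DynamicTexture*, MessageUpdater>> dynamic_textures;
    std::vector<std::pair<TextureBuffer*, MessageUpdater>>  texture_buffers;
    BlendConfig* blending = nullptr;
    ShaderVariables shader_variables;
};
```

The indices-into-lists indirection is what keeps the shader variable tables decoupled from the resources themselves: a variable names a slot, not a GPU object.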
So far, this is looking flexible and I like it. Of course it's far from optimal performance-wise, but it's optimal for prototyping, and that's what matters now. For performance, variables could later be organized into uniform buffer objects grouped by update frequency, etc., but that's far down the line.
Above there's a screenshot from the modification of the A* visualizer to operate on graphs -- only minimal changes to the existing infrastructure were needed:
- There is a new renderer instance of type GridSparseSelectionRenderer -- it's used for rendering lines.
- There are a few renderables: one for the node points, one for the start point, one for the goal points, one for the edges, and one for the edges that are part of the output path. (This is of course horribly inefficient -- I might as well draw all points at once and assign per-instance colors -- but that's not the point here.)