More porting this week! WARNING: plenty of rambling ahead, as I make things up as I go.

After discovering "global using" directives and reading this article, which is another data point in the code-driven development approach, I decided to go all-out with keeping Godot out of the codebase, in case the final build has severe problems and I need to switch engines again. So, extra care must be taken in setting up the final code folder hierarchy (which doubles up as namespaces). First thing was to replace Godot.Mathf with System.MathF and glm as appropriate, and to use aliases for Rect2, Color and Colors.
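For reference, this is roughly what the alias setup looks like; a minimal sketch assuming a single GlobalUsings.cs at the project root (the file name is my choice, nothing official):

    // GlobalUsings.cs -- project-wide using directives (C# 10+).
    // Game code keeps saying Rect2/Color/Colors without touching Godot.*
    // directly, so swapping engines later means editing just this file.
    global using Rect2 = Godot.Rect2;
    global using Color = Godot.Color;
    global using Colors = Godot.Colors;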

PlayerPrefs

This was only used in a very simple way, in a static class "Cache" that saves/loads binary or json files and may or may not decide to reuse cached versions. Truth is, the embedded logic that decides whether we can reuse some cached files (which is where PlayerPrefs is used) is possibly buggy anyway, so it can be refactored out. Done.

JsonUtility

As easy as it gets. I was using it very simply with some old code, so I straight-up replaced it with Newtonsoft.Json's JsonConvert.SerializeObject/DeserializeObject<>.
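In practice, roughly this (the SaveData type is made up for illustration):

    using Newtonsoft.Json;

    public record SaveData(string Name, int Level); // illustrative type

    public static class JsonExample
    {
        public static SaveData RoundTrip(SaveData input)
        {
            // Replacing JsonUtility.ToJson / JsonUtility.FromJson:
            string json = JsonConvert.SerializeObject(input);
            return JsonConvert.DeserializeObject<SaveData>(json);
        }
    }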

UnityEngine.Profiling.Profiler, ProfilerMarker

I didn't find anything like this in Godot, but thankfully I have my own simple solution for profiling, so it was an easy replacement.

Color32

Yet another effortless port; I just needed to define implicit casts between Color32 and Color.
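Something along these lines; a sketch assuming a Unity-like byte-based Color32 (not my exact struct):

    // Byte-based color, as in Unity, converting to/from Godot's float Color.
    public struct Color32
    {
        public byte r, g, b, a;

        public Color32(byte r, byte g, byte b, byte a)
        {
            this.r = r; this.g = g; this.b = b; this.a = a;
        }

        public static implicit operator Godot.Color(Color32 c)
            => new Godot.Color(c.r / 255f, c.g / 255f, c.b / 255f, c.a / 255f);

        public static implicit operator Color32(Godot.Color c)
            => new Color32((byte)(c.R * 255f), (byte)(c.G * 255f),
                           (byte)(c.B * 255f), (byte)(c.A * 255f));
    }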

SerializeFieldAttribute, ISerializationCallbackReceiver

Godot uses [Export] and not callbacks AFAIK, and because I did not use this feature much, and to avoid polluting the main game with such attributes, it kicked the bucket. One way to refactor is to ... refactor out.

Application, Time, LogType

Starting with Time: it has moved to a godot namespace, because we're getting the data through Godot. Application.time can be ported to Godot.Time.GetTicksMsec() (scaled to convert msec to sec), but delta time is trickier, as it's pushed through Godot's _Process method. Still, not hard; it just means changing the game's API so that systems receive delta time that way. frameCount was replaced with .time, as it was only used for some tests.
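So the game side ends up with a tiny time facade along these lines (names are illustrative, not my actual API):

    // Game-side time access, fed by Godot instead of UnityEngine.Time.
    public static class GameTime
    {
        // Application.time equivalent: msec since engine start, scaled to seconds.
        public static float Time => Godot.Time.GetTicksMsec() / 1000f;

        // Delta time is pushed in from the main node's _Process.
        public static float Delta { get; private set; }

        public static void Update(double delta) => Delta = (float)delta;
    }

    // In the main game node:
    public partial class GameNode : Godot.Node
    {
        public override void _Process(double delta)
        {
            GameTime.Update(delta);
            // ... tick the game systems with GameTime.Delta
        }
    }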

For Application now, things are getting tricky. I can't find any log hooks, like "Application.logMessageReceived". Where was I using that? So that when Unity logged something, my log handler would listen in. Ok, not going to cry a lot here, so I'm removing this feature; my log system and Godot's log system will remain separate. Paths were easy to fix too: dataPath was replaced by res://, and temporaryPath points to a special folder, res://.temp, which is ignored by both git and Godot.

Ok, the low-hanging fruit is done! Onwards to medium-hanging fruit.

Singleton

To be able to test anything else from now on, I need to be able to run the game, in whatever chopped-down form. The basis of that is the singleton. I've made an autoload node as per the instructions, but instead of using the not-very-efficient GetNode("/root/Global") as per the example, I'm just storing this in a static variable that's updated every frame (for good measure), so that all C# scripts get access through that static. Here's where the serious refactoring begins as well: the main game state class ecs.Ecs has now (for various reasons) moved to game.Game. Testing this, I realise I have access to the global in the _Process function, but not in the _Ready function of any other script, at least when starting the game.
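The autoload itself is trivial; roughly this (node name as in the docs' example):

    // Autoload registered as "Global" (Project Settings -> Autoload).
    public partial class Global : Godot.Node
    {
        // All C# scripts reach the game state through this static instead of
        // the slower GetNode("/root/Global").
        public static Global Instance { get; private set; }

        public override void _Ready() => Instance = this;

        // Re-assigned every frame, for good measure.
        public override void _Process(double delta) => Instance = this;
    }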

Console

That's going to be ultra useful for testing things, so I might as well fix it now. I'm trying to do this right wrt Godot, so after a bit of fiddling, I've added a CanvasLayer and placed a LineEdit node under it. The CanvasLayer is necessary, otherwise the controls get misplaced when I change the camera's zoom level. Minus for Godot: I don't like the 2D camera, it's unintuitive to me; I preferred Unity's. Plus for Godot: setting up the input map is a breeze. Unity's new input system was a horribly overcomplicated mess. Minus for Godot: I had to set up the console-toggle shortcut as shift-tab, because keys like "#", "`" and "~" do not work well for me. The "Listening for input" field is pretty handy for figuring things out, though.

I've set up a LineEdit for the command and a Label for contextual info regarding the command. Now, compared to the previous implementation, I need to split it into a game-agnostic component (which is Godot-specific) and the game-specific component, with a bit of refactoring. This is now done, and I think I have a slightly healthier developer console as a result, split into 3 parts: a generic console utility, the Godot control/events/handling, and the game-specific commands.
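For illustration, the generic part boils down to a command registry; the Godot layer forwards LineEdit submissions to it, and the game registers its own commands. A sketch with made-up names, not the actual code:

    using System;
    using System.Collections.Generic;

    // Part 1: generic console utility -- no Godot and no game knowledge.
    // Part 2 (Godot control) calls Execute from the LineEdit.TextSubmitted event;
    // Part 3 (game) calls Register for each game-specific command.
    public class DevConsole
    {
        readonly Dictionary<string, Action<string[]>> commands = new();

        public void Register(string name, Action<string[]> handler) => commands[name] = handler;

        public bool Execute(string line)
        {
            var parts = line.Split(' ', StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length == 0 || !commands.TryGetValue(parts[0], out var handler))
                return false;
            handler(parts[1..]);
            return true;
        }
    }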

Camera

Ok, now the base game skeleton is ready: an autoload game node with the state, a 2D root, a Sprite2D for rendering the world, a Sprite2D for rendering the level, and a Camera2D. The first priority is to get the camera working, which means I need to figure out its relationship to the viewport and the scale of things in the world. First interesting find: by default, the scale of things remains the same if you resize the window. I changed that to stretch the content appropriately in Project Settings -> Display -> Window -> Stretch (mode: from "disabled" to "viewport"). The sprite now always fills the screen when either the camera's zoom level is 5 or the sprite's scale is 5. I'll set the sprite scale to the world units, so for the world sprite that's going to be 512, and the camera's zoom level needs to be set to 5/512. So far so good, behaviour is as expected.
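In code, the setup boils down to something like this (node names are illustrative, assuming Godot 4's Vector2-valued Camera2D.Zoom):

    public partial class WorldRoot : Godot.Node2D
    {
        public override void _Ready()
        {
            var worldSprite = GetNode<Godot.Sprite2D>("WorldSprite");
            var camera = GetNode<Godot.Camera2D>("Camera2D");

            worldSprite.Scale = new Godot.Vector2(512, 512);        // sprite scale = world units
            camera.Zoom = new Godot.Vector2(5f / 512f, 5f / 512f);  // zoom = 5 / world size
        }
    }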

The main camera object is being used to manipulate movement in the world and levels, so it's part of the game scene. The camera move behaviour script, after a lot of contemplation, will transform into a game system (in the ECS sense), listening for level changes, etc, and it will access the godot node to set its values, like position, zoom and offsets.

I spent about 2 days trying to get the camera to work, and my honest opinion is that Godot's Camera2D sucks. It is way too specialised for some particular use cases, and there's no simpler camera to derive from. I literally spent several hours figuring out why my bounds checking was incorrect (being ill/tired didn't help), only to realise that some built-in default Drag settings (horizontal/vertical) were enabled, applying some sort of limits to the camera. Add to that the fact that there are Position, GlobalPosition and ScreenTargetPosition, whose differences are not explained very clearly in the docs. In any case, I managed to make it work after a bit of pain and a huge dip in porting speed. The camera is now bounded to the level dimensions and can convert between the mouse cursor and WCS without raycasts; we can click-and-drag or move with the arrow keys, and zoom with the mouse wheel or +/-. There is also (as of now untested) support again for scripted panning and zooming. So, after refactoring a few more bits and bobs (e.g. the screenshake implementation), it's done for now and awaiting more serious testing. Overall, this was a high-hanging fruit, and it felt like it.
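For what it's worth, the cursor-to-WCS conversion needs no raycasts in 2D; something like this works, using the CanvasItem API (a sketch, not my exact code):

    public partial class CameraRig : Godot.Node2D
    {
        // Mouse position is already available in world space for 2D scenes.
        public Godot.Vector2 MouseWorldPosition() => GetGlobalMousePosition();

        // For an arbitrary screen position, invert the canvas transform.
        public Godot.Vector2 ScreenToWorld(Godot.Vector2 screenPos)
            => GetCanvasTransform().AffineInverse() * screenPos;

        public override void _Ready()
        {
            // The gotcha from above: make sure the built-in drag limits are off.
            var camera = GetNode<Godot.Camera2D>("Camera2D");
            camera.DragHorizontalEnabled = false;
            camera.DragVerticalEnabled = false;
        }
    }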

Palettes and color indices

This is a funny one. In the beginning I thought I was going to use some color lookups, so I made some classes for colormaps and palettes and ... the usage was problematic and underwhelming. So I'm going to remove most of that and just use a Color32 for colors. A single exception is the use of a colormap for the GUI, as my colormap was identical to Godot's colors (X11 colors), so I'm keeping that. Code blasting commencing, and (a couple of hours later) done! Part of the work involved exposing Godot's color names as a named list (other people think that's an omission too) and replacing the use of palette colors for sprite skin tones etc., which was done using some Python scripts.
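The named list can be pulled out with a bit of reflection; a sketch, assuming the C# Colors class exposes the names as static properties (if they turn out to be fields in your Godot version, switch to GetFields):

    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    public static class NamedColors
    {
        // Name -> Color for every X11 color Godot ships with.
        public static readonly Dictionary<string, Godot.Color> All =
            typeof(Godot.Colors)
                .GetProperties(BindingFlags.Public | BindingFlags.Static)
                .Where(p => p.PropertyType == typeof(Godot.Color))
                .ToDictionary(p => p.Name, p => (Godot.Color)p.GetValue(null));
    }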

AudioSource, AudioClip, AudioMixer

Removed AudioMixer because I don't think I was using that yet. And based on my understanding from a quick read, AudioClip is AudioStream and AudioSource is AudioStreamPlayer (with 2D/3D variants).

Now the "fun" begins. The documentation of AudioStreamPlayer2D says that it updates the audio panning based on where the object is on screen. The API does not seem to let me specify where the object is in the 2D world relative to the listener. So, it probably requires the source to be attached to a node that can be seen from the camera, but my "nodeless" programming style is incompatible with that. Thankfully, Godot is open source! So I went into the AudioStreamPlayer2D code. I must confess I'm not impressed by the level of comments in the header file. The code could also be a bit better, but I'll stop judging. The _update_panning function gets the position of some audio listener from each viewport to calculate the panning. Turns out the panning code is super simple, taking only the X axis into account, and I found example code which uses some AudioFrame class that I was not aware of. The problem, though, is that this class, which controls panning, is C++-only! Oops. So, we can't easily control panning from C#. Need to get creative.

Time for a little demo experiment: I play some sound effects over a few seconds, possibly with some overlap between sounds. Each sound effect comes from a different position (with a different panning level). I need to (a rough sketch follows the list):

  • Create one AudioListener2D, attach to sprite (for game, AudioListener would be under Audio node)
  • Create a list of AudioStreamPlayer2D nodes, e.g. 16 (max of 16 SFX)
  • Every second, trigger a stream player 2D from the appropriate position.
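A rough sketch of the player pool (counts from the list above; all names are mine):

    public partial class AudioDemo : Godot.Node2D
    {
        const int MaxSfx = 16;
        readonly Godot.AudioStreamPlayer2D[] players = new Godot.AudioStreamPlayer2D[MaxSfx];
        int next;

        public override void _Ready()
        {
            // Round-robin pool of stream players; the AudioListener2D lives elsewhere.
            for (int i = 0; i < MaxSfx; i++)
            {
                players[i] = new Godot.AudioStreamPlayer2D();
                AddChild(players[i]);
            }
        }

        public void PlayAt(Godot.AudioStream stream, Godot.Vector2 worldPos)
        {
            var p = players[next];
            next = (next + 1) % MaxSfx;
            p.Stream = stream;
            p.GlobalPosition = worldPos; // panning follows from position relative to the listener
            p.Play();
        }
    }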

Demo/proof of concept was successful; you can hear/see it here.

After a bit more refactoring, I've split my AudioSystem code into the part that's handled by a Godot "audio manager" node of sorts, which just manages music tracks and SFX without any game-specific information, and the AudioSystem proper, which listens to game events and sends instructions to the audio manager to play/stop tracks.

While dealing with audio, I read up on how asset imports work (yeah, I didn't RTFM earlier, I had only read the basics), so my hacky "AssetBindings" class that I was using in Unity to store all resources (shaders/audio/etc.) at runtime is effectively moot, as I can use e.g. ResourceLoader.Load(streamPath), and this, supposedly, is costly just the first time and fine the rest of the time. I'm happy with that during development, and later on I can preload as necessary.
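E.g. something like this (the path is made up):

    // First call hits the disk; later calls should come from the resource cache.
    var hitSfx = Godot.ResourceLoader.Load<Godot.AudioStream>("res://audio/sfx/hit.wav");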

Ok, for now, audio is done!

GUIStyle, GUILayout

Time to deal with the developer GUI! Which is what I've been using so far, and unfortunately people have assumed it is the actual game GUI. Oh well... Anyway. I need some developer GUI for basic UI mocks, and I had GUILayout in Unity. But what do we have in Godot? Well, well, well, it looks like there is support for Dear ImGui, and the latest release, for Godot 4.x, is from last week. Shortly after this discovery, I had an ImGui window showing, so that's super promising. After a couple of hours of refactoring, that's also done! It's probably going to look very ugly, but that's a surprise for later. And maybe it will be motivation to set up a more appropriate game GUI.

ReadOnly/WriteOnlyAttribute, Job, IJob, IJobParallelFor

I'm going to cheat a bit with these. Instead of porting them to the C++ plugin right now, I'll provide a working single-threaded implementation, which is very simple to do, and port them to C++ at the end, after the game's basis is working again. Done; to be tested once overworld generation works.
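The single-threaded stand-in is almost nothing; a sketch with Unity-shaped interfaces (not my exact signatures):

    // Job interfaces mirroring Unity's, minus the threading.
    public interface IJob
    {
        void Execute();
    }

    public interface IJobParallelFor
    {
        void Execute(int index);
    }

    public static class JobExtensions
    {
        // "Schedule" runs immediately on the calling thread for now;
        // the real C++ port comes later.
        public static void Schedule(this IJob job) => job.Execute();

        public static void Schedule(this IJobParallelFor job, int length)
        {
            for (int i = 0; i < length; i++)
                job.Execute(i);
        }
    }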

Input, KeyCode, Event.keyCode

This is going to be another "fun" one. Unity allowed sprinkling the code with Input.GetKeyDown(), whereas in Godot this is clearly discouraged: you can either hack your way around it (I'd rather not) or make good use of actions and the node input-event handling.

My codebase is sprinkled with input checks, and I might as well clean this mess up. Here is a record of where input is used ((D) marks debugging actions):

  • Trigger some console command shortcuts with LCtrl+LAlt+[1-9] (D)
  • F6 swaps the player with the hovered entity as a sensor (we can see through the eyes of the entity) (D)
  • F10 hides the main dev gui, I used it for clean recording
  • F7 starts some fire effect on the map, I had added that when testing that fire effect (D)
  • X, Y are used to highlight creatures and objects on the map
  • When LCtrl is pressed and we hover over a location, we show gui for quick travel
  • LShift + [1-4]: some debug controls, e.g. clear a tile, add a poison tile etc. (D)
  • Escape: sprinkled in a few places. E.g. skip cutscenes or "go back one step" in multi-stage commands
  • Left/Right arrow: move through pages in the inventory "screen". Also used in some GUI screen for stack splitting that I can't remember much about
  • [1-9]: select option in multiple-choice "screen"
  • F12: pause realtime (so, animation frames freeze) (D)
  • F11: change realtime speed to a number of slow motion presets (1/2, 1/4 etc), useful for some animation debugging (D)
  • 9 starts a lightning weather effect (D)
  • various keys with optional modifiers are used by the "input action database", which provides a data-driven approach to things that happen based on input. This includes a number of action types:
    • "InputActionAAWrapper", parameterised on an active ability. During execution, it creates/pushes a new GUI stage as necessary
    • "InputActionHotkey" : that was a half-baked attempt at handling hotkeys. Can die in a fire.
    • "InputActionNoGui" : base class for actions like Teleport, Look, Quicksave, Quickload. These, as the name suggests, can happen immediately without any more GUI-driven parameters (e.g. teleport on hovered tile, look at hovered tile)
    • "InputActionPressed" : base class for actions that happen only while a key/btn is pressed, e.g. minimap, highlighting objects and/or treasure

Alright, what are the observations from the above?

  • I need a single place to put all my debugging stuff with its various weird key combos. This can be a .cs file with classes that implement some key-handling interface; those classes can be instantiated and registered in the main game node, whose input function we listen to (see the sketch after this list). AKA an organised, contained mess!
  • Useful non-debug stuff like the multi-purpose use of Escape or LCtrl could be mapped to global, fixed Input actions in project settings
  • The key controls for the devgui will disappear. Incentive to prototype a game gui! The game gui nodes can then listen for input for e.g. "press the button, or press 1 to activate option"
  • Set up global input actions like other titles do, with premade actions for hotkeys etc. (I checked out Skyrim as inspiration; not 100% relevant, but it doesn't matter)
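Here's the shape I have in mind for the contained mess (all names illustrative):

    public interface IDebugKeyHandler
    {
        // Returns true if the key event was consumed.
        bool Handle(Godot.InputEventKey key);
    }

    public partial class GameNode : Godot.Node
    {
        readonly System.Collections.Generic.List<IDebugKeyHandler> debugHandlers = new();

        // Runs only if no standard action handled the event first.
        public override void _UnhandledInput(Godot.InputEvent @event)
        {
            if (@event is not Godot.InputEventKey key || !key.Pressed)
                return;

            foreach (var handler in debugHandlers)
            {
                if (handler.Handle(key))
                {
                    GetViewport().SetInputAsHandled();
                    return;
                }
            }
        }
    }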

So, how am I supposed to trigger actions now? Generally, looking back at my previous implementation ... well, I need to apply KISS.

  • Have all debugging actions declared in one place. Each can be triggered with a different key combo. The keys will be checked during the "unhandled input" callback on the game node, which means this happens only if no standard action has handled the event. This is an improvement already.
  • Different "states" listen to different subsets of input actions. These could typically be:
    • "world" : when waiting for input while on overworld
    • "level" : when waiting for input while in a level
    • "tile select" : when we have abilities that require selection of a tile
    • "container": input key shortcuts when we have a container/inventory screen
    • "menu": input key shortcuts when we have a menu screen with options
    • "slider": input key shortcuts when we have a slider, e.g. splitting stacks.
  • On player turn, we should be in one of the above states. The input should either be given by the fictional GUI elements (one day...) or by any supported key input action associated with the state.
  • Handling input actions could either result in something that is not considered a move (quicksave, look at tile, highlight objects) or something that is a move (trigger a hotkey, move right)
    • Non-moves could either be triggered by press-and-hold input (e.g. hold key to highlight) or by press-once types (look at tile)
    • Move inputs are triggered by "press the key once" type of input, and, depending on whether the ability needs configuration (e.g. a target), we may or may not need to spawn more GUI stages.
    • I need info on which action is press-and-hold. I guess it can live in a config next to the "state to input actions" mapping (a sketch follows the list). Sounds like a fine job for an InputSystem json config.
  • To augment the previous statement: on player turn, based on the state, we check whether any of the supported inputs needs to be triggered, and handle it accordingly.
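To make the last few bullets concrete, the mapping could look like this (names illustrative; the same data could just as well live in the InputSystem json config mentioned above):

    public enum InputState { World, Level, TileSelect, Container, Menu, Slider }

    public sealed record InputActionSpec(string Action, bool PressAndHold);

    public static class InputConfig
    {
        // Per-state subsets of input actions, plus the press-and-hold flag.
        public static readonly System.Collections.Generic.Dictionary<InputState, InputActionSpec[]> Actions = new()
        {
            [InputState.Level] = new[]
            {
                new InputActionSpec("look", PressAndHold: false),
                new InputActionSpec("highlight_objects", PressAndHold: true),
                new InputActionSpec("quicksave", PressAndHold: false),
            },
            [InputState.Menu] = new[]
            {
                new InputActionSpec("select_option", PressAndHold: false),
            },
        };
    }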

I think that's it for now, more rambling next time!