So what you have currently is a polling-based approach, or "are we there yet?" from the perspective of each tank, which in terms of performance scaling is
O(num_inactive_tanks) - i.e. you pay some fixed performance cost for each one.
In theory that's fine so long as
num_inactive_tanks doesn't get high enough to noticeably affect performance, but in practical terms it's suboptimal since the performance ceiling will differ across hardware, and GDScript has its own overhead on account of being a scripting language.
Many to One / One to Many
Ideally, you want a solution that is
O(1) - i.e. fixed cost regardless of its inputs. That's a bit of a blue sky ideal for any case where the number of inputs (i.e.
num_inactive_tanks) is itself variable, but we can definitely get closer to it.
A good first step in solving any compsci problem is trying to invert it; if we currently have every tank asking "are we there yet?", is there a way to flip that so some singular thing tells each tank "we are there now" at the appropriate time instead?
Since the stage Node2D owns the data that decides when a tank should activate (i.e. its Y coordinate), and is the singular parent of all tanks, yes - it's a natural candidate to implement such an approach.
The naive solve from that angle would be to have the Node2D loop over all of its inactive tank children, check their Y, and invoke activation methods on each one.
This would perhaps save some nominal amount of CPU time on function call overhead, but otherwise works out the same as the
O(num_inactive_tanks) many-to-one approach.
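As a rough sketch in Godot 3 GDScript (the `Tank` class, `active` flag, and `activate()` method are illustrative names, not taken from your project):

```gdscript
# Inside the scrolling stage Node2D - the naive one-to-many loop.
func _process(_delta):
	# Assumes the stage scrolls such that content enters from the bottom;
	# adjust the screen-edge math to match your actual scroll direction.
	var screen_bottom := -position.y + get_viewport_rect().size.y
	for child in get_children():
		if child is Tank and not child.active:
			if child.position.y < screen_bottom:
				child.activate()
```

Same work per frame as the polling version, just centralized in one place.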
Caching
So how to improve on it? The root cause of the O(n) scaling is the need to run a for-loop over every inactive tank each frame (be that manually inside the Node2D, or implicitly inside a _process function called from an engine-side loop), so the solution is to somehow do less work per frame.
Thus, we should step back and examine why the for loop is needed. In this case, it's because the number of inactive tanks and the screen-relative Y position of each tank are dynamic data that might change between invocations, and so need to be re-read from their owners every time.
However, the problem isn't actually that dynamic - once a stage is loaded, both the number of inactive tanks and their local Y positions are known data that doesn't need to be recalculated every frame; the only truly dynamic aspect is the Y position of the scrolling Node2D stage. We also know that num_inactive_tanks starts at a known value and decreases by 1 each time a tank is activated.
Thus, we can optimize around those properties by precalculating all of the static data at startup, and reading from that instead of the tanks themselves.
Since we need to know the local Y position of each tank in order to calculate its screen-relative Y, we can run a for loop in
_ready() that stores a reference (
FuncRef or
WeakRef) to each tank alongside its Y in a key-value dictionary, and read from that in our
_process loop instead of checking the tanks directly, removing entries as they get activated.
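A minimal sketch of that caching step, again with illustrative names:

```gdscript
# Built once at load: tank node -> cached local Y.
var inactive_tanks := {}

func _ready():
	for child in get_children():
		if child is Tank:
			inactive_tanks[child] = child.position.y

func _process(_delta):
	var screen_bottom := -position.y + get_viewport_rect().size.y
	# Dictionary.keys() returns a copy, so erasing while looping is safe.
	for tank in inactive_tanks.keys():
		if inactive_tanks[tank] < screen_bottom:
			tank.activate()
			inactive_tanks.erase(tank)
```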
This is another incremental improvement over the previous solve - probably a little faster on account of a local dictionary having better cache locality than querying foreign data, but still fundamentally O(num_inactive_tanks) on account of iterating the whole thing each frame.
But now that the data is stored locally, we have control over its shape. This is the key to lowering that O() bound, and the big reason for using the one-to-many approach to begin with.
Acceleration Structure
Since we know that a given stage's tanks will activate in a fixed order, and the Y positions describing that order are already being stored in a list-like structure, we can sort that list by Y position.
Having a sorted list means it's no longer necessary to iterate every entry every frame, because only the first
N entries (where
N is the number of tanks that enter the screen on a given frame) will need to be activated.
Thus, we can use a while loop instead of a for-each, breaking at the first entry whose Y is offscreen, and know we've updated everything we need to.
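Sketched out, using an array of [local_y, tank] pairs sorted via `sort_custom` and a cursor index so nothing gets shifted or re-scanned (names illustrative):

```gdscript
class EntrySorter:
	static func sort(a, b):
		return a[0] < b[0]

var pending := []   # [local_y, tank] pairs, sorted ascending by Y
var next_idx := 0   # everything before this index has been activated

func _ready():
	for child in get_children():
		if child is Tank:
			pending.append([child.position.y, child])
	pending.sort_custom(EntrySorter, "sort")

func _process(_delta):
	var screen_bottom := -position.y + get_viewport_rect().size.y
	# Only the front of the sorted list can possibly activate this frame.
	while next_idx < pending.size() and pending[next_idx][0] < screen_bottom:
		pending[next_idx][1].activate()
		next_idx += 1
```

On a typical frame the while condition fails immediately, so the per-frame cost is a single comparison.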
In so doing, performance scaling goes from
O(num_inactive_tanks) to
O(num_inactive_tanks_entered_screen_this_frame) - much better, and closer to the golden 'minimal solve'.
That's probably good enough for your use case, since it ticks the 'not
O(n)' box without interfering with your existing level editing workflow.
Improvements
This approach assumes all tanks activate in a uniform way - i.e. when they reach some imaginary line either on or off the visible screen.
You could extend it to read more data from each tank alongside the Y position and store it in a CachedTank record (GDScript has no structs, but an inner class or nested dictionary serves the same role), allowing for more arbitrary behaviour (e.g. activating at different points on the screen) that can still be looked up and acted on in a performant way.
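A hypothetical shape for such a record (all fields are assumptions for illustration):

```gdscript
# Per-tank activation record; GDScript inner class standing in for a struct.
class CachedTank:
	var tank              # the tank node (or a WeakRef to it)
	var trigger_y: float  # stage-local Y of the activation line
	var offset: float     # per-tank tweak: activate earlier/later than the screen edge

	func _init(t, y, o = 0.0):
		tank = t
		trigger_y = y
		offset = o
```

Sorting by `trigger_y + offset` keeps the sorted-list/while-loop trick working even when each tank has its own activation line.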
More...
It's possible to go deeper: if you really think about it, a linear STG fundamentally boils down to a pre-keyed animation timeline (i.e. tank activations as events that occur at preset timestamps w.r.t. game time), which could be made extremely performant with techniques like object pooling.
But optimizing around such things introduces more assumptions that need to be upheld, which in turn necessitates building custom tooling to specialize around the problem - i.e. a timeline-based level editor, or a script that analyzes an editor-assembled Godot scene and munges it into timeline data. I suppose the latter is essentially a more complex version of the caching above, just keyed on time instead of stage Y, with full custom pattern data instead of only activations.
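For a sense of what that timeline data might look like, here's a hedged sketch - it's the same sorted-list-plus-cursor technique, just keyed on elapsed time instead of stage Y, and `spawn_tank_from_pool` is a hypothetical function standing in for an object-pool lookup:

```gdscript
# Illustrative timeline entries, pre-sorted by time at export/load.
var timeline := [
	{"time": 2.5, "pos": Vector2(120, -32)},
	{"time": 4.0, "pos": Vector2(200, -32)},
]
var elapsed := 0.0
var cursor := 0

func _process(delta):
	elapsed += delta
	while cursor < timeline.size() and timeline[cursor].time <= elapsed:
		spawn_tank_from_pool(timeline[cursor].pos)  # hypothetical pooled spawn
		cursor += 1
```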