Sengoku Strider wrote: ↑Wed Aug 02, 2023 5:22 pm
Is game logic invariably tied to frame rate? I'd always assumed it just kind of did its own thing in the background, but I was reading an interview with Merkules (Destiny 2 chief weapon designer) where he was talking about how 30 fps machines mean TTK values happen at set increments which limit how they can tweak things compared to faster machines. It occurred to me this might well apply to shmups as well, which run at all sorts of weird frame rates like 57 fps or whatever.
Not at all. As a rule, the render loop - whose timing is dictated by the graphics backend and display hardware - is the lowest common denominator for executing logic in any given game, since you always need to draw stuff, but there are plenty of ways to run logic loops on their own independent timing. There are a lot of variables, though - system architecture (hardware-asynchronous PCB, single-core CPU, multicore CPU), graphics architecture (software blitter, memory-sharing GPU, dedicated GPU), and the type of game (lightweight 2D STG vs. heavy 3D physics simulation, local or networked, etc.) - that affect the shape of a given implementation.
At baseline, a CPU can loop thousands upon thousands of times faster than the 60Hz we're used to from games; when it comes to driving a simulation, it spins as fast as it can (or as fast as the tightest loop it wants to run), checks how long it's been since each chunk of logic last ran, and executes that logic once its interval has elapsed. So even on a strictly single-threaded system, you can have as many of those intervals as you like all being checked in the same spin (i.e. 60Hz for drawing, 120Hz for game sim, 30Hz for physics, 300Hz for input polling, 15Hz for a novelty game of Pong driving a background element, whatever you can think of), and they'll appear to run 'in the background' or 'in parallel', but in reality they're being executed serially - just so fast that a human observer can't tell they're interleaved until something takes too long and causes slowdown or stutter.
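To make that concrete, here's a minimal single-threaded sketch in C++ (the interval values and the commented-out update calls are illustrative placeholders, not any particular game's loop):

```cpp
#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;

    // Each 'loop' is just an interval and an accumulator, all serviced
    // by the same spinning thread.
    const double drawInterval  = 1.0 / 60.0;   // 60Hz rendering
    const double simInterval   = 1.0 / 120.0;  // 120Hz game sim
    const double inputInterval = 1.0 / 300.0;  // 300Hz input polling

    double drawAcc = 0, simAcc = 0, inputAcc = 0;
    auto last = clock::now();

    for (;;) {  // spin as fast as the CPU allows
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        drawAcc += dt; simAcc += dt; inputAcc += dt;

        // Serial in reality, 'parallel' to a human observer.
        if (inputAcc >= inputInterval) { inputAcc -= inputInterval; /* pollInput(); */ }
        if (simAcc   >= simInterval)   { simAcc   -= simInterval;   /* stepSim();   */ }
        if (drawAcc  >= drawInterval)  { drawAcc  -= drawInterval;  /* drawFrame(); */ }
    }
}
```

In practice you'd also sleep or yield when nothing is due, rather than burning a whole core on the spin.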
PCB, multicore, and GPU architectures swap that out for true parallelism (i.e. actually running things in the background, because they're being calculated by a separate piece of hardware) to alleviate hangs when too many things are vying for CPU time. That requires designed-in sync points so everything is ready when the time comes to present a frame, but it ultimately drives the same idea.
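A toy sketch of one such sync point in C++ (GameState and the stubbed-out calls are made up; real engines use double-buffering and fancier synchronization, but the shape is the same):

```cpp
#include <mutex>
#include <thread>

struct GameState { double shipX = 0; /* bullets, enemies, ... */ };

std::mutex stateLock;
GameState  shared;  // latest completed sim frame

void simThread() {
    GameState local;
    for (;;) {
        local.shipX += 1.0;  // stand-in for heavy sim work, truly in parallel
        std::lock_guard<std::mutex> g(stateLock);
        shared = local;      // publish the finished frame
    }
}

int main() {
    std::thread sim(simThread);
    for (;;) {
        GameState snapshot;
        {
            std::lock_guard<std::mutex> g(stateLock);
            snapshot = shared;  // sync point: grab a coherent frame to present
        }
        // drawFrame(snapshot);  // render never sees a half-updated state
    }
}
```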
In Destiny's case, there's probably some heady mix of minimum spec, networking, server costs, and legacy tech that constrains their design more rigidly. Rendering at 30 doesn't preclude handling network updates or local sim ticks at 60 or 120, but bean counters, last-gen machines, and a creaky engine might.
As for STGs, you tend to see game logic running on the render tick because that's where software engineering stood when the genre was codified, and it has largely stuck with that status quo since. There are positives - design simplicity, elegant slowdown instead of dropped frames - but it's an outdated practice as far as the tech goes: it can't scale to higher-framerate monitors, and it can indeed cause issues when running at an odd Hz relative to the display. Cho Ren Sha on Windows does exactly that, and is ~10% faster as a result.
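That traditional lockstep loop boils down to something like this (the stubs are placeholders for real logic, blitting, and the vsync wait):

```cpp
// Hypothetical stubs; in a real game these do actual work.
void stepGame()      { /* move ship/bullets/enemies by fixed per-frame amounts */ }
void drawFrame()     { /* blit the scene */ }
void waitForVBlank() { /* block until the display's next refresh */ }

int main() {
    // The display's refresh rate *is* the game speed: at 60Hz you get
    // exactly 60 updates/sec, on a faster display the whole game speeds
    // up, and a missed frame slows everything down in lockstep rather
    // than skipping ahead.
    for (;;) {
        stepGame();
        drawFrame();
        waitForVBlank();
    }
}
```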
The right way is to use that 'time since last frame' (delta time) value to scale movement speeds - that way you can run the game as fast or slow as you like, get Matrix-like time dilation for free, and never have to waste countless hours going back in to divide everything by 2, like the poor Hyper Light Drifter devs did with their now-outdated 60FPS patch.
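In sketch form (the names and the timeScale knob are illustrative):

```cpp
#include <chrono>

struct Ship { double x = 0; };

int main() {
    using clock = std::chrono::steady_clock;
    const double shipSpeed = 240.0;  // pixels per SECOND, not per frame
    double       timeScale = 1.0;    // set to 0.5 for free 'Matrix' slow-mo

    Ship ship;
    auto last = clock::now();
    for (;;) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        // Same distance per wall-clock second at 30, 60, or 144Hz.
        ship.x += shipSpeed * dt * timeScale;
        // drawFrame(ship);
    }
}
```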