I’ve been grappling with an issue in my software 3D renderer that I can’t quite seem to resolve. After months of development in C using SDL2, my engine has solid performance, around 330 FPS at 1920×1080 on my Ryzen 5 5500. The renderer supports depth shading and perspective-correct texturing, and it feels great. But I recently added a feature that lets users change the screen resolution at runtime without recompiling, and it has led to unexpected performance drops.
When I change the resolution at runtime by selecting from an array of values, my frame rate takes a nosedive, sometimes dropping by almost half. Interestingly, when the resolution values are known at compile time, everything runs smoothly even if I trigger the change at runtime; it’s only when the values aren’t known until runtime that the hit becomes noticeable.
I’ve been digging into my raster functions and how they’re structured, but I suspect it might be a compiler optimization issue. It feels like the compiler is treating those resolution variables as non-constants, leading it to miss out on optimizations that it would usually apply. I even attempted some assembly optimizations on certain raster functions, but I haven’t seen any improvement.
I’m trying to understand if there’s a fundamental flaw in the way I’m handling screen resolution changes at runtime. It’s particularly discouraging because I know where the slowdown happens, but I’m stuck on how to effectively fix it. Despite asking in a few places and trying various suggestions, nothing has worked so far.
I’ve thought about stepping back and reconsidering how I manage state and memory during these resolution changes. Perhaps I need a better way to set up the pixel buffers or reinitialize my rendering context? Has anyone faced a similar issue and found a way to address it? I’d rather not reach for a profiler since I’ve already pinpointed where the slowdown happens; it’s the deeper underlying cause that eludes me. Here’s a link to my GitHub repo if that’s helpful: [repo link]. I’d appreciate any insights!
It sounds like you’re hitting a tricky problem! Changing the resolution at runtime can definitely complicate things, especially when it comes to managing resources like textures and buffers.
Since you mentioned that the performance drops significantly when the resolution values are dynamic, it might be worth looking into how you’re managing your graphics resources. Are you perhaps reallocating memory or creating new buffers every time the resolution changes? If yes, that could be a big source of inefficiency.
You might want to try caching your textures and buffers per resolution. Instead of reallocating them each time, adjust the size of the existing buffers when a new resolution is set. Also, double-check your texture filtering and sampling methods; they can behave differently across resolutions and might be introducing overhead.
Another thing to consider is whether any state changes in your rendering context are causing the slowdown. If you’re resetting the context or state variables whenever you change the resolution, that might also be a factor. Try to minimize state changes as much as possible.
Finally, since advanced optimization is tricky without knowing the specifics of your raster functions, it could help to add some logging around the slow spots. Even though it’s tedious and you’d rather not use a profiler, a little extra timing information can often guide you to the solution more quickly.
Good luck, and I hope you find a way to smooth out those resolution transitions!
From your description, it seems very likely that the slowdown stems from the compiler’s inability to optimize for dynamically sized pixel buffers. When you provide constant resolution values at compile time, the compiler can perform aggressive optimizations such as loop unrolling, SIMD vectorization, and resolving address calculations at compile time. In contrast, resolutions selected at runtime typically force it to emit extra branching, indirect memory accesses, and per-pixel arithmetic that hurt performance significantly. A straightforward fix is to pre-allocate your pixel buffer at the maximum supported resolution, keeping the memory layout predictable. Rather than reallocating or resizing at runtime, you can adjust your effective rendering area within the existing buffer, which avoids the overhead of dynamic allocations and resolution-dependent instructions.
Another approach is to restructure your rendering code to explicitly optimize for data locality and cache coherence, since low-level raster loops can become cache-inefficient when buffer strides or screen dimensions change dynamically. Keeping the pixel buffer layout resolution-independent (e.g., a fixed row stride, or block-based rasterization) helps preserve cache predictability. Lastly, rather than relying solely on compiler magic, manually inline critical rasterization code, use compiler intrinsics or SIMD instructions, and verify memory alignment. This combination of a predictable memory layout and explicit performance-oriented coding will often resolve runtime-resolution slowdowns and keep frame rates stable.