apricotslice wrote: ↑Tue, 29. Nov 22, 02:38
But the thing I don't understand is why the GPU isn't being used for rendering the station build screen, and other places where there is so much going on that the lag is happening? I've got a 3090 which as far as I can see is being ignored.
The GPU is being used to render those screens; it is used for all rendering. The only exception would be if you had installed a third-party software Vulkan device and the game had, for some reason, picked that as its main render device.
Given how powerful the RTX 3090 is and how well optimised the X4 renderer is, it is quite possible that even a "3000 module station" at high resolution does not tax the GPU much. The GPU is still rendering all of it; the work is simply so easy for the GPU that it is nowhere near becoming the performance bottleneck.
apricotslice wrote: ↑Tue, 29. Nov 22, 02:38
So this seems to be a 2 part problem. Everything trying to happen in 1 process window when it shouldn't be, and the GPU not being used when it should be.
The use of only one window is likely due to the low uptake of multi-monitor setups. Even among those who do have multiple monitors, many will not want to dedicate the extra monitors to X4, preferring to keep them free for other things while multitasking.
Most of the time the GPU is already used wherever it makes sense to use it. There are a few extreme cases, such as GPU texture decompression and direct storage I/O, where the GPU may not currently see use, but these are cutting-edge technologies which, last I checked, lack Vulkan support and have seen virtually no adoption outside of console gaming.
apricotslice wrote: ↑Tue, 29. Nov 22, 02:38
Now I'm finding they've stopped doing that. Or maybe the hardware is pulling away from them faster than they're aware of, or maybe they've stopped pushing the envelope.
The issue is more that there is no longer a clear, logical progression path for consumer hardware. CPU, memory, and GPU capabilities are all over the place, and ultimately you have to design a game that targets a wide variety of hardware if you ever want to see significant sales.
In the old days there were single cores, then dual cores, then quad cores, and that remained the status quo for over a decade, with the main difference between tiers being clock speed. The progression path was 1 -> 2 -> 4 threads, with significant IPC improvements per thread expected every few years. Sure, there were outliers with more cores, but those were usually clocked lower, so nobody expected them to perform better in games.

Recently, though, everything changed. AMD brought 6, 8, 12, and even 16 core processors to the consumer space. Then 12th generation Intel added 4 to 8 E cores on top of those, and 13th generation Intel pushed that to 8 to 16 E cores. Within the consumer space, performance cores now range from 6 to 16 and E cores from 0 to 16 depending on product tier, with high-end parts carrying between 16 and 24 cores in total.

It only makes sense to build your game to be fully playable on the lowest common core configuration, so that it can reach the largest player base. That means X4 must be highly playable on a 4-6 core processor, and it is. Since it already plays well on a 6 core processor, it is very difficult to find optional uses for additional cores, especially given the large diminishing returns when some tasks are scaled across more cores. It is simply not reasonable to expect games like X4, or games in general, to fully load a modern high-end consumer processor; I do not know of a single game client that would.
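The diminishing returns mentioned above are captured by Amdahl's law: as long as some fraction of each frame's work is inherently serial, adding cores helps less and less. A minimal sketch, where the 30% serial fraction is a purely hypothetical figure for illustration, not a measured value for X4:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only (1 - serial_fraction) of the
    work can be spread across the given number of cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical workload where 30% of frame time is serial game logic.
for cores in (4, 6, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(0.30, cores):.2f}x")
```

With that assumed 30% serial portion, going from 6 cores to 16 gains well under a factor of two, which is why "more cores" alone does not make a game like this dramatically faster.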
Memory is a similar issue: players have anything from 16 GB all the way up to 128 GB or more, and the game has to be highly playable on the lower-end (most common) configurations. Since it already plays well without using a lot of memory, it is hard to find optional uses for the extra memory that would actually improve performance, especially as memory bandwidth does not scale with memory amount. Sure, they could cache more and more of X4's assets in that extra memory, but X4 is not even that big (it would fit inside 128 GB of memory completely), and if it is installed on a high speed NVMe drive this might not even give a noticeable performance increase over reloading the assets from storage.
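The NVMe point can be made concrete with a back-of-envelope calculation. All figures below are rough assumptions for illustration (a hypothetical 2 GB burst of assets, ballpark sequential-read and memory bandwidths), not measurements of X4:

```python
# Rough comparison: reloading an asset burst from NVMe vs. copying it
# out of a RAM cache. Every number here is an illustrative assumption.
asset_batch_gb = 2.0   # hypothetical assets streamed in on a sector change
nvme_gb_per_s = 7.0    # ballpark PCIe 4.0 NVMe sequential read speed
ram_gb_per_s = 50.0    # ballpark dual-channel system memory bandwidth

print(f"NVMe reload: {asset_batch_gb / nvme_gb_per_s * 1000:.0f} ms")
print(f"RAM copy:    {asset_batch_gb / ram_gb_per_s * 1000:.0f} ms")
```

Under these assumptions both are a fraction of a second, so a large RAM cache saves time the player may never notice, which is the argument against spending the extra memory.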
Modern GPUs not being highly utilised is largely down to CPU bottlenecks and the overall environment of X4. Space, as in real life, is mostly empty, so much of the time it is far easier to render than a city, a forest, or even a crowd of people. Of course, they could heavily enhance the quality of assets, adding finer geometry and higher resolution textures, but assets cost a lot of money to create and are generally considered a huge development bottleneck in the gaming industry. Raytracing could be an option, but it would require significant development time for a feature that most people probably will not use and that might not even improve visuals significantly in most scenes. And even then, if you are flying through the middle of nowhere, there might be little for your GPU to do.
apricotslice wrote: ↑Tue, 29. Nov 22, 02:38
My frustrations this time around are based on the fact the game isn't using the available hardware, and is instead creating a series of choke points, and forcing you to endlessly go back and forth through the same screens, with lag between them, on a computer which is effectively under utilized.
It might be a good idea to post an unmodified save along with reproduction steps for experiencing this "lag". The developers might then be able to profile it and look for potential optimisations in those cases for the future.
KextV8 wrote: ↑Tue, 29. Nov 22, 04:09
As for this, my personal perspective is that hardware limits, especially regarding raw CPU processing power simply have not improved at a really impressive rate compared to the gains that were happening 15 years ago. GPU's have gotten a lot better, but CPU's have really kind of stagnated. Like... great you shoved more cores onto a single chip, but that doesn't really help much on single thread processing, and doesn't address thermal issues very well.
Single thread performance has improved significantly over the last few years. It might not be improving as fast as it did during the 1990s, but something like a Core i9-12900K does have far higher single thread performance than a Core i9-9900K from only a few years earlier. There are still significant performance improvements expected over the coming generations, and not all of them come from simply throwing more power at the problem.
apricotslice wrote: ↑Tue, 29. Nov 22, 04:49
But it does make me wonder where all the gamers went, because in gamer expectation terms, I thought I was falling behind.
Most PC gamers have always chased the best bang for the buck, like the quad core i5 processors that, with only a moderate cooling solution, could be overclocked to perform like much more expensive i7s. That used to include some high-end GPUs too, back before they cost anywhere near a thousand dollars each. As it stands, nobody currently recommends an RTX 4090 for gaming; given its price and capabilities, that GPU just does not make sense in that use case.