Vulkan & Multi-GPU
Moderator: Moderators for English X Forum
-
- Posts: 377
- Joined: Mon, 15. Mar 04, 08:07
Vulkan & Multi-GPU
So I know this is old news and that X4 does not have multi-GPU support programmed into its Vulkan renderer, but I am willing to take a crack at it -
https://stackoverflow.com/questions/318 ... ith-vulkan
There are a few ways to get this job done: rendering the stations' high-LOD meshes on the secondary GPU, along with the shaders.
The primary GPU outputs to the monitor and renders the ships as instanced contexts within the API, splitting the rendering into parts which can be handled by either GPU.
If it proved more useful, one could instead have a go at compute tasks with Vulkan and run the AI processing logic on the GPU instead of the CPU. GPU architecture, as most will already know, is vastly more effective for mathematical calculation, and doing so could reduce CPU usage by up to 60 percent if the script engine's compute tasks were handed to the GPU.
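To make the starting point concrete, here is a minimal, hedged sketch (C++ against the Vulkan 1.1 headers) of how an application discovers linked GPUs. It assumes a VkInstance has already been created and says nothing about how X4's renderer is actually structured:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// List the physical-device groups the driver exposes. A group with more than
// one device is what a CrossFire/SLI-style linked pair shows up as; explicit
// multi-GPU rendering builds on top of these groups.
void listDeviceGroups(VkInstance instance) {
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) {
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        g.pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        std::printf("device group %u contains %u GPU(s)\n",
                    i, groups[i].physicalDeviceCount);
    }
}
```

Unlinked GPUs each show up as their own one-device group and can still be driven separately, which is the case that matters for the "second card does the heavy lifting" idea.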
A great deal of work indeed; however, there is a great deal to be gained, especially when the aim of the game is to simulate as much on screen as you can before you run out of compute power and bring the entire engine to a standstill. Honestly, I would do it just to see the game run at 100 FPS, and to be able to make use of older hardware to bring minimum frame rates up in complex ship battles by using the GPU to simulate pathing instead of the CPU.
Here are a few digests of information about the Vulkan API in general and how things could be used very differently, since X4 uses the Vulkan API and I keep feeling Vulkan gets underutilized by the majority of game devs.
1. http://www.duskborn.com/posts/a-simple- ... e-example/
2. http://ojs.bibsys.no/index.php/NIK/article/view/513/437 - an interesting read on Vulkan API subdividing rendering methods.
3. https://www.imgtec.com/blog/gnomes-per- ... opengl-es/
Last edited by Misunderstood Wookie on Sun, 24. Mar 19, 07:20, edited 1 time in total.
*modified*
*X3 LiteCube User*
MOD GemFX Real Space Shaders
MOD Variety and Rebalance Overhaul Icon Pack
I lost my Hans and should not be flying Solo.

-
- Moderator (English)
- Posts: 3230
- Joined: Mon, 14. Jul 08, 13:07
Re: Vulkan & Multi-GPU
What, exactly, do you mean by "willing to take a crack at it"?
-
- Posts: 377
- Joined: Mon, 15. Mar 04, 08:07
Re: Vulkan & Multi-GPU
radcapricorn wrote: ↑Sun, 24. Mar 19, 07:03 What, exactly, do you mean by "willing to take a crack at it"?
I mean I am willing to write the code changes to enable Vulkan multi-GPU rendering.
Initially, get multi-GPU to combine efforts to render a single frame, then later attempt to split rendering into instances where one GPU can tackle, say, the compute tasks of shading while the other renders the ships on screen, before both combine to display one frame. I doubt it will be easy to code the AI pathing for compute, but it is possible to have a second GPU purely do the math required to calculate ship travel and job queues.
At least it is plausible with Vulkan. I have no idea how complicated it would be to write the required instruction sets, nor would I really get the chance, as this would involve access to the raw engine data before compile and a rewrite of most of its Vulkan render routines.
I am just saying that it would not be too hard to get simple CrossFire-style support working; this just requires Vulkan's multi-GPU support to be enabled. How well it works comes down to how the API is synchronized, but in theory it should at least work to some degree, since the game engine is written against Vulkan.
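For what "enabling" explicit multi-GPU actually looks like at the API level, here is a hedged sketch assuming Vulkan 1.1 and a group obtained from vkEnumeratePhysicalDeviceGroups; queue-family selection is simplified and nothing here is taken from X4's code:

```cpp
#include <vulkan/vulkan.h>

// Create one logical device that spans every GPU in a linked device group.
// Work is later routed to individual GPUs in the group using device masks.
VkDevice createLinkedDevice(const VkPhysicalDeviceGroupProperties& group,
                            uint32_t queueFamilyIndex) {
    float priority = 1.0f;

    VkDeviceQueueCreateInfo queueInfo{};
    queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = queueFamilyIndex;
    queueInfo.queueCount = 1;
    queueInfo.pQueuePriorities = &priority;

    // This is the multi-GPU part: without this struct the logical device
    // would only cover a single physical device.
    VkDeviceGroupDeviceCreateInfo groupInfo{};
    groupInfo.sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO;
    groupInfo.physicalDeviceCount = group.physicalDeviceCount;
    groupInfo.pPhysicalDevices = group.physicalDevices;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.pNext = &groupInfo;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(group.physicalDevices[0], &deviceInfo, nullptr, &device);
    return device;
}
```

How well it performs still comes down to the synchronization between the GPUs, exactly as said above; the API only gives you the handles to do it explicitly.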
*modified*
*X3 LiteCube User*
MOD GemFX Real Space Shaders
MOD Variety and Rebalance Overhaul Icon Pack
I lost my Hans and should not be flying Solo.

-
- Moderator (English)
- Posts: 4933
- Joined: Fri, 21. Dec 18, 18:23
Re: Vulkan & Multi-GPU
The problem with implementing multi-GPU support is synchronization overhead. Whereas moving data around inside a single GPU is as good as free, moving data between GPUs incurs overhead. The largest part of that is latency, since one has to wait until both GPUs finish their assigned work, then shove the results to the display GPU (slow), then merge the results before display. This extra PCI-E bandwidth use might also interfere with the CPU communicating with the GPUs. And that is ignoring the fact that the CPU will outright have to spend more time communicating with the GPUs.
SLI has dedicated communication bridges between GPUs. This means SLI puts no overhead on PCI-E bandwidth and hence is slightly more efficient.
GPU accelerating the AI is not as easy as one may think. Most game AI is not suitable for such acceleration, since it is not computationally intensive but rather logic intensive, with huge hard-coded decision trees. GPUs are really bad at making decisions but fantastic at general computation. Additionally, if a general computation AI is used, such as a neural network, which is being used in cutting edge game research, one needs an AI accelerator. NVidia purposely restricted AI acceleration support on their previous generation GPUs, only adding it back with the RTX and non-RTX cards released recently, which have tensor cores to do this job. Without AI hardware acceleration it is impossible to use general computation AI, since CPUs and GPUs just are not fast enough for real-time performance with a decent quality network.
Path finding is an example of something a GPU is likely very bad at. It requires a lot of iteration and decision making to find the correct outcome. GPUs are bad at executing branching code like this, whereas CPUs are very good at it. However, what will help improve path finder performance is to better multi-thread it, since nothing stops a modern 8-core CPU with HT from computing up to 16 paths simultaneously.
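As a sketch of that multi-threading point only (findPath and the data types are placeholders, not anything from the game), independent path requests can simply be farmed out to worker threads:

```cpp
#include <future>
#include <vector>

struct PathRequest { int shipId; /* start, goal, sector graph, ... */ };
struct Path { std::vector<int> waypoints; };

// Placeholder path finder; a real one would run A* or similar over the sector graph.
Path findPath(const PathRequest& req) { return Path{{req.shipId}}; }

// Each request is independent of the others, so an 8-core/16-thread CPU can
// genuinely work on many of them at once.
std::vector<Path> solvePathsInParallel(const std::vector<PathRequest>& requests) {
    std::vector<std::future<Path>> jobs;
    jobs.reserve(requests.size());
    for (const auto& req : requests)
        jobs.push_back(std::async(std::launch::async, findPath, req));

    std::vector<Path> results;
    results.reserve(jobs.size());
    for (auto& job : jobs)
        results.push_back(job.get());   // each path was computed on its own thread
    return results;
}
```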
-
- Posts: 128
- Joined: Fri, 4. Apr 14, 17:40
Re: Vulkan & Multi-GPU
Imperial Good wrote: ↑Sun, 24. Mar 19, 07:40 Without AI hardware acceleration it is impossible to use general computation AI since CPUs and GPUs just are not fast enough for real time performance with a decent quality network.
I think people mix up the two parts of using neural networks: the training and the execution.
The execution is super fast, even on large networks.
What takes huge computational effort is the training of the net, iterating millions of times on the training data, and adjusting the network.
But games would in the end just EXECUTE neural networks to use it in the AI decision making. This is really not hard on the hardware.
(E.g.: The developers prepare the networks beforehand)
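To illustrate the scale of "execution only" (the sizes and weights here are invented, and this is not how any particular game does it): running a small, already-trained feed-forward network is just a few matrix-vector multiplies per decision.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;   // row-major: Mat[out][in]

// One fully connected layer with a ReLU activation.
Vec layer(const Mat& W, const Vec& b, const Vec& x) {
    Vec y(W.size());
    for (std::size_t i = 0; i < W.size(); ++i) {
        float sum = b[i];
        for (std::size_t j = 0; j < x.size(); ++j)
            sum += W[i][j] * x[j];
        y[i] = std::max(0.0f, sum);
    }
    return y;
}

// One decision for one agent: input features -> hidden layer -> output scores.
// For, say, 32 inputs and a 64-wide hidden layer this is a few thousand
// multiply-adds, which is trivial next to the cost of training the weights.
Vec evaluate(const Mat& W1, const Vec& b1, const Mat& W2, const Vec& b2,
             const Vec& features) {
    return layer(W2, b2, layer(W1, b1, features));
}
```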
-
- Moderator (English)
- Posts: 4933
- Joined: Fri, 21. Dec 18, 18:23
Re: Vulkan & Multi-GPU
But games would in the end just EXECUTE neural networks to use it in the AI decision making. This is really not hard on the hardware.
It is hard on standard CPU and GPU hardware because it still requires a lot of computation for a decent-sized network. Sure, it is a lot faster than training, but it is not a fast process.
If it was then NVidia would not be pushing solutions like this...
https://news.developer.nvidia.com/get-t ... -tensorrt/
The obvious advantage of hardware accelerating the trained neural network is that it removes most of the workload from the CPU. If using a GPU like an RTX or 1660/1650 from NVidia, it will also have minimal impact on GPU performance thanks to dedicated hardware. AMD cards would need to sacrifice compute time, which is another consideration (worse frame rate).
-
- Posts: 377
- Joined: Mon, 15. Mar 04, 08:07
Re: Vulkan & Multi-GPU
Imperial Good wrote: ↑Sun, 24. Mar 19, 11:29
But games would in the end just EXECUTE neural networks to use it in the AI decision making. This is really not hard on the hardware.
It is hard on standard CPU and GPU hardware because it still requires a lot of computation for a decent sized network. Sure it is a lot faster than training, but it is not a fast process.
If it was then NVidia would not be pushing solutions like this...
https://news.developer.nvidia.com/get-t ... -tensorrt/
The obvious advantage of hard accelerating the trained neural network is that it removes most of the workload from the CPU. If using a GPU like an RTX or 1660/1650 from NVidia it will also have minimal impact on GPU performance thanks to dedicated hardware. AMD cards would need to sacrifice compute time which is another consideration (worse frame rate).
I think the argument is pretty simple to me. Current GCN hardware does not have on-PCB solutions like RTX has for ray tracing or compute tasks; RTX ray tracing uses dedicated on-die hardware specifically for those compute tasks, taking that load off the main GPU silicon.
If you think AMD do not have an answer for this, you would be mistaken. The chiplet design is a step showing AMD is ready to design chips with multiple types of workload-specific dies on a single socket.
They will most likely bring this to the new Radeon with the release of the Radeon 3000 series and Zen 2 later this year. The difference, however, is that GCN hardware is capable of doing everything already; unlike Nvidia, AMD do not lock the drivers behind a PCB paywall.
That is not to say Nvidia is not doing good things either, but I believe we are going to see a shift in power; both companies are going at it neck and neck, just like Intel and AMD in CPU market share.
If the future is to be believed, it will not be long, another 5-10 years, before mainstream compute tasks are handed over to cloud rental servers anyway, and in another 20 years ARM designs will dominate the hardware chip sector. But I believe nothing would be achieved if developers did not TRY to do something by pushing current-generation tech in order to improve upon it in the next generation of that hardware.
So saying that trying to do this for X4 is pointless is, I think, a false statement. If you can isolate which parts of the game require the most render time, you can then try to optimise that code to work under multi-GPU situations; thinking about it is not good enough, it needs actual testing against a single card to find out exactly how it affects performance over time. Vulkan is a very fast API, I have already told you this. Developers are not using Vulkan to its fullest right now; there is MUCH more to be done, code-wise and optimisation-wise, even for single-card performance. Besides, if Egosoft do not hunt the ghosts in their code, performance is not going to get any better. I am wondering why performance is the way it is: given that the GPU is not rendering a great deal in these types of games, why do things grind to a crawl? It has more to do with how fast the logic is calculated, and I would very much like a crack at fixing that, because more reliable math with faster approximate calculations will improve the average framerate of the universe being driven.
Why do frame rates tank by 10 or even 20 fps on board a station? Why do frame rates tank only when small battles break out? Either the rendering is happening too slowly, which is probably fixable, or the assets are not optimised well enough. On the asset side, at least, that is something we can actually fix without reverse engineering the engine, as we can run the models through Blender or Max, considering that for the most part they are not very complex shapes at a distance.
I still believe it is the calculations which are too slow; it is apparent from all the bugs with AI traders getting caught up, looping around and doing silly things like this. I still believe that is a core issue which should take priority over any other improvement to the game, and I would seriously work on it a lot harder before DLC, as faster calculations ultimately reduce CPU time per instruction and, yes, GPU time per instruction too, because Vulkan also has to wait for the CPU's calculations to create the next frame.
*modified*
*X3 LiteCube User*
MOD GemFX Real Space Shaders
MOD Variety and Rebalance Overhaul Icon Pack
I lost my Hans and should not be flying Solo.

-
- Moderator (English)
- Posts: 3230
- Joined: Mon, 14. Jul 08, 13:07
Re: Vulkan & Multi-GPU
Ok... 

-
- Moderator (English)
- Posts: 4933
- Joined: Fri, 21. Dec 18, 18:23
Re: Vulkan & Multi-GPU
If you think AMD do not have an answer for this you would be mistaken. The chiplet design is a step showing AMD is ready to design chips with multiple types of workload chips on a single socket die. They will bring this to the new Radeon most likely with the release of Radeon 3000 series and Zen 2 later this year. The difference, however, is GCN hardware is capable of doing everything already, unlike Nivida AMD do not lock the drivers behind a PCB paywall.
I was stating facts based on current products rather than speculating about future products. One can purchase a GTX 16XX or an RTX 20XX card right now and it comes with tensor cores and other such AI-accelerating hardware. This hardware not only accelerates training and execution of neural network based AI, but also provides another set of parallel capabilities, allowing it to do so with minimal performance impact on the rest of the GPU. One can purchase an AMD GPU and it can be used to accelerate AI, except it lacks dedicated hardware to do so, meaning the AI has to eat into general GPU compute resources, which will impact performance.
Modern Apple computers have dedicated AI accelerator units on them. I am unsure if this applies to their desktops and laptops, but it does to their phones and especially their tablets. However, these operate separately from graphics, with a dedicated API to use them.
Both AMD and Intel will provide hardware AI acceleration in coming generations of their products. This is to follow the general trend of hardware accelerated AI that Apple and NVidia are leading the way with. However these are future products, not existing ones.
So saying that trying to do this for X4 is pointless I think is a false statement if you can isolate which parts of the game require the most render time then try and optimise that code to work under multi-GPU situations then thinking about it is not good enough it needs actual testing vs single card to find out exactly how it affects performances over time.
It will give throughput increases. However, it will also increase frame delivery latency, and the throughput increase will be less than 100%. It also opens up a whole lot of other problems, such as dealing with mismatched GPUs and load balancing them so that the weaker GPU gets proportionally less work than the stronger GPU, to avoid bottlenecking harder than using the stronger GPU alone. One would need an entire framework to let users control this, since chances are auto detection would eventually get something wrong, causing lower performance.
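A toy illustration of the load-balancing point (the frame rates are invented numbers, not measurements): the frame has to be split in proportion to each card's measured throughput, and even the ideal result falls well short of doubling the faster card.

```cpp
#include <cstdio>

int main() {
    const double fastGpuFps = 90.0;  // assumed: what GPU 0 manages on its own
    const double slowGpuFps = 45.0;  // assumed: what GPU 1 manages on its own

    // Split the frame so both GPUs finish at roughly the same time.
    const double share0 = fastGpuFps / (fastGpuFps + slowGpuFps);  // ~67% of the work
    const double share1 = 1.0 - share0;                            // ~33% of the work

    // Ideal ceiling before any transfer/merge overhead is subtracted: the pair
    // behaves like one 135 fps card, i.e. only 1.5x the faster card alone.
    const double idealFps = fastGpuFps + slowGpuFps;

    std::printf("GPU0 share %.0f%%, GPU1 share %.0f%%, ideal ceiling %.0f fps\n",
                share0 * 100.0, share1 * 100.0, idealFps);
    return 0;
}
```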
One has to balance the value that such functionality would bring. Multi-GPU gaming setups are extremely rare, limited to a few people with more money than sense or who lucked out and ended up getting a second GPU cheaply. One will almost always get better results spending the money for two GPUs on a single better GPU. Coding good support for this might take a programmer 1-2 months at least. X4 also does not have a huge player base compared with games like Overwatch. Is it worth it for a dozen or so players to get better performance?
Developers are not using Vulkan to its fullest right now, there is MUCH more to be done code wise optimisation wise even for single card performance.
Which is exactly why they should not be wasting time adding multi-GPU support and should instead focus on optimizing single-GPU performance. Whereas multi-GPU support will benefit only a handful of people who meet the requirements, optimizing single-GPU performance will benefit everyone, including those with multiple GPUs.
This is why few developers support multiple GPUs directly even if they do use Vulkan. They might support technology like SLI, but that occurs more transparently.
I am wondering why performance is the way it is given the fact the GPU is not rendering a great deal in these types of games why do things go to a crawl.
CPU bottleneck. Most performance problems I notice are when a lot of gravity sources are overlapping and collisions are occurring at the same time. Physics is well known to be CPU intensive, so this makes sense.
it is more to do with how fast the logic is calculated and I very much would like a crack to fix that because more reliable math with faster approximate calculations will improve average framerate of the universe being driven.
Have you tried sending them your résumé? It would likely work better than a forum message board post.
Why is on board a station frame rates tank 10 or even 20 fps, why is only when small battles break out the frame rate tanks, either the rendering is happening too slowly which is probably fixable, or the assets are not optimised well enough at least asset wise that is something we can actually fix without reverse engineering the engine as we can run the models through blender or Max to fix them considering, for the most part, they are not very complex shapes at a distance.
Most likely to do with collisions. Most performance drops I observe occur when collisions are bugging out or occurring frequently.
I still believe it is the calculations which are too slow, it is apparent because of all the bugs with Ai trades getting caught up and looping around and doing silly things like this I still believe that is a core issue which should be taking priority over any other improvement to the game and before DLC I would seriously work on that a lot harder as faster calculations ultimately reduce CPU time per instruction which and yes reduce GPU time per instruction because Vulkan also has to wait for calculations by the CPU create the next frame.
Physics calculations are notoriously complex to solve. One of the biggest causes of low performance with StarCraft II maps is when too many unit-on-unit collisions are occurring at the same time. If StarCraft II suffers from this, a game made by a huge company with a huge player base, then what chance does Egosoft, who have a much smaller development team, have at doing it orders of magnitude better?
If one multi-threads more, then performance drops for users with low core counts. If one uses AVX, then players like myself cannot play anymore, as my i7 920 does not support AVX. If one uses more modern GPU shader features, then many people I know will not be able to play anymore because their GPUs are older.
There really is not a whole lot one can do to make calculations go faster, beyond optimizing the specific pieces of code or cases that are causing performance bottlenecks. Judging by how much performance has improved from 1.00 to 2.20, I am guessing Egosoft are doing this already.
-
- Posts: 486
- Joined: Mon, 3. May 10, 20:30
Re: Vulkan & Multi-GPU
Every single MultiGPU scheme has failed to actually achieve a market majority or developer support. It is a waste of time except for certain very niche cases.
"A Tradition is only as good as it's ability to change." Nael
-
- Posts: 5625
- Joined: Sat, 10. Nov 12, 17:55
Re: Vulkan & Multi-GPU
Nafensoriel wrote: ↑Mon, 25. Mar 19, 17:14 Every single MultiGPU scheme has failed to actually achieve a market majority or developer support. It is a waste of time except for certain very niche cases.
They haven't failed; they have specific, more serious use cases than gaming, like how CUDA has spread in physics simulations and gets used in industry.
Besides, the issues mentioned above, for example the trade AI acting dumb, have not much to do with how effective resource usage is. It's more because of issues with the scripts.
-
- Posts: 486
- Joined: Mon, 3. May 10, 20:30
Re: Vulkan & Multi-GPU
pref wrote: ↑Mon, 25. Mar 19, 20:24
Nafensoriel wrote: ↑Mon, 25. Mar 19, 17:14 Every single MultiGPU scheme has failed to actually achieve a market majority or developer support. It is a waste of time except for certain very niche cases.
Haven't failed, they have specific more serious use cases then gaming. Like how cuda has spread in physical simulations, and gets used in the industry.
Besides the issues mentioned above with for ex trade AI acting dumb has not much to do with how effective resource usage is. It's more because of issues with the scripts.
And to a video game developer? As I said, niche uses have been found and are great for the technology. For a video game developer, though, it is a complete failure with too few adopters.
"A Tradition is only as good as it's ability to change." Nael
-
- Posts: 718
- Joined: Wed, 3. Jul 13, 03:21
Re: Vulkan & Multi-GPU
Ambitious. My guess though is that there are *far* greater optimizations to be found elsewhere in the implementation.
-
- Posts: 377
- Joined: Mon, 15. Mar 04, 08:07
Re: Vulkan & Multi-GPU
So after reading everything posted, I need to clear some things up.
Well, first, I am aware multi-GPU is never going to be a priority; ideally even I would prefer a single GPU, or a single PCB with multiple dies (in the case of the AMD R9 290X x2, for example), though even those dual-GPU PCBs tend to act the same way as SLI or CrossFire in terms of detection by the system (and I am unsure if that is a software or hardware limitation). As for being a waste of time for game devs, well, statistically yes it would be, and money spent trying to optimise the engine for it is not worth the dev time. But I will touch at least on AMD's side of things: consider that almost all AMD GPU architecture has been GCN based, regardless of what the core architecture actually is, for the past 5-6 years. While it is ideal to have the same card installed to keep issues down, that only really matters if you are trying to spread the load between both. I am not going to spread the load between both; I am going to treat each as an individual device, and they do not need to talk to each other very much at all, quite similar to how Nvidia PhysX cards worked in the days of old: they were never reliant on cross-PCB communication or on what card you had installed.
Only it needed to be Nvidia, but that was a driver limitation, not a hardware limitation. Now, considering that every AMD card you find today is GCN, any AMD driver supports this, or, in better terms, any Vulkan-driven application can support this no matter whether the card is Nvidia or AMD. A lot of this is theory; I have no idea how it would play out under game situations, but I do have an idea how it would play out doing simple test renders, as the CPU is no longer trying to calculate physics and thus you have just eliminated a good 40% of your overhead.
Ship pathing/trade: well, yes, it is a problem with scripts, but scripts are still cycling through logic, and every time the game has to recalculate a route it eats into the instructions available for more important tasks. Whereas if the approximations were faster, ships might not collide as frequently either, and you can see where that is going. Fixing the scripts would be ideal, but so would improving the equations used to approximate the math, reducing the calculation time overall; I feel both could be improved.
Collisions, hmm, this is a tricky one. There are things they probably already are doing, such as unloading collisions when the player is not in the active sector and not doing as much to process them; the engine just knows they collided but does not do animations or avoidance with as much approximation. But if this is not the case, it could save a dozen or so instructions per sector.
Why Multi-GPU?
A challenge, I guess; not something I expect to be for everyone. But as others pointed out, collision math is hard on the CPU, whereas GPUs are relatively good at calculating physics. So I was not actually going to try to load balance things at all; I was going to explicitly move aspects of the math onto a second GPU, and its sole job would be to calculate the physics math. If that is all it has to do, I imagine it would finish faster than everything else is calculated, as it can do this while the CPU is calculating pathing. It really should not increase latency, and even if it did increase latency a little bit, well, I guess any improvements to the scripts would win that latency back in another area.
Thus balance should remain, with maybe not more calculations per cycle but hopefully smarter and faster calculations, which in theory at least will all show up in render latency.
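A hedged sketch of that "second card as a pure number-cruncher" setup (device selection, the actual compute shader, and how results get back to the CPU are all left out; this assumes plain Vulkan 1.0 and is not based on anything in X4's engine): the non-display GPU is simply opened as its own logical device with a compute-capable queue.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Find a queue family on the given GPU that supports compute work
// (graphics support is not required for a pure number-cruncher).
uint32_t findComputeQueueFamily(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());
    for (uint32_t i = 0; i < count; ++i)
        if (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT)
            return i;
    return 0;  // fall back to family 0
}

// Open the secondary GPU as its own logical device with one compute queue.
VkDevice createComputeOnlyDevice(VkPhysicalDevice secondaryGpu, uint32_t& outFamily) {
    outFamily = findComputeQueueFamily(secondaryGpu);

    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo{};
    queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = outFamily;
    queueInfo.queueCount = 1;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(secondaryGpu, &deviceInfo, nullptr, &device);
    return device;  // results still have to cross PCI-E back to the CPU or display GPU
}
```

The catch raised earlier in the thread still applies: whatever this device computes has to travel back over PCI-E before the CPU or the display GPU can use it.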
That being said, your output display is equally important, and I don't know why people skimp on monitors. It is almost as if people do not realise that your output device is like an HDD inside a PC: your latency for frame display is also affected and bottlenecked by your display.
The easiest way to describe that is: if your display is locked to 70 Hz then, no matter whether your GPU is ready or not, it cannot present another frame until your display is ready to accept it, and this all adds to latency.
That is the major benefit of G-Sync and FreeSync displays, which have 120-144 Hz panels: you basically remove the latency your output display imposes on your buffered frame.
*modified*
*X3 LiteCube User*
MOD GemFX Real Space Shaders
MOD Variety and Rebalance Overhaul Icon Pack
I lost my Hans and should not be flying Solo.

-
- Posts: 5625
- Joined: Sat, 10. Nov 12, 17:55
Re: Vulkan & Multi-GPU
ledhead900 wrote: ↑Tue, 26. Mar 19, 07:24 Ship Pathing/Trade, well yes it is a problem with scripts but scripts are still cycling through logic every time the game has to re-calculate a route it is eating into our instructions available for more important tasks, where as if the approximations were faster ships may not collide as frequently either and you can see where that is going, fixing the scripts would be ideal but so would improving the equations used to approximate math reducing the calculation time overall I feel both could be improved.
Not really, trade scripts do not recalculate routes all the time.
The cases where you can experience low fps are usually either when the map is overloaded with ship commands etc., or when there are way too many objects to render.
The biggest improvement would probably be if the render loop could be sped up, but that's a different topic.
Changing the engine is out of the question anyway; you cannot refactor/rewrite huge chunks of disassembled code, and you aren't even allowed to. Plus the gains are questionable, as it's highly conditional and separate threads need to know about each other for avoidance.
One might bump into the same issue that makes it faster to run an n-body simulation on a GPU by calculating all pairs at a cost of O(N^2) instead of using an octree at O(N log N).
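For reference, the all-pairs formulation being described looks like this (plain C++ just to show the shape; on a GPU each iteration of the outer loop would become one thread, which is exactly why the "dumber" O(N^2) version maps so well to the hardware):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Body { float x, y, z, mass; float ax = 0, ay = 0, az = 0; };

// Brute-force gravity: every body against every other body. O(N^2) work, but
// there is no shared tree structure and no branching over it, so it parallelises trivially.
void accumulateForcesAllPairs(std::vector<Body>& bodies) {
    const float soften = 1e-3f;  // softening term to avoid divide-by-zero at tiny distances
    for (std::size_t i = 0; i < bodies.size(); ++i) {   // one GPU thread per body in a compute shader
        float ax = 0.0f, ay = 0.0f, az = 0.0f;
        for (std::size_t j = 0; j < bodies.size(); ++j) {
            if (i == j) continue;
            const float dx = bodies[j].x - bodies[i].x;
            const float dy = bodies[j].y - bodies[i].y;
            const float dz = bodies[j].z - bodies[i].z;
            const float distSq = dx * dx + dy * dy + dz * dz + soften;
            const float invDist = 1.0f / std::sqrt(distSq);
            const float f = bodies[j].mass * invDist * invDist * invDist;  // G folded into mass
            ax += f * dx; ay += f * dy; az += f * dz;
        }
        bodies[i].ax = ax; bodies[i].ay = ay; bodies[i].az = az;
    }
}
```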
-
- Posts: 377
- Joined: Mon, 15. Mar 04, 08:07
Re: Vulkan & Multi-GPU
pref wrote: ↑Tue, 26. Mar 19, 14:13
ledhead900 wrote: ↑Tue, 26. Mar 19, 07:24 Ship Pathing/Trade, well yes it is a problem with scripts but scripts are still cycling through logic every time the game has to re-calculate a route it is eating into our instructions available for more important tasks, where as if the approximations were faster ships may not collide as frequently either and you can see where that is going, fixing the scripts would be ideal but so would improving the equations used to approximate math reducing the calculation time overall I feel both could be improved.
Not really, trade scripts do not recalculate routes all the time.
The cases where you can experience low fps usually are either when map is overloaded with ship commands etc, or when there are way too many objects to render.
Biggest improvement would probably be if the render loop could be sped up but that's a different topic.
Changing the engine is beyond question anyway, you cannot refactor/rewrite huge chunks of disassembled code and aren't even allowed to. Plus gains are questionable as it's highly conditional and separate threads need to know about each other for avoidance.
One might bump into the same issue why it's faster to run an nbody simulation on GPU calculating all pairs with a cost of O(N^2) instead of using an octal tree with O(n log(N)).
Interesting; at least I think we can agree there are areas which could be improved, but by what order of magnitude in real gameplay is anybody's guess, as we cannot do much about it except offer up equations to use and hope they like them.
I wonder if anything can be done at the driver level? That is something we might actually be able to improve, specifically with a third-party game-profile driver build.
*modified*
*X3 LiteCube User*
MOD GemFX Real Space Shaders
MOD Variety and Rebalance Overhaul Icon Pack
I lost my Hans and should not be flying Solo.

-
- Moderator (English)
- Posts: 4933
- Joined: Fri, 21. Dec 18, 18:23
Re: Vulkan & Multi-GPU
Interesting, at least I think we can agree there are areas which could be improved but by what order of magnitude it would be in real-gameplay any body's guess as we cannot do much about it except offer up equations to use and hope they like it.
I wonder if anything can be done at the driver level? that is something we might actually be able to improve specifically with a third-party game profile driver build
Extremely unlikely. One must remember that there are thousands of people employed full time by AMD, Intel, NVidia and Microsoft to do this sort of thing. It is unlikely an amateur will get anywhere near as good performance as they are getting, and even if they do achieve better performance it will likely be only single digit percentages at most.
Optimizing the game code and game engine is likely the best area for improvement since, unlike the driver developers, Egosoft does not have thousands of full-time employees dedicated to writing and maintaining their code; they are a small company. One can be pretty sure there are many areas where shortcuts were taken to save time. One can even see this in how much better performance is in 2.20 than it was way back in 1.00; they must have revisited and optimized such parts, as is fairly common in small-scale software development.
The reality remains that they are still a small company. Yes if they had thousands of people working on code the game engine would likely perform a lot better, however that is not possible from an economics point of view given the size of the player base of their games.