Imperial Good wrote: ↑Mon, 9. Sep 19, 09:24
bignick217 wrote: ↑Fri, 6. Sep 19, 23:40
My Threadripper 1950X (16c/32t) tops out at 4.2Ghz on up to 4 cores and it's struggling to keep the framerates up on stations much past 30. You can't tell me that my processor at 4.2Ghz (with XFR) is not enough Ghz grunt to push this game to acceptable framerates on an RTX 2080Ti.
You incorrectly named your processor "Ryzen 9 1950X" earlier in your post which makes no sense. Fortunately I saw this and deleted the response to that I was going to give...
Your CPU is kind of slow for games. Yes it has 32 threads but they are slow threads. First generation Ryzen was slow. To put it in perspective a Ryzen 5 3600 would likely give you ~30% more FPS.
GHz mean nothing really now due to physical limits. It is largely about IPC as well which is where both second generation and third generation improved. This is also why the Ryzen 5 3600 is a lot better for gaming than even a Ryzen 2700X despite having 2 less cores.
I'm giving you a fair bit of latitude with a lot of your responses. Some things you get right. Others you missed the mark but were close enough. This one, you are way off and it needs to be addressed. I think I know my own processor better than you do. First off, I did not "incorrectly" state my processor initially as the R9 1950X. While "Threadripper" is the colloquially accepted name for all Threadripper processors, R9 is its actual class designation. In the newest third generation of Ryzen processors (technically "Zen 2"), the R9 designation has been taken up by AM4-class processors that exceed 8 physical cores, but in the first generation of chips, R9 was Threadripper's designation.
Second, are you out of your mind? No, it's not slow for games. Never has been. It is more than capable of handling everything thrown at it with good, high framerates, especially if you pair it with 3200 MHz RAM (and even better if you pair it with CL14 Samsung B-die modules) like I have, which speeds up the Infinity Fabric and in turn the die-to-die communication. What it is not good for is ultra-high framerates, if you're wanting to push past 200 fps for 200 Hz+ monitors at 1080p. But at higher resolutions like 1440p there is very little difference between the 1950X and other processors of its generation, and at 4K the differences are virtually indistinguishable. It was Intel's IPC and frequency advantage that allowed them to hang on to the ultra-high-framerate lead, a lead they have been quickly losing over successive Ryzen generations. But in case you didn't notice in my original post, I don't game on this CPU at 1080p. I game at 4K, which means I don't care about ultra-high framerates. If I did, I would be playing competitive FPS games like COD where that matters, not X4 (a game where I only want to maintain 60 fps).
On top of that, if you had done your homework before speaking on this topic, you would know that back then the 1950X actually outperformed the 1800X, 1700X and 1600X in most games of that generation. Some by a little, some by a considerable margin. There were only a few games where the 1950X ended up slower, and that was usually due to an issue between the game and the UMA/NUMA memory configurations, which you could usually fix by simply changing a setting. The 1950X was actually so good at gaming that when the second-generation Ryzen processors released, while the AM4 chips saw a considerable increase in gaming performance, the 2950X only saw a marginal 5-10% improvement that equated to about 5 additional frames per second on average. There were one or two outliers that gained about 10 frames, but for the most part it was only a few frames' difference.
Trying to use the R5 3600 (a processor that has only just released) to justify your argument that the 1950X is "slow" for gaming because its threads are "slow" is ridiculous. The 1950X has two full-fat 1800X dies on it that are clocked higher than the 1800X was: the 1800X only went up to 4.1 GHz with XFR, whereas the 1950X goes up to 4.2 GHz. For you to call the 1950X "slow" would be the same as calling the 1800X, 1700X and 1600X all "slow", because they all use the exact same dies (with the 1600X simply having 2 cores disabled), and the 1950X got the best-binned dies. Just what do you think Threadripper is? Epyc? I'm sorry, but you're wrong. Threadripper is not comparable to an Intel Xeon scenario. And before you ask, I use my PC for more than gaming. I did not buy it just for gaming. Oh, and BTW, I've been building computers for about 20 years now. Trust me when I tell you that I probably know more about hardware and the way computers actually work than you do.
And what the hell do you mean, "GHz mean nothing"? GHz and IPC go hand in hand when determining single-thread (or better said, per-thread) performance; you can't have one without the other. But you are right that we have pretty much hit the limit of frequency. And IPC can only get you so far: IPC is all about efficiency, and you can only make something so efficient before you start running out of ideas. That's why you see big IPC improvements in the first few generations of a new architecture, and thereafter each successive generation's "improvements" become smaller and smaller (diminishing returns). Intel has illustrated this concept for years. That's also why you're now seeing an explosion of cores and threads in processors: it's now a lot easier to add more cores and threads than it is to push frequencies higher. And that's why games are becoming a lot more multithreaded these days, because you can process a lot more instructions on two threads at 4 GHz than you can on one thread at 5 GHz. And you can do that while using less power and producing less heat, because you're not overtaxing one thread while other threads just sit there doing nothing, wasting resources. The term I heard one developer use is that they now need to start programming wide instead of tall.
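To put rough numbers on that two-threads-at-4-GHz point, here's a back-of-the-envelope sketch. The formula (throughput ≈ threads × clock × IPC) and the IPC value are simplified placeholders for illustration, not benchmark data:

```python
# Rough illustration: per-thread throughput is approximately IPC x clock,
# so total throughput scales with thread count when the work parallelizes.
# IPC of 1.0 is a made-up round number, not a measured figure.

def throughput(threads, ghz, ipc):
    """Approximate instructions per second across all threads."""
    return threads * ghz * 1e9 * ipc

one_fast = throughput(threads=1, ghz=5.0, ipc=1.0)    # one 5 GHz thread
two_slower = throughput(threads=2, ghz=4.0, ipc=1.0)  # two 4 GHz threads

print(two_slower / one_fast)  # 1.6 -> 60% more instructions in flight
```

Same IPC, lower clock, but the wider configuration still comes out well ahead, which is exactly why "GHz" alone tells you so little.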
Do I think it's easy? No, of course not. But when you have a bunch of people complaining about performance issues (even people using "gaming" CPUs, even though there's technically no such thing as a "gaming" CPU), and everyone keeps saying the problem is with the CPU even though these people have threads to spare while only two are said to be getting slammed, it's not outside the realm of reason to think that maybe the developers should do something about it and give a couple of those extra unused threads something to do. While I've been playing this game, my overall CPU usage barely ever hits 10% (usually it sits around 8%). If the problem is as others have said and the CPU is at fault because two threads aren't enough, then please, by all means, use more of my cores. That's what they're there for! I doubt anyone else rocking 6 cores, 8 cores, 6 threads, 12 threads or 16 threads would have a problem with X4 using more of their processor resources if it meant the game would run better.
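For anyone wondering what "giving unused threads something to do" looks like in practice, here's a minimal, hypothetical sketch of programming wide: fanning independent simulation updates out to a worker pool instead of running them all on one hot thread. This is not based on X4's actual engine, and note that CPython's GIL means threads here only illustrate the structure (a real game engine would use native threads to get true parallelism):

```python
# Hypothetical sketch: spread independent per-station simulation updates
# across a worker pool rather than one thread. Not X4's real code; the
# function and workload are invented stand-ins.
from concurrent.futures import ThreadPoolExecutor

def update_station(station_id):
    # Stand-in for an independent per-station simulation step.
    return station_id * 2  # dummy work

stations = range(100)
with ThreadPoolExecutor(max_workers=8) as pool:
    # map() farms each station's update out to whichever worker is free.
    results = list(pool.map(update_station, stations))

print(sum(results))  # 9900
```

The catch, of course, is that this only works when the updates really are independent; the hard engineering is untangling the dependencies, which is why "just use more threads" is easier said than done.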