"When you have cpus and memory as fast as we do today, a 2x improvement would be a move from insanely fast to even more insanely fast..."

Mmm, really depends on the task. If it's a single thing triggered by user interaction, yeah, it's super unlikely to matter whether it completes in 100 ms instead of 200 ms. If it's GIMP moving onto GEGL and cutting certain image processing operations on my older Windows laptop by several seconds, that's a very noticeable improvement from a user perspective of getting stuff done. Same for a lot of GIS operations and other similar kinds of data manipulation, which often take seconds to minutes even on modest datasets of a few tens of MB. In computationally intensive tasks like video rendering, AI training, and numerical simulation, dropping from somewhere around 24 hours to roughly 12 can be dramatic: instead of dedicating a machine to runs, you can build workflows around using it during the day and letting it crunch overnight. There's also a breakpoint around 60 hours, since that's the difference between fitting into a weekend run and spilling over.
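Here's a rough sketch of that scheduling arithmetic in Python, if it helps. The window lengths (a ~16-hour overnight slot, a ~60-hour weekend) and the example runtimes are illustrative assumptions, not measurements from any particular workload.

```python
# Which scheduling windows a long-running job fits into after a speedup.
# Window sizes are assumptions chosen for illustration, not measured values.

OVERNIGHT_HOURS = 16   # e.g. roughly 6 pm to 10 am the next day
WEEKEND_HOURS = 60     # roughly Friday evening to Monday morning

def usable_windows(baseline_hours: float, speedup: float) -> list[str]:
    """Return which windows a job fits into after applying a speedup factor."""
    hours = baseline_hours / speedup
    windows = []
    if hours <= OVERNIGHT_HOURS:
        windows.append("overnight")
    if hours <= WEEKEND_HOURS:
        windows.append("weekend")
    return windows

# A 24-hour run at 2x (~12 h) newly fits overnight;
# a 100-hour run at 2x (~50 h) newly fits into a weekend.
print(usable_windows(24, 2.0))   # ['overnight', 'weekend']
print(usable_windows(100, 2.0))  # ['weekend']
```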
"Problem, in my experience, is not the coding, it's the testing and retesting (QA) to get to release."

Speaking as someone who's shipped multiple network-oriented products out of several different organizations, it should be possible to get that under control, at least as a software engineering capability. Not that we didn't sometimes need a few VMs running in parallel to get through overnight tests on daily builds, or get slammed analyzing test failures. Usually the problem I've encountered, though, is management wanting to squish everything into devops, with the idea that tasking people with simultaneously running, developing, and testing a service (or service plus client) somehow leads to everything getting done well. Organizations that explicitly dedicated heads to test automation tended to do better, both in mitigating development cost spirals and in maintaining code quality. Until somebody in upper management decided to lay off the programmers in test because accounting had them on the books as a separate cost center, anyways.
"Also too much heat in normal use."

Did he do power measurements, thermal measurements, or indicate sources? I haven't seen much data, but the one good third-party measurement I know of found 375–500 mW/GB max for DDR5-4800 to 5400, which isn't greatly different from the nominal 375 mW/GB commonly assigned to DDR4.
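For a back-of-the-envelope sense of what those per-GB figures mean at the wall, here's a small Python sketch. The per-GB numbers are the ones quoted above; the capacities are just examples.

```python
# Rough DRAM power estimate from per-GB figures cited above:
# ~375 mW/GB nominal for DDR4, 375-500 mW/GB measured max for DDR5-4800/5400.

DDR4_MW_PER_GB = 375          # nominal figure cited above
DDR5_MW_PER_GB = (375, 500)   # measured range cited above

def memory_power_watts(capacity_gb: int, mw_per_gb: float) -> float:
    """Total DRAM power in watts for a given capacity and per-GB draw."""
    return capacity_gb * mw_per_gb / 1000.0

for capacity in (32, 64, 128):
    ddr4 = memory_power_watts(capacity, DDR4_MW_PER_GB)
    ddr5_lo = memory_power_watts(capacity, DDR5_MW_PER_GB[0])
    ddr5_hi = memory_power_watts(capacity, DDR5_MW_PER_GB[1])
    print(f"{capacity:>3} GB: DDR4 ~{ddr4:.0f} W, DDR5 ~{ddr5_lo:.0f}-{ddr5_hi:.0f} W")
```

So even at 128 GB the worst case is on the order of 64 W versus 48 W, which is why actual measurements matter more than vague "runs hot" claims.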
So far I haven't encountered many DDR5 reliability complaints on Intel platforms. Seems like the main issue is price, though the midrange DDR5-DDR4 spread is currently around 25% in the markets I track, which works out to a ~15% premium on a performance-adjusted basis. DDR5-5200 is now priced where DDR4-3200 was last spring, so it's hard for me to call it too terrible. Even at 128 GB, +25% is a fraction of what people here will spend on a lens with slightly nicer optical properties or on moving up a body tier.
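In case the "performance-adjusted" bit isn't obvious, here's the arithmetic as a short Python sketch. The 25% price spread is from the figures above; the relative performance number is a placeholder assumption you'd replace with your own benchmark results.

```python
# Price premium per unit of performance: how a ~25% price spread shrinks
# once you account for DDR5 being somewhat faster in practice.

def adjusted_premium(price_ratio: float, perf_ratio: float) -> float:
    """Fractional price premium after dividing out relative performance."""
    return price_ratio / perf_ratio - 1.0

price_ratio = 1.25   # DDR5 kit ~25% more than a comparable DDR4 kit (figure above)
perf_ratio = 1.09    # assumed ~9% application-level gain; substitute real data

print(f"{adjusted_premium(price_ratio, perf_ratio):.1%}")  # ~14.7%, i.e. the ~15% cited
```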
Open question on AMD DDR5 at this point. AMD's certainly known for buggy launches, but since they've been in a good financial position during Raphael's development, I've seen a fair amount of hope that Raphael might not be as bad.
"[H]ere in Oz we often don't have parts that are freely available in the USA. Our market is too small, with a population less than California."

On the flip side, Oz has shorter transport distances to Southeast Asia and sometimes gets access to parts which aren't distributed in North America or Europe. There are also parts distributed in Europe which are hard to get in North America, and vice versa. Even within Europe there are things you can get in Sweden but not in Norway, for example.
Out of curiosity, what's been mentioned that's not available in Oz?