When you have cpus and memory as fast as we do today, a 2x improvement would be a move from insanely fast to even more insanely fast...
Mmm, really depends on the task. If it's a single thing triggered by user interaction, yeah, it's super unlikely to matter if it completes in 100 ms instead of 200 ms. If it's GIMP moving onto GEGL and reducing times for certain image processing operations on my older Windows laptop by several seconds, that's a very noticeable improvement in actually getting stuff done. Same for a lot of GIS operations or other similar kinds of data manipulation, which often take seconds to minutes even on modest datasets of a few tens of MB. In computationally intensive tasks like video rendering, AI training, and numerical simulation, dropping from somewhere around 24 hours to vaguely 12ish can be dramatic: instead of dedicating a machine to runs, you can build workflows around using it during the day and letting it crunch overnight. There's also a breakpoint around 60 hours, since that's the difference between fitting into a weekend run and spilling over.

Problem, in my experience, is not the coding, it's the testing and retesting (QA) to get to release.
Speaking as someone who's shipped multiple network-oriented products out of several different organizations, it should be possible to get that under control, at least as a software engineering capability. Not that we didn't sometimes need a few VMs in parallel to run tests overnight on daily builds or get slammed with analyzing test failures. Usually the problem I've encountered, though, is management wanting to squish everything into devops with the idea that tasking people with simultaneously running, developing, and testing a service (or service plus client) somehow leads to everything getting done well. Organizations which explicitly dedicated heads to test automation tended to do better, both in mitigating development cost spirals and in maintaining code quality. Until somebody in upper management decided to lay off the programmers in test because accounting had them on the books as a separate cost center, anyways.

Also too much heat in normal use.
Did he do power measurements, thermal measurements, or indicate sources? Haven't seen much data but the one good third-party measurement I know of found 375–500 mW/GB max for DDR5-4800 to 5400. Which isn't greatly different from the nominal 375 mW/GB commonly assigned for DDR4.
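For scale, taking those figures at face value: a 32 GB kit at 375–500 mW/GB works out to roughly 12–16 W under full load, against roughly 12 W for DDR4 at the nominal 375 mW/GB. Noticeable, but hardly a space heater.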

So far I haven't encountered many DDR5 reliability complaints on Intel platforms. Seems like the main issue is price, though the midrange DDR5–DDR4 spread is currently around 25% in the markets I track, which is a ~15% premium on a performance-adjusted basis. DDR5-5200 is now where DDR4-3200 was last spring, so it's hard for me to call it too terrible. Even at 128 GB, +25% is a fraction of what people here will spend for a lens with slightly nicer optical properties or changing up a body tier.
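(For anyone checking the arithmetic: the ~15% figure comes from treating DDR5-5200 as worth roughly 9% more effective performance than DDR4-3200, since 1.25 / 1.09 ≈ 1.15.)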

Open question on AMD DDR5 at this point. AMD's certainly known for buggy launches but, since they've been in a good financial position during Raphael's development, I've seen a fair amount of hope that Raphael might not be as bad.

[H]ere in Oz we often don't have parts that are freely available in the USA. Our market is too small, with a population less than California.
On the flip side, Oz has shorter transport distances to southeast Asia and sometimes access to parts which aren't distributed to North America or Europe. There's also parts distributed to Europe which are hard to get in North America and vice versa. Even within Europe there are things you can get in Sweden but not in Norway, for example.

Out of curiosity, what's been mentioned that's not available in Oz?
 
Mmm, really depends on the task. If it's a single thing triggered by user interaction, yeah, it's super unlikely to matter if it completes in 100 ms instead of 200 ms. If it's GIMP moving onto GEGL and reducing times for certain image processing operations on my older Windows laptop by several seconds, that's a very noticeable improvement in actually getting stuff done. Same for a lot of GIS operations or other similar kinds of data manipulation, which often take seconds to minutes even on modest datasets of a few tens of MB. In computationally intensive tasks like video rendering, AI training, and numerical simulation, dropping from somewhere around 24 hours to vaguely 12ish can be dramatic: instead of dedicating a machine to runs, you can build workflows around using it during the day and letting it crunch overnight. There's also a breakpoint around 60 hours, since that's the difference between fitting into a weekend run and spilling over.

Well, we have been discussing consumer software, not scientific or financial operations at a server room... and even there, I suspect most of the optimization is going to be in getting the data to the processors and not in the processing optimization...

Speaking as someone who's shipped multiple network-oriented products out of several different organizations, it should be possible to get that under control, at least as a software engineering capability. Not that we didn't sometimes need a few VMs in parallel to run tests overnight on daily builds or get slammed with analyzing test failures. Usually the problem I've encountered, though, is management wanting to squish everything into devops with the idea that tasking people with simultaneously running, developing, and testing a service (or service plus client) somehow leads to everything getting done well. Organizations which explicitly dedicated heads to test automation tended to do better, both in mitigating development cost spirals and in maintaining code quality. Until somebody in upper management decided to lay off the programmers in test because accounting had them on the books as a separate cost center, anyways.

No arguments there, but your last sentence pretty much says what I said - there is more cost to product release than just software development and, barring the very big companies, just about everywhere I have seen there is always a shortage of time and resources for the releases management wants. Globally, there is an estimated shortage of 1.2 million developers coming in 2026, and it might be dozens of times that by 2030. It is not a surprise that there is selective optimization.
 
On the flip side, Oz has shorter transport distances to southeast Asia and sometimes access to parts which aren't distributed to North America or Europe. There's also parts distributed to Europe which are hard to get in North America and vice versa. Even within Europe there are things you can get in Sweden but not in Norway, for example.
Quite right. However, the distances aren't shorter if the bits come via, e.g., the USA first. This is not uncommon for all sorts of things here.
Out of curiosity, what's been mentioned that's not available in Oz?
Motherboards in particular. Many are listed as available, but are special order only. Both CPL and mWave can have long wait times for specific items. These are two of the biggest and most reputable online and storefront (CPL) suppliers here.

My build is currently waiting on the NVMe 1 TB SSD (!), which CPL list as in stock.

Not one of their many stock build boxes meets my requirements, often falling short in more than one important way.

The most expensive I looked at (heading towards AUD $10,000!) will not be as capable for my purposes as what I've specified.

An unexpected development is that I will apparently need to buy software to clone my existing SATA MBR SSD onto the new NVMe SSD. Only AUD $102.84 (about US $70) for a lifetime licence, so not the end of the earth, and I have other uses for it. Apparently Windows 7 Pro 64 doesn't support NVMe SSDs without a patch, which Microsoft no longer makes available ...

So the process becomes: backup W7 Pro 64 system image, upgrade to Windows 10 Pro 64 in the existing box, backup new system SSD to existing HDDs, move SATA SSD to new box, clone MBR SATA SSD to UEFI NVMe SSD, remove old SSD, boot system with UEFI NVMe SSD, relocate existing SATA HDDs into new box. Should keep me occupied for a bit.
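(An aside: Windows 10 apparently ships a built-in MBR-to-GPT converter, mbr2gpt, so in principle the clone could be converted in place afterwards rather than staying on MBR. A rough sketch only, with the disk number hypothetical and the validate step reporting whether the layout actually qualifies, run from an elevated prompt:

mbr2gpt /validate /disk:0 /allowFullOS    (dry run: checks the disk can be converted)
mbr2gpt /convert /disk:0 /allowFullOS     (does the conversion; back up first)

After a successful convert, the BIOS has to be switched from legacy/CSM to UEFI boot before the next restart.)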

Then rebuild Bridge cache in the new box.

This will be the first box in about 30 years that I haven't built myself.

The joy ...
 
Motherboards in particular.
Interesting. Just checked the ASRock, ASUS, and MSI boards I'd be most likely to purchase and they showed as in stock across multiple Australian retailers. Presumably they don't all share CPL's stock counting difficulties? CPL did show a five day lead on the MSI MAG B550 Tomahawk but that was the only motherboard-retailer combination actually indicated as out of stock.

Apparently Windows 7 Pro 64 doesn't support NVMe SSDs without a patch, which Microsoft no longer makes available ...
Windows 7 end of life was 2.5 years ago but, yeah, it's a bit curious the page for KB2990941 stayed up but lost the actual download.
 
Interesting. Just checked the ASRock, ASUS, and MSI boards I'd be most likely to purchase and they showed as in stock across multiple Australian retailers. Presumably they don't all share CPL's stock counting difficulties? CPL did show a five day lead on the MSI MAG B550 Tomahawk but that was the only motherboard-retailer combination actually indicated as out of stock.
What's showing in stock and what they actually have can be two very different things. I also happen to be very happy with Gigabyte motherboards. Never had one fail.
Windows 7 end of life was 2.5 years ago but, yeah, it's a bit curious the page for KB2990941 stayed up but lost the actual download.
Microsoft do that as a matter of course these days. Bastards.

As a species, we have to stop this kind of behaviour, which gives a short term benefit to commercial entities, and a disastrous long term outcome for this beautiful planet.

ALL of these fixes and patches are still there, as Microsoft is still supporting Windows 7 for big companies. I suspect the latter told Microsoft where to go, as they weren't about to change hundreds of thousands of workstations to W8, W8.1 or W10, all of which barely work by comparison. W9 was stillborn ...

As usual, the general population just get screwed over.

My new partitioning software also picked up logical errors on my boot disk. Fixed.

I will run CHKDSK on my two remaining drives, as it has been far too long since I've done this.
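For anyone following along, the two relevant variants (drive letters hypothetical):

chkdsk D: /f    (fixes logical file system errors only - the quick check)
chkdsk D: /r    (implies /f and also locates bad sectors - the long five-stage run with the surface scan)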

Life is what happens while you are planning other things ...
 
OK. All drives have now had the five-stage CHKDSK run; all OK or just logical errors, no surface scan errors.

Next job is to make a second disk image of my boot SSD. I'm a belts and braces man ...

Then the tedious job of the upgrade to Windows 10 Pro 64.

This took several days on our other main workstation. The initial upgrade only took about 3-4 hours, then the seemingly endless updates and upgrades took about 4-5 days, IIRC. That PC ran out of puff at v.21H2.
 
I learned about 30 years ago that the only type of programming I enjoy anymore is writing code for embedded systems. About the only place that optimization and tight code mean anything. Optimizing code for vector supercomputers and array processors was fun.
Networking: fast was doing a vertical stack in MIPS assembly language with a CAM for address resolution. SMTP in 200 lines of assembly, using a MUSIC Content Addressable Memory on the NIC. I had the card custom made, including the framing chips. This was in the R&D days of optical networking, the 90s for me.

Running hot: the computer room had halon, and it fired. They bought me a faster computer after that. Intel Sugarcube, 80 MFlop/s 80286. That was fast for an 80286.

After almost 50 years of writing code, it's just typing. But it does pay well.
 
I bought a new PC recently as well. Use it for simpler tasks. It is fanless so totally quiet! (a dream for me with sensitive ears)

Coolby YealBox Intel N3350 Mini PC 6GB DDR4 RAM 120GB SSD RJ45 1000M LAN HDMI Vega Double Screen 4K 60FPS Windows10 Pro Mini Computer

20220921_185206.jpg
 
It is fanless so totally quiet!
Cool. How do you like the dual core 6 W TDP footprint?

I learned about 30 years ago that the only type of programming I enjoy anymore is writing code for embedded systems. About the only place that optimization and tight code mean anything.
Yah, smaller embeds are the only place I know of where hardware's remained constrained enough that spending the engineering time to watch everything closely has stayed a priority. SIMD intrinsics go instruction by instruction, but usually those kernels are something you do occasionally as part of a broader job.

Something I've found I enjoy is asynchronous parallel programming. It's a very different sort of approach to performance but figuring out how to structure tasks and data structures so nothing blocks on anything else but several worker threads still communicate effectively to keep all cores hot has proved interesting. At the moment I've a pipelined set of spatial operations where one pass needs to complete before the next one starts, but only locally, so it ends up multidimensionally and regionally partitioned over a surface that's potentially non-convex, may contain holes, and may or may not be disjoint by distances which exceed the locality.
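For flavour, a minimal toy sketch of the dependency-counting scheduling I mean, in Python with all names hypothetical (a real kernel would have to release the GIL, e.g. via NumPy/C, or use processes to actually keep cores hot): pass p may run on a tile as soon as pass p-1 has finished on that tile and its neighbours, so distant regions race ahead while nothing ever blocks globally.

import threading
from concurrent.futures import ThreadPoolExecutor

PASSES, GRID = 3, 4          # three pipeline passes over a 4x4 tile grid

def process(p, i, j):        # stand-in for the real per-tile kernel
    pass

def neighbours(i, j):        # the tile itself plus its 8-neighbourhood
    return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < GRID and 0 <= j + dj < GRID]

lock = threading.Lock()
done = threading.Event()
remaining = [PASSES * GRID * GRID]
# pending[(p, i, j)] = how many pass p-1 tiles this task still waits on
pending = {(p, i, j): len(neighbours(i, j))
           for p in range(1, PASSES) for i in range(GRID) for j in range(GRID)}
pool = ThreadPoolExecutor()

def run(p, i, j):
    process(p, i, j)
    ready = []
    with lock:
        if p + 1 < PASSES:   # unblock next-pass tiles where we were the last dependency
            for (ni, nj) in neighbours(i, j):
                pending[(p + 1, ni, nj)] -= 1
                if pending[(p + 1, ni, nj)] == 0:
                    ready.append((ni, nj))
        remaining[0] -= 1
        if remaining[0] == 0:
            done.set()
    for (ni, nj) in ready:
        pool.submit(run, p + 1, ni, nj)

for i in range(GRID):        # pass 0 has no dependencies; seed every tile
    for j in range(GRID):
        pool.submit(run, 0, i, j)
done.wait()
pool.shutdown()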

Microsoft is still supporting Windows 7 for big companies.
Mmm, pretty sure Microsoft isn't in any sense relevant to your build, since Windows 7 extended support ended January 14, 2020. Extended security updates for paying volume licensing customers may be issued until January 2023 which, in a narrow sense, does constitute some level of ongoing support.
 
The Intel Sugarcube was asynchronous parallel: four 80286 nodes, each with a 20 MFlop Sky array processor. Each node communicated over a network connection, with the programmer responsible for all the synchronization and for keeping the array processors fed. It was fine for generating synthetic imagery, fractal-based, where there was not much in the way of feeding data across the network. Computationally intensive, but it generated its own seeds. The Texas Instruments ASC compiler spoiled you: it analyzed your code for sections that could be run independently and split them across multiple execution units by itself. I've never used a compiler since that recognized parallelism like it did. I had a thorough knowledge of that machine, including having some of the vector instructions implemented in FPGA for an embedded system I made 30 years after the ASC was retired.

Feeding satellite imagery to an FPS120b array processor was a problem. Lots of data, intense crunching for a short while - but long enough that it was worthwhile doing it on the AP and not the VAX 11/780. Until the FPS120b power supplies overheated and it caught on fire. Halon put it out.

These days, processing DNG files using Watcom Fortran on my Centrino Core2 XP SP3 machine is plenty fast, much faster than Photoshop running on the i7. My XP machines are not on any network, the adapters removed.

I've not bought a new computer for myself in almost 10 years. Backup drives, yes. Computers for my wife and daughter, yes. Make sure they have at least 16 GB of RAM and recent i7 processors. But, sticking with the 18 MPixel M9 and M Monochrom and the 16 MPixel Df, I have no need for an upgrade. I do the image manipulation on the DNG files with my own code, and use Lightroom to export to JPEG. I spend my "fun money" on vintage lenses. My latest 90-year-old Sonnar cost almost as much as the CP/M machine did in 1981. Back when a young guy went with a computer instead of a new car.
 
Mmm, pretty sure Microsoft isn't in any sense relevant to your build, since Windows 7 extended support ended January 14, 2020. Extended security updates for paying volume licensing customers may be issued until January 2023 which, in a narrow sense, does constitute some level of ongoing support.
It's a minor detail, but I'm upgrading from Windows 7 Pro 64 to Windows 10 Pro 64 coincidentally with the hardware swap ...

So yes, just a little bit relevant.

Currently doing the final system image backup prior to commencing the upgrade to Windows 10 Pro 64. I've done all the other housekeeping and critical data backups.

I've never lost a client's data yet. I do not wish to start by losing any of my own ...
 
Cool. How do you like the dual core 6 W TDP footprint?
How do you mean? I think it runs well. The whole box gets a little warm but it works fine. I had a similar one before with a fan and that one got too hot, so performance degraded once in a while, very frustrating. This one runs much better, even though it does not have any fan :)
 
Urrrk; good luck. I'm sticking with Win7 as long as I can. Dreading the inevitable change to 10/11 (and I'm not referring to that as an *up*grade)
Actually, once you download, install and configure Classic Shell, W10Pro is usable, Irene.

It's also better in some ways, just not as straightforward ... three steps instead of two for many system processes.

Allow a day for backing up data and a couple of system images.
Then a day for installation and updates.

I noticed today that Microsoft was serving me data at 8 Mbps (!!!) over a hybrid fibre/coax link that runs like a double-header diesel train at 106 Mbps download speed. The update is even slower ...

However, the upgrade process is extremely 'clean'. The one and only bug that I have found is that it removes Classic Shell at every opportunity! Grrr ...

Classic Shell even has a one click button to restore a Windows 7 type Shell. However, I prefer proper menus, etc.

And BTW my main workstation is now running W10Pro 64 like a dream.

My new box will be available for pickup on Monday.

Then I just have to clone my existing SATA MBR SSD to the new 1 TB NVMe SSD and resize it into two partitions (400 + 600 GB?): one for the OS and programs, the other for Bridge cache and current working files.
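If the bundled tools fall short, the stock Windows route would be something like this rough diskpart sketch (disk/partition numbers and sizes hypothetical; shrink works in MB, so ~600 GB is 614400):

diskpart
select disk 1
select partition 2
shrink desired=614400      (frees ~600 GB from the cloned partition)
create partition primary   (uses the freed space)
format fs=ntfs quick
assign
exit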
 
Running hot: the computer room had halon, and it fired. They bought me a faster computer after that. Intel Sugarcube, 80 MFlop/s 80286. That was fast for an 80286.
That halon dump was a very expensive event for them. Upgrading your computer was a lot more economical than yet another halon discharge and refill, plus the environmental impact paperwork is a pain.
 
It's built, installed and mostly functional. Windows 10 Pro 64 is no match for Windows 7 Pro 64 in most respects ...

OS 'features' need to be fixed, and the Intel on-board NIC is only running at 100 Mbps.

It is very quick. AOMEI software was critical in getting the boot disk cloned.

DO NOT use UEFI unless you are doing a clean install! In my own case, a clean install would mean reinstalling around 150 programs, so not really something to be undertaken lightly. Use MBR for the NVMe SSD instead.

Partitioned the new NVMe SSD as 300 GB for the OS, 200 GB for the Bridge caches, and 400+ GB for fast, temporary storage.

Not without its problems ...
Thank god for the motherboard hardware reset button ...

Adobe is having a hissy fit, of course. Everything else seems to be working beautifully.

My back is killing me!
 
@Brian Brian, I learned well over 30 years ago never to buy 'brand name' boxes. They tend to be full of gotchas. E.g., HP made 32 different servers, not one single one of which was upgradeable to another level!
I've been sticking with mostly Corsair parts, and now an MSI motherboard; I like what these two companies offer. My PC is the Corsair Bulldog barebones kit, with many of the parts swapped out or upgraded over the years.
 