HP Updates The Z240 Workstation With The Core i7-6700K
by Brett Howse on July 26, 2016 1:00 PM EST - Posted in Workstations, HP, Xeon, Skylake
HP has an interesting announcement today: they are refreshing their existing Z240 workstation, which is targeted towards small and medium-sized businesses, with a non-Xeon, Core i7 based processor option. The Z240 was already available with Skylake-based Xeon CPUs, up to the Intel Xeon E3-1280 v5. That's a 3.7-4.0 GHz quad-core Xeon with 8 MB of cache and an 80-Watt Thermal Design Power (TDP). That's certainly an excellent choice for a lot of the workloads that workstations are tasked with, and with support for ECC memory, reliability under load is also a key factor. But HP has been talking to their customers and found that many of them have been choosing to forgo the error-checking capabilities of ECC, building or buying equivalent gaming-focused machines in order to get more performance for the money. Specifically, they have been building desktops with the Core i7-6700K, an unlocked 4.0-4.2 GHz quad-core design with a 91-Watt TDP, which in pure frequency can offer up to 13% more performance than the fastest Skylake Xeon.
So armed with this data, HP has refreshed the Z240 line today, with the usual Skylake Xeons in tow but also an option for the Core i7-6700K. This desktop-sized workstation supports up to 64 GB of DDR4-2133, with ECC available on the Xeon processors only. It's a pretty interesting move, but it makes a lot of sense: most customers would rather purchase a workstation from a company like HP so that they get the testing and support offerings found with workstation-class machines. If some of them had to resort to building their own in order to get the best CPU performance, HP has made a wise decision in offering this.
Despite the higher TDP, HP has created fan profiles which they say will allow full turbo performance with no thermal throttling, while at the same time not exceeding their acoustic threshold, which I was told is a mere 31 dB. Although they have offered closed-loop liquid cooling on their workstations in the past, the Z240 achieves this thermal performance with more traditional air cooling.
(Edit from Ian: It has not been stated if HP will implement a variation of MultiCore Turbo/Acceleration at this time, but given the limited BIOS options of the major OEMs in recent decades, this has probably been overlooked. Frankly, I would be surprised if the BIOS engineers had even heard of mainstream motherboard manufacturers implementing the feature, though I will happily be proved wrong.)
The Z240 is currently offered with a wide range of professional graphics options, if required, including the NVIDIA NVS 310, 315, and 510, and Quadro cards up to the M4000. If a user requires AMD professional graphics, HP will offer the FirePro W2100, W5100, W4300, and the W7100 with 8 GB of memory. With yesterday's announcement of the Pascal-based Quadro cards and today's announcement of the new Radeon Pro WX series, HP is likely to offer these soon as well.
A simple mid-cycle device refresh is far from unexpected, but it is pretty interesting that, by talking to their customers, HP has found that many of them would prefer the higher single-threaded performance of a Core i7-6700K over the Xeon ecosystem's focus on stability and ECC. It will be interesting to see if Intel reacts to this, since the Xeon is a nice high-margin product.
As a small comparison, the highest-clocked Xeon E3 v5 is the E3-1280 v5 at 3.7-4.0 GHz, with a recommended customer price of $612 on Intel ARK. The part underneath it is the E3-1275 v5 at 3.6-4.0 GHz, at a more palatable $350. That matches the $350 list price of the Core i7-6700K; however, the i7-6700K has the edge on frequency at 4.0-4.2 GHz. Comparing the two Xeons to the Core i7, HP can offer a bit more performance with the trade-off of no ECC support, and in the case of the top Xeon, save some money as well. For those that need the best raw CPU performance available, especially for single-threaded workloads, this is the way to go short of overclocking.
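As a rough illustration of that arithmetic, the sketch below works through the deltas using only the list prices and clock speeds quoted above. It is a naive, frequency-only comparison and ignores IPC, turbo residency, memory configuration, and the value of ECC itself:

```python
# Back-of-envelope only: uses the list prices and clock speeds quoted above.
# Frequency is a crude proxy for single-threaded performance and ignores
# turbo residency, memory configuration, and the value of ECC itself.
xeon_e3_1280_v5 = {"base": 3.7, "turbo": 4.0, "price": 612}
xeon_e3_1275_v5 = {"base": 3.6, "turbo": 4.0, "price": 350}
core_i7_6700k   = {"base": 4.0, "turbo": 4.2, "price": 350}

def gain(a: float, b: float) -> float:
    """Percentage by which clock a exceeds clock b."""
    return (a / b - 1) * 100

# Like-for-like clock deltas of the i7-6700K over the E3-1280 v5
print(f"base clock:  +{gain(core_i7_6700k['base'], xeon_e3_1280_v5['base']):.1f}%")    # ~8%
print(f"turbo clock: +{gain(core_i7_6700k['turbo'], xeon_e3_1280_v5['turbo']):.1f}%")  # ~5%

# Best case (i7 peak turbo against the Xeon's base clock) is ~13.5%,
# which lines up with the "up to 13%" figure mentioned earlier.
print(f"best case:   +{gain(core_i7_6700k['turbo'], xeon_e3_1280_v5['base']):.1f}%")

# Price gap: the i7-6700K shares its $350 list price with the E3-1275 v5,
# while the top Xeon carries a premium for lower clocks (plus ECC support).
print(f"E3-1280 v5 premium over the i7-6700K: ${xeon_e3_1280_v5['price'] - core_i7_6700k['price']}")  # $262
```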
Source: HP
16 Comments
infowolfe - Tuesday, July 26, 2016
Will it cost you money if your data is corrupted? If yes: ECC. Also there's this: http://www.cs.toronto.edu/~bianca/papers/sigmetric... I thought there was a more recent copy of this paper hosted someplace on *.google.com/* but I can't seem to readily find it. The essence is that as memory chip capacities grow, there is an *increase* in the likelihood of errors from stray particles, owing to smaller transistor/feature sizes and higher density, and the quantity of errors "in the wild" is many orders of magnitude higher than previously thought. In other words, if you *really* care about your data and you're not using ECC, you're making a huge mistake.
infowolfe - Tuesday, July 26, 2016
Oh, and re: performance... This is a kinda cute writeup by Puget Systems: https://www.pugetsystems.com/labs/articles/ECC-and... TL;DR? <1% difference between ECC Registered and non-ECC at identical timings. It's a cute read.
Samus - Wednesday, July 27, 2016
Wow, thanks for posting those results. I'm not ruling out ECC in mission-critical applications, especially servers, but error rate of memory is becoming sequentially lower as the quantity (not the density) of overall system memory increases. That's why it was so important in the '90s, when a server was almost defined by a RAID array and ECC memory. These days many SMB servers have a soft RAID at best with a Core i3. It's also important to point out the not-so-obvious fact that slips people's minds: cache inside CPUs is ECC-protected, so errors are not passed on to system RAM.
mctylr - Wednesday, July 27, 2016
> "but error rate of memory is becoming sequentially lower as the quantity (not the density) of overall system memory increases."I'm not certain of what you are trying to say, but I believe your argument is incorrect. The probability that any given bit (i.e. at a specific address) is erroneously incorrect is independent of the overall (system) capacity (total amount of addressable RAM installed in a system). The University of Toronto paper linked by [infowolfe] does suggest that manufacturers are roughly keeping error rates constant despite increasing densities of memory chips, and that memory utilization is strongly correlated with error rates.
I think ECC memory is under-utilized in general, and in particular in the SMB segment, not due to a lack of benefits but because of the price sensitivity of corporate purchasing agents (who are often not IT-oriented people in most SMBs, or are too often finance/business-trained "bean counters" in large enterprises), and because crashes or other faults caused by memory errors are not blatantly obvious to end-users or even to the majority of IT administrators.
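To spell out the scaling argument: if the per-bit error probability is roughly constant, the expected number of errors in a given period grows linearly with installed capacity. A minimal sketch, using a deliberately made-up per-bit rate rather than a figure from the paper:

```python
# Illustrative only: PER_BIT_ERROR_RATE is a made-up round number, not a
# measured figure. The point is the scaling, not the absolute values: with
# a fixed per-bit error probability, expected errors grow linearly with
# the amount of RAM installed.
PER_BIT_ERROR_RATE = 1e-15   # hypothetical errors per bit per hour
HOURS_PER_YEAR = 24 * 365

for capacity_gb in (8, 16, 64):
    bits = capacity_gb * 8 * 1024**3
    expected_errors = bits * PER_BIT_ERROR_RATE * HOURS_PER_YEAR
    print(f"{capacity_gb:3d} GB -> ~{expected_errors:.1f} expected errors/year "
          f"(at the hypothetical rate)")
```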
infowolfe - Thursday, July 28, 2016
Actually, you're incorrect that error rates are *decreasing* as capacities grow. The issue at hand is that in the days of DDR2, a stray particle had a reasonably decent chance of either missing entirely or flipping only a single cell. In modern times, with smaller and smaller feature sizes (like Samsung's 10nm 8Gb chips), that particle has a much larger chance of flipping either one cell or, in a really bad case, one cell and its neighbor. Parity in general is now a huge thing not just in enterprise systems but in any system where the data is actually important to the user (such as in media/content creation... 4K video and so forth). Most of the people I know doing serious content creation work are at least using ECC Registered RAM in their workstations, and some have gone so far as to use ZFS on the filesystems where their work is stored. If you think that ECC isn't a thing anymore, you should look anywhere that people are working with larger monolithic datasets. For example, NVIDIA's Tesla P100 uses ECC HBM2, all current 3D-capable Quadro cards have ECC capability, etc.
The "reliability" of RAM failure rates vs silent corruptions is exactly the myth that Google was looking to dispel in its study of memory failure and corruption across its global infrastructure. See https://en.wikipedia.org/wiki/RAM_parity for more info. Basically, the jist is that a ram stick doesn't have to fail completely to have potentially undetectable/silent errors.
Now, for addressing corruption (silent and otherwise), the current "state of the art" isn't hardware RAID; it's advanced filesystems with built-in ECC/parity features (ZFS/btrfs), though they kinda require you to be using redundant modes (RAID 1, RAID 5/6/7, or raidz2/3) for automatic *recovery* of flipped bits. Please note that as of Ubuntu 16.04, ZFS is now shipped by default as the preferred LXC backend. COW filesystems with parity, ECC Registered memory, etc. are not going away. In fact, as flash storage becomes more ubiquitous, there's been a massive *increase* in the amount of CRC/ECC taking place in commodity storage (though it's typically transparent to the user).
http://arstechnica.com/information-technology/2014...
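To make the checksum-plus-redundancy idea concrete, here is a toy sketch of checksum-verified reads that self-heal from a second copy. It is purely illustrative; real filesystems such as ZFS and btrfs do this with block-level checksums and on-disk redundancy, not in-memory dictionaries:

```python
import hashlib

# Toy illustration of checksum-verified reads with self-healing from a
# redundant copy. Conceptual only: real ZFS/btrfs use block-level checksum
# trees and on-disk redundancy, not Python dictionaries.

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredStore:
    def __init__(self) -> None:
        self.copies = [{}, {}]   # two "disks" holding block_id -> data
        self.checksums = {}      # block_id -> checksum recorded at write time

    def write(self, block_id: str, data: bytes) -> None:
        self.checksums[block_id] = checksum(data)
        for disk in self.copies:
            disk[block_id] = data

    def read(self, block_id: str) -> bytes:
        expected = self.checksums[block_id]
        for disk in self.copies:
            data = disk[block_id]
            if checksum(data) == expected:
                # Good copy found: repair any copy that silently diverged
                # (essentially what a scrub does).
                for other in self.copies:
                    if checksum(other[block_id]) != expected:
                        other[block_id] = data
                return data
        raise IOError(f"uncorrectable corruption in block {block_id!r}")

store = MirroredStore()
store.write("a", b"important data")
store.copies[0]["a"] = b"important dat\x00"   # simulate a silent bit flip
assert store.read("a") == b"important data"   # detected and healed from copy 1
```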
Findecanor - Friday, July 29, 2016
Being security-paranoid, I would choose ECC on a workstation because it is less susceptible to Rowhammer attacks, especially if I am working in an environment with sensitive information. Any attack vector counts.