Test Configurations

So while the Intel SSD DC P4800X is technically launching today, 3D XPoint memory is still in short supply. Only the 375GB add-in card model has been shipped, and only as part of an early limited release program. The U.2 version of the 375GB model and the add-in card 750GB model are planned for a Q2 release, and the U.2 750GB model and the 1.5TB model are expected in the second half of 2017. Intel's biggest enterprise customers, such as the Super Seven, have had access to Optane devices throughout the development process, but broad retail availability is still a little ways off.

Citing the current limited supply, Intel has taken a different approach to review sampling for this product. Their general desire for secrecy regarding the low-level details of 3D XPoint has also likely been a factor. Instead of shipping us the Optane SSD DC P4800X to test on our own system, as is normally the case with our storage testing, this time around Intel has only provided us with remote access to a DC P4800X system housed in their data center. Their Non-Volatile Memory Solutions Group (NSG) maintains a pool of servers to provide partners and customers with access to the latest storage technologies, and their software partners have been using these systems for months to develop and optimize applications to take advantage of Optane SSDs.

Intel provisioned one of these servers for our exclusive use during the testing period, and equipped it with a 375GB Optane SSD DC P4800X and an 800GB SSD DC P3700 for comparison. The P3700 was the U.2 version of the drive and was connected through a PLX PEX 9733 PCIe switch. The Optane SSD under test was initially going to be a U.2 version connected to the same backplane, but Intel found that the PCIe switch was introducing inconsistency in access latency on the order of a microsecond or two, which is a problem when trying to benchmark a drive with a best-case latency of roughly 8µs. Intel swapped out the U.2 Optane SSD for an add-in card version that uses PCIe lanes direct from the processor, but the P3700 was still potentially subject to whatever problems the PCIe switch may have caused. Clearly, there is some work to be done to ensure the ecosystem is ready to take full advantage of the performance promised by Optane SSDs, but debugging such issues is beyond the scope of this review.

Intel NSG Marketing Test Server
CPU: 2x Intel Xeon E5-2699 v4
Motherboard: Intel S2600WTR2
Chipset: Intel C612
Memory: 256GB total (Kingston DDR4-2133 CL11, 16GB modules)
OS: Ubuntu Linux 16.10, kernel 4.8.0-22

The system was running a clean installation of Ubuntu 16.10, with no Intel or Optane-specific software or drivers installed, and the rest of the system configuration was as expected. We had full administrative access to tweak the software to our liking, but chose to leave it mostly in its default state.

Our benchmarks are a variety of synthetic workloads generated and measured using fio version 2.19. There are quite a few operating system and fio options that can be tuned, but we generally left them at their defaults: for example, the NVMe driver was not switched to polling mode, CPU affinity was not manually set, and no power management or CPU turbo settings were changed. There is work underway to switch fio over to nanosecond-precision time measurement, but it has not reached a usable state yet. Our tests only record latencies in whole-microsecond increments, so mean latencies reported with fractional microseconds are simply weighted averages reflecting, e.g., how many operations fell in the 8µs bucket versus the 9µs bucket.
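To illustrate how a fractional mean arises from integer-microsecond bins, here is a minimal sketch using a hypothetical latency histogram (the function name and data are ours, not fio's output):

```python
def mean_from_bins(histogram):
    """Weighted mean latency of a {latency_us: count} histogram."""
    total_ops = sum(histogram.values())
    return sum(lat * n for lat, n in histogram.items()) / total_ops

# Hypothetical: 3 operations landed in the 8µs bucket and 1 in the 9µs
# bucket, so the reported mean is 8.25µs even though no single I/O
# actually measured 8.25µs.
print(mean_from_bins({8: 3, 9: 1}))  # 8.25
```

This is why sub-microsecond digits in the mean should not be read as sub-microsecond measurement precision.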

All tests were run directly on the SSD with no intervening filesystem. Real-world applications will almost always be accessing the drive through a filesystem, but will also be benefiting from the operating system's cache in main RAM, which is bypassed with this testing methodology.

To provide an extra point of comparison, we also tested the Micron 9100 MAX 2.4TB on one of our own systems, using a Xeon E3-1240 v5 processor. In order to not unfairly disadvantage the Micron 9100, most of the tests were limited to at most four threads. Our test system ran the same Linux kernel as the Intel NSG marketing test server and used a comparable configuration, with the Micron 9100 connected directly to the CPU's PCIe lanes rather than through the PCH.

AnandTech Enterprise SSD Testbed
CPU: Intel Xeon E3-1240 v5
Motherboard: ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset: Intel C232
Memory: 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
OS: Ubuntu Linux 16.10, kernel 4.8.0-22

Because this was not a hands-on test of the Optane SSD on our own equipment, we were unable to conduct any power consumption measurements. Due to the limited time available for testing, we were also unable to run any systematic test of write endurance or of the impact of extra overprovisioning on performance. We hope to have the opportunity to conduct a full hands-on review later in the year to address these topics.

Due to time constraints, we were unable to cover Intel's new Memory Drive Technology software, an optional add-on that can be purchased with the Optane SSD. Memory Drive Technology is a minimal virtualization system that presents the Optane SSD to software as if it were RAM: the hypervisor exposes to the guest OS a pool of memory equal to the amount of installed DRAM plus up to 320GB of the Optane SSD's 375GB capacity. The hypervisor manages data placement to automatically cache hot data in DRAM, so applications and the guest OS cannot explicitly address or allocate Optane storage. We may get a chance to look at this in the future, as it is an interesting example of how multi-tiered storage will affect the enterprise market over the next few years.
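As a quick back-of-the-envelope check on those figures, the pool a guest would see on the review server can be computed directly (a sketch using the article's numbers; the function name is ours):

```python
def presented_pool_gb(dram_gb, optane_capacity_gb, usable_optane_cap_gb=320):
    """Guest-visible memory pool: installed DRAM plus the usable Optane
    slice, which is capped below the drive's full capacity."""
    return dram_gb + min(optane_capacity_gb, usable_optane_cap_gb)

# The test server's 256GB of DRAM plus up to 320GB of the 375GB P4800X:
print(presented_pool_gb(256, 375))  # 576
```

So a system like the NSG test server would appear to the guest OS as having 576GB of memory, with the hypervisor deciding what lives in DRAM versus Optane.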

117 Comments

  • melgross - Tuesday, April 25, 2017

    You obviously have some ax to grind. You do seem bitter about something.

    The first SSDs weren't much better than many HDDs in random R/W. Give it a break!
  • XabanakFanatik - Thursday, April 20, 2017

    I know that this drive isn't targeted at consumers at all, but I'm really interested in how it performs in consumer-level workloads as an example of what a full Optane SSD is capable of. Any chance we can get a part 2 with the consumer drive tests and have it compared to the fastest consumer NVMe drives? Even just a partial test suite for a sampler of how it compares would be great.
  • Drumsticks - Thursday, April 20, 2017

    I imagine it will be insane - the drive saturates its throughput at <QD6, meaning most consumer workloads. It'll obviously be a while before it's affordable from a consumer perspective, but I suspect the consumer prices will be a lot lower without the enterprise-class requirements thrown in.

    This drive looks incredibly good. 2-4x more than enterprise SSDs for pretty similar sequential throughput - BUT at insanely lower queue depths, which is a big benefit. At those QDs, it's easily justifying its price in throughput. Throw on top of that a 99.999th% latency that is often better than their 99th% latency, and 3D Xpoint has a very bright future ahead of it. It might be gen 1 tech, but it's already justified its existence for an entire class of workloads.
  • superkev72 - Thursday, April 20, 2017

    Those are some very impressive numbers for a gen1 storage device. Basically better than an SSD in almost every way except, of course, price. I'm interested in seeing what Micron does with QuantX, as it should have the same characteristics but potentially be more accessible.
  • DrunkenDonkey - Thursday, April 20, 2017

    Well finally! I was waiting for this test ever since I heard about the technology. This is an enterprise drive, yeah, but it is the showcase for the technology, and it shows what we can expect from a consumer drive - 8-10x current SSD speeds for desktop usage (that is 98% 4-8K RR, QD=1). That blows everything in the market out of the water. Actually this technology shines exactly at random Joe's PC, while SSDs shine only in the enterprise market (QD=16+). Can't wait!
  • Meteor2 - Thursday, April 20, 2017

    But don't we say SATA3 is good enough, and that (for consumer use) we don't really need NVMe? So what's the real benefit of something faster?
  • DrunkenDonkey - Thursday, April 20, 2017

    All you want (from a desktop user perspective) is low latency at low queue depth (1). NVMe helps in that regard, though not by a lot. Of two equal drives, one on SATA and one on NVMe, the NVMe one will be a bit more agile, resulting in more performance for you. So far no current SSD comes close to saturating the SATA3 bus in desktop use; this one, however, is scratching it. Sure, it will be years till we get affordable consumer drives from this tech, but it is pretty much the same step forward as going from HDD to SSD - the first SSDs were in the range of 20-ish MB per second, while HDDs were about 1.5 in these circumstances. Here we are talking a jump from 50 to close to 400+. Moar power! :)
  • serendip - Thursday, April 20, 2017

    Imagine having long battery life and instant hibernation - at 400 MB/s, waking up from hibernation and reloading memory contents would take a few seconds. Then again, constantly writing a huge page file to XPoint wouldn't be good for longevity, and hibernation doesn't allow background processes to run while asleep. I'm thinking of potential uses for XPoint in phones and tablets, but can't seem to find any.
  • ddriver - Friday, April 21, 2017

    Yeah, also imagine your system working 10 times slower, because it uses hypetane instead of ram.
    And not only that, but you also have to replace that memory every 6 months or so, because working memory is much more write intensive, and this thing's endurance is barely twice that of MLC flash.

    It is well worth the benefit of instant resume, because if enterprise systems are known for something, that is frequently hibernating and resuming.
  • tuxRoller - Friday, April 21, 2017

    They didn't say replace the RAM with XPoint.
    It's a really good idea, since XPoint has faster media access times, so even a smaller amount of it should still be quite a bit faster than NAND.
