My homelab got an upgrade – Intel Optane SSDs!

I didn’t blog about it until now because I didn’t have the time to install, test, and troubleshoot. I also didn’t have much time for writing because I was busy with private and business matters. But just recently I found time to do some homelabbing and test a few things.

May I introduce: Intel Optane SSDs!

A while ago, the vExpert community got the opportunity to apply for Intel Optane SSDs. I thought, why not? They can always say no. So I applied for three Intel Optane SSDs, and I was one of the chosen ones. Sure, this sounds cheesy, but I don’t know how many vExperts ended up getting disks.

Through the vExpert program, we could choose between the 2.5″ U.2 P4800X 375 GB SSD and its PCIe add-in-card counterpart. I applied for the U.2 disks. If only I had known what I was getting myself into…

But first: troubleshooting

It would be easy, I thought: buy some U.2 to PCIe adapter cards, mount the SSDs on the cards, and install everything in my homelab server. Well, the physical installation worked with no problems at all. Who would have thought that massive troubleshooting would be necessary afterwards? Not me, at least…

My homelab server (I call it the home base because it runs all my “production” VMs like vCenter, the domain controller, and my jump host) actually recognized the SSDs. Unfortunately, not as storage devices, but only as storage adapters. No chance to create a datastore.
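If you want to check this on your own host, the standard esxcli commands show the difference (a quick sketch; adapter and device names will of course differ on your system):

esxcli storage core adapter list
esxcli storage core device list

In my case, the Optanes appeared in the adapter list, but no matching NVMe devices showed up in the device list.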

With some help from the vCommunity on Twitter, I was pushed in the right direction and also got some good insights and ideas.

Root cause: Disk block size

ESXi can’t “see” disks with a 4K block size, and unfortunately, that was exactly the problem: my Optane disks had a block size of 4K. So how to get them formatted to 512B? Intel MAS to the rescue – or not. In my case, it was “not” the rescue.
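As a side note: on the ESXi host itself, you should be able to check the logical and physical block sizes of recognized devices with the following command (if I recall the esxcli namespace correctly):

esxcli storage core device capacity list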

Intel Memory and Storage Tool for the win!

I’m not going deep into what the Intel® Memory and Storage Tool can do (or, in my case, couldn’t do). In theory, I had to low-level format the Optane SSDs so that they would end up with a 512B block size. The Intel MAS tool should have done the job, but it didn’t. I’m not sure why; I tried several times and waited for a long time, but no luck. Anyway.

How to use the Intel MAS tool? The tool is available for Windows, Linux, and ESXi. Currently, it is only supported up to vSphere 8.0 (vanilla, no Update 1). However, by editing some XML files, I was able to install it on my vSphere 8.0 U1 home base host as well.
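For reference, the installation on ESXi itself is a one-liner, assuming you have copied the offline bundle to the host first (the file name and path below are just placeholders):

esxcli software component apply -d /tmp/intel-mas-esxi-component.zip

After a successful installation, the intelmas command should be available in the ESXi shell.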

There are some commands that you can use to delete/wipe and format your Optane SSDs:

First, find out which “index” your disk has. The index is the ID the tool assigns to your Optane. If you have multiple Optanes installed, the index numbers start at 0; mine were 0, 1, and 2.

intelmas show -intelssd

Next, with that index, you can wipe the disk (where 0 is the index). This step is optional; if you’re going to low-level format the SSD, it gets wiped anyway:

intelmas delete -intelssd 0

So let’s do the low-level format now (where 0 is the index):

intelmas start -intelssd 0 -nvmeformat

By default, the -nvmeformat parameter sets the block size to 512B.
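If you want to be explicit about it, the Intel MAS CLI also lets you pass the target LBA format directly, if I read the documentation correctly (again, 0 is the disk index, and LBA format 0 corresponds to 512B on these drives):

intelmas start -intelssd 0 -nvmeformat LBAFormat=0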

Unfortunately, using the Intel MAS tool didn’t help. Even after multiple tries as well as rebooting the server, the Optanes still showed up with a 4K block size.

Well, back to the drawing board then…

Linux Live system for the win!

The next try was a Linux Live CD. There are many Linux distributions out there; some provide a Live CD to boot and “test” the system without installing it, and some of those even booted on my server. Lucky me, I have iLO Advanced enabled on my home base, so there was no need to flash a USB drive with yet another Linux and walk back and forth between the home office and the garage (where my rack is).

Lucky me again: after more than an hour of testing different approaches and Linux Live CDs, I was able to boot my home base server with Ubuntu 18.04 LTS. After what felt like an eternity of waiting, I got into the Ubuntu Live system, opened a terminal (a root shell via sudo -i helps, since all of the following commands need root privileges), and started with the next steps. The tool of choice here is called nvme-cli.

First, we need to install this tool:

apt install nvme-cli

Once the tool is installed, we can move on to the next step. We start by listing the installed NVMe disks and their important details. This step is optional because the disks/devices should be located in /dev/ anyway and named like nvmeXnY. Still, if you already have other NVMe drives installed, it makes sense to positively identify the Intel Optane drives.

nvme list

Next, we check the actual block size with the following command. In my case, it showed a block size of 4096 (4K). This step is also optional if you already ran the Intel MAS tool and found the block size there:

nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
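The output lists all LBA formats the drive supports and marks the one currently in use. From memory, and purely as an illustration (the exact formats and values vary per drive), it looks roughly like this:

LBA Format  0 : Metadata Size: 0   bytes - Data Size:  512 bytes - Relative Performance: 0 Best
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best (in use)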

With the next command, we’re going to format the drive:

nvme format -l 0 /dev/nvme0n1

The -l parameter stands for “LBA Format”, which determines the block size. The default is 0, which on these drives corresponds to 512 bytes. I passed it explicitly anyway to make sure the disk really gets formatted with a 512B block size.

Repeat this for all disks (see the sketch below). The process may take a few minutes per disk, so stay patient!
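A minimal sketch to format all three Optanes in one go, assuming they show up as nvme0n1 through nvme2n1 on your system (double-check with nvme list first; this is destructive!):

for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    nvme format -l 0 "$dev"
done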

When the process has finished, you can verify it with the command nvme list. You should now see the new block size.

Many thanks to Andrew Hancock for providing me with the command!

But now, you want to see the real thing. The SSDs. I know. Feel free to scroll down…

The SSDs – Their look and specs

Before we dive any deeper into this blog post (maybe the first of several on this topic), I’d like to introduce you to the disks and their technical specifications.

Their look…

The Intel Optane SSDs are all black. For SSDs, they are quite heavy compared to consumer drives like a Samsung. In contrast to those end-user SSDs, the Intel Optane disks are encased in aluminum, which doubles as a good heat sink. And they will need it for 24/7 usage in a server.

…and their specs

Just a brief overview of what they are expected to deliver.

Capacity: 375 GB
Sequential Bandwidth, 100% Read (up to): 2400 MB/s
Sequential Bandwidth, 100% Write (up to): 2000 MB/s
Random Read (100% Span): 550’000 IOPS (4K blocks)
Random Write (100% Span): 500’000 IOPS (4K blocks)
Power, Active: 18 Watts
Power, Idle: 5 Watts

What do they actually deliver in my homelab?

You’re all wondering how the SSDs are performing. Well, me too. So I ran some benchmarks. To be clear: these are synthetic benchmarks, quickly executed, not done scientifically. Just for the sake of getting some numbers.

Test setup

I deployed a vanilla Windows Server 2022 VM in which I ran the benchmarks. For comparison, I ran the same benchmarks on my “daily use” lab client, a Windows 10 VM running on my SuperMicro vSAN cluster, which is backed by Samsung SSDs. More information about that homelab hardware is available here. Neither VM had any special configuration, and both run on thin-provisioned disks.

I used CrystalDiskMark 8.0.4 x64 for these benchmarks and ran two passes: one with the default settings and another with the NVMe settings. As you can see, the two profiles differ in block size, queue depth, and thread count.

Test results

Lab client – vSAN Samsung SSDs – CrystalDiskMark default settings

Lab client – vSAN Samsung SSDs – CrystalDiskMark NVMe settings

Test VM for NVMe tests – Intel Optane – CrystalDiskMark default settings

Test VM for NVMe tests – Intel Optane – CrystalDiskMark NVMe settings

Disclaimer

I won’t judge the performance. I just did a quick and dirty test, neither scientific nor under laboratory conditions. The Optane drives should deliver around 500’000 IOPS, but my tests landed far below that. In terms of read and write bandwidth, however, they have nothing to hide: they met or even exceeded the technical specifications.

Thanks

I want to thank the one and only vCommunityGuy, Eric Romero, the VMware vExpert program, and Intel Memory and Storage for providing us with these awesome test samples! And who knows, maybe Mr vSAN also had a hand in it…
