vSphere 6.5: The NFS edition

A while back, I wrote up a blog about the release of vSphere 6.0, with an NFS slant to it. Why? Because NFS!

At VMworld 2016 Barcelona, vSphere 6.5 (where’d .1, .2, .3, and .4 go?) was announced, so I can now discuss the NFS impact. If you’d like a more general look at the release, check out Cormac Hogan’s blog on it.

Whither VMFS?

Before I get into the new NFS feature/functionality of the release, let’s talk about the changes to VMFS. One of the reasons people use NFS with VMware (other than its awesomeness) is that VMFS had some… limitations.

With the announcement of VMFS-6, some of those limitations have been removed. For example, VMFS-6 includes automatic UNMAP, which essentially acts as a garbage collector for dead space inside the VMFS datastore, reclaiming blocks freed by deleted or moved VMs and handing them back to the array. This provides better space efficiency than previous iterations.
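
If you’re still on VMFS-5, that reclaim has to be kicked off manually from the ESXi shell; VMFS-6 does it in the background. A minimal sketch, with a hypothetical datastore label:

    # Reclaim dead space on a VMFS-5 datastore (VMFS-6 automates this)
    esxcli storage vmfs unmap --volume-label=datastore1 --reclaim-unit=200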

Additionally, VMware has added some performance enhancements to VMFS, so it may outperform NFS in some workloads, especially over Fibre Channel.

Other than that, you still can’t shrink the datastore, it’s not that easy to expand, etc. So, minor improvements that shouldn’t impact NFS too terribly. People who love NFS will likely stay on NFS. People who love VMFS will be happy with the improvements. Life goes on…

What’s new in vSphere 6.5 from the NFS perspective?

In vSphere 6.0, NFS 4.1 support was added.

However, it was a pretty minimal stack – no pNFS, no delegations, no referrals, etc. They basically added session trunking/multipath, which is cool – but there was still a lot to be desired. On the downside, that feature isn’t even supported in ONTAP yet. So close, yet so far…
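
To be fair, the session trunking piece is simple enough to try from the ESXi shell – you just hand the NFSv4.1 mount more than one server address. A quick sketch (the IPs and export path are hypothetical):

    # Mount an NFSv4.1 datastore with two server addresses for session trunking/multipathing
    esxcli storage nfs41 add -H 192.168.1.10,192.168.1.11 -s /datastore1 -v ds01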

In vSphere 6.5, the NFS 4.1 stack has been expanded a bit to include hardware acceleration for NFSv4.1. This is actually a pretty compelling addition, as it can help the overall NFSv4.1 performance of the datastore.

NFSv4.1 also fully supports IPv6. Your level of excitement is solely based on how many people you think use IPv6 right now.

Kerberos

Perhaps the most compelling NFS change in vSphere 6.5 is how we secure our mounts.

In 6.0, Kerberos support was added, but you could only do DES. Blah.

Now, Kerberos support in vSphere 6.5 includes:

  • AES-128
  • AES-256
  • REMOVAL of DES encryption
  • Kerberos with integrity checking (krb5i – protects against “man-in-the-middle” tampering)

Now, while it’s pretty cool that they removed support for the insecure DES enctype, that *is* going to be a disruptive change for people using Kerberos. The machine account/principal will need to be destroyed and re-created, clients will need to re-mount, etc. But, it’s an improvement!
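
On the ESXi side, the security flavor is just a flag on the NFSv4.1 mount. A sketch, assuming hypothetical server and export names:

    # AUTH_SYS is the default; SEC_KRB5 does Kerberos authentication, SEC_KRB5I adds integrity checking
    esxcli storage nfs41 add -H nfs.lab.local -s /vol/ds1 -v ds1_krb -a SEC_KRB5I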

How vSphere 6.5 personally impacts me

The downside of these changes is that I have to adjust my Insight presentation a bit. If you’re going to Insight in Berlin, check out 60831-2: How Customers and Partners use NFS for Virtualization.

Still looking forward to pNFS in vSphere, though…

TECH::How to set up Kerberos on vSphere 6.0 servers for datastores on NFS

For a more in-depth, updated version:

Kerberize your NFSv4.1 Datastores in ESXi 6.5 using NetApp ONTAP

In case you were living under a rock somewhere, VMware released vSphere 6.0 in March. I covered some of my thoughts from an NFS perspective in vSphere 6.0 – NFS Thoughts.

In that blog, I covered some of the new features, including support for Kerberized NFS on vSphere 6.0. However, in my experience of setting up Kerberos for NFS clients, I learned that doing it can be a colossal pain in the ass. Luckily, vSphere 6.0 actually makes the process pretty easy.

TR-4073: Secure Unified Authentication will eventually contain information on how to do it, but I wanted to get the information out now and strike while the iron is hot!

What is Kerberos?


I cover some scenarios regarding securing your NFS environment in “Feeling insecure about NFS?” One of those I mention is Kerberos, but I never really go into detail about what Kerberos actually is.

Kerberos is a ticket-based authentication process that eliminates the need to send passwords over the wire in plain text. Instead, passwords stay on a centralized server (known as a Key Distribution Center, or KDC), which issues ticket-granting tickets that clients then use to request service tickets for access. This is done through varying levels of encryption, which is controlled via the client, server and keytabs. Right now, the best you can do is AES, which is the NIST standard. Clustered Data ONTAP 8.3 supports both AES-128 and AES-256, by the way. 🙂

However, vSphere 6.0 supports only DES, so…
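
…which means that if you want Kerberized datastores on vSphere 6.0, the NFS server has to be willing to speak DES. In clustered Data ONTAP, that’s an NFS server option; a sketch with a hypothetical SVM name (check your release’s docs for the exact syntax):

    # Permit DES (for vSphere 6.0) alongside AES for NFS Kerberos
    vserver nfs modify -vserver svm_vmware -permitted-enc-types des,aes-256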

Again, TR-4073: Secure Unified Authentication covers this all in more detail than you’d probably want…

Kerberize… like a rockstar!


In one of my Insight sessions, I break down the Kerberos authentication process as a real-world scenario, such as buying a ticket to see your favorite band (a concrete client-side equivalent follows the list).

  • A person joins a fan club for first access to concert tickets
    • Ticket Granting Ticket (TGT) issued from Key Distribution Center (KDC)
  • A person buys the concert ticket to see their favorite band
    • TGT used to request Service Ticket (ST) from the KDC
  • They pick the ticket up at the box office
    • ST issued by KDC
  • They use the ticket to get into the concert arena
    • Authentication
  • The ticket specifies which seat they are allowed to sit in
    • Authorization; backstage pass specifies what special permissions they have
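
If you want to see the real thing instead of the analogy, a plain Linux NFS client shows the whole flow: kinit gets the TGT, the mount drives the service ticket request, and klist shows both. A sketch with hypothetical realm and server names:

    # Fan club membership: get a TGT from the KDC
    kinit user1@LAB.LOCAL
    # Buying the ticket: the Kerberized mount requests a service ticket for nfs/<server>
    mount -t nfs -o vers=4.1,sec=krb5 nfs.lab.local:/vol/ds1 /mnt/ds1
    # Box office check: list the TGT and the nfs/ service ticket
    klist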

Why Kerberos?

One of the questions you may be asking, or have heard asked, is: “Why the heck do I want to Kerberize my NFS datastore mount? Doesn’t my export policy rule secure it enough?”

Well, how easy is it to change an IP address of an ESXi server? How easy is it to create a user? That’s really all you need to mount NFSv3. However, Kerberos requires a user name and password to get a ticket, interaction with a KDC, ticket exchange, etc.

So, it’s much more secure.

Awesome… how do I do it?

Glad you asked!

After you’ve set up your KDC and preferred NFS server to do Kerberos, you’d need to set the client up. In this case, the client is vSphere 6.0.

Step 1: Configure DNS

Kerberos needs DNS to work properly. This is tied to how service principal names (SPNs) are queried on the KDC. So, you need the following:

  • Forward and reverse lookup records on the DNS server for the ESXi server
  • Proper DNS configuration on the ESXi server

Example:

[Screenshot: DNS configuration on the ESXi host]
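
If you’d rather script it than click around, the same settings are exposed via esxcli. The hostname, IP and domain below are hypothetical:

    # Set the host FQDN, DNS server and search domain
    esxcli system hostname set --fqdn=esxi01.lab.local
    esxcli network ip dns server add --server=10.0.0.10
    esxcli network ip dns search add --domain=lab.local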

Step 2: Configure NTP

Kerberos is very sensitive to time skew. By default, a maximum of 5 minutes of skew is allowed between client, server and KDC. If the skew is outside of that, the Kerberos request will fail. This is for your security. 🙂

[Screenshot: NTP configuration on the ESXi host]
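
On ESXi 6.x, the NTP configuration lives in /etc/ntp.conf (I’m not aware of an esxcli namespace for it in this release), so a shell-based sketch with a hypothetical time source looks like this:

    # Point the host at the same time source the KDC uses, then restart ntpd
    echo "server ntp.lab.local" >> /etc/ntp.conf
    /etc/init.d/ntpd restart
    chkconfig ntpd on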

Step 3: Join ESXi to the Active Directory Domain

This essentially saves you the effort of manually creating keytabs, SPNs and so on. Save yourself time and headaches.

[Screenshot: joining the ESXi host to the Active Directory domain]
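
For the command-line inclined, ESXi’s bundled Likewise agent can reportedly do the join as well – treat the path and syntax below as an assumption and use the Web Client if in doubt:

    # Hypothetical: domain join via the Likewise agent shipped with ESXi (path may vary by build)
    /usr/lib/vmware/likewise/bin/domainjoin-cli join lab.local administrator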

Step 4: Specify a user principal name (UPN)

This user will be used by ESXi to kinit and grab a ticket granting ticket (TGT). Again, it’s entirely possible to do this manually and likely possible to leverage keytab authentication. But, again, save yourself the headache.

[Screenshot: entering the NFS Kerberos credentials]

Step 5: Create the NFS datastore for use with NFSv4.1 and Kerberos authentication

You *could* Kerberize NFSv3. But why? All that gets secured is the NFS traffic itself – NLM, NSM, portmap, mount, etc. don’t get Kerberized. NFSv4.1 encapsulates all of those side protocols, so Kerberizing NFSv4.1 covers it all.

[Screenshot: New Datastore wizard]
Enter the server/datastore information:

[Screenshot: entering the NFSv4.1 server and datastore details]

Be sure you don’t forget to enable Kerberos:

[Screenshot: enabling Kerberos on the datastore]
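
For reference, the esxcli equivalent of the wizard looks roughly like this – server, export and datastore names are hypothetical, and SEC_KRB5 is the only Kerberos flavor available in 6.0:

    # Create the NFSv4.1 datastore with Kerberos authentication, then verify it
    esxcli storage nfs41 add -H nfs.lab.local -s /vol/vmds -v vmds_krb -a SEC_KRB5
    esxcli storage nfs41 list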

After you’re done, test it out!

TECH::vSphere 6.0 – NFS thoughts

DISCLAIMER: I work for NetApp. However, I don’t speak for NetApp. These are my own views. 🙂

I’m a tad late to the party here, as there have already been numerous blogs about what’s new in vSphere 6.0, etc. I haven’t seen anything regarding what was missing from an NFS perspective, however. So I’m going to attempt to fill that gap.

What new NFS features were added?

Famously, vSphere 6 brings us NFSv4.1. NFSv4.1 is an enhancement of NFSv4.0, which brought the following features (a quick client-side example follows the list):

  • Pseudo/unified namespace
  • TCP only
  • Better security via domain ID string mapping, single firewall port and Kerberos integration
  • Better locking than NFSv3 via a lease-based model
  • Compound NFS calls (i.e., combining multiple NFS operations into a single packet)
  • Better standardization of the protocol, leveraging IETF
  • More granular ACLs (similar to Windows NTFS ACLs)
  • NFS referrals
  • NFS sessions
  • pNFS
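
The easiest way to poke at most of this is a plain NFSv4.1 mount from a Linux client – one TCP connection to port 2049, no portmapper or mountd side protocols. The hostname and export below are hypothetical:

    # Mount with NFSv4.1 and confirm the negotiated version
    mount -t nfs -o vers=4.1 nfs.lab.local:/export /mnt/nfs41
    nfsstat -m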

I cover NFSv4.x in some detail in TR-4067 and TR-4073. I cover pNFS in TR-4063.

I wrote a blog post a while back on the Evolution of NAS, which pointed out how NFS and CIFS were going all Voltron on us and basically becoming similar enough to call them nearly identical.

vSphere 6.0 also brings the ability to Kerberize NFS mounts, as well as VVOL support. Fun fact: NetApp is currently the only storage vendor with support for VVOLs over NFS. 

Why do these features matter?

As Stephen Foskett correctly pointed out in his blog, adoption of NFSv4.x has been… slow. There are a lot of reasons for that, in addition to what he said:

  • Performance. NFSv3 is simply faster in most cases now. Though, that narrative is changing…
  • Disruption. NFSv3 had the illusion of being non-disruptive in failover events. NFSv4 is stateful, thus more susceptible to interruptions, but its locking makes it less susceptible to data loss/corruption in failover events (both network and storage).
  • Infrastructure. It’s a pain in the ass to add name services to an existing enterprise environment to ensure proper ID string mapping.
  • Disdain for change. No one wants to be the “early adopter” in a production environment.

However, more and more applications are recommending NFSv4.x. TIBCO is one. IBM MQ is another. Additionally, there is a greater focus on security with recent data breaches and hacks, so storage administrators will need to start filling check boxes to be compliant with new security regulations. NFSv4.x features (Kerberos, domain ID, limited firewall ports to open) will likely be on that list. And now, vSphere offers NFSv4.1 with some limited features. What this means for the NFS protocol is that more people will start using it. And as more people start using it, the open-source-ness will start to kick in and the protocol will improve.

As for Kerberos, one of the questions you may be asking, or have heard asked, is: “Why the heck do I want to Kerberize my NFS datastore mount? Doesn’t my export policy rule secure it enough?”

Well, how easy is it to change an IP address of an ESXi server? How easy is it to create a user? That’s really all you need to mount NFSv3. However, Kerberos requires a user name and password, interaction with a KDC, ticket exchange, etc. So, it’s much more secure.

As for VVOLs, they could be a game changer in the world of software-defined storage.

Check out the following:

Virtual Volumes (VVOLs) On Horizon to Deliver Software Defined Storage for vSphere

The official VMware VVOL blog

vMiss also has a great post on VVOLs on her blog.

Also, NetApp’s ESX TME Peter Learmonth (@titaniumlegs on Twitter) has a video on it.

That’s great and all… but what’s missing?

While it’s awesome that VMware is attempting to keep the NFS stack up to date by adding NFSv4.1 and Kerberos, it just felt a little… incomplete.

For one, Kerberos was added, but only with DES support. This is problematic on a few levels. First, DES is old and laughably weak as far as Kerberos enctypes go – DES was cracked in less than a day… in 2008. If they were going to add Kerberos, why not AES, which is the NIST standard? Were they concerned about performance? AES has been known to be a bit of a hog. If that was a concern, though, why not leverage the Intel AES-NI instruction set?

As for NFSv4.1… WHERE IS pNFS?? pNFS is an ideal protocol for what virtual machines do – open once, stream reads and writes, without a ton of metadata. It’s mobile and agile with Storage vMotion and volume moves in clustered Data ONTAP. No need to use up a ton of IP addresses (one per node, per datastore). Most storage operations via NFS would be simplified and virtually transparent with pNFS. Hopefully they add that one soon.

Ultimately, an improvement

I’m glad that VMware added some NFS improvements. It’s a step in the right direction. And they certainly beefed up the capabilities of vSphere 6 with added hardware support. Some of those numbers… monstrous! Hopefully they continue the dedication to NFS in future releases.

Wait, there’s more?!?

That’s right! In addition to the improvements in vSphere 6.0, there is also VMware Horizon, which integrates with NetApp’s All-Flash FAS solutions. NetApp All-Flash FAS provides the only all-flash NFS support on the market!

To learn more about it, see this video created by NetApp TME Chris Gebhardt.

You can also see Shankay Iyer’s blog post here:

Introducing A New Release of VMWare Horizon!

For more info…

What’s New in the VMware vSphere 6.0 Platform

For a snarky rundown on NFSv4.1 and vSphere 6.0, check out Stephen Foskett’s blog.

For some more information on NFS-specific features, see Cormac Hogan’s post.