Kerberize your NFSv4.1 Datastores in ESXi 6.5 using NetApp ONTAP

How do I set up Kerberos with NFSv4.1 datastores in ESXi 6.5?

 


I have gotten this question enough now (and from important folks like Cormac Hogan) to write up an end-to-end guide on how to do this. I had a shorter version in TR-4073 and then a Linux-specific version in TR-4616, but admittedly, neither was really enough to help people get the job done. I also covered ESXi Kerberos in a previous, less in-depth blog post called How to set up Kerberos on vSphere 6.0 servers for datastores on NFS.

Cormac also wrote up one of his own:

https://cormachogan.com/2017/10/12/getting-grips-nfsv4-1-kerberos/

I will point out that, in general, NFS Kerberos is a pain in the ass for people who don’t set it up on a regular basis and understand its inner workings, regardless of the vendor you’re interacting with. The reason is that there are multiple moving parts involved, and support for various portions of Kerberos (such as encryption types) varies. Additionally, some hosts automate things better than others.

We’re going to set it all up as if we have only created a basic SVM with data LIFs and some volumes. If you have an existing SVM configured with NFS and such, you can retrofit as needed. While this blog covers only NFS Kerberos using ONTAP as the NFS server, the steps for AD and ESXi would apply to other NFS servers as well, in most cases.

Here’s the lab setup:

  • ONTAP 9.3 (but this works for ONTAP 9.0 and onward)
    • SVM is SVM1
  • Windows 2012R2 (any Windows KDC that supports AES will work)/Active Directory
    • DNS, KDC
    • Domain is core-tme.netapp.com
  • ESXi 6.5
    • Hostname is CX400S1-GWNSG6B

ONTAP Configuration

While there are over a dozen steps to do this, keep in mind that you generally only have to configure this once per SVM.

We’ll start with ONTAP, via the GUI. I’ll also include a CLI section. First, we’ll start in the SVM configuration section to minimize clicks. This is found under Storage -> SVMs. Then, click on the SVM you want to configure and then click on the NFS protocol.

SVM-protocol

1. Configure DNS

We need this because Kerberos uses DNS lookups to determine host names/IPs. This will be the Active Directory DNS information. This is found under Services -> DNS/DDNS.

dns

2. Enable NFSv4.1

NFSv4.1 is needed to allow NFSv4.1 mounts (obviously). You don’t need to enable NFSv4.0 to use NFSv4.1; ESXi doesn’t support v4.0 anyway. But it is possible to use NFSv3, NFSv4.0, and NFSv4.1 in the same SVM. This can be done under “Protocols -> NFS” on the left menu.

v41.png

3. Create an export policy and rule for the NFS Kerberos datastore volume

Export policies in ONTAP are containers for rules. Rules are what define access to an exported volume. With export policy rules, you can limit the NFS version allowed, the authentication type, root access, hosts, etc. For ESXi, we’re defining the ESXi host (or hosts) in the rule. We’re allowing NFSv4 only and Kerberos only, and we’re allowing the ESXi host to have root access. If you use NFSv3 with Kerberos for these datastores, be sure to adjust the policy and rules accordingly. This is done under the “Policies” menu section on the left.

export-rule.png

4. Verify that vsroot has an export policy and rule that allows read access to ESXi hosts

Vsroot is “/” in the namespace. As a result, for clients to mount NFS exports, they must have at least read access via vsroot’s export policy rule and at least traverse permissions (1 in mode bits) to navigate through the namespace. In most cases, vsroot uses “default” as the export policy. Verify whichever export policy is being used has the proper access.

If a policy doesn’t have a rule, create one. This is an example of minimum permissions needed for the ESXi host to traverse /.
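From the CLI, a minimal read-only rule for the vsroot policy might look like this (a sketch only; adjust the policy name and clientmatch to fit your environment):

```
cluster::> export-policy rule create -vserver SVM1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule never -superuser none -protocol nfs -ruleindex 1
```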

vsroot-policy.png

5. Create the Kerberos realm

The Kerberos realm is akin to /etc/krb5.conf on Linux clients. It tells ONTAP where to look to attempt the bind/join to the KDC. After that, KDC servers are discovered by internal processes and won’t need the realm. The realm domain should be defined in ALL CAPS. This is done in System Manager using “Services -> Kerberos Realm” on the left.

realm.png

6. Enable Kerberos on your data LIF (or LIFs)

To use NFS Kerberos, you need to tell ONTAP which data LIFs will participate in the requests. Doing this specifies a service principal for NFS on that data LIF. The SVM will interact with the KDC defined in the Kerberos realm to create a new machine object in AD that can be used for Kerberos. The SPN is defined as nfs/hostname.domain.com and represents the name you want clients to use to access exports. This FQDN needs to exist in DNS as a forward and reverse record to ensure things work properly. If you enable Kerberos on multiple data LIFs with the same SPN, the machine account gets re-used. If you use different SPNs on LIFs, different accounts get created. You have a 15-character limit for the “friendly” display name in AD. If you want to change the name later, you can; that’s covered in TR-4616.

kerb-interface.png

7. Create local UNIX group and users

For Kerberos authentication, a krb-unix name mapping takes place, where the incoming SPN will attempt to map to a UNIX user that is either local on the SVM or in external name services. You always need the “nfs” user, as the nfs/fqdn SPN will map to “nfs” implicitly. The other user will depend on the user you specify in ESXi when you configure Kerberos; a UNIX user with that same name must exist. In my example, I used “parisi,” which is a user in my AD domain. Without these local users, the krb-unix name mapping would fail and manifest as “permission denied” errors when mounting. The cluster would also log name mapping errors, visible in “event log show.”

Alternatively, you can create name mapping rules. This is covered in TR-4616. UNIX users and groups can be created using the UNIX menu option under “Host Users and Groups” in the left menu.
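As a sketch of the name mapping alternative (assuming a hypothetical “vsphere” principal that should map to the existing “parisi” UNIX user; see TR-4616 for the full syntax and options):

```
cluster::> vserver name-mapping create -vserver SVM1 -direction krb-unix -position 1 -pattern vsphere@CORE-TME.NETAPP.COM -replacement parisi
```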

The numeric GID and UID can be any unused numeric in your environment. I used 501 and 502.

First, create the primary group to assign to the users.

group.png

Then, create the NFS user with the Kerberos group as the primary group.

nfs-user.png

Finally, create the user you used in the Kerberos config in ESXi.

parisi-user.png

Failure to create the user would result in an error in ONTAP similar to the following. (In the example below, I used a user named ‘vsphere’ to try to authenticate):

[ 713] FAILURE: User 'vsphere' not found in UNIX authorization source LDAP.
 [ 713] Entry for user-name: vsphere not found in the current source: LDAP. Ignoring and trying next available source
 [ 715] Entry for user-name: vsphere not found in the current source: FILES. Entry for user-name: vsphere not found in any of the available sources
 [ 717] Unable to map SPN 'vsphere@CORE-TME.NETAPP.COM'
 [ 717] Unable to map Kerberos NFS user 'vsphere@CORE-TME.NETAPP.COM' to appropriate UNIX user

8. Create the volume to be used as the datastore

This is done from “Storage -> Volumes.” In ONTAP 9.3, the only consideration is that you must specify a “Protection” option, even if it’s “none.” Otherwise, it will throw an error.

vol create

vol-protect.png

Once the volume is created, it automatically gets exported to /volname.

9. Verify the volume security style is UNIX for the datastore volume

The volume security style impacts how a client will attempt to authenticate into the ONTAP cluster. If a volume is NTFS security style, then NFS clients will attempt to map to Windows users to figure out the access allowed on an object. System Manager doesn’t let you define the security style at creation yet and will default to the security style of the vsroot volume (which is / in the namespace). Ideally, vsroot would also be UNIX security style, but in some cases, NTFS is used. For VMware datastores, there is no reason to use NTFS security style.

From the volumes screen, click on the newly created volume and click the “Edit” button to verify UNIX security style is used.

sec-style.png

10. Change the export policy assigned to the volume to the ESX export policy you created

Navigate to “Storage -> Namespace” to modify the export policy used by the datastore.

change-policy.png

11. Configure NTP

This prevents the SVM from drifting outside of the 5-minute time skew that can break Kerberos authentication. This is done via the CLI; there’s no GUI support for this yet.

cluster::> ntp server create -server stme-infra02.core-tme.netapp.com -version auto

12. Set the NFSv4 ID domain

While we’re in the CLI, let’s set the ID domain. This ID domain is used for client-server interaction, where a user string will be passed for NFSv4.x operations. If the user string doesn’t match on each side, the NFS user gets squashed to “nobody” as a security mechanism. This would be the same domain string on both ESX and on the NFS server in ONTAP (case-sensitive). For example, “core-tme.netapp.com” would be the ID domain here and users from ESX would come in as user@core-tme.netapp.com. ONTAP would look for user@core-tme.netapp.com to exist.

In ONTAP, that command is:

cluster::> nfs modify -vserver SVM1 -v4-id-domain core-tme.netapp.com
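Conceptually, the ID domain matching behaves like this toy sketch (an illustration only, not ONTAP code; note that the comparison is case-sensitive, so a mismatched domain squashes the user to “nobody”):

```python
def map_v4_owner(owner_string, server_domain, known_users):
    """Toy model of NFSv4.x owner-string mapping; not ONTAP source."""
    user, sep, domain = owner_string.partition("@")
    # Both the ID domain (case-sensitive) and the user itself must match
    if sep and domain == server_domain and user in known_users:
        return user
    return "nobody"  # security squash on any mismatch

print(map_v4_owner("parisi@core-tme.netapp.com", "core-tme.netapp.com", {"parisi", "nfs"}))  # parisi
print(map_v4_owner("parisi@CORE-TME.NETAPP.COM", "core-tme.netapp.com", {"parisi", "nfs"}))  # nobody
```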

13. Change datastore volume permissions

By default, volumes get created with the root user and group as the owner, and 755 access. In ESX, if you want to create VMs on a datastore, you’d need either root access or to change write permissions. When you use Kerberos, ESX will use the NFS credentials specified in the configuration as the user that writes to the datastore. Think of this as a “VM service account” more or less. So, your options are:

  • Change the owner to a different user than root
  • Use root as the user (which would need to exist as a principal in the KDC)
  • Change permissions

In my opinion, changing the owner is the best, most secure choice here. To do that:

cluster::> volume modify -vserver SVM1 -volume kerberos_datastore -user parisi

That’s all from ONTAP for the GUI. The CLI commands would be (all in admin priv):

cluster::> dns create -vserver SVM1 -domains core-tme.netapp.com -name-servers 10.193.67.181 -timeout 2 -attempts 1 -skip-config-validation true
cluster::> nfs modify -vserver SVM1 -v4.1 enabled
cluster::> export-policy create -vserver SVM1 -policyname ESX
cluster::> export-policy rule create -vserver SVM1 -policyname ESX -clientmatch CX400S1-GWNSG6B.core-tme.netapp.com -rorule krb5* -rwrule krb5* -allow-suid true -ruleindex 1 -protocol nfs4 -anon 65534 -superuser any
cluster::> vol show -vserver SVM1 -volume vsroot -fields policy
cluster::> export-policy rule show -vserver SVM1 -policy [policy from prev command] -instance
cluster::> export-policy rule modify or create (if changes are needed)
cluster::> kerberos realm create -vserver SVM1 -realm CORE-TME.NETAPP.COM -kdc-vendor Microsoft -kdc-ip 10.193.67.181 -kdc-port 88 -clock-skew 5 -adminserver-ip 10.193.67.181 -adminserver-port 749 -passwordserver-ip 10.193.67.181 -passwordserver-port 464 -adserver-name stme-infra02.core-tme.netapp.com -adserver-ip 10.193.67.181
cluster::> kerberos interface enable -vserver SVM1 -lif data -spn nfs/ontap9.core-tme.netapp.com
cluster::> unix-group create -vserver SVM1 -name kerberos -id 501
cluster::> unix-user create -vserver SVM1 -user nfs -id 501 -primary-gid 501
cluster::> unix-user create -vserver SVM1 -user parisi -id 502 -primary-gid 501
cluster::> volume create -vserver SVM1 -volume kerberos_datastore -aggregate aggr1_node1 -size 500GB -state online -policy kerberos -user 0 -group 0 -security-style unix -unix-permissions ---rwxr-xr-x -junction-path /kerberos_datastore 

 ESXi Configuration

This is all driven through the vSphere GUI. This would need to be performed on each host that is being used for NFSv4.1 Kerberos.

1. Configure DNS

This is done under the “Hosts and Clusters -> Manage -> Networking -> TCP/IP config.”

dns-esx.png

2. Configure NTP

This is found in “Hosts and Clusters -> Settings -> Time Configuration”

ntp

3. Join the ESXi host to the Active Directory domain

Doing this automatically creates the machine account in AD and transfers the keytab files between the host and KDC. It also sets the SPNs on the machine account. The user specified in the credentials must have create object permissions in the Computers OU in AD (for example, a domain administrator).

This is found in “Hosts and Clusters -> Settings -> Authentication Services.”

join-domain.png

4. Specify NFS Kerberos Credentials

This is the user that will authenticate with the KDC and ONTAP for the Kerberos key exchange. This user name will be the same as the UNIX user you used in ONTAP. If you use a different name, create a new UNIX user in ONTAP or create a name mapping rule. If the user password changes in AD, you must also change it in ESXi.

nfs-creds

With NFS Kerberos in ESX, the ID you specified in NFS Kerberos credentials will be the ID used to write. For example, I used “parisi” as the user. My SVM is using LDAP authentication with AD. That user exists in my LDAP environment as the following:

cluster::*> getxxbyyy getpwbyuid -node ontap9-tme-8040-01 -vserver SVM1 -userID 3629
  (vserver services name-service getxxbyyy getpwbyuid)
pw_name: parisi
pw_passwd: 
pw_uid: 3629
pw_gid: 512
pw_gecos: 
pw_dir: 
pw_shell: /bin/sh

As a result, the test VM I create got written as that user:

drwxr-xr-x   2 3629  512    4096 Oct 12 10:40 test

To even be able to write at all, I had to change the UNIX permissions on the datastore to allow write access to “others.” Alternatively, I could have changed the owner of the volume to the specified user. I mention those steps in the ONTAP section.

If you plan on changing the user for NFS creds, be sure to use “clear credentials,” which will restart the service and clear caches. Occasionally, you may need to restart the nfsgssd service from the CLI if something is stubbornly cached:

[root@CX400S1-03003-B3:/] /etc/init.d/nfsgssd restart
watchdog-nfsgssd: Terminating watchdog process with PID 33613
Waiting for process to terminate...
nfsgssd stopped
nfsgssd started

In rare cases, you may have to leave and re-join the domain, which will generate new keytabs. In one particularly stubborn case, I had to reboot the ESX server after I changed some credentials and the Kerberos principal name in ONTAP.

That’s the extent of the ESXi host configuration for now. We’ll come back to the host to mount the datastore once we make some changes in Active Directory.

Active Directory Configuration

Because there are variations in support for encryption types, as well as DNS records needed, there are some AD tasks that need to be performed to get Kerberos to work.

1. Configure the machine accounts

Set the machine account attributes for the ESXi host(s) and ONTAP NFS server to only allow AES encryption. Doing this avoids failures to mount via Kerberos that manifest as “permission denied” on the host. In a packet trace, you’d potentially be able to see the ESXi host trying to exchange keys with the KDC and getting “unsupported enctype” errors if this step is skipped.

The exact attribute to change is msDS-SupportedEncryptionTypes. Set that value to 24, which means AES only. For more info on encryption types in Windows, see this blog post.
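The value 24 is just the sum of the two AES bits in that bitmask. A quick sketch of the arithmetic (bit values as documented for the msDS-SupportedEncryptionTypes attribute):

```python
# msDS-SupportedEncryptionTypes is a bitmask; documented bit values:
DES_CBC_CRC = 0x01
DES_CBC_MD5 = 0x02
RC4_HMAC    = 0x04
AES128_CTS  = 0x08
AES256_CTS  = 0x10

aes_only = AES128_CTS | AES256_CTS
print(aes_only)  # 24: AES128 + AES256 and nothing else
```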

You can change this attribute using the “Advanced Features” view with the Attribute Editor. If that’s not available, you can also modify it using PowerShell.

To modify in the GUI:

msds-enctype.png

To modify using PowerShell:

PS C:\> Set-ADComputer -Identity [NFSservername] -Replace @{'msDS-SupportedEncryptionTypes'=24}
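To verify the change took effect (assuming the ActiveDirectory PowerShell module is available):

```
PS C:\> Get-ADComputer -Identity [NFSservername] -Properties msDS-SupportedEncryptionTypes
```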

2. Create DNS records for the ESXi hosts and the ONTAP server

This would be A/AAAA records for forward lookups and PTR records for reverse. Windows DNS lets you do both at the same time. Verify the DNS records with “nslookup” commands.
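For example, checking the forward and reverse records for the NFS server in this lab (substitute your own names and IPs):

```
C:\> nslookup ontap9.core-tme.netapp.com
C:\> nslookup 10.193.67.220
```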

This can also be done via GUI or PowerShell.

From the GUI:

From PowerShell:

PS C:\Users\admin> Add-DnsServerResourceRecordA -Name ontap9 -ZoneName core-tme.netapp.com -IPv4Address 10.193.67.220 -CreatePtr

PS C:\Users\admin> Add-DnsServerResourceRecordA -Name cx400s1-gwnsg6b -ZoneName core-tme.netapp.com -IPv4Address 10.193.67.35 -CreatePtr

Mounting the NFS Datastore via Kerberos

Now, we’re ready to create the datastore in ESX using NFSv4.1 and Kerberos.

Simply go to “Add Datastore” and follow the prompts to select the necessary options.

1. Select “NFS” and then “NFS 4.1.”

VMware doesn’t recommend mixing v3 and v4.1 on the same datastore. If you have an existing datastore that you were mounting via v3, VMware recommends migrating the VMs using Storage vMotion.

new-ds1new-ds2

2. Specify the name and configuration

The datastore name can be anything you like. The “folder” has to be the junction-path/export path on the ONTAP cluster. In our example, we use /kerberos_datastore.

Server(s) would be the data LIF you enabled Kerberos on. ONTAP doesn’t support NFSv4.1 multi-pathing/trunking yet, so specifying multiple NFS servers won’t necessarily help here.

new-ds3.png

3. Check “Enable Kerberos-based authentication”

Kind of a no-brainer here, but still worth mentioning.

new-ds4.png

4. Select the hosts that need access.

If other hosts have not been configured for Kerberos, they won’t be available to select.

new-ds5.png

5. Review the configuration details and click “Finish.”

This should mount quickly and without issue. If you have an issue, review the “Troubleshooting” tips below.

new-ds6.png

This can also be done with a command from ESX CLI:

esxcli storage nfs41 add -H ontap9 -a SEC_KRB5 -s /kerberos_datastore -v kerberosDS
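Afterward, you should be able to confirm the mount (and its security type) from the same CLI:

```
esxcli storage nfs41 list
```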

Troubleshooting Tips

If you follow the steps above, this should all work fine.

mounted-ds.png

But sometimes I make mistakes. Sometimes YOU make mistakes. It happens. 🙂

Some steps I use to troubleshoot Kerberos mount issues…

  • Review the vmkernel logs on the ESXi host
  • Review “event log show” from the cluster CLI
  • Ensure the ESX host name and SVM host name exist in DNS (nslookup)
  • Use packet traces from the DC to see what is failing during Kerberos authentication (filter on “kerberos” in wireshark)
  • Review the SVM config:
    • Ensure NFSv4.1 is enabled
    • Ensure the SVM has DNS configured
    • Ensure the Kerberos realm is all caps and is created on the SVM
    • Ensure the desired data LIF has Kerberos enabled (from System Manager or via “kerberos interface show” from the CLI)
    • Ensure the export policies and rules allow access to the ESX datastore volume for Kerberos, superuser and NFSv4. Ensure the vsroot volume allows at least read access for the ESX host.
    • Ensure the SVM has the appropriate UNIX users and group created (nfs user for the NFS SPN; UNIX user name that matches the NFS user principal defined in ESX) or the users exist in external name services
  • From the KDC/AD domain controller:
    • Ensure the machine accounts created use AES only to avoid any weird issues with encryption type support
    • Ensure the SPNs aren’t duplicated (setspn -q {service/fqdn})
    • Ensure the user defined in the NFS Kerberos config hasn’t had a password expire

 


Behind the Scenes: Episode 103 – vNAS using ONTAP Select

Welcome to Episode 103, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

group-4-2016

Former Tech ONTAP podcast host and current Virtually Speaking podcast host, Pete Flecha (@vPedroArrow), is the TME for vSAN at VMware and is always bugging me to do a show on vSAN. So, here we go!

This week on the podcast, we brought in the technical director for ONTAP Select, Peter Skovrup (skovrup@netapp.com) to discuss the latest improvements in ONTAP Select, including the ability to use ONTAP Select on VMware vSAN platforms!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss


Behind the Scenes: Episode 101 – NetApp at VMworld 2017; VSC 7.0

Welcome to Episode 101, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

group-4-2016

This week on the podcast, we bring in Dr. Desktop, Chris Gebhardt (@chrisgeb) and Virtualization TME/NetApp A-Team member Steven Cortez (@mscproductions) to talk about what’s going on at VMworld 2017 in Las Vegas, what sessions to attend and what’s new in Virtual Storage Console (VSC) 7.0.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

You can listen here:

https://soundcloud.com/techontap_podcast/episode-101-netapp-at-vmworld-2017-vsc-70

Running VMware on ONTAP? Why you should consider upgrading to ONTAP 9.2.

ontap-vmware.png

VMworld is right around the corner, so it’s a good time to remind folks about the goodness that is ONTAP + VMware.

ONTAP already has enterprise class storage for VMware, with support for both NFS and FCP/iSCSI on the same cluster to host VMware datastores. ONTAP also has robust support for VMware friendly features, such as VVols 1.0, VAAI, inline deduplication/compaction/compression, vSphere integration via the Virtual Storage Console, backing up VMs with SnapCenter, FlexClones, SRA plugins and much more!


ONTAP 9.2 went GA a couple weeks ago and included some nice new features that fit very well into virtualization workloads. When you upgrade ONTAP, you are able to do it non-disruptively, especially for VMware environments. Plus, NetApp’s internal predictive analysis points to ONTAP 9.2 having the highest quality of the available ONTAP releases out there, so there’s not a lot of reason *not* to upgrade to ONTAP 9.2.

Now, for those features…

Aggregate Inline Deduplication

If you’re not familiar with deduplication, it’s a storage feature that allows blocks that are identical to rely on pointers to a single block instead of having multiple copies of the same blocks.

This is all currently done inline (as data is ingested) only, and is currently enabled by default on All Flash FAS systems. The space savings come in handy in workloads such as ESXi datastores, where you may be applying OS patches across multiple VMs in multiple datastores hosted in multiple FlexVol volumes. Aggregate inline deduplication brings an average additional ~1.32:1 space savings ratio for VMware workloads. Who doesn’t want to save some space?
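Deduplication itself can be sketched in a few lines (a toy illustration only, nothing like ONTAP’s actual implementation): identical blocks hash to the same fingerprint, so only one copy is stored and later copies become references.

```python
import hashlib

def dedupe(blocks):
    """Toy block dedupe: store each unique block once, keep per-block pointers."""
    store, pointers = {}, []
    for block in blocks:
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)  # first copy is kept
        pointers.append(fingerprint)          # duplicates become references
    return store, pointers

# Four logical blocks (e.g., the same OS patch landing in several VMs)...
store, ptrs = dedupe([b"patchA", b"patchA", b"patchB", b"patchA"])
print(len(ptrs), len(store))  # 4 logical blocks, only 2 stored
```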

At a high level, this animation shows how it works:

aid-animation2

Quality of Service (QoS) Minimums/Guaranteed QoS

In ONTAP 8.2, NetApp introduced Quality of Service maximums to allow storage administrators to apply policies to volumes – and even files like LUNs or VMs – to prevent bully workloads from affecting other workloads in a cluster.

Last year, NetApp acquired SolidFire, which has a pretty mean QoS of its own where it actually approaches QoS from the other end of the spectrum – guaranteeing a performance floor for workloads that require a specific service level.

qos

I’m not 100% sure, but I’m guessing NetApp saw that and said “that’s pretty sweet. Let’s do that.”

So, they have. ONTAP 9.2 now offers both maximum and minimum/guaranteed QoS for storage administrators and service providers (guarantees are for SAN only currently). For VMware environments, storage administrators can now easily apply floors and ceilings to VMs to meet SLAs for their end users and customers.


ONTAP Select enhancements

ONTAP Select is NetApp’s software-defined version of ONTAP software. Select allows you to “select” whatever server hardware platform you want to run your storage system on (see what they did there?).

ONTAP Select has been around for a while, first in the form of ONTAP Edge. In ONTAP 9.0, it was re-branded to Select and NetApp started adding additional functionality to extend the use case for the solution outside of “edge” cases, such as remote offices.

Select runs on a hypervisor, usually ESXi. ONTAP 9.2 added some functionality that could be appealing to storage administrators.

These include:

  • 2-node HA support
  • FlexGroup volume support
  • Improved performance
  • Easier deployment
  • ESX Robo license
  • Single node ONTAP Select vNAS with VSAN and iSCSI LUN support
  • Inline deduplication support

Three of the more compelling bullets above (to me, at least) for VMware environments are 2-node HA, the ability to use ESX ROBO licenses and the vNAS support with vSAN.

If you’re already using vSAN in your environment, you’ll know that it doesn’t do file protocols like CIFS/SMB or NFS. Instead, it uses a proprietary protocol that is intended to speak only to VMs. While that’s great for datastores, it limits what sort of tasks vSAN can be used for.

With ONTAP Select running on top of a vSAN, you can present NAS shares to clients, host NFS datastores, etc, without having to buy new hardware. Not only that, but you can also present datastores via vSAN on the same ONTAP Select instance.

vnas.png

Pretty nifty, eh?

From the NetApp vNAS Solution Brief:

Starting with ONTAP Select 9.2, the ONTAP Select vNAS solution also supports VMware HA, vMotion, and Distributed Resources Scheduler (DRS). After deployment of a single-node cluster that uses external storage or consumes a vSAN datastore, the node can be moved through VMware vMotion, HA, or DRS actions. The ONTAP Select Deploy utility can detect these movements, and updates its internal database to continue normal management of the node.


Got questions or feedback? Insert them in the comments below!

Behind the Scenes: Episode 66 – @vMiss33 Gets Her #VCDX On

Welcome to Episode 66, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

group-4-2016

This week, the Tech ONTAP podcast team synced up with a podcast veteran, the one and only Melissa Palmer (aka @vMiss33)!

Melissa recently achieved 1337 VMware architect status by completing the grueling VCDX, so we asked her about that journey.

And by we, I mean Glenn and Andrew – I was in Pittsburgh, discussing some FlexGroup goodness with the dev team there. Glenn and Andrew managed not to screw the show up too badly.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss


BREAKING NEWS at #NetAppInsight – ONTAP VVol Support?

breaking-news

Along with the announcement that vSphere 6.5 went GA today, I’ve got some pretty cool news that I was just given permission to share, on the caveat that I can’t actually give the “when” portion of this…

VVols 1.0 is currently being qualified for ONTAP 9.1!

26602_sec-fig1

That’s right… Both SolidFire *and* ONTAP will have support for VVols. Somewhere, Pete Flecha (@vPedroArrow) is smiling.

If you have a vested interest in this news, please email me at whyistheinternetbroken@gmail.com or comment below with some contact information and I will pass the word on to the ONTAP team.

So, while you’re at NetApp Insight in Berlin, go find Pete at the VMware booth, or attend one of the VMware specific sessions:

  • 60831-2 – How Customers and Partners use NFS for Virtualization
  • 62151-2 – VMware Horizon Portfolio on NetApp
  • 61521-2 – VMware on NetApp ONTAP 9: New Tricks and Best Practice Update
  • 61718-3 – Creating a Storage Portal Using VMware vRealize and NetApp
  • 88633-2 – Bridging the Gap: Networking for Storage and Virtualization Administrators
  • 88644-2 – New Capabilities in NetApp ONTAP 9, Optimized for All-Flash Virtualized Workloads
  • 61476-2 – VMware Virtual Volumes: Deploy, Implement and Troubleshoot

For NFS-specific information on vSphere 6.5, see:

vSphere 6.5: The NFS edition

For a rundown on the new ONTAP 9.1 features:

ONTAP 9.1 RC1 is now available!

Behind the Scenes: Episode 54 –VVols and SolidFire

Welcome to Episode 54, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

ep54

This week, we bring in the Storage Janitor himself, Andy Banta (@andybanta), to do a deep dive into VVols on SolidFire. Andy will also be at VMworld to answer your VVol questions, so be sure to visit him at the NetApp booth!

Andy likes to work dirty word phrases into his podcasts, so see if you can find the hidden gem in this one. Hint: he intersperses it in several sections.

We also did a VVols episode with VMware’s VVols guy, Pete Flecha. You may remember him from previous roles, such as “Tech ONTAP podcast host.”

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss


Adventures in Upgrading ESXi

Here at NetApp, we have a variety of labs available to us to tinker with. I work with a few other TMEs in managing a few clustered Data ONTAP clusters, as well as an ESXi server farm. We have 6 ESXi servers that we just moved into a new lab location; they are finally ready to be powered back up after a 4-5 month hiatus.

So, I figured, since the lab’s been down for so long anyway, why not upgrade the ESXi servers from 5.1 to 6.0 update 2 while we’re at it?

What could possibly go wrong on my first actual ESXi upgrade on servers that have been migrated from different IP addresses, some of which may still be lingering on the system and are unreachable?

Well, I’ll tell you.

On my first attempt at upgrading a server, all sorts of things were broken.

  • vCenter couldn’t connect
  • The web client couldn’t connect – error was “503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http16LocalServiceSpecE:0x1f06ff18] _serverNamespace = / _isRedirect = false _port = 8309)”
  • esxcli and vim-cmd commands failed with:
[root@esxi1:~] esxcli
Connect to localhost failed: Connection failure.

After spending a few hours poking around to try to fix the issue, I decided it was probably user error. I used “install” instead of “update” before rebooting, so that probably nuked the server, right?

So I tried again on a new server. This time, I read the manual and did the update the way that was supposedly correct. I even got an error found in the release notes and used VMware’s workaround:

~ # esxcli system maintenanceMode set --enable true
~ # esxcli system maintenanceMode get
Enabled
~ # esxcli software vib update -d /vmfs/volumes/vm_storage/ESX6/update-from-esxi6.0-6.0_update02.zip
 [DependencyError]
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.2.34.3620759 requires xhci-xhci >= 1.0-3vmw.600.2.34, but the requirement cannot be satisfied within the ImageProfile.
 Please refer to the log file for more details.
~ # esxcli software profile update -d /vmfs/volumes/vm_storage/ESX6/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-standard
Update Result
 Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
 Reboot Required: true

After I rebooted:

[root@esxi1:~] esxcli
Connect to localhost failed: Connection failure.

Son of a…

I started Googling like a madman.

google-errors

I found the ever-helpful William Lam’s blog post on the web client issue. His recommendation was to run a vim-cmd command. However…

[root@esxi2:~] vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.solo.enableMob bool true
Failed to login: Invalid response code: 503 Service Unavailable

In the vpxa.log file, a ton of these:

verbose vpxa[FF8E8AC0] [Originator@6876 sub=vpxXml] [VpxXml] Error fetching /sdk/vimService?wsdl: 503 (Service Unavailable)
warning vpxa[FFCC0B70] [Originator@6876 sub=Default] Closing Response processing in unexpected state: 3
warning vpxa[FFCC0B70] [Originator@6876 sub=hostdcnx] [VpxaHalCnxHostagent] Could not resolve version for authenticating to host agent
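If you find yourself sifting through vpxa.log like this, a quick grep is handy for separating the warnings from the verbose noise. A minimal sketch, using a mocked-up log file with the same entries as above (on a real host you’d point grep at /var/log/vpxa.log instead):

```shell
# Mock up a tiny vpxa.log with the entries from above
# (real path on an ESXi host: /var/log/vpxa.log)
cat > /tmp/vpxa.log <<'EOF'
verbose vpxa[FF8E8AC0] [Originator@6876 sub=vpxXml] [VpxXml] Error fetching /sdk/vimService?wsdl: 503 (Service Unavailable)
warning vpxa[FFCC0B70] [Originator@6876 sub=Default] Closing Response processing in unexpected state: 3
warning vpxa[FFCC0B70] [Originator@6876 sub=hostdcnx] [VpxaHalCnxHostagent] Could not resolve version for authenticating to host agent
EOF

# Count just the warnings -- a quick gauge of how unhappy the agent is
grep -c '^warning' /tmp/vpxa.log
```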


The log suggested there was a connection failure on port 443, but telnet to that port worked fine. It took me a little bit of tinkering, but I finally figured out where that port number is controlled – /etc/vmware/vpxa/vpxa.cfg.
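For reference, vpxa.cfg is plain XML, so it’s easy to pull out the addresses and ports vpxa thinks it should be using. Here’s a minimal sketch against a mocked-up config file; the element names and values below are assumptions modeled on a typical vpxa.cfg, so check your own file before trusting them:

```shell
# Mock /etc/vmware/vpxa/vpxa.cfg -- element names/values are illustrative
cat > /tmp/vpxa.cfg <<'EOF'
<config>
  <vpxa>
    <hostIp>10.10.10.5</hostIp>
    <hostPort>443</hostPort>
    <serverIp>10.10.10.100</serverIp>
    <serverPort>902</serverPort>
  </vpxa>
</config>
EOF

# Extract the host and vCenter IPs vpxa will try to use
sed -n 's/.*<hostIp>\(.*\)<\/hostIp>.*/\1/p' /tmp/vpxa.cfg
sed -n 's/.*<serverIp>\(.*\)<\/serverIp>.*/\1/p' /tmp/vpxa.cfg
```

If either value is stale (say, an IP left over from a lab move), that alone can explain the agent’s connection failures.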

In that config file, I also noticed that the IP address was wrong – it was still using the old IP addresses the hosts had. I changed the IP address and set the port to 80. Once I did that, my error changed a bit. This time, it was an SSL error:

Error in sending request - SSL Exception

I spent a bit more time poking around and finally decided – time to blow it up. Way easier to re-install a lab box than to try to dig through all the configuration files.

If you find yourself in a similar bind, don’t waste your time – unless it’s production. Then open a case.

I think my issue ended up being a combination of:

  • Stale IP addresses
  • Stale iSCSI HBA settings
  • Stale configs
  • Upgrading to ESXi 6 without addressing the above first

If anyone has any suggestions for fixing this issue, by all means, post in the comments. 🙂

UPDATE:

Both ESXi boxes have been wiped and reinstalled with ESXi 6.0. All is working fine. Funny story, though… after one re-image, I connected via SSH and thought it had broken again. Turns out I had a duplicate IP and was still connecting to the old server. Oops.

Behind the Scenes: Episode 29 – VVols with Pedro Arrow!

Welcome to Episode 29 of the new series, “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we dragged former Tech ONTAP Podcast host Pete Flecha (aka @vPedroArrow) into the studio to talk VVols after our successful episode with SolidFire’s Aaron Patten last week.

Getting Pedro into the studio was easy – we still keep in touch with him and he was happy to come lend his expertise to the podcast. He also happens to have his own podcast that he just started up at VMware: Virtually Speaking.

I was glad to get Pedro into the studio – he was the guy who championed me to join up with the excellent podcast crew.

pedro-ep29

Recording the Podcast

The podcast went pretty smoothly – it’s as if Pedro had done this before – but in classic Pedro Arrow fashion, we had to do some re-takes the next day. He’s a bit of a perfectionist.

Check out the new episode below and be sure to send any questions or comments to podcast@netapp.com:

VMWORLD::Diary of a vN00b (complete with name dropping)

It’s been a hectic and exhausting week at my first VMworld. I’ve been to tech conferences (such as NetApp Insight), but never to anything of the sheer size and scale of this one. It’s not a storage conference, but you’d better believe storage was at the forefront of the conversation as the virtualization message shifts to converged, hyper-converged, flash, and cloud.

Luckily for NetApp, we happen to have all of those bases covered already with FlexPod, EVO:RAIL, All-Flash FAS and Cloud ONTAP/NetApp Private Storage, as well as the NetApp Data Fabric.

My primary role at VMworld for NetApp was to man the NetApp booth and offer my knowledge and expertise regarding NetApp technology. I had some very good discussions with customers regarding their challenges and how they could potentially solve them. In the Meet the Engineer sessions I had, I made sure to emphasize that those customers should be doing an open and honest evaluation of their options for two reasons:

  • Doing your homework is always a good thing.
  • I was confident that once they did the research, they’d see that NetApp offered the most value.

While I was there, I managed to snap some photos of the booth and what we were doing.

Busy booth!

Meet the engineer!

The illustrious All Flash FAS 8080

Dan Isaacs grinning about his Vaughn Stewart argument

The guys from TechONTAP solving real problems

Rachel Dines showing a customer how awesome AltaVault is

For more photos, check out the Google Album.

Community

I was also there to meet new people, both at NetApp and in the tech community. I got to know a ton of really smart people and interacted with folks I previously only knew on social media.

Some highlights (and blatant name dropping):

So my first VMworld is in the books and now I get to give my aching feet a break. Bring on the next one!