How to see data transfer on an ONTAP cluster network

ONTAP clusters utilize a backend cluster network to allow multiple HA pairs to communicate and provide more scale for performance and capacity. This is done by allowing you to nondisruptively add new nodes (and, as a result, capacity and compute) into a cluster. Data will be accessible regardless of where you connect in the cluster. You can scale up to 24 nodes for NAS-only clusters, while being able to mix different HA pair types in the same cluster if you choose to offer different service levels for storage (such as performance tiers, capacity tiers, etc).

Network interfaces that serve data to clients live on physical ports on nodes and are floating/virtual IP addresses that can move to/from any node in the cluster. File systems for NAS are defined by Storage Virtual Machines (SVMs) and volumes. The SVMs own the IP addresses you would use to access data.

When using NAS (CIFS/SMB/NFS) for data access, you can connect to a data interface in the SVM that lives on any node in the cluster, regardless of where the data volume resides. The following graphic shows how that happens.

When you access a NAS volume on a data interface on the same node as the data volume, ONTAP can “cheat” a little and directly interact with that volume without having to do extra work.

If that data interface is on a different node than where the volume resides, then the NAS packet gets packaged up as a proprietary protocol and shipped over the cluster network backend to the node where the volume lives. This volume/node relationship is stored in an internal database in ONTAP so we always have a map to find volumes quickly. Once the NAS packet arrives on the destination node, it gets unpackaged, processed and then the response to the client goes back out the way it came.
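
If you want to check whether a given client's traffic is local or remote, you can compare where the data interface currently lives against where the volume lives. This is just a sketch; the SVM, LIF and volume names are placeholders:

::*> network interface show -vserver DEMO -lif data1 -fields curr-node,curr-port
::*> volume show -vserver DEMO -volume vol1 -fields aggregate,node

If the LIF's curr-node matches the volume's node, the I/O is local; if not, it's traversing the cluster network.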

Traversing the cluster network has a bit of a latency cost, however, as the packaging/unpackaging/traversal takes some time (more time than a local request). This manifests as slightly lower performance for those workloads. The impact is negligible in most environments, but for latency-sensitive applications, there might be some noticeable performance degradation.

There are protocol features that help mitigate the remote I/O that can occur in a cluster, such as SMB node referrals and pNFS, but in scenarios where you can’t use either of those (SMB node referrals didn’t use Kerberos in earlier Windows versions; pNFS needs NFSv4.1 and later), then you’re going to likely have remote cluster traffic. As mentioned, in most cases this isn’t an issue, but it may be useful to have an easy way to find out if an ONTAP cluster is doing remote/cluster traffic.
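
For reference, enabling pNFS on the ONTAP NFS server and mounting with NFSv4.1 from a Linux client looks roughly like this (the SVM name, export path and mount point are placeholders; verify the option names against your ONTAP version):

::*> vserver nfs modify -vserver DEMO -v4.1 enabled -v4.1-pnfs enabled

# mount -t nfs -o vers=4.1 demo:/flexgroup /mnt/flexgroup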

Cluster level – Statistics show-periodic

To get a cluster-wide view if there is remote traffic on the cluster, you can use the advanced priv command “statistics show-periodic.” This command gives a wealth of information by default, such as:

  • CPU average/busy
  • Total ops/NFS ops/CIFS ops
  • FlexCache ops
  • Total data received/sent (Data and cluster network throughput)
  • Data received/sent (Data throughput only)
  • Cluster received/sent (Cluster throughput only)
  • Cluster busy % (how busy the cluster network is)
  • Disk reads/writes
  • Packets sent/received

We also have options to limit the intervals, define SVMs/vservers, etc.

::*> statistics show-periodic ?
[[-object] ] *Object
[ -instance ] *Instance
[ -counter ] *Counter
[ -preset ] *Preset
[ -node ] *Node
[ -vserver ] *Vserver
[ -interval ] *Interval in Seconds (default: 2)
[ -iterations ] *Number of Iterations (default: 0)
[ -summary {true|false} ] *Print Summary (default: true)
[ -filter ] *Filter Data
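
For example, to sample every 5 seconds for 12 iterations on a single node, something like this works (the values are arbitrary):

::*> statistics show-periodic -interval 5 -iterations 12 -node node1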

But for backend cluster traffic, we only care about a few of those, so we can filter the output for just the counters we want to view. In this case, I just want to look at the data sent/received and the cluster busy %.

::*> statistics show-periodic -counter total-recv|total-sent|data-recv|data-sent|cluster-recv|cluster-sent|cluster-busy

When I do that, I get a cleaner, easier to read capture. This is what it looks like when we have remote traffic. This is an NFSv4.1 workload without pNFS, using a mount wsize of 64K.

cluster1: cluster.cluster: 5/11/2021 14:01:49
    total    total     data     data cluster  cluster  cluster
     recv     sent     recv     sent    busy     recv     sent
 -------- -------- -------- -------- ------- -------- --------
    157MB   4.85MB    148MB   3.46MB      0%   8.76MB   1.39MB
    241MB   70.2MB    197MB   4.68MB      1%   43.1MB   65.5MB
    269MB    111MB    191MB   4.41MB      4%   78.1MB    107MB
    329MB   92.5MB    196MB   4.52MB      4%    133MB   88.0MB
    357MB    117MB    246MB   5.68MB      2%    111MB    111MB
    217MB   27.1MB    197MB   4.55MB      1%   20.3MB   22.5MB
    287MB   30.4MB    258MB   5.91MB      1%   28.7MB   24.5MB
    205MB   28.1MB    176MB   4.03MB      1%   28.9MB   24.1MB
cluster1: cluster.cluster: 5/11/2021 14:01:57
    total    total     data     data cluster  cluster  cluster
     recv     sent     recv     sent    busy     recv     sent
 -------- -------- -------- -------- ------- -------- --------
Minimums:
    157MB   4.85MB    148MB   3.46MB      0%   8.76MB   1.39MB
Averages for 8 samples:
    258MB   60.3MB    201MB   4.66MB      1%   56.5MB   55.7MB
Maximums:
    357MB    117MB    258MB   5.91MB      4%    133MB    111MB

As we can see, there is an average of 55.7MB sent and 56.5MB received over the cluster network each second; this accounts for an average of 1% of the available bandwidth, which means we have plenty of cluster network utilization left over.

When we look at the latency for this workload, this is what we see. (Using qos statistics latency show)

Policy Group            Latency
-------------------- ----------
-total-                364.00us
extreme-fixed          364.00us
-total-                619.00us
extreme-fixed          619.00us
-total-                490.00us
extreme-fixed          490.00us
-total-                409.00us
extreme-fixed          409.00us
-total-                422.00us
extreme-fixed          422.00us
-total-                474.00us
extreme-fixed          474.00us
-total-                412.00us
extreme-fixed          412.00us
-total-                372.00us
extreme-fixed          372.00us
-total-                475.00us
extreme-fixed          475.00us
-total-                436.00us
extreme-fixed          436.00us
-total-                474.00us
extreme-fixed          474.00us

This is what the cluster network looks like when I use pNFS for data locality:

cluster1: cluster.cluster: 5/11/2021 14:18:19
    total    total     data     data cluster  cluster  cluster
     recv     sent     recv     sent    busy     recv     sent
 -------- -------- -------- -------- ------- -------- --------
    208MB   6.24MB    206MB   4.76MB      0%   1.56MB   1.47MB
    214MB   5.37MB    213MB   4.85MB      0%    555KB    538KB
    214MB   6.27MB    213MB   4.80MB      0%   1.46MB   1.47MB
    219MB   5.95MB    219MB   5.40MB      0%    572KB    560KB
    318MB   8.91MB    317MB   7.44MB      0%   1.46MB   1.47MB
    203MB   5.16MB    203MB   4.62MB      0%    560KB    548KB
    205MB   6.09MB    204MB   4.64MB      0%   1.44MB   1.45MB
cluster1: cluster.cluster: 5/11/2021 14:18:26
    total    total     data     data cluster  cluster  cluster
     recv     sent     recv     sent    busy     recv     sent
 -------- -------- -------- -------- ------- -------- --------
Minimums:
    203MB   5.16MB    203MB   4.62MB      0%    555KB    538KB
Averages for 7 samples:
    226MB   6.28MB    225MB   5.22MB      0%   1.08MB   1.07MB
Maximums:
    318MB   8.91MB    317MB   7.44MB      0%   1.56MB   1.47MB

There is barely any cluster traffic other than the normal cluster operations. The “data” and “total” sent/received is nearly identical.

And the latency was an average of 0.1 ms lower.

Policy Group            Latency
-------------------- ----------

-total-                323.00us
extreme-fixed          323.00us
-total-                323.00us
extreme-fixed          323.00us
-total-                325.00us
extreme-fixed          325.00us
-total-                336.00us
extreme-fixed          336.00us
-total-                325.00us
extreme-fixed          325.00us
-total-                328.00us
extreme-fixed          328.00us
-total-                334.00us
extreme-fixed          334.00us
-total-                341.00us
extreme-fixed          341.00us
-total-                336.00us
extreme-fixed          336.00us
-total-                330.00us
extreme-fixed          330.00us
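
If you want to confirm the client is actually using pNFS, a quick sanity check from a recent Linux client is to verify the mount negotiated NFSv4.1 and that layout operations are incrementing (the grep patterns are just rough filters):

# mount | grep vers=4.1
# nfsstat -c | grep -i layout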

Try it out and see for yourself! If you have questions or comments, enter them below.

How to Map File and Folder Locations to NetApp ONTAP FlexGroup Member Volumes with XCP

The concept behind a NetApp FlexGroup volume is that ONTAP presents a single large namespace for NAS data, while ONTAP handles the balance and placement of files and folders to the underlying FlexVol member volumes, rather than a storage administrator needing to manage that.

I cover it in more detail in other posts on this blog, and there's also a USENIX presentation on the topic.

However, while not knowing/caring where files and folders live in your cluster is nice most of the time, there are occasions where you may need to figure out where a file or folder *actually* lives in the cluster – such as if a member volume has a large imbalance of capacity usage and you need to know what files need to be deleted/moved out of that volume. Previously, there’s been no real good way to do that, but thanks to the efforts of one of our global solutions architects (and one of the inventors of XCP), we now have a way and we don’t even need a treasure map.


What is NetApp XCP?

If you’re unfamiliar with NetApp XCP, it’s NetApp’s FREE copy utility/data mover that can also be used to do file analytics. There are other use cases, too:

Using XCP to delete files en masse: A race against rm

How to find average file size and largest file size using XCP

Because XCP can run in parallel from a client, it can perform tasks (such as find) much faster in high file count environments, so you’re not sitting around waiting for a command to finish for minutes/hours/days.

Since a FlexGroup is pretty much made for high file count environments, we’d want a way to quickly find files and their locations.

ONTAP NAS and File Handles

In How to identify a file or folder in ONTAP in NFS packet traces, I covered how to find inode information and a little bit about how ONTAP file handles are created/presented. The deep details aren’t super important here, but the general concept – that FlexGroup member volume information is stored in file handles that NFS can read – is.

Using that information and some parsing, there’s a Python script that can be used as an XCP plugin to translate file handles into member volume index numbers and present them in easy-to-read formats.

That Python script can be found here:

FlexGroup File Mapper

How to Use the “FlexGroup File Mapper” plugin with XCP

First of all, you’d need a client that has XCP installed. The version isn’t super important, but the latest release is generally the best release to use.

There are two methods we’ll use here to map files to member volumes.

  1. Scan All Files/Folders in a FlexGroup and Map Them All to Member Volumes
  2. Use a FlexGroup Member Volume Number and Find All Files in that Member Volume

To do this, I’ll use a FlexGroup that has ~2 million files.

::*> df -i FGNFS
Filesystem    iused    ifree      %iused  Mounted on  Vserver
/vol/FGNFS/   2001985  316764975  0%      /FGNFS      DEMO

Getting the XCP Host Ready

First, copy the FlexGroup File Mapper plugin to the XCP host. The file name isn’t important, but when you run the XCP command, you’ll either want to specify the plugin’s location or run the command from the folder the plugin lives in.

On my XCP host, I have the plugin named fgid.py in /testXCP:

# ls -la | grep fgid.py
-rw-r--r-- 1 502 admin 1645 Mar 25 17:34 fgid.py
# pwd
/testXCP

Scan All Files/Folders in a FlexGroup and Map Them All to Member Volumes

In this case, we’ll map all files and folders to their respective FlexGroup member volumes.

This is the command I use:

xcp diag -run fgid.py scan -fmt '"{} {}".format(x, fgid(x))' 10.10.10.10:/exportname

You can also include -parallel <n> to control how many processes spin up to do this work, and you can use > filename at the end to redirect the output to a file (recommended).
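
For example, a run that bumps up the parallelism and redirects to a file might look like this (the -parallel value here is arbitrary):

# xcp diag -run fgid.py scan -parallel 8 -fmt '"{} {}".format(x, fgid(x))' 10.10.10.10:/FGNFS > FGNFS.txt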

For example, scanning ~2 million files in this volume took just 37 seconds!

# xcp diag -run fgid.py scan -fmt '"{} {}".format(x, fgid(x))' 10.10.10.10:/FGNFS > FGNFS.txt
402,061 scanned, 70.6 MiB in (14.1 MiB/s), 367 KiB out (73.3 KiB/s), 5s
751,933 scanned, 132 MiB in (12.3 MiB/s), 687 KiB out (63.9 KiB/s), 10s
1.10M scanned, 193 MiB in (12.2 MiB/s), 1007 KiB out (63.6 KiB/s), 15s
1.28M scanned, 225 MiB in (6.23 MiB/s), 1.14 MiB out (32.6 KiB/s), 20s
1.61M scanned, 283 MiB in (11.6 MiB/s), 1.44 MiB out (60.4 KiB/s), 25s
1.91M scanned, 335 MiB in (9.53 MiB/s), 1.70 MiB out (49.5 KiB/s), 31s
2.00M scanned, 351 MiB in (3.30 MiB/s), 1.79 MiB out (17.4 KiB/s), 36s
Sending statistics…

Xcp command : xcp diag -run fgid.py scan -fmt "{} {}".format(x, fgid(x)) 10.10.10.10:/FGNFS
Stats : 2.00M scanned
Speed : 351 MiB in (9.49 MiB/s), 1.79 MiB out (49.5 KiB/s)
Total Time : 37s.
STATUS : PASSED

The file created was 120MB, though… that’s a LOT of text to sort through.

-rw-r--r--. 1 root root 120M Apr 27 15:28 FGNFS.txt

So, there’s another way to do this, right? Correct!

If I know the folder I want to filter, or even a matching of file names, I can use -match in the command. In this case, I want to find all folders named dir_33.

This is the command:

# xcp diag -run fgid.py scan -fmt '"{} {}".format(x, fgid(x))' -match "name=='dir_33'" 10.10.10.10:/FGNFS > dir_33_FGNFS.txt

This is the output of the file. Two folders – one in member volume 3, one in member volume 4:

# cat dir_33_FGNFS.txt
x.x.x.x:/FGNFS/files/client1/dir_33 3
x.x.x.x:/FGNFS/files/client2/dir_33 4

If I want to use pattern matching for file names (i.e., I know I want all files with “moarfiles3” in the name), then I can do this using regex and/or wildcards. More examples can be found in the XCP user guides.

Here’s the command I used. It found 444,400 files with that pattern in 27s.

# xcp diag -run fgid.py scan -fmt '"{} {}".format(x, fgid(x))' -match "fnm('moarfiles3*')" 10.10.10.10:/FGNFS > moarfiles3_FGNFS.txt

507,332 scanned, 28,097 matched, 89.0 MiB in (17.8 MiB/s), 465 KiB out (92.9 KiB/s), 5s
946,796 scanned, 132,128 matched, 166 MiB in (15.4 MiB/s), 866 KiB out (80.1 KiB/s), 10s
1.31M scanned, 209,340 matched, 230 MiB in (12.8 MiB/s), 1.17 MiB out (66.2 KiB/s), 15s
1.73M scanned, 297,647 matched, 304 MiB in (14.8 MiB/s), 1.55 MiB out (77.3 KiB/s), 20s
2.00M scanned, 376,195 matched, 351 MiB in (9.35 MiB/s), 1.79 MiB out (48.8 KiB/s), 25s
Sending statistics…

Filtered: 444400 matched, 1556004 did not match

Xcp command : xcp diag -run fgid.py scan -fmt "{} {}".format(x, fgid(x)) -match fnm('moarfiles3*') 10.10.10.10:/FGNFS
Stats : 2.00M scanned, 444,400 matched
Speed : 351 MiB in (12.6 MiB/s), 1.79 MiB out (65.7 KiB/s)
Total Time : 27s.

And this is a sample of some of those entries (the file is 27MB):

x.x.x.x:/FGNFS/files/client1/dir_45/moarfiles3158.txt 3
x.x.x.x:/FGNFS/files/client1/dir_45/moarfiles3159.txt 3

I can also look for files over a certain size. In this volume, the files are all 4K in size; but in my TechONTAP volume, I have varying file sizes. In this case, I want to find all .wav files greater than 100MB. This command didn’t seem to redirect to a file for me, but the output was only 16 files.

# xcp diag -run fgid.py scan -fmt '"{} {}".format(x, fgid(x))' -match "fnm('.wav') and size > 100*M" 10.10.10.11:/techontap > TechONTAP_ep.txt

10.10.10.11:/techontap/Episodes/Episode 20x - Genomics Architecture/ep20x-genomics-meat.wav 4
10.10.10.11:/techontap/archive/combine.band/Media/Audio Files/ep104-webex.output.wav 5
10.10.10.11:/techontap/archive/combine.band/Media/Audio Files/ep104-mics.output.wav 3
10.10.10.11:/techontap/archive/Episode 181 - Networking Deep Dive/ep181-networking-deep-dive-meat.output.wav 6
10.10.10.11:/techontap/archive/Episode 181 - Networking Deep Dive/ep181-networking-deep-dive-meat.wav 2

Filtered: 16 matched, 7687 did not match

xcp command : xcp diag -run fgid.py scan -fmt "{} {}".format(x, fgid(x)) -match fnm('.wav') and size > 100M 10.10.10.11:/techontap
Stats : 7,703 scanned, 16 matched
Speed : 1.81 MiB in (1.44 MiB/s), 129 KiB out (102 KiB/s)
Total Time : 1s.
STATUS : PASSED

But what if I know that a member volume is getting full and I want to see what files are in that member volume?

Use a FlexGroup Member Volume Number and Find All Files in that Member Volume

In the case where I know what member volume needs to be addressed, I can use XCP to search using the FlexGroup index number. The index number lines up with the member volume numbers, so if the index number is 6, then we know the member volume is 6.
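
If you don't already know which member volume is the problem, a quick way to spot it is to list all the members with their usage (the member volumes follow the FlexGroup__NNNN naming convention shown later in this post):

::*> vol show -vserver DEMO -volume FGNFS__* -fields percent-used,files-used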

In my 2 million file FG, I want to filter by member 6, so I use this command, which shows there are ~95019 files in member 6:

# xcp diag -run fgid.py scan -match 'fgid(x)==6' -parallel 10 -l 10.10.10.10:/FGNFS > member6.txt

 615,096 scanned, 19 matched, 108 MiB in (21.6 MiB/s), 563 KiB out (113 KiB/s), 5s
 1.03M scanned, 5,019 matched, 180 MiB in (14.5 MiB/s), 939 KiB out (75.0 KiB/s), 10s
 1.27M scanned, 8,651 matched, 222 MiB in (8.40 MiB/s), 1.13 MiB out (43.7 KiB/s), 15s
 1.76M scanned, 50,019 matched, 309 MiB in (17.3 MiB/s), 1.57 MiB out (89.9 KiB/s), 20s
 2.00M scanned, 62,793 matched, 351 MiB in (8.35 MiB/s), 1.79 MiB out (43.7 KiB/s), 25s

Filtered: 95019 matched, 1905385 did not match

Xcp command : xcp diag -run fgid.py scan -match fgid(x)==6 -parallel 10 -l 10.10.10.10:/FGNFS
Stats       : 2.00M scanned, 95,019 matched
Speed       : 351 MiB in (12.5 MiB/s), 1.79 MiB out (65.0 KiB/s)
Total Time  : 28s.
STATUS      : PASSED

When I check against the files-used for that member volume, it lines up pretty well:

::*> vol show -vserver DEMO -volume FGNFS__0006 -fields files-used
vserver volume      files-used
------- ----------- ----------
DEMO    FGNFS__0006 95120

And the output file shows not just the file names, but also the sizes!

rw-r--r-- --- root root 4KiB 4KiB 18h22m FGNFS/files/client2/dir_143/moarfiles1232.txt
rw-r--r-- --- root root 4KiB 4KiB 18h22m FGNFS/files/client2/dir_143/moarfiles1233.txt
rw-r--r-- --- root root 4KiB 4KiB 18h22m FGNFS/files/client2/dir_143/moarfiles1234.txt

And, if I choose, I can filter further with the sizes. Maybe I just want to see files in that member volume that are 4K or less (in this case, that’s all of them):

# xcp diag -run fgid.py scan -match 'fgid(x)==6 and size < 4*K' -parallel 10 -l 10.10.10.10:/FGNFS

In my “TechONTAP” volume, I look for 500MB files or greater in member 6:

# xcp diag -run fgid.py scan -match 'fgid(x)==6 and size > 500*M' -parallel 10 -l 10.10.10.11:/techontap

rw-r--r-- --- 501 games 596MiB 598MiB 3y219d techontap/Episodes/Episode 1/Epidose 1 GBF.band/Media/Audio Files/Tech ONTAP Podcast - Episode 1 - AFF with Dan Isaacs v3_1.aif
rw-r--r-- --- 501 games 885MiB 888MiB 3y219d techontap/archive/Prod - old MacBook/Insight 2016_Day2_TechOnTap_JParisi_ASullivan_GDekhayser.mp4
rw-r--r-- --- 501 games 787MiB 790MiB 1y220d techontap/archive/Episode 181 - Networking Deep Dive/ep181-networking-deep-dive-meat.output.wav

Filtered: 3 matched, 7700 did not match

Xcp command : xcp diag -run fgid.py scan -match fgid(x)==6 and size > 500*M -parallel 10 -l 10.10.10.11:/techontap
Stats : 7,703 scanned, 3 matched
Speed : 1.81 MiB in (1.53 MiB/s), 129 KiB out (109 KiB/s)
Total Time : 1s.
STATUS : PASSED

So, there you have it! A way to find files in a specific member volume inside of a FlexGroup! Let me know if you have any comments or questions below.

How to identify a file or folder in ONTAP in NFS packet traces

When you’re troubleshooting NFS issues, sometimes you have to collect a packet capture to see what’s going on. But the issue is, packet captures don’t really tell you the file or folder names. I like to use Wireshark for Mac and Windows, and regular old tcpdump for Linux. For ONTAP, you can run packet captures using this KB (requires NetApp login):

How to capture packet traces (tcpdump) on ONTAP 9.2+ systems
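
The KB covers the ONTAP side; from a Linux client, a capture scoped to just the NFS conversation looks something like this (the interface, IP address and output file are placeholders):

# tcpdump -i eth0 -w /tmp/nfs-trace.pcap host 10.10.10.10 and port 2049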

By default, Wireshark shows NFS packets like this ACCESS call. We see an FH, which is in hex, and then we see another file handle that’s even more unreadable. We’ll occasionally see file names in the trace (like copy-file below), but if we need to find out why an ACCESS call fails, we’ll have difficulty:

Luckily, Wireshark has some built-in stuff to crack open those NFS file handles in ONTAP.

Also, check out this new blog:

How to Map File and Folder Locations to NetApp ONTAP FlexGroup Member Volumes with XCP

Changing Wireshark Settings

First, we’d want to set the NFS preferences. That’s done via Edit -> Preferences and then by clicking on “Protocols” in the left hand menu and selecting NFS:

Here, you’ll see some options that you can read more about by mousing over them:

I just select them all.

When we go to the packet we want to analyze, we can right click and select “Decode As…”:

This brings up the “Decode As” window. Here, we have “NFS File Handle Types” pre-populated. Double-click (none) under “Current” and you get a drop down menu. You’ll get some options for NFS, including…. ONTAP! In this case, since I’m using clustered ONTAP, I select ontap_gx_v3. (GX is what clustered ONTAP was before clustered ONTAP was clustered ONTAP):

If you click “OK” it will apply to the current session only. If you click “Save” it will keep those preferences every time.

Now, when the ACCESS packet is displayed, I get WAY more information about the file in question and they’re translated to decimal values.

Those still don’t mean a lot to us, but I’ll get to that.

Mapping file handle values to files in ONTAP

Now, we can use the ONTAP CLI and the packet capture to discern exactly what file has that ACCESS call.

Every volume in ONTAP has a unique identifier called a “Master Set ID” (or MSID). You can see the volume’s MSID with the following diag priv command:

cluster::*> vol show -vserver DEMO -volume vol2 -fields msid
vserver volume  msid
------- ------- -----------
DEMO    vol2    2163230318

If you know the volume name you’re troubleshooting, then that makes life easier – just use find in the packet details.

If you don’t, the MSID can be found in a packet trace in the ACCESS reply as the “fsid”:

You can then find the volume name and exported path with the MSID in the ONTAP CLI with:

cluster::*> set diag; vol show -vserver DEMO -msid 2163230318 -fields volume,junction-path
vserver volume  junction-path
------- ------- -------------
DEMO    vol2    /vol2

File and directory handles are constructed using that MSID, which is why each volume is considered a distinct filesystem. But we don’t care about that, because Wireshark figures all that out for us and we can use the ONTAP CLI to figure it out as well.

The pertinent information in the trace as it maps to the files and folders are:

  • Spin file id = inode number in ONTAP
  • Spin file unique id = file generation number
  • File id = inode number as seen by the NFS client

If you know the volume and file or folder’s name, you can easily find the inode number in ONTAP with this command:

cluster::*> set advanced; showfh -vserver DEMO /vol/vol2/folder
Vserver                Path
---------------------- ---------------------------
DEMO                   /vol/vol2/folder
flags   snapid fileid    generation fsid       msid         dsid
------- ------ --------- ---------- ---------- ------------ ------------
0x8000  0      0x658e    0x227ed312 -          -            0x1639

In the above, the values are in hex, but we can translate with a hex converter, like this one:

https://www.rapidtables.com/convert/number/hex-to-decimal.html
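
If you'd rather stay in the terminal, printf does the same conversion:

# printf '%d\n' 0x658e
25998
# printf '%d\n' 0x227ed312
578736914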

So, for the values we got:

  • file ID (inode) 0x658e = 25998
  • generation ID 0x227ed312 = 578736914

In the trace, that matches up:

Finding file names and paths by inode number

But what happens if you don’t know the file name and just have the information from the trace?

One way is to use the nodeshell level command “inodepath.”

::*> node run -node node1 inodepath -v files 15447
Inode 15447 in volume files (fsid 0x142a) has 1 name.
Volume UUID is: 76a69b93-cc2f-11ea-b16f-00a098696eda
[ 1] Primary pathname = /vol/files/newapps/user1-file-smb

This will work with a FlexGroup volume as well, provided you know the node and the member volume where the file lives (see “How to Map File and Folder Locations to NetApp ONTAP FlexGroup Member Volumes with XCP” for a way to figure that info out).

::*> node run -node node2 inodepath -v FG2__0007 5292
Inode 5292 in volume FG2__0007 (fsid 0x1639) has 1 name.
Volume UUID is: 87b14652-9685-11eb-81bf-00a0986b1223
[ 1] Primary pathname = /vol/FG2/copy-file-finder

There’s also a diag privilege command in ONTAP for that. The caveat is it can be dangerous to run, especially if you make a mistake in running it. (And when I say dangerous, I mean best case, it hangs your CLI session for a while; worst case, it panics the node.) If possible, use inodepath instead.

Here’s how we could use the volume name and inode number to find the file name. For a FlexVol volume, it’s simple:

cluster::*> vol explore -format inode -scope volname.inode -dump name

For example:

cluster::*> volume explore -format inode -scope files.15447 -dump name
name=/newapps/user1-file-smb

With a FlexGroup volume, however, it’s a little more complicated, as there are member volumes to take into account and there’s no easy way for ONTAP to discern which FlexGroup member volume has the file, since ONTAP inode numbers can be reused in different member volumes. This is because the file IDs presented to NFS clients are created using the inode numbers and things like the member volume’s MSID (which is different than the FlexGroup’s MSID).

To make this happen with volume explore, we’d be working in reverse – listing the contents of the volume’s files/folders, then using the inode number of the parent folder, listing those, etc. With high file count environments, this is basically an impossibility.

In that case, we’d need to use an NFS client to discover the file name associated with the inode number in question.

From the client, we have two commands to find an inode number for a file. In this case we know the file’s location and name:

# ls -i /mnt/client1/copy-file-finder
4133624749 /mnt/client1/copy-file-finder
# stat copy-file-finder
File: ‘copy-file-finder’
Size: 12 Blocks: 0 IO Block: 1048576 regular file
Device: 2eh/46d Inode: 4133624749 Links: 1
Access: (0555/-r-xr-xr-x) Uid: ( 1102/ prof1) Gid: (10002/ProfGroup)
Access: 2021-04-14 11:47:45.579879000 -0400
Modify: 2021-04-14 11:47:45.588875000 -0400
Change: 2021-04-14 17:34:07.364283000 -0400
Birth: -

In a packet trace, that inode number is “fileid” and found in REPLY calls, such as GETATTR:

If we only know the inode number (as if we got it from a packet trace), we can use the number on the client to find the file name. One way is with “find”:

# find /path/to/mountpoint -inum <inodenumber>

For example:

# find /mnt/client1 -inum 4133624749
/mnt/client1/copy-file-finder

“find” can take a while – especially in a high file count environment, so we could also use XCP.

# xcp -l -match 'fileid== <inodenumber>' server1:/export

In this case:

# xcp -l -match 'fileid== 4133624749' DEMO:/FG2
XCP 1.6.1; (c) 2021 NetApp, Inc.; Licensed to Justin Parisi [NetApp Inc] until Tue Jun 22 12:34:48 2021

r-xr-xr-x --- 1102 10002 12 0 12d23h FG2/copy-file-finder

Filtered: 8173 did not match

Xcp command : xcp -l -match fileid== 4133624749 DEMO:/FG2
Stats : 8,174 scanned, 1 matched
Speed : 1.47 MiB in (2.10 MiB/s), 8.61 KiB out (12.3 KiB/s)
Total Time : 0s.
STATUS : PASSED

Hope this helps you find files in your NFS filesystem! If you have questions or comments, leave them below.

MacOS NFS Clients with ONTAP – Tips and Considerations

When I’m testing stuff out for customer deployments that I don’t work with a ton, I like to keep notes on the work so I can reference it later for TRs or other things. A blog is a great place to do that, as it might help other people in similar scenarios. This won’t be an exhaustive list, and it is certain to change over time and possibly make its way into a TR, but here we go…

ONTAP and Multiprotocol NAS

Before I get into the MacOS client stuff, we need to understand ONTAP multiprotocol NAS, as it can impact how MacOS clients behave.

In ONTAP, you can serve the same datasets to clients regardless of the NAS protocol they use (SMB or NFS). Many clients can actually do both protocols – MacOS is one of those clients.

The way ONTAP does multiprotocol NAS (and keeps permissions predictable) is via name mappings and volume “security styles,” which control what kind of ACLs are in use. TR-4887 goes into more detail on how all that works, but at a high level:

NTFS security styles use NTFS ACLs

SMB clients will map to UNIX users, and NFS clients will require mappings to valid Windows users so their access can be evaluated against the NTFS ACLs. Permissions are then controlled via those ACLs; chmod/chown from NFS clients will fail.

UNIX security styles use UNIX mode bits (rwx) and/or NFSv4 ACLs

SMB clients will require mappings to a valid UNIX user for permissions; NFS clients will only require mapping to a UNIX user name if using NFSv4 ACLs. SMB clients can do *some* permissions changes, but on a very limited basis.

Mixed security styles always use either UNIX or NTFS effective security styles, based on last ACL change

Basically, if an NFS client chmods a file, it switches to UNIX security style. If an SMB client changes ownership of the file, it flips back to NTFS security style. This allows you to change permissions from any client, but you need to ensure you have proper name mappings in place to avoid undesired permission behavior. Generally, we recommend avoiding mixed security styles in most cases.
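
Before troubleshooting any Mac permission weirdness, it's worth confirming which security style (and effective style) you're actually dealing with. The same commands used later in this post work for that; the volume name and path here are placeholders:

::*> vol show -vserver DEMO -volume vol1 -fields security-style
::*> vserver security file-directory show -vserver DEMO -path /vol1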

MacOS NFS Client Considerations

When using MacOS for an NFS client, there are a few things I’ve run into the past week or two while testing that you would want to know to avoid issues.

MacOS can be configured to use Active Directory LDAP for UNIX Identities

When you’re doing multiprotocol NAS (even if the clients will only do NFS, your volumes might have NTFS style permissions), you want to try to use a centralized name service like LDAP so that ONTAP, SMB clients and NFS clients all agree on who the users are, what groups they belong to, what numeric IDs they have, etc. If ONTAP thinks a user has a numeric ID of 1234 and the client thinks that user has a numeric ID of 5678, then you likely won’t get the access you expected. I wrote up a blog on configuring MacOS clients to use AD LDAP here:

MacOS clients can also be configured to use single sign on with AD and NFS home directories

Your MacOS clients – once added to AD in the blog post above – can now log in using AD accounts. There’s also an additional tab in the Directory Utility that allows you to auto-create home directories when a new user logs in to the MacOS client.

But you can also configure the auto-created home directories to leverage an NFS mount on the ONTAP storage system: configure the MacOS client to automount the home directories, then point the user’s home directory at that path. (This process varies based on Mac version; I’m on 10.14.4 Catalina.)

By default, the homedir path is /home in auto_master. We can use that.

Then, chmod the /etc/auto_home file to 644:

$ sudo chmod 644 /etc/auto_home

Create a volume on the ONTAP cluster for the homedirs and ensure it’s able to be mounted from the MacOS clients via the export policy rules (TR-4067 covers export policy rules):

::*> vol show -vserver DEMO -volume machomedirs -fields junction-path,policy
vserver volume      policy  junction-path
------- ----------- ------- ----------------
DEMO    machomedirs default /machomedirs

Create qtrees for each user and set the user/group and desired UNIX permissions:

qtree create -vserver DEMO -volume machomedirs -qtree prof1 -user prof1 -group ProfGroup -unix-permissions 755
qtree create -vserver DEMO -volume machomedirs -qtree student1 -user student1 -group group1 -unix-permissions 755

(For best results on Mac clients, use UNIX security styles.)

Then modify the automount /etc/auto_home file to use that path for homedir mounts. When a user logs in, the homedir will auto mount.

This is the line I used:

* -fstype=nfs nfs://demo:/machomedirs/&

And I also add the home mount
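
For reference, the /home entry in the default /etc/auto_master on macOS looks something like this (it can vary a bit by version):

/home    auto_home    -nobrowse,hidefromfinder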

Then apply the automount change:

$ sudo automount -cv
automount: /net updated
automount: /home updated
automount: /Network/Servers updated
automount: no unmounts

Now, when I cd to /home/username, it automounts that path:

$ cd /home/prof1
$ mount
demo:/machomedirs/prof1 on /home/prof1 (nfs, nodev, nosuid, automounted, nobrowse)

But if I want that path to be the new homedir path, I would need to log in as that user and then go to “System Preferences -> Users and Groups” and right click the user. Then select “Advanced Options.”

Then you’d need to restart. Once that happens, log in again and when you first open Terminal, it will use the NFS homedir path.

NOTE: You may want to test if the Mac client can manually mount the homedir before testing logins. If the client can’t automount the homedir on login things will break.
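
A quick manual test from Terminal looks something like this; the -o resvport option is usually needed because ONTAP's -mount-rootonly option defaults to enabled (mounts must come from a reserved port):

$ sudo mkdir /private/tmp/hometest
$ sudo mount -t nfs -o resvport demo:/machomedirs/prof1 /private/tmp/hometest
$ mount | grep machomedirs
$ sudo umount /private/tmp/hometest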

Alternately, you can create a user with the same name as the AD account and then modify the homedir path (this removes the need to log in). The Mac will pick up the correct UID, but the group ID may need to be changed.

If you use SMB shares for your home directories, it’s as easy as selecting “Use UNC path” in the User Experience area of Directory Utility (there’s no way to specify NFS here):

With new logins, the profile will get created in the qtree you created for the homedir (and you’ll go through the typical initial Mac setup screens):

# ls -la
total 28
drwxrwxr-x 6 student1 group1 4096 Apr 14 16:39 .
drwxr-xr-x 6 root root 4096 Apr 14 15:28 ..
drwx------ 2 student1 group1 4096 Apr 14 16:39 Desktop
drwx------ 2 student1 group1 4096 Apr 14 16:35 Downloads
drwxr-xr-x 25 student1 group1 4096 Apr 14 16:39 Library
-rw-r--r-- 1 student1 group1 4096 Apr 14 16:35 ._Library
drwx------ 4 student1 group1 4096 Apr 14 16:35 .Spotlight-V100

When you open terminal, it automounts the NFS home directory mount for that user and drops you right into your folder!

Mac NFS Considerations, Caveats, Issues

If you’re using NFS on Mac clients, there are two main things to remember:

  • Volumes/qtrees using UNIX security styles work best with NFS in general
  • Terminal/CLI works better than Finder in nearly all instances

If you have to/want to use Finder, or you have to/want to use NTFS security styles for multiprotocol, then there are some things you’d want to keep in mind.

  • If possible, connect the Mac client to the Active Directory domain and use LDAP for UNIX identities as described above.
  • Ensure your users/groups are all resolving properly on the Mac clients and ONTAP system. TR-4887 and TR-4835 cover some commands you can use to check users and groups, name mappings, group memberships, etc.
  • If you’re using NTFS security style volumes/qtrees and want the Finder to work properly for copies to and from the NFS mount, configure the NFS export policy rule to set -ntfs-unix-security-ops to “ignore” – Finder will bail out if ONTAP returns an error, so we want to silently fail those operations (such as SETATTR; see below).
  • When you open a file for reading/writing (such as a text file), Mac creates a ._filename file along with it. Depending on how many files you have in your volume, this can be an issue. For example, if you open 1 million files and Mac creates 1 million corresponding ._filename files, that starts to add up. Don’t worry! You’re not alone: https://apple.stackexchange.com/questions/14980/why-are-dot-underscore-files-created-and-how-can-i-avoid-them
  • If you’re using DFS symlinks, check out this KB: DFS links do not work on MAC OS client, with ONTAP 9.5 and symlinks enabled

I’ve also run into some interesting behaviors with Mac/Finder/SMB and junction paths in ONTAP, as covered in this blog:

Workaround for Mac Finder errors when unzipping files in ONTAP

One issue that I did a pretty considerable amount of analysis on was the aforementioned “can’t copy using Finder.” Here are the dirty details…

Permissions Error When Copying a File to a NFS Mount in ONTAP using Finder

In this case, a file copy worked using Terminal, but was failing with permissions errors when using Finder and complaining about the file already existing.

First, it wants a login (which shouldn’t be needed):

Then it says this:

If you select “Replace” this is the error:

If you select “Stop” it stops and you are left with an empty 0 byte “file” – so the copy failed.

If you select “Keep Both” the Finder goes into an infinite loop of 0 byte file creations. I stopped mine at around 2500 files (forced an unmount):

# ls -al | wc -l
1981
# ls -al | wc -l
2004
# ls -al | wc -l
2525

So why does that happen? Well, in a packet trace, I saw the following:

The SETATTR fails on CREATE (expected in NFS operations on NTFS security style volumes in ONTAP, but not expected for NFS clients as per RFC standards):

181  60.900209  x.x.x.x    x.x.x.y      NFS  226  V3 LOOKUP Call (Reply In 182), DH: 0x8ec2d57b/copy-file-finder << Mac NFS client checks if the file exists
182  60.900558  x.x.x.y   x.x.x.x      NFS  186  V3 LOOKUP Reply (Call In 181) Error: NFS3ERR_NOENT << does not exist, so let’s create it!
183  60.900633  x.x.x.x    x.x.x.y      NFS  238  V3 CREATE Call (Reply In 184), DH: 0x8ec2d57b/copy-file-finder Mode: EXCLUSIVE << creates the file
184  60.901179  x.x.x.y   x.x.x.x      NFS  362  V3 CREATE Reply (Call In 183)
185  60.901224  x.x.x.x    x.x.x.y      NFS  238  V3 SETATTR Call (Reply In 186), FH: 0x7b82dffd
186  60.901564  x.x.x.y   x.x.x.x      NFS  214  V3 SETATTR Reply (Call In 185) Error: NFS3ERR_PERM << fails setting attributes, which also fails the copy of the actual file data, so we have a 0 byte file

Then it REMOVES the file (since the initial operation fails) and creates it again, and SETATTR fails again. This is where that “Keep Both” loop behavior takes place.

229 66.995698 x.x.x.x x.x.x.y NFS 210 V3 REMOVE Call (Reply In 230), DH: 0x8ec2d57b/copy-file-finder
233 67.006816 x.x.x.x x.x.x.y NFS 226 V3 LOOKUP Call (Reply In 234), DH: 0x8ec2d57b/copy-file-finder
234 67.007166 x.x.x.y x.x.x.x NFS 186 V3 LOOKUP Reply (Call In 233) Error: NFS3ERR_NOENT
247 67.036056 x.x.x.x x.x.x.y NFS 238 V3 CREATE Call (Reply In 248), DH: 0x8ec2d57b/copy-file-finder Mode: EXCLUSIVE
248 67.037662 x.x.x.y x.x.x.x NFS 362 V3 CREATE Reply (Call In 247)
249 67.037732 x.x.x.x x.x.x.y NFS 238 V3 SETATTR Call (Reply In 250), FH: 0xc33bff48
250 67.038534 x.x.x.y x.x.x.x NFS 214 V3 SETATTR Reply (Call In 249) Error: NFS3ERR_PERM

With Terminal, it operates a little differently. Rather than bailing out after the SETATTR failure, it just retries it:

11 19.954145 x.x.x.x x.x.x.y NFS 226 V3 LOOKUP Call (Reply In 12), DH: 0x8ec2d57b/copy-file-finder
12 19.954496 x.x.x.y x.x.x.x NFS 186 V3 LOOKUP Reply (Call In 11) Error: NFS3ERR_NOENT
13 19.954560 x.x.x.x x.x.x.y NFS 226 V3 LOOKUP Call (Reply In 14), DH: 0x8ec2d57b/copy-file-finder
14 19.954870 x.x.x.y x.x.x.x NFS 186 V3 LOOKUP Reply (Call In 13) Error: NFS3ERR_NOENT
15 19.954930 x.x.x.x x.x.x.y NFS 258 V3 CREATE Call (Reply In 18), DH: 0x8ec2d57b/copy-file-finder Mode: UNCHECKED
16 19.954931 x.x.x.x x.x.x.y NFS 230 V3 LOOKUP Call (Reply In 17), DH: 0x8ec2d57b/._copy-file-finder
17 19.955497 x.x.x.y x.x.x.x NFS 186 V3 LOOKUP Reply (Call In 16) Error: NFS3ERR_NOENT
18 19.957114 x.x.x.y x.x.x.x NFS 362 V3 CREATE Reply (Call In 15)
25 19.959031 x.x.x.x x.x.x.y NFS 238 V3 SETATTR Call (Reply In 26), FH: 0x8bcb16f1
26 19.959512 x.x.x.y x.x.x.x NFS 214 V3 SETATTR Reply (Call In 25) Error: NFS3ERR_PERM
27 19.959796 x.x.x.x x.x.x.y NFS 238 V3 SETATTR Call (Reply In 28), FH: 0x8bcb16f1 << Hey let's try again and ask in a different way!
28 19.960321 x.x.x.y x.x.x.x NFS 214 V3 SETATTR Reply (Call In 27)

The first SETATTR tries to chmod to 700:

Mode: 0700, S_IRUSR, S_IWUSR, S_IXUSR

The retry uses 777. Since the file already shows as 777, it succeeds (because it was basically fooled):

Mode: 0777, S_IRUSR, S_IWUSR, S_IXUSR, S_IRGRP, S_IWGRP, S_IXGRP, S_IROTH, S_IWOTH, S_IXOTH

Since Finder bails on the error, setting the NFS server to return no error here for this export (ntfs-unix-security-ops ignore) on this client allows the copy to succeed. You can create granular rules in your export policy rules to just set that option for your Mac clients.
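
Setting that option is done per export policy rule, so you can scope it to just your Mac clients. A hedged example (the policy name and rule index are placeholders):

::*> vserver export-policy rule modify -vserver DEMO -policyname macs -ruleindex 1 -ntfs-unix-security-ops ignore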

Now, why do our files all show as 777?

Displaying NTFS Permissions via NFS

Because NFS doesn’t understand NTFS permissions, the job to translate user identities into valid access rights falls onto the shoulders of ONTAP. A UNIX user maps to a Windows user and then that Windows user is evaluated against the folder/file ACLs.

So “777” here doesn’t mean we have wide open access; we only have access based on the Windows ACL. Instead, it just means “the Linux client can’t view the access level for that user.” In most cases, this isn’t a huge problem. But sometimes, you need files/folders not to show 777 (like for applications that don’t allow 777).

In that case, you can control somewhat how NFS clients display NTFS ACLs in “ls” commands with the NFS server option ntacl-display-permissive-perms.

[-ntacl-display-permissive-perms {enabled|disabled}] - Display maximum NT ACL Permissions to NFS Client (privilege: advanced)
This optional parameter controls the permissions that are displayed to NFSv3 and NFSv4 clients on a file or directory that has an NT ACL set. When true, the displayed permissions are based on the maximum access granted by the NT ACL to any user. When false, the displayed permissions are based on the minimum access granted by the NT ACL to any user. The default setting is false.
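
Toggling it is an advanced privilege NFS server option; something along these lines (the SVM name is a placeholder):

::*> set advanced
::*> vserver nfs modify -vserver DEMO -ntacl-display-permissive-perms enabled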

The default setting of “false” is actually “disabled.” When that option is enabled, this is the file/folder view:

When that option is disabled (the default):

This option is covered in more detail in TR-4067, but it doesn’t require a remount to take effect. It may take some time for the access caches to clear to see the results, however.

Keep in mind that these listings are approximations of the access as seen by the current user. If the option is disabled, you see the minimum access; if the option is enabled, you see the maximum access. For example, the “test” folder above shows 555 when the option is disabled, but 777 when the option is enabled.

These are the actual permissions on that folder:

::*> vserver security file-directory show -vserver DEMO -path /FG2/test
Vserver: DEMO
File Path: /FG2/test
File Inode Number: 10755
Security Style: ntfs
Effective Style: ntfs
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
UNIX User Id: 1102
UNIX Group Id: 10002
UNIX Mode Bits: 777
UNIX Mode Bits in Text: rwxrwxrwx
ACLs: NTFS Security Descriptor
Control:0x8504
Owner:BUILTIN\Administrators
Group:NTAP\ProfGroup
DACL - ACEs
ALLOW-Everyone-0x1200a9-OI|CI (Inherited)
ALLOW-NTAP\prof1-0x1f01ff-OI|CI (Inherited)

Here are the expanded ACLs:

                     Owner:BUILTIN\Administrators
                     Group:NTAP\ProfGroup
                     DACL - ACEs
                       ALLOW-Everyone-0x1200a9-OI|CI (Inherited)
                          0... .... .... .... .... .... .... .... = Generic Read
                          .0.. .... .... .... .... .... .... .... = Generic Write
                          ..0. .... .... .... .... .... .... .... = Generic Execute
                          ...0 .... .... .... .... .... .... .... = Generic All
                          .... ...0 .... .... .... .... .... .... = System Security
                          .... .... ...1 .... .... .... .... .... = Synchronize
                          .... .... .... 0... .... .... .... .... = Write Owner
                          .... .... .... .0.. .... .... .... .... = Write DAC
                          .... .... .... ..1. .... .... .... .... = Read Control
                          .... .... .... ...0 .... .... .... .... = Delete
                          .... .... .... .... .... ...0 .... .... = Write Attributes
                          .... .... .... .... .... .... 1... .... = Read Attributes
                          .... .... .... .... .... .... .0.. .... = Delete Child
                          .... .... .... .... .... .... ..1. .... = Execute
                          .... .... .... .... .... .... ...0 .... = Write EA
                          .... .... .... .... .... .... .... 1... = Read EA
                          .... .... .... .... .... .... .... .0.. = Append
                          .... .... .... .... .... .... .... ..0. = Write
                          .... .... .... .... .... .... .... ...1 = Read

                       ALLOW-NTAP\prof1-0x1f01ff-OI|CI (Inherited)
                          0... .... .... .... .... .... .... .... = Generic Read
                          .0.. .... .... .... .... .... .... .... = Generic Write
                          ..0. .... .... .... .... .... .... .... = Generic Execute
                          ...0 .... .... .... .... .... .... .... = Generic All
                          .... ...0 .... .... .... .... .... .... = System Security
                          .... .... ...1 .... .... .... .... .... = Synchronize
                          .... .... .... 1... .... .... .... .... = Write Owner
                          .... .... .... .1.. .... .... .... .... = Write DAC
                          .... .... .... ..1. .... .... .... .... = Read Control
                          .... .... .... ...1 .... .... .... .... = Delete
                          .... .... .... .... .... ...1 .... .... = Write Attributes
                          .... .... .... .... .... .... 1... .... = Read Attributes
                          .... .... .... .... .... .... .1.. .... = Delete Child
                          .... .... .... .... .... .... ..1. .... = Execute
                          .... .... .... .... .... .... ...1 .... = Write EA
                          .... .... .... .... .... .... .... 1... = Read EA
                          .... .... .... .... .... .... .... .1.. = Append
                          .... .... .... .... .... .... .... ..1. = Write
                          .... .... .... .... .... .... .... ...1 = Read

So, prof1 has Full Control (7) and “Everyone” has Read (5). That’s where the minimum/maximum permissions show up. So you won’t get *exact* permissions here. If you want exact permission views, consider using UNIX security styles.

DS_Store files

Mac will leave these little files laying around as users browse shares. In a large environment, that can start to create clutter, so you may want to consider disabling the creation of these on network shares (such as NFS mounts), as per this:

http://hints.macworld.com/article.php?story=2005070300463515

If you have questions, comments or know of some other weirdness in MacOS with NFS, comment below!

How to Configure MacOS to Use Active Directory LDAP for UNIX users/groups

In NetApp ONTAP, it’s possible to serve data to NAS clients over SMB and NFS, including the same datasets. This is known as “multiprotocol NAS” and I cover the best practices for that in the new TR-4887:

TR-4887: Multiprotocol NAS Best Practices in ONTAP

When you do multiprotocol NAS in ONTAP (or really, any storage system), it’s usually best to leverage a centralized repository for user names, group names and numeric IDs that the NAS clients and NAS servers all point to (such as LDAP). That way, you get no surprises when accessing files and folders – users and groups get the expected ownership and permissions.

I cover LDAP in ONTAP in TR-4835:

TR-4835: LDAP in NetApp ONTAP

One of the more commonly implemented solutions for LDAP in environments that serve NFS and SMB is Active Directory. In these environments, you can use either native UNIX attributes or a 3rd-party utility, such as Centrify. (Centrify basically just uses the AD UNIX attributes and centralizes the management into a GUI – both are covered in the LDAP TR.)

While most Linux clients are fairly straightforward for LDAP integration, MacOS does things slightly differently. However, it’s pretty easy to configure.

Note: The steps may vary based on your environment configs and this covers just AD LDAP; not OpenLDAP/IDM or other Linux-based LDAP.


Step 1: Ensure the Mac is configured to use the same DNS as the AD domain

This is done via “Network” settings in System Preferences. DNS is important here because it will be used to query the AD domain when we bind the Mac for the Directory Services. In the following, I’ve set the DNS server IP and the search domain to my AD domain “NTAP.LOCAL”:

Then I tested the domain lookup in Terminal:
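
That test was just a couple of lookups from Terminal; something like this works (the domain name is an example):

$ nslookup ntap.local
$ dig -t SRV _ldap._tcp.ntap.local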

Step 2: Configure Directory Services to use Active Directory

The “Directory Utility” is what we’ll use here. It’s easiest to use the spotlight search to find it.

Essentially, this process adds the MacOS to the Active Directory domain (as you would with a Windows server or a Linux box with “realm join”) and then configures the LDAP client on the Mac to leverage specific attributes for LDAP queries.

In the above, I’ve used uidNumber and gidNumber as the attributes. You can also control/view these via the CLI command “dsconfigad”:

Configure domain access in Directory Utility on Mac
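
For example, dsconfigad -show displays the current bind settings, and the UID/GID attribute mappings can also be set from the CLI. The -uid/-gid flags below are from my reading of the man page, so verify them on your macOS version:

$ dsconfigad -show
$ sudo dsconfigad -uid uidNumber -gid gidNumber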

I can see in my Windows AD domain the machine account was created:

A few caveats here about the default behavior for this:

  • LDAP queries will be encrypted by default, so if you’re trying to troubleshoot via packet capture, you won’t see a ton of useful info (such as attributes used for queries). To disable this (mainly for troubleshooting purposes):
$ dsconfigad -packetsign disable -packetencrypt disable
  • MacOS uses sAMAccountName as the user name/uid value, so it should work fine with AD out of the gate
  • MacOS adds additional Mac-specific system groups to the “id” output (such as netaccounts:62, and GIDs 701/702); these may need to be added to LDAP, depending on file ownership
  • LDAP queries to AD from Mac will use the Global Catalog port 3268 by default when using Active Directory config (which I was able to see from a packet capture)

This use of the Global Catalog port can be problematic, as standard AD LDAP configurations don’t replicate the UNIX attributes to the Global Catalog by default; they’re served over the standard LDAP ports 389/636 instead. So you’d either have to configure Global Catalog attribute replication manually (covered in TR-4835) or modify the port the Mac uses for LDAP.
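
One way to check whether the UNIX attributes are actually available via the Global Catalog is to query both ports with ldapsearch and compare the results (the bind account is a placeholder; the DC name is the one used elsewhere in this post):

$ ldapsearch -LLL -x -H ldap://oneway.ntap.local:3268 -D "ldapuser@NTAP.LOCAL" -W -b "dc=ntap,dc=local" "(sAMAccountName=prof1)" uidNumber gidNumber
$ ldapsearch -LLL -x -H ldap://oneway.ntap.local:389 -D "ldapuser@NTAP.LOCAL" -W -b "dc=ntap,dc=local" "(sAMAccountName=prof1)" uidNumber gidNumber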

My AD domain does have the attributes that replicate via the Global Catalog, so the LDAP lookups work for me from the Mac:

Here’s what the prof1 user looks like from a CentOS client:

# id prof1
uid=1102(prof1) gid=10002(ProfGroup) groups=10002(ProfGroup),10000(Domain Users),48(apache-group),1101(group1),1202(group2),1203(group3)

This is how that user looks from the ONTAP CLI:

cluster::> set advanced; access-check authentication show-creds -node node1 -vserver DEMO -unix-user-name prof1 -list-name true -list-id true
UNIX UID: 1102 (prof1) <> Windows User: S-1-5-21-3552729481-4032800560-2279794651-1110 (NTAP\prof1 (Windows Domain User))
GID: 10002 (ProfGroup)
Supplementary GIDs:
10002 (ProfGroup)
10000 (Domain Users)
1101 (group1)
1202 (group2)
1203 (group3)
48 (apache-group)
Primary Group SID: S-1-5-21-3552729481-4032800560-2279794651-1111 NTAP\ProfGroup (Windows Domain group)
Windows Membership:
S-1-5-21-3552729481-4032800560-2279794651-1301 NTAP\apache-group (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-1106 NTAP\group2 (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-513 NTAP\DomainUsers (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-1105 NTAP\group1 (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-1107 NTAP\group3 (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-1111 NTAP\ProfGroup (Windows Domain group)
S-1-5-21-3552729481-4032800560-2279794651-1231 NTAP\local-group.ntap (Windows Alias)
S-1-18-2 Service asserted identity (Windows Well known group)
S-1-5-32-551 BUILTIN\Backup Operators (Windows Alias)
S-1-5-32-544 BUILTIN\Administrators (Windows Alias)
S-1-5-32-545 BUILTIN\Users (Windows Alias)
User is also a member of Everyone, Authenticated Users, and Network Users
Privileges (0x22b7):
SeBackupPrivilege
SeRestorePrivilege
SeTakeOwnershipPrivilege
SeSecurityPrivilege
SeChangeNotifyPrivilege

Most people aren’t going to want/be allowed to crack open ADSIEdit and modify schema attributes, so you’d want to change how MacOS queries LDAP to use port 389 or 636. I’m currently waiting on word of how to do that from Apple, so I’ll update when I get that info. If you are reading this and already know, feel free to add to the comments!

Step 3: Mount the NetApp NFS export and test multiprotocol access

NFS mounts to UNIX security style volumes are pretty straightforward, so we won’t cover that here. Where it gets tricky is when your ONTAP volumes are NTFS security style. When that’s the case, a UNIX -> Windows name mapping occurs when using NFS, as we need to make sure the user trying to access the NTFS permissions truly has access.

This is the basic process:

  • MacOS NFS client sends a numeric UID and GID to the NetApp ONTAP system (if NFSv3 is used)
  • If the volume is NTFS security, ONTAP will try to translate the numeric IDs into user names. The method depends on the cluster config; in this case, we’ll use LDAP.
  • If the numeric IDs map to user names/group names, then ONTAP uses those UNIX names and tries to find a valid Windows name with the same names; if none exist, ONTAP looks for explicit name mapping rules and a default Windows user; if none of those work then access is denied.

I have mounted a volume to my MacOS client that uses NTFS security style.

This is the volume in ONTAP:

::*> vol show -vserver DEMO -volume FG2 -fields security-style
vserver volume security-style
------- ------ --------------
DEMO       FG2           ntfs

MacOS user IDs start at 501, so my “admin” user ID is 501. This user doesn’t exist in LDAP.

ONTAP has a local user named “Podcast” but no valid Windows user mapping:

::*> set advanced; access-check authentication show-creds -node ontap9-tme-8040-01 -vserver DEMO -uid 501
(vserver services access-check authentication show-creds)
Vserver: DEMO (internal ID: 10)
Error: Get user credentials procedure failed
[ 33] Determined UNIX id 501 is UNIX user 'Podcast'
[ 34] Using a cached connection to ntap.local
[ 36] Trying to map 'Podcast' to Windows user 'Podcast' using
implicit mapping
[ 36] Successfully connected to ip 10.193.67.236, port 445
using TCP
[ 46] Successfully authenticated with DC oneway.ntap.local
[ 49] Could not find Windows name 'Podcast'
[ 49] Unable to map 'Podcast'. No default Windows user defined.
**[ 49] FAILURE: Name mapping for UNIX user 'Podcast' failed. No
** mapping found
Error: command failed: Failed to get user credentials. Reason: "SecD Error: Name mapping does not exist".

In addition, MacOS disables root by default:

https://support.apple.com/en-us/HT204012

So when I try to access this mount, it will attempt to use UID 501 and translate it to a UNIX user and then to a Windows user. Since ONTAP can’t translate UID 501 to a valid Windows user, this will fail and we’ll see it in the event log of the ONTAP CLI.

Here’s the access failure:

Here’s the ONTAP error:

ERROR secd.nfsAuth.noNameMap: vserver (DEMO) Cannot map UNIX name to CIFS name. Error: Get user credentials procedure failed
[ 33] Determined UNIX id 501 is UNIX user 'Podcast'
[ 34] Using a cached connection to ntap.local
[ 36] Trying to map 'Podcast' to Windows user 'Podcast' using implicit mapping
[ 36] Successfully connected to ip 10.193.67.236, port 445 using TCP
[ 46] Successfully authenticated with DC oneway.ntap.local
[ 49] Could not find Windows name 'Podcast'
[ 49] Unable to map 'Podcast'. No default Windows user defined.
**[ 49] FAILURE: Name mapping for UNIX user 'Podcast' failed. No mapping found
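
If you did want UID 501 to work, you'd either create UNIX and Windows users whose names map implicitly or add an explicit name mapping rule, along these lines (the Windows account and rule position are just examples):

::*> vserver name-mapping create -vserver DEMO -direction unix-win -position 1 -pattern Podcast -replacement NTAP\\student1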

When I “su” to a user that *does* have a valid Windows user (such as prof1), this works fine and I can touch a file and get the proper owner/group:

Note that in the above, we see “root:wheel” owned folders; just because root is disabled by default on MacOS doesn’t mean that MacOS isn’t aware of the user. Those folders were created on a separate NFS client.

Also, note in the above that the file shows 777 permissions; those are the effective permissions for the prof1 user on that file, and they are defined by Windows/NTFS. Here, they are set to “Everyone:Full Control” by way of file inheritance. For the next set of tests, the permissions were changed: ProfGroup (with prof1 and student1 as members) gets write access, Administrator gets “Full Control,” and Group10 (with only student1 as a member) gets read access.

In ONTAP, you can also control the way NTFS security style files are viewed on NFS clients with the NFS server option -ntacl-display-permissive-perms. TR-4887 covers that option in more detail.
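
If you want to see or change that setting, it's an NFS server option at advanced privilege. This is a rough sketch using the SVM from this post; confirm the values and default behavior for your release in TR-4887:

::*> set advanced
::*> vserver nfs show -vserver DEMO -fields ntacl-display-permissive-perms
::*> vserver nfs modify -vserver DEMO -ntacl-display-permissive-perms disabled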

Prof1 access view after permissions change (write access):

Student1 access view after permissions change (read access only via group ACL):

Read works, but write does not (by design!)

Student2 access view (write access defined by membership in ProfGroup):

Newuser1 access view (not a member of any groups in the ACL):

Newuser1 can create a new file, however, and it shows the proper owner. The permissions are 777 because of the inherited NTFS ACLs from the share:

As you can see, we will get the expected access for users and groups on Mac NFS using NTFS security styles, but the expected *views* won’t always line up. This is because there isn’t a direct correlation between NTFS and UNIX permissions, so we deliver an approximation. ONTAP doesn’t store ACLs for both NTFS and UNIX on disk; it only chooses one or the other. If you require exact NFS permission translation via Mac NFS, consider using UNIX security style and mode bits.

Addendum: Squashing root

In the event your MacOS users enable the root account and become the “root” user on the client, you can squash the root user to an anonymous user by using ONTAP’s export policies and rules. TR-4067 covers how to do this:

NFS Best Practices in ONTAP
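
As a quick sketch of what that looks like (the policy name, rule index, and anon UID below are placeholders; TR-4067 has the full guidance):

::*> vserver export-policy rule modify -vserver DEMO -policyname default -ruleindex 1 -superuser none -anon 65534

With -superuser none, any request arriving as UID 0 is squashed to the -anon user instead of being treated as root.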

Let me know if you have questions!

Brand new tech report: Multiprotocol NAS Best Practices in ONTAP

I don’t like to admit to being a procrastinator, but…

(Not actually a sloth)

Four years ago, I said this:

And people have asked about it a few times since then. To be fair, I did say “will be a ways out…”

In actuality, I started that TR in March of 2017. And then again in February of 2019. And then started all over when the pandemic hit, because what else did I have going on? 🙂

And it’s not like I haven’t done *stuff* in that time.

The trouble was, I do multiprotocol NAS every day, so I think I had writer’s block because I didn’t know where to start and the challenge of writing an entire TR on the subject without making it 100-200 pages like some of the others I’ve written was… daunting. But, it’s finally done. And the actual content is under 100 pages!

Topics include:

  • NFS and SMB best practices/tips
  • Name mapping explanations and best practices
  • Name service information
  • CIFS Symlink information
  • Advanced multiprotocol NAS concepts

Multiprotocol NAS Best Practices in ONTAP

If you have any comments/questions, feel free to comment!

How pNFS could benefit cloud architecture

** Edited on April 2, 2021 **
Funny story about this post. Someone pointed out I had some broken links, so I went in and edited the links. When I clicked “publish” it re-posted the article, which was actually a pointer back to an old DatacenterDude article I wrote from 2015 – which no longer exists. So I started getting *more* pings about broken links and plenty of people seemed to be interested in the content. Thanks to the power of the Wayback Machine, I was able to resurrect the post and decided to do some modernization while I was at it.

Yesterday, I was speaking with a customer who is a cloud provider. They were discussing how to use NFSv4 with Data ONTAP for one of their customers. As we were talking, I brought up pNFS and its capabilities. They were genuinely excited about what pNFS could do for their particular use case. In the cloud, the idea is to remove the overhead of managing infrastructure, so most cloud architectures are geared towards automation, limiting management, etc. In most cases, that’s great, but for data locality in NAS environments, we need a way to make those operations seamless, as well as providing the best possible security available. That’s where pNFS comes in.

So, let’s talk about what pNFS is and in what use cases you may want to use it.

What is pNFS?

pNFS is “parallel NFS,” which is a bit of a misnomer in ONTAP, as it doesn’t do parallel reads and writes across single files (i.e., striping). In the case of pNFS on Data ONTAP, NetApp currently supports file-level pNFS, so the storage object behind a pNFS layout is a flexible volume on an aggregate of physical disks.

pNFS in ONTAP establishes a metadata path to the NFS server and then splits off the data path to its own dedicated path. The client works with the NFS server to determine which path is local to the physical location of the files in the NFS filesystem via GETDEVICEINFO and LAYOUTGET metadata calls (specific to NFSv4.1 and later) and then dynamically redirects the path to be local. Think of it as ALUA for NFS.
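
Because pNFS rides on NFSv4.1, the SVM needs NFSv4.1 and pNFS enabled and the client has to mount with version 4.1 (older clients use -o minorversion=1, as shown later in this post). A minimal sketch; the SVM name, export path, and IP address are examples, and the exact NFS options may vary by ONTAP release:

cluster::> vserver nfs modify -vserver SVM -v4.0 enabled -v4.1 enabled -v4.1-pnfs enabled
nfs-client# mount -t nfs -o vers=4.1 10.63.3.68:/unix /unix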

The following graphic shows how that all takes place.

pNFS

pNFS defines the notion of a device that is generated by the server (that is, an NFS server running on Data ONTAP) and sent to the client. This process helps the client locate the data and send requests directly over the path local to that data. Data ONTAP generates one pNFS device per flexible volume. The metadata path does not change, so metadata requests might still be remote. In a Data ONTAP pNFS implementation, every data LIF is considered an NFS server, so pNFS only works if each node owns at least one data LIF per NFS SVM. Doing otherwise negates the benefits of pNFS, which is data locality regardless of which IP address a client connects to.

The pNFS device contains information about the following:

  • Volume constituents
  • Network location of the constituents

The device information is cached to the local node for improved performance.

To see pNFS devices in the cluster, use the following command in advanced privilege:

cluster::> set diag
cluster::*> vserver nfs pnfs devices cache show

pNFS Components

There are three main components of pNFS:

  • Metadata server
    • Handles all nondata traffic such as GETATTR, SETATTR, and so on
    • Responsible for maintaining metadata that informs the clients of the file locations
    • Located on the NetApp NFS server and established via the mount point
  • Data server
    • Stores file data and responds to READ and WRITE requests
    • Located on the NetApp NFS server
    • Inode information also resides here
  • Clients

pNFS is covered in further detail in NetApp TR-4067 (NFS) and TR-4571 (FlexGroup volumes).

How Can I Tell pNFS is Being Used?

To check whether pNFS is in use, you can watch the “pnfs_layout_conversions” statistics counter. If pnfs_layout_conversions is incrementing, pNFS is in use. Keep in mind that if you try to use pNFS with a single network interface, the data layout conversions won’t take place and pNFS won’t be used, even if it’s enabled.

cluster::*> statistics start -object nfsv4_1_diag
cluster::*> statistics show -object nfsv4_1_diag -counter pnfs_layout_conversions


Object: nfsv4_1_diag
Instance: nfs4_1_diag
Start-time: 4/9/2020 16:29:50
End-time: 4/9/2020 16:31:03
Elapsed-time: 73s
Scope: node1

    Counter                                                     Value
   -------------------------------- --------------------------------
   pnfs_layout_conversions                                      4053
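
You can also sanity-check from the client side. On a Linux client, something like the following (illustrative; flag support and counter names vary by kernel and nfsstat version) confirms the mount negotiated NFSv4.1 and shows whether layout operations are being sent:

nfs-client# nfsstat -m                         # look for vers=4.1 in the mount flags
nfs-client# nfsstat -c -4 | grep -A1 -i layout # layoutget counters should increment during I/O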

Gotta keep ’em separated!

One thing that is beneficial about the design of pNFS is that the metadata paths are separated from the read/write paths. Once a mount is established, the metadata path is set on the IP address used for mount and does not move without manual intervention. In Data ONTAP, that path could live anywhere in the cluster. (Up to 24 physical nodes with multiple ports on each node!)

That buys you resiliency, as well as flexibility to control where the metadata will be served.

The data path, however, is only established on reads and writes. That path is determined in a conversation between the client and server and is dynamic. Any time the physical location of a volume changes, the data path changes automatically, with no intervention needed from the clients or the storage administrator. So, unlike NFSv3 or even NFSv4.0, you no longer need to break the TCP connection (via unmount or LIF migrations) to make reads and writes local. And with NFSv4.x, the statefulness of the connection can be preserved.

That means more time for everyone. Data can be migrated in real time, non-disruptively, based on the storage needs of the client.
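
For example, a volume move is a single command, and pNFS clients pick up the new local data path on their next LAYOUTGET. The destination aggregate name below is hypothetical:

cluster::> volume move start -vserver SVM -volume unix -destination-aggregate aggr1_cluster02
cluster::> volume move show -vserver SVM -volume unix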

For example, I have a volume that lives on node cluster01 of my cDOT cluster:

cluster::> vol show -vserver SVM -volume unix -fields node
 (volume show)

vserver volume node
------- ------ --------------
SVM     unix   cluster01

I have data LIFs on each node in my cluster:

 cluster::> net int show -vserver SVM
(network interface show)

Logical     Status     Network                       Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
SVM
             data1      up/up     10.63.57.237/18    cluster01     e0c     true
             data2      up/up     10.63.3.68/18      cluster02     e0c     true
2 entries were displayed.

In the above list:

  • 10.63.3.68 will be my metadata path, since that’s where I mounted.
  • 10.63.57.237 will be my data path, as it is local to the physical node cluster01, where the volume lives.

When I mount, the TCP connection is established to the node where the data LIF lives:

nfs-client# mount -o minorversion=1 10.63.3.68:/unix /unix

cluster::> network connections active show -remote-ip 10.228.225.140
Vserver     Interface              Remote
Name        Name:Local             Port Host:Port              Protocol/Service
---------- ---------------------- ---------------------------- ----------------
Node: cluster02
SVM         data2:2049             nfs-client.domain.netapp.com:912
                                                                TCP/nfs

My metadata path is established to cluster02, but my data volume lives on cluster01.

On a basic cd and ls into the mount, all the traffic is seen on the metadata path. (stuff like GETATTR, ACCESS, etc):

83     6.643253      10.228.225.140       10.63.3.68    NFS    270    V4 Call (Reply In 85) GETATTR
85     6.648161      10.63.3.68    10.228.225.140       NFS    354    V4 Reply (Call In 83) GETATTR
87     6.652024      10.228.225.140       10.63.3.68    NFS    278    V4 Call (Reply In 88) ACCESS 
88     6.654977      10.63.3.68    10.228.225.140       NFS    370    V4 Reply (Call In 87) ACCESS

When I start I/O to that volume, the path gets updated to the local path by way of the pNFS calls specified in RFC 5661 (the NFSv4.1 spec):

28     2.096043      10.228.225.140       10.63.3.68    NFS    314    V4 Call (Reply In 29) LAYOUTGET
29     2.096363      10.63.3.68    10.228.225.140       NFS    306    V4 Reply (Call In 28) LAYOUTGET
30     2.096449      10.228.225.140       10.63.3.68    NFS    246    V4 Call (Reply In 31) GETDEVINFO
31     2.096676      10.63.3.68    10.228.225.140       NFS    214    V4 Reply (Call In 30) GETDEVINFO
  1. In LAYOUTGET, the client asks the server “where does this filehandle live?”
  2. The server responds with the device ID and physical location of the filehandle.
  3. Then, the client asks “which devices are available to me to access that physical data?” via GETDEVINFO.
  4. The server responds with the list of available devices/IP addresses.

Once that communication takes place (and note that the conversation occurs in sub-millisecond times), the client then establishes the new TCP connection for reads and writes:

32     2.098771      10.228.225.140       10.63.57.237  TCP    74     917 > nfs [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=937300318 TSecr=0 WS=128
33     2.098996      10.63.57.237  10.228.225.140       TCP    78     nfs > 917 [SYN, ACK] Seq=0 Ack=1 Win=33580 Len=0 MSS=1460 SACK_PERM=1 WS=128 TSval=2452178641 TSecr=937300318
34     2.099042      10.228.225.140       10.63.57.237  TCP    66     917 > nfs [ACK] Seq=1 Ack=1 Win=14720 Len=0 TSval=937300318 TSecr=2452178641

And we can see the connection established on the cluster to both the metadata and data locations:

cluster::> network connections active show -remote-ip 10.228.225.140
Vserver     Interface              Remote
Name        Name:Local             Port Host:Port              Protocol/Service
---------- ---------------------- ---------------------------- ----------------
Node: cluster01
SVM         data1:2049             nfs-client.domain.netapp.com:917
                                                               TCP/nfs
Node: cluster02 
SVM         data2:2049             nfs-client.domain.netapp.com:912 
                                                               TCP/nfs

Then we start our data transfer on the new path (data path 10.63.57.237):

38     2.099798      10.228.225.140       10.63.57.237  NFS    250    V4 Call (Reply In 39) EXCHANGE_ID
39     2.100137      10.63.57.237  10.228.225.140       NFS    278    V4 Reply (Call In 38) EXCHANGE_ID
40     2.100194      10.228.225.140       10.63.57.237  NFS    298    V4 Call (Reply In 42) CREATE_SESSION
42     2.100537      10.63.57.237  10.228.225.140       NFS    194    V4 Reply (Call In 40) CREATE_SESSION

157    2.106388      10.228.225.140       10.63.57.237  NFS    15994  V4 Call (Reply In 178) WRITE StateID: 0x0d20 Offset: 196608 Len: 65536
163    2.106421      10.63.57.237  10.228.225.140       NFS    182    V4 Reply (Call In 127) WRITE

If I do a chmod later, the metadata path is used (10.63.3.68):

341    27.268975     10.228.225.140       10.63.3.68    NFS    310    V4 Call (Reply In 342) SETATTR FH: 0x098eaec9
342    27.273087     10.63.3.68    10.228.225.140       NFS    374    V4 Reply (Call In 341) SETATTR | ACCESS

How do I make sure metadata connections don’t pile up?

When you have many clients mounting to an NFS server, you generally want to try to control which nodes those clients are mounting to. In the cloud, this becomes trickier to do, as clients and storage system management may be handled by the cloud providers. So, we’d want to have a noninteractive way to do this.

With ONTAP, you have two options to load balance TCP connections for metadata. You can use the tried-and-true DNS round-robin method, but the NFS server has no idea which IP addresses have been issued by the DNS server, so there are no guarantees the connections won’t pile up.

Another way to deal with connections is to leverage the ONTAP feature for on-box DNS load balancing. This feature allows storage administrators to set up a DNS forwarding zone on a DNS server (BIND, Active Directory or otherwise) to forward requests to the clustered Data ONTAP data LIFs, which can act as DNS servers complete with SOA records! The cluster will determine which IP address to issue to a client based on the following factors:

  • CPU load
  • overall node throughput

This helps ensure that any TCP connection that is established is done so in a logical manner based on the performance of the physical hardware.
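
Setting that up is mostly a matter of assigning the SVM's data LIFs to a DNS zone and letting them answer DNS queries, then delegating or forwarding that zone from your site DNS server. A rough sketch (the zone name and LIF names are examples; TR-4523 has the full procedure):

cluster::> network interface modify -vserver SVM -lif data1 -dns-zone nfs.ntap.local -listen-for-dns-query true
cluster::> network interface modify -vserver SVM -lif data2 -dns-zone nfs.ntap.local -listen-for-dns-query true

Clients then mount nfs.ntap.local:/unix instead of a specific IP address, and the cluster hands out the best LIF at mount time.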

I cover both types of DNS load balancing in TR-4523: DNS Load Balancing in ONTAP.

What about that data agility?

What’s great about pNFS is that it is a perfect fit for storage operating systems like ONTAP. NetApp and RedHat worked together closely on the protocol enhancement, and it shows in its overall implementation.

In ONTAP, there is the concept of non-disruptive volume moves. This feature gives storage administrators agility and flexibility in their clusters, as well as enabling service and cloud providers a way to charge based on tiers (pay as you grow!).

For example, if I am a cloud provider, I could have a 24-node cluster as a backend. Some HA pairs could be All-Flash FAS (AFF) nodes for high-performance/low latency workloads. Some HA pairs could be SATA or SAS drives for low performance/high capacity/archive storage. If I am providing storage to a customer that wants to implement high performance computing applications, I could sell them the performance tier. If those applications are only going to run during the summer months, we can use the performance tier, and after the jobs are complete, we can move them back to SATA/SAS drives for storage and even SnapMirror or SnapVault them off to a DR site for safekeeping. Once the job cycle comes back around, I can nondisruptively move the volumes back to flash. That saves the customer money, as they only pay for the performance they’re using, and that saves the cloud provider money since they can free up valuable flash real estate for other customers that need performance tier storage.

What happens when a volume moves in pNFS?

When a volume move occurs, the client is notified of the change via the pNFS calls I mentioned earlier. When the client attempts to OPEN the file for writing, the server responds, “that file is somewhere else now.”

220    24.971992     10.228.225.140       10.63.3.68    NFS    386    V4 Call (Reply In 221) OPEN DH: 0x76306a29/testfile3
221    24.981737     10.63.3.68    10.228.225.140       NFS    482    V4 Reply (Call In 220) OPEN StateID: 0x1077

The client says, “cool, where is it now?”

222    24.992860     10.228.225.140       10.63.3.68    NFS    314    V4 Call (Reply In 223) LAYOUTGET
223    25.005083     10.63.3.68    10.228.225.140       NFS    306    V4 Reply (Call In 222) LAYOUTGET
224    25.005268     10.228.225.140       10.63.3.68    NFS    246    V4 Call (Reply In 225) GETDEVINFO
225    25.005550     10.63.3.68    10.228.225.140       NFS    214    V4 Reply (Call In 224) GETDEVINFO

Then the client uses the new path to start writing, with no interaction needed.

251    25.007448     10.228.225.140       10.63.57.237  NFS    7306   V4 Call WRITE StateID: 0x15da Offset: 0 Len: 65536
275    25.007987     10.228.225.140       10.63.57.237  NFS    7306   V4 Call WRITE StateID: 0x15da Offset: 65536 Len: 65536

Automatic Data Tiering

If you have an on-premises storage system and want to save storage infrastructure costs by automatically tiering cold data to the cloud or to an on-premises object storage system, you could leverage NetApp FabricPool, which allows you to set tiering policies to chunk off cold blocks of data to more cost effective storage and then retrieve those blocks whenever they are requested by the end user. Again, we’re taking the guesswork and labor out of data management, which is becoming critical in a world driven towards managed services.
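
At a high level, FabricPool is an object store attached to an aggregate plus a tiering policy on the volume. A simplified sketch, assuming the object store configuration (bucket, endpoint, credentials) has already been created, with placeholder names (see TR-4598 for the real workflow):

cluster::> storage aggregate object-store attach -aggregate aggr1 -object-store-name my_s3_store
cluster::> volume modify -vserver SVM -volume unix -tiering-policy auto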

For more information on FabricPool:

TR-4598: FabricPool Best Practices

Tech ONTAP Podcast Episode 268 – NetApp FabricPool and S3 in ONTAP 9.8

What about FlexGroup volumes?

As of ONTAP 9.7, NFSv4.1 and pNFS are supported with FlexGroup volumes, which is an intriguing combination.

Part of the challenge of a FlexGroup volume is that you’re guaranteed to have remote I/O across a cluster network when you span multiple nodes. But since pNFS automatically redirects traffic to local paths, you can greatly reduce the amount of intracluster traffic.

A FlexGroup volume operates as a single entity, but is constructed of multiple FlexVol member volumes. Each member volume contains unique files that are not striped across volumes. When NFS operations connect to FlexGroup volumes, ONTAP handles the redirection of operations over a cluster network.

With pNFS, these remote operations are reduced, because the data layout mappings track the member volume locations and local network interfaces; they also redirect reads/writes to the local member volume inside a FlexGroup volume, even though the client only sees a single namespace. This approach enables a scale-out NFS solution that is more seamless and easier to manage, and it also reduces cluster network traffic and balances data network traffic more evenly across nodes.
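
If you want to try that combination, a FlexGroup volume is provisioned across aggregates on multiple nodes and then mounted over NFSv4.1 just like a FlexVol. A rough sketch with hypothetical aggregate names, size, and paths:

cluster::> volume create -vserver SVM -volume fg -aggr-list aggr1_node01,aggr1_node02 -aggr-list-multiplier 4 -size 20TB -junction-path /fg
nfs-client# mount -t nfs -o vers=4.1 10.63.3.68:/fg /fg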

FlexGroup pNFS differs a bit from FlexVol pNFS. Even though a FlexGroup volume load-balances file opens across metadata servers, pNFS uses a different algorithm: it tries to direct traffic to the node where the target file actually lives. If a node has multiple data LIFs in the SVM, connections can be made to each of those LIFs, but only one LIF of that set is used to direct traffic to the volumes behind each network interface.

What workloads should I use with pNFS?

pNFS is leveraging NFSv4.1 and later as its protocol, which means you get all the benefits of NFSv4.1 (security, Kerberos and lock integration, lease-based locks, delegations, ACLs, etc.). But you also get the potential negatives of NFSv4.x, such as higher overhead for operations due to the compound calls, state ID handling, locking, etc. and disruptions during storage failovers that you wouldn’t see with NFSv3 due to the stateful nature of NFSv4.x.

Performance can be severely impacted with some workloads, such as high file count workloads/high metadata workloads (think EDA, software development, etc). Why? Well, recall that pNFS is parallel for reads and writes – but the metadata operations still use a single interface for communication. So if your NFS workload is 80% GETATTR, then 80% of your workload won’t benefit from the localization and load balancing that pNFS provides. Instead, you’ll be using NFSv4.1 as if pNFS were disabled.

Plus, with millions of files, even if you’re doing heavy reads and writes, you’re redirecting paths constantly with pNFS (creating millions of GETDEVICEINFO and LAYOUTGET calls), which may prove more inefficient than simply using NFSv4.1 without pNFS.

pNFS also would need to be supported by the clients you’re using, so if you want to use it for something like VMware datastores, you’re out of luck (for now). VMware currently supports NFSv4.1, but not pNFS (they went with session trunking, which ONTAP does not currently support).

File-based pNFS works best with workloads that do a lot of sequential IO, such as databases, Hadoop/Apache Spark, AI training workloads, or other large file workloads, where reads and writes dominate the IO.

What about the performance?

In TR-4067, I did some basic performance testing on NFSv3 vs. NFSv4.1 for those types of workloads and the results were that pNFS stacked up nicely with NFSv3.

These tests were done using dd in parallel to simulate a sequential I/O workload. They aren’t intended to show the upper limits of the system (I used an AFF 8040 and some VM clients with low RAM and 1GB networks), but instead to show an apples-to-apples comparison of NFSv3 and NFSv4.1, with and without pNFS, using different wsize/rsize values. Be sure to do your own tests before implementing in production.

Note that the completion time for this workload using pNFS with a 1MB wsize/rsize was nearly 4.5 minutes faster than NFSv3 with the same 1MB wsize/rsize value.

Test (wsize/rsize setting)           Completion Time
NFSv3 (1MB)                          15m23s
NFSv3 (256K)                         14m17s
NFSv3 (64K)                          14m48s
NFSv4.1 (1MB)                        15m6s
NFSv4.1 (256K)                       12m10s
NFSv4.1 (64K)                        15m8s
NFSv4.1 (1MB; pNFS)                  10m54s
NFSv4.1 (256K; pNFS)                 12m16s
NFSv4.1 (64K; pNFS)                  13m57s
NFSv4.1 (1MB; delegations)           13m6s
NFSv4.1 (256K; delegations)          15m25s
NFSv4.1 (64K; delegations)           13m48s
NFSv4.1 (1MB; pNFS + delegations)    11m7s
NFSv4.1 (256K; pNFS + delegations)   13m26s
NFSv4.1 (64K; pNFS + delegations)    10m6s

The IOPS were lower overall for NFSv4.1 than NFSv3; that’s because NFSv4.1 combines operations into single compound packets, so NFSv4.1 is less chatty over the network than NFSv3. On the downside, the payloads are larger, so the NFS server has more processing to do for each packet, which can impact CPU; at higher op counts, that overhead can show up as a drop in performance.

Where NFSv4.1 beat out NFSv3 was with the latency and throughput – since we can guarantee data locality, we get benefits of fastpathing the reads/writes to the files, rather than the extra processing needed to traverse the cluster network.

Test (wsize/rsize setting)          Avg Read      Avg Read            Avg Write     Avg Write           Avg Ops
                                    Latency (ms)  Throughput (MB/s)   Latency (ms)  Throughput (MB/s)
NFSv3 (1MB)                         6             654                 27.9          1160                530
NFSv3 (256K)                        1.4           766                 2.9           1109                2108
NFSv3 (64K)                         0.2           695                 2.2           1110                8791
NFSv4.1 (1MB)                       6.5           627                 36.8          1400                582
NFSv4.1 (256K)                      1.4           712                 3.2           1160                2352
NFSv4.1 (64K)                       0.1           606                 1.2           1310                7809
NFSv4.1 (1MB; pNFS)                 3.6           840                 26.8          1370                818
NFSv4.1 (256K; pNFS)                1.1           807                 5.2           1560                2410
NFSv4.1 (64K; pNFS)                 0.1           835                 1.9           1490                8526
NFSv4.1 (1MB; delegations)          5.1           684                 32.9          1290                601
NFSv4.1 (256K; delegations)         1.3           648                 3.3           1140                1995
NFSv4.1 (64K; delegations)          0.1           663                 1.3           1000                7822
NFSv4.1 (1MB; pNFS + delegations)   3.8           941                 22.4          1110                696
NFSv4.1 (256K; pNFS + delegations)  1.1           795                 3.3           1140                2280
NFSv4.1 (64K; pNFS + delegations)   0.1           815                 1             1170                11130

For high file count workloads, NFSv3 did much better. This test created 800,000 small files (512K) in parallel. For this high metadata workload, NFSv3 completed 2x as fast as NFSv4.1. pNFS added some time savings versus NFSv4.1 without pNFS, but overall, we can see where we may run into problems with this type of workload. Future releases of ONTAP will get better with this type of workload using NFSv4.1 (these tests were on 9.7).

Test (wsize/rsize setting)   Completion Time   CPU %    Average Throughput (MB/s)   Average Total IOPS
NFSv3 (1MB)                  17m29s            32%      351                         7696
NFSv3 (256K)                 16m34s            34.5%    372                         8906
NFSv3 (64K)                  16m11s            39%      394                         13566
NFSv4.1 (1MB)                38m20s            26%      167                         7746
NFSv4.1 (256K)               38m15s            27.5%    167                         7957
NFSv4.1 (64K)                38m13s            31%      172                         10221
NFSv4.1 pNFS (1MB)           35m44s            27%      171                         8330
NFSv4.1 pNFS (256K)          35m9s             28.5%    175                         8894
NFSv4.1 pNFS (64K)           36m41s            33%      171                         10751

Enter nconnect

One of the keys to pNFS performance is parallelization of operations across volumes, nodes, etc. But it doesn’t necessarily parallelize network connections across these workloads. That’s where the new NFS mount option nconnect comes in.

The purpose of nconnect is to open multiple TCP connections for a single NFS mount point on a client. This helps increase parallelism and performance for NFS mounts – particularly for single-client workloads. Details about nconnect and how it can increase performance for NFS in Cloud Volumes ONTAP can be found in the blog post The Real Baseline Performance Story: NetApp Cloud Volumes Service for AWS. ONTAP 9.8 offers official support for the use of nconnect with NFS mounts, provided the NFS client also supports it. If you would like to use nconnect, check whether your client version provides it and use ONTAP 9.8 or later. ONTAP 9.8 and later supports nconnect by default, with no option needed on the storage side.

Client support for nconnect varies, but the latest RHEL 8.3 release supports it, as do the latest Ubuntu and SLES releases. Be sure to verify if your OS vendor supports it.
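
On a client that supports it, nconnect is just another mount option; for example (the IP address, export path, and value of 16 are illustrative):

nfs-client# mount -t nfs -o vers=4.1,nconnect=16 10.63.3.68:/flexgroup /mnt/flexgroup
nfs-client# grep nfs /proc/mounts    # on supporting kernels, nconnect should appear in the mount options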

Our Customer Proof of Concept lab (CPOC) did some benchmarking of nconnect with NFSv3 and pNFS using a sequential I/O workload on ONTAP 9.8 and saw some really promising results.

  • Single NVIDIA DGX-2 client
  • Ubuntu 20.04.2
  • NFSv4.1 with pNFS and nconnect
  • AFF A400 cluster
  • NetApp FlexGroup volume
  • 256K wsize/rsize
  • 100GbE connections
  • 32 x 1GB files

In these tests, the following throughput results were seen. Latency for both was sub-1ms.

Test            Bandwidth
NFSv3           10.2 GB/s
NFSv4.1/pNFS    21.9 GB/s

Both NFSv3 and NFSv4.1 used nconnect=16.

In these tests, NFSv4.1 with pNFS doubled the performance for the sequential read workload at 250us latency. Since the files were 1GB in size, the reads were almost entirely from the controller RAM, but it’s not unreasonable to see that as the reality for a majority of workloads, as most systems have enough RAM to see similar results.

David Arnette and I discuss it a bit in this podcast:

Episode 283 – NetApp ONTAP AI Reference Architectures

Note: Benchmark tests such as SAS iotest will purposely recommend setting file sizes larger than the system RAM to avoid any caching benefits and instead will measure the network bandwidth of the transfer. In real world application scenarios, RAM, network, storage and CPU are all working together to create the best possible performance scenarios.

pNFS Best Practices with ONTAP

pNFS best practices in ONTAP don’t differ much from normal NAS best practices, but here are a few to keep in mind. In general:

  • Use the latest supported client OS version.
  • Use the latest supported ONTAP patch release.
  • Create a data LIF per node, per SVM to ensure data locality for all nodes (see the sketch after this list).
  • Avoid using LIF migration on the metadata server data LIF, because NFSv4.1 is a stateful protocol and LIF migrations can cause brief outages as the NFS states are reestablished.
  • In environments with multiple NFSv4.1 clients mounting, balance the metadata server connections across multiple nodes to avoid piling up metadata operations on a single node or network interface.
  • If possible, avoid using multiple data LIFs on the same node in an SVM.
  • In general, avoid mounting NFSv3 and NFSv4.x on the same datasets. If you can’t avoid this, check with the application vendor to ensure that locking can be managed properly.
  • If you’re using NFS referrals with pNFS, keep in mind that referrals establish a local metadata server, but data I/O is still redirected. With FlexGroup volumes, the member volumes might live on multiple nodes, so NFS referrals aren’t of much use. Instead, use DNS load balancing to spread out connections.
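
As a sketch of the “data LIF per node” item above (the LIF name, node, port, and addresses are placeholders, and older releases use -role/-data-protocol instead of a service policy):

cluster::> network interface create -vserver SVM -lif data3 -service-policy default-data-files -home-node cluster02 -home-port e0d -address 10.63.57.238 -netmask 255.255.192.0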

Drop any questions into the comments below!

Behind the Scenes – Episode 279: NetApp and Oracle – What’s New?

Welcome to Episode 279, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, Jeff Steiner (@TweetOfSteiner) joins us to drop some Oracle knowledge on us, as well as talk about how to best implement Oracle on NetApp storage and what’s new for Oracle on NFS.

For more information:

Podcast Transcriptions

If you want an AI transcribed copy of the episode, check it out here (just set expectations accordingly):

Episode 279: NetApp and Oracle – What’s New? (Transcript)

Just use the search field to look for words you want to read more about. (For example, search for “storage”)

Or, click the “view transcript” button:

Be sure to give us feedback on the transcription in the comments here or via podcast@netapp.com! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Behind the Scenes – Episode 270: NetApp ONTAP File Systems Analytics

Welcome to Episode 270, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we do a deep dive into the new ONTAP 9.8 feature in System Manager – File Systems Analytics – and why it’s improving the lives of storage administrators.

Podcast Transcriptions

If you want an AI transcribed copy of the episode, check it out here (just set expectations accordingly):

Episode 270: NetApp ONTAP File Systems Analytics – Transcript

Just use the search field to look for words you want to read more about. (For example, search for “storage”)

Or, click the “view transcript” button:

Be sure to give us feedback on the transcription in the comments here or via podcast@netapp.com! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss