Behind the Scenes Episode 289 – NetApp and Rubrik: Better Together

Welcome to Episode 289, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week, we discuss the latest news in the NetApp/Rubrik partnership and how Rubrik works with NetApp, featuring Ben Kendall (Alliances Technical Partner Manager, Americas, at Rubrik), PF Guglielmi (Alliances Field CTO at Rubrik, @pfguglielmi) and Chris Maino (Manager, Americas Solutions Architects, at NetApp).

For more information:

Podcast Transcriptions

If you want a searchable transcript of the episode, check it out here (just set expectations accordingly):

Episode 289 – NetApp and Rubrik: Better Together – Transcript

Just use the search field to look for words you want to read more about. (For example, search for “storage”)


Be sure to give us feedback on the transcription in the comments here (or let us know if you need a full text transcript – Gong does not support sharing those yet)! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community


Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

Running PowerShell from Linux to Query SMB Shares in NetApp ONTAP

I recently got a question about how to perform the following scenario:

  • Run a script from Linux that calls PowerShell on a remote Windows client using Kerberos
  • Remote Windows client uses PowerShell to authenticate against an ONTAP SMB share

That’s some Inception-style IT work.


The issue they were having was that the credentials used to connect to the Windows client weren’t passing through to the ONTAP system. As a result, they’d get “Access Denied” in their script when attempting to access the share. I figured out how to get this working, and rather than let that knowledge rot in the far reaches of my brain, I’m writing it up – in my Google hunt, I found that lots of people had similar issues with Linux PowerShell (not necessarily connecting to ONTAP).

This is a known issue with some workarounds listed here:

Making the second hop in PowerShell Remoting

One workaround is to use “Resource-based Kerberos constrained delegation,” where you basically tell the 3rd server to accept delegated credentials from the 2nd server via the PrincipalsAllowedToDelegateToAccount parameter in the ADComputer cmdlets. We’ll cover that in a bit, but first…

WAIT. I can run PowerShell on Linux???

Well, yes! And this article tells you how to install it:

Installing PowerShell on Linux
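As a rough sketch, the install on an RPM-based distro looks something like this (the repo URL below is an assumption based on Microsoft’s package repository layout – follow the article above for your exact distro and version):

```
# curl -sSL -o packages-microsoft-prod.rpm https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
# rpm -i packages-microsoft-prod.rpm
# yum install -y powershell
# pwsh
```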

Now, the downside is that not all PowerShell modules are available from Linux (for example, ActiveDirectory isn’t currently available). But it works!

PS /> New-PSSession -ComputerName COMPUTER -Credential administrator@NTAP.LOCAL -Authentication Kerberos

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **

Id Name     Transport ComputerName ComputerType  State  ConfigurationName    Availability
-- ----     --------- ------------ ------------  ----- --------------------- ------------
9 Runspace9 WSMan     COMPUTER     RemoteMachine Opened Microsoft.PowerShell Available

In that document, they don’t list CentOS/RHEL 8, which can be problematic, as you might run into some issues with the SSL libraries (this blog calls one of those issues out, as well as a few others).

On my CentOS 8.3 box, I ran into this issue:

New-PSSession: This parameter set requires WSMan, and no supported WSMan client library was found. WSMan is either not installed or unavailable for this system.

Using the guidance from the blog listed earlier, I found that there were a couple of files not found:

# ldd /opt/microsoft/powershell/7/
… => not found => not found

That blog lists 1.0.2 as the version that’s needed and looks to be using a different Linux flavor. You can find the files you need (and where they live) with:

# find / -name ''

Then you can use the symlink workaround and those files show up properly with ldd:

ln -s
ln -s
ldd /opt/microsoft/powershell/7/
... => /lib64/ (0x00007f41ce3fc000) => /lib64/ (0x00007f41cdf16000)
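Filled in with concrete filenames, the workaround looks something like the following. (The versioned library names here are assumptions – they vary by distro, so use the find command above to confirm what’s actually on your system.)

```
# ln -s /usr/lib64/libssl.so.1.0.2k /usr/lib64/libssl.so.1.0.0
# ln -s /usr/lib64/libcrypto.so.1.0.2k /usr/lib64/libcrypto.so.1.0.0
```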

However, authenticating with the server also requires an additional step.

Authenticating Linux PowerShell with Windows

You can authenticate to Windows servers with Linux PowerShell using the following methods:

  • Basic
  • Default
  • Kerberos
  • Credssp
  • Digest
  • Negotiate

Here’s how each auth method works (or doesn’t) without doing anything else.

  • Basic – New-PSSession: Basic authentication is not supported over HTTP on Unix.
  • Kerberos – Authorization failed Unspecified GSS failure.
  • Negotiate – Authorization failed Unspecified GSS failure.

(Auth methods with Linux PowerShell and their results with no additional configs.)

In several places, I’ve seen the recommendation to install gssntlmssp on the Linux client, which works fine for “Negotiate” methods:

PS /> New-PSSession -ComputerName SERVER -Credential administrator@NTAP.LOCAL -Authentication Negotiate

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **

Id Name Transport ComputerName ComputerType State ConfigurationName Availability
-- ---- --------- ------------ ------------ ----- ----------------- ------------
2 Runspace2 WSMan SERVER       RemoteMachine Opened Microsoft.PowerShell Available

But not for Kerberos:

PS /> New-PSSession -ComputerName SERVER -Credential administrator@NTAP.LOCAL -Authentication Kerberos

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **

New-PSSession: [SERVER] Connecting to remote server SERVER failed with the following error message : Authorization failed Unspecified GSS failure. Minor code may provide more information Configuration file does not specify default realm For more information, see the about_Remote_Troubleshooting Help topic.

The simplest way to get around this is to add the Linux client to the Active Directory domain. Then you can use Kerberos for authentication to the client (at least with a user that has the correct permissions, such as a domain administrator).

*Alternately, you could do all this manually, which I don’t recommend.

**For non-AD KDCs, config methods will vary.

# realm join NTAP.LOCAL
Password for Administrator:

# pwsh
PowerShell 7.1.3
Copyright (c) Microsoft Corporation.
Type 'help' to get help.

PS /> New-PSSession -ComputerName SERVER -Credential administrator@NTAP.LOCAL -Authentication Kerberos

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **********

 Id Name            Transport ComputerName    ComputerType    State         ConfigurationName     Availability
 -- ----            --------- ------------    ------------    -----         -----------------     ------------
  1 Runspace1       WSMan     SERVER          RemoteMachine   Opened        Microsoft.PowerShell     Available

So, now that we know we can establish a session to the Windows server where we want to leverage PowerShell, what’s next?

Double-hopping with Kerberos using Delegation

One way to use Kerberos across multiple servers (including NetApp ONTAP) is to leverage the PrincipalsAllowedToDelegateToAccount parameter.

The script I’ll use does a basic “Get-Content” call to a file in an SMB/CIFS share in ONTAP (similar to “cat” in Linux).
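The original script isn’t shown here, but a minimal sketch of it might look like this (server, share, and file names are placeholders from the examples in this post):

```shell
# Write a hypothetical test.ps1 that opens a remote session on the
# Windows client and reads a file from the ONTAP SMB share.
cat > test.ps1 <<'EOF'
$cred = Get-Credential administrator@NTAP.LOCAL
$session = New-PSSession -ComputerName SERVER -Credential $cred -Authentication Kerberos
Invoke-Command -Session $session -ScriptBlock {
    Test-Path '\\DEMO\files\file-symlink.txt'
    Get-Content '\\DEMO\files\file-symlink.txt'
}
Remove-PSSession $session
EOF
```

Then you run it with “pwsh test.ps1” as shown below.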

If I don’t set the PrincipalsAllowedToDelegateToAccount parameter, a credential passed from Linux PowerShell to a Windows server to ONTAP will use Kerberos -> NTLM (with a NULL user) for the authentication and this is the end result:

# pwsh test.ps1

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **********

Test-Path: Access is denied
Get-Content: Access is denied
Get-Content: Cannot find path '\\DEMO\files\file-symlink.txt' because it does not exist.

In a packet capture, we can see the session setup uses NULL with NTLMSSP:

14    0.031496   x.x.x.x   x.x.x.y      SMB2 289  Session Setup Request, NTLMSSP_AUTH, User: \

And here’s what the ACCESS_DENIED looks like:

20    0.043026   x.x.x.x   x.x.x.y      SMB2 166  Tree Connect Request Tree: \\DEMO\files
21    0.043217   x.x.x.y   x.x.x.x      SMB2 131  Tree Connect Response, Error: STATUS_ACCESS_DENIED

To use Kerberos passthrough/delegation, I run this PowerShell command to set the parameter on the destination (ONTAP) CIFS server:

Set-ADComputer -Identity DEMO -PrincipalsAllowedToDelegateToAccount SERVER$
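To confirm the setting took, you can read the attribute back with the ActiveDirectory module (run this from a Windows host, since that module isn’t available in Linux PowerShell):

```
Get-ADComputer -Identity DEMO -Properties PrincipalsAllowedToDelegateToAccount | Select-Object -ExpandProperty PrincipalsAllowedToDelegateToAccount
```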

That allows the SMB session to ONTAP to set up using Kerberos auth:

2603 26.877660 x.x.x.x x.x.x.y SMB2 2179 Session Setup Request
2673 26.909735 x.x.x.y x.x.x.x SMB2 326 Session Setup Response
supportedMech: 1.2.840.48018.1.2.2 (MS KRB5 - Microsoft Kerberos 5)

And the tree connect succeeds (you may need to run klist purge on the Windows client):

2674 26.910117 x.x.x.x x.x.x.y SMB2 154 Tree Connect Request Tree: \demo\files
2675 26.910630 x.x.x.x x.x.x.y SMB2 138 Tree Connect Response

This is the result from the Linux client:

# pwsh test.ps1

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **********

This is a file symlink.

So, how do we work around this issue if we can’t delegate Kerberos?

Using the NULL user and NTLM

Remember when I said the request without Kerberos delegation used the NULL user and NTLMSSP?

14    0.031496   x.x.x.x   x.x.x.y      SMB2 289  Session Setup Request, NTLMSSP_AUTH, User: \ 

The reason we saw “Access Denied” to the ONTAP CIFS/SMB share is because ONTAP disallows the NULL user by default. However, in ONTAP 9.0 and later, you can enable NULL user authentication, as described in this KB article:

How to grant access to NULL (Anonymous) user in Clustered Data ONTAP

Basically, it’s a simple two-step process:

  1. Create a name mapping rule for ANONYMOUS LOGON
  2. Set the Windows default NULL user in the CIFS options

Here’s how I did it in my SVM (address is the Windows client IP):

::*> vserver name-mapping create -vserver DEMO -direction win-unix -position 3 -pattern "ANONYMOUS LOGON" -replacement pcuser -address x.x.x.x/24

Then, set the default NULL user in the CIFS options. This needs to be a valid Windows user:

::*> cifs options modify -vserver DEMO -win-name-for-null-user NTAP\powershell-user

You can verify ONTAP can find it with:

::*> access-check authentication translate -node node1 -vserver DEMO -win-name NTAP\powershell-user

Once that’s done, we authenticate with NTLM and get access with the NULL user:

14 0.009012 x.x.x.x x.x.x.y SMB2 289 Session Setup Request, NTLMSSP_AUTH, User: \
27 0.075264 x.x.x.x x.x.x.y SMB2 166 Tree Connect Request Tree: \DEMO\files
28 0.075747 x.x.x.y x.x.x.x SMB2 138 Tree Connect Response

And the Linux client is able to run the PowerShell calls:

# pwsh test.ps1

PowerShell credential request
Enter your credentials.
Password for user administrator@NTAP.LOCAL: **********

This is a file symlink.

Questions? Comments? Add them below!

Mounting ONTAP CIFS/SMB shares with Linux – Guidelines and tips

Every now and then, the question of mounting CIFS/SMB shares in ONTAP using Linux clients comes up. Many people might consider this configuration to be a crime against nature.


If you’re asking yourself “why would someone want to do this,” there are a few reasons.

  • Lock handling – SMB locks use oplocks; NFS locks use shared locks or delegations, depending on the NFS version and options being used. Generally this is fine, but in some cases, the style of lock might create complications with some applications. For example, Kafka on NFS has some issues with how NFS handles deletions when a file is locked.
  • Legacy copiers/scanners – Many companies use copiers/scanners to create image files that redirect to a NAS share in their environment. Many of those copiers use old SMB clients and are no longer under support, so they can’t be upgraded. And many of those scanners/copiers are expensive, so people aren’t interested in buying newer ones just to get newer SMB versions.
  • “How we’ve always done it” – This is a common reason that doesn’t always have technical logic associated with it, but you still have to roll with it.

There are other reasons out there as well, I’m sure. But this post will attempt to group the considerations, support statements and issues you might see in case someone else has that question.

Mounting SMB shares in ONTAP – General Guidelines

In 7-Mode ONTAP, mounting SMB shares with clients like AIX had no issues. This was because 7-Mode supported a lot of old SMB technology concepts that newer versions of ONTAP have deprecated. Even though SMB shares on Linux worked in 7-Mode, there was never an official supported configuration for it.

NetApp ONTAP has traditionally only supported Apple OS and Windows OS for CIFS/SMB workloads.

You can find the supported clients in the Interoperability Matrix here:

With newer versions of ONTAP, you *can* get SMB/CIFS working properly, provided you take the following into account:

  • CIFS/SMB clients should support UNICODE for CIFS/SMB communication. Clustered ONTAP doesn’t support non-UNICODE in any release and never will. Newer CIFS/SMB clients should support UNICODE. AIX is a good example of an older UNIX client that has some challenges using SMB because it still uses non-UNICODE in some client installs.
  • CIFS/SMB versions mounting should reflect the supported SMB versions in ONTAP. ONTAP has deprecated SMB1, so you’d need to mount using SMB2 or SMB3.
  • CIFS/SMB features enabled in ONTAP should reflect the feature support of the CIFS/SMB Linux clients. For example, my CentOS 7 Linux Samba client doesn’t support large MTU, SMB3 encryption, multichannel, etc. That can cause failures in the mounts (for an example, see below).
  • If using Kerberos for the CIFS/SMB mounts, be sure the CIFS/SMB ONTAP server has a valid SPN for the CIFS service. The name should match the mount name. For example, if your CIFS/SMB server is named “CIFS” then the SPN would be cifs/

Common Issues when Mounting CIFS/SMB

One of the main issues people run into with SMB mounts in Linux (provided they follow the guidelines above when using ONTAP) is using the wrong command syntax. For example, when you specify the share path, you use either \\\\name\\share (the extra slashes tell Linux that each \ is a literal character rather than an escape) or //name/share.
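A quick way to sanity-check what the shell will actually pass to the mount command is printf (server/share names here are placeholders):

```shell
# printf shows the argument after the shell strips the escapes.
printf '%s\n' \\\\DEMO\\files   # -> \\DEMO\files
printf '%s\n' //DEMO/files      # -> //DEMO/files
```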

The following NetApp KB article does a good job of giving command examples for the right way to mount SMB:

How to access CIFS from Linux machine using SAMBA

Usually the command will tell you if the wrong option is being used, but sometimes the errors are less than helpful. These are a few errors I ran into.

Incorrect Mount Option

This was pretty self-explanatory. First, I didn’t have the cifs-utils installed. Second, I used the wrong command syntax.

Mount point does not exist

Again, self-explanatory. The directory you’re trying to mount to needs to exist.

No such file or directory (when mounting)

This means you specified the wrong share path on the server. Check your share paths and junction-paths and try again.

Host is down

This one was a tricky error, as it suggests that the server was not up. In some cases, that may truly be the issue. But it wasn’t in my case.

# mount -t cifs -o user=cifsuser \\\\DEMO\\nas /mnt/nas
Password for cifsuser@\DEMO\nas: **********
mount error(112): Host is down

But, in reality, the issue was that I didn’t specify the SMB version and it tried to use SMBv1.0 by default.

Specifying the SMB version (-o vers=3.0) got past that issue.

Required key not available

This is a Kerberos specific error. In my case, the cifs/ SPN didn’t exist for the hostname I used in the UNC path. You can see that in a packet capture.

Permission denied

This error is fairly useless in both Windows and Linux in a lot of cases – mainly because it can mean a variety of things. Sometimes, it really is an access issue (such as share or file-level permissions). But in my testing, I also ran into this issue when I had unsupported SMB features enabled on my CIFS server in ONTAP.

# mount -t cifs -o vers=3.0,user=administrator,domain=NTAP.LOCAL //DEMO/nas /mnt/nas
Password for administrator@//DEMO/nas: **********
mount error(13): Permission denied

In a packet trace, I could see the client telling me what it supported:


But the reply didn’t really mention the issue. So I made sure to disable the following CIFS/SMB features:

  • SMB3 encryption (cifs security modify)
  • Large MTU and SMB Multichannel (cifs options modify)

Once I did that, I was able to mount.

[root@centos7 /]# mount -t cifs -o vers=3.0,user=administrator,domain=NTAP.LOCAL //DEMO/nas /mnt/nas
Password for administrator@//DEMO/nas: **********
[root@centos7 /]# touch /mnt/nas/smbfile
[root@centos7 /]# ls -la /mnt/nas
total 1
drwxr-xr-x 2 root root 0 Mar 27 14:36 .
drwxr-xr-x. 9 root root 97 Mar 27 10:17 ..
-rwxr-xr-x 1 root root 0 Mar 27 14:36 smbfile

For Kerberos, you would just specify sec=krb5.

# mount -t cifs -o sec=krb5,vers=3.0,user=administrator,domain=NTAP.LOCAL //DEMO/nas /mnt/nas

Stale file handle errors when crossing junction paths

In some cases, if you mount volumes to volumes (via junction-path), newer Linux clients may report different inode numbers for the junctioned volume when using a command like “ls -i” than when using “ls -id”. This can create potential access problems from clients, such as “stale file handle” errors.

For example, this CentOS 8.3 client shows different inode numbers for the same volume:

# ls -i
8107029540347314273 mountedvolume
# ls -id /mnt/cifs/mountedvolume/
2744251190362505280 /mnt/cifs/mountedvolume/

The way around this issue is to mount with the option “actimeo=0”. Newer SMB clients seem to cache inode information, which can create conflicts; setting actimeo=0 prevents that caching from taking place. Since this disables attribute caching, you might see slower overall performance, but it will fix the stale file handle issue. Alternately, you may want to mount with the “noserverino” option instead, which doesn’t try to use the SMB server’s inode numbers.
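As a sketch, those mount options would look like this (same placeholder server and share names as earlier):

```
# mount -t cifs -o vers=3.0,actimeo=0,user=administrator,domain=NTAP.LOCAL //DEMO/nas /mnt/nas
# mount -t cifs -o vers=3.0,noserverino,user=administrator,domain=NTAP.LOCAL //DEMO/nas /mnt/nas
```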

Inode Numbers

When Unix Extensions are enabled, we use the actual inode number provided by the server in response to the POSIX calls as an inode number.

When Unix Extensions are disabled and "serverino" mount option is enabled there is no way to get the server inode number. The client typically maps the server-assigned "UniqueID" onto an inode number.

Note that the UniqueID is a different value from the server inode number. The UniqueID value is unique over the scope of the entire server and is often greater than 2 power 32. This value often makes programs that are not compiled with LFS (Large File Support) trigger a glibc EOVERFLOW error, as it won’t fit in the target structure field. It is strongly recommended to compile your programs with LFS support (i.e. with -D_FILE_OFFSET_BITS=64) to prevent this problem. You can also use the “noserverino” mount option to generate inode numbers smaller than 2 power 32 on the client, but you may not be able to detect hardlinks properly.

Once I set actimeo=0, this is how those inodes looked:

# ls -i
8107029540347314273 mountedvolume
# ls -id /mnt/cifs/mountedvolume/
8107029540347314273 /mnt/cifs/mountedvolume/

There you go! Hopefully this saved someone from going down a Google K-hole. Leave comments or suggestions below.

How to find average file size and largest file size using XCP

If you use NetApp ONTAP to host NAS shares (CIFS or NFS) and have too many files and folders to count, then you know how challenging it can be to figure out file information in your environment in a quick, efficient and effective way.

This becomes doubly important when you are thinking of migrating NAS data from FlexVol volumes to FlexGroup volumes, because there is some work up front that needs to be done to ensure you size the capacity of the FlexGroup and its member volumes correctly. TR-4571 covers some of that in detail, but it basically says “know your average file size.” It currently doesn’t tell you *how* to do that (though it will eventually). This blog attempts to fill that gap.
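The math itself is simple – average file size is just used capacity divided by file (inode) count. A quick sketch with made-up numbers:

```shell
# Hypothetical numbers: 10 GiB of used capacity across 500,000 files.
used_bytes=$((10 * 1024 * 1024 * 1024))
file_count=500000
echo "$((used_bytes / file_count)) bytes average file size"   # -> 21474 bytes average file size
```

The hard part is getting an accurate file count and capacity number on a huge unstructured share in the first place – which is where XCP comes in.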


I’ve written previously about XCP here:

Generally speaking, it’s been to tout the data migration capabilities of the tool. But, in this case, I want to highlight the “xcp scan” capability.

XCP scan allows you to use multiple, parallel threads to analyze an unstructured NAS share much more quickly than you could with basic tools like rsync, du, etc.

The NFS version of XCP also allows you to output this scan to a file (HTML, XML, etc) to generate a report about the scanned data. It even does the math for you and finds the largest (max) file size and average file size!


The command I ran to get this information was:

# xcp scan -md5 -stats -html SERVER:/volume/path > filename.html

That’s it! XCP will scan and write to a file. You can also get info about the top five file consumers (by number and capacity) by owner, as well as get some nifty graphs. (Pro tip: Managers love graphs!)


What if I only have SMB/CIFS data?

Currently, XCP for SMB doesn’t support output to HTML files. But that doesn’t mean you can’t have fun, too!

You can stand up a VM using CentOS or whatever your favorite Linux distro is and use XCP for NFS to scan the data – provided the client has the necessary access and you can score an NFS license (even if it’s an eval). XCP scans are read-only, so you shouldn’t have issues running them.

Just keep in mind the following:

NFS to shares that have traditionally been SMB/CIFS-only are likely NTFS security style. This means that the user you are accessing the data as (for example, root) should be able to map to a valid Windows user that has read access to the data. NFS clients that access NTFS security style volumes map to Windows users to figure out permissions. I cover that here:

Mixed perceptions with NetApp multiprotocol NAS access

You can check the volume security style in two ways:

  • CLI with the command
    ::> volume show -volume [volname] -fields security-style
  • OnCommand System Manager under the “Storage -> Qtrees” section (yea, yea… I know. Volumes != Qtrees)


To check whether the user you’re using to access the volume via NFS maps to a valid (and expected) Windows user, use this CLI command from diag privilege:

::> set diag
::*> diag secd name-mapping show -node node1 -vserver DEMO -direction unix-win -name prof1

'prof1' maps to 'NTAP\prof1'

To see what Windows groups this user would be a member of (and thus would get access to files and folders that have those groups assigned), use this diag privilege command:

::*> diag secd authentication show-creds -node ontap9-tme-8040-01 -vserver DEMO -unix-user-name prof1

UNIX UID: prof1 <> Windows User: NTAP\prof1 (Windows Domain User)

GID: ProfGroup
 Supplementary GIDs:

Primary Group SID: NTAP\DomainUsers (Windows Domain group)

Windows Membership:
 NTAP\group2 (Windows Domain group)
 NTAP\DomainUsers (Windows Domain group)
 NTAP\sharedgroup (Windows Domain group)
 NTAP\group1 (Windows Domain group)
 NTAP\group3 (Windows Domain group)
 NTAP\ProfGroup (Windows Domain group)
 Service asserted identity (Windows Well known group)
 BUILTIN\Users (Windows Alias)
 User is also a member of Everyone, Authenticated Users, and Network Users

Privileges (0x2080):

If you want to run XCP as root and want it to have administrator level access, you can create a name mapping. This is what I have in my SVM:

::> vserver name-mapping show -vserver DEMO -direction unix-win

Vserver: DEMO
Direction: unix-win
Position Hostname         IP Address/Mask
-------- ---------------- ----------------
1        -                -                Pattern: root
                                           Replacement: DEMO\\administrator

To create a name mapping for root to map to administrator:

::> vserver name-mapping create -vserver DEMO -direction unix-win -position 1 -pattern root -replacement DEMO\\administrator

Keep in mind that backup software often has this level of rights to files and folders, and the XCP scan is read-only, so there shouldn’t be any issue. If you are worried about making root an administrator, create a new Windows user for it to map to (for example, DOMAIN\xcp) and add it to the Backup Operators Windows Group.

In my lab, I ran a scan on an NTFS security style volume called “xcp_ntfs_src”:

::*> vserver security file-directory show -vserver DEMO -path /xcp_ntfs_src

Vserver: DEMO
 File Path: /xcp_ntfs_src
 File Inode Number: 64
 Security Style: ntfs
 Effective Style: ntfs
 DOS Attributes: 10
 DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
 UNIX User Id: 0
 UNIX Group Id: 0
 UNIX Mode Bits: 777
 UNIX Mode Bits in Text: rwxrwxrwx
 ACLs: NTFS Security Descriptor

I used this command and nearly 600,000 objects were scanned in 25 seconds:

# xcp scan -md5 -stats -html 10.x.x.x:/xcp_ntfs_src > xcp-ntfs.html
XCP 1.3D1-8ae2672; (c) 2018 NetApp, Inc.; Licensed to Justin Parisi [NetApp Inc] until Tue Sep 4 13:23:07 2018

126,915 scanned, 85,900 summed, 43.8 MiB in (8.75 MiB/s), 14.5 MiB out (2.89 MiB/s), 5s
 260,140 scanned, 187,900 summed, 91.6 MiB in (9.50 MiB/s), 31.3 MiB out (3.34 MiB/s), 10s
 385,100 scanned, 303,900 summed, 140 MiB in (9.60 MiB/s), 49.9 MiB out (3.71 MiB/s), 15s
 516,070 scanned, 406,530 summed, 187 MiB in (9.45 MiB/s), 66.7 MiB out (3.36 MiB/s), 20s
Sending statistics...
 594,100 scanned, 495,000 summed, 220 MiB in (6.02 MiB/s), 80.5 MiB out (2.56 MiB/s), 25s
594,100 scanned, 495,000 summed, 220 MiB in (8.45 MiB/s), 80.5 MiB out (3.10 MiB/s), 25s.

This was the resulting report:


Happy scanning!

Workaround for Mac Finder errors when unzipping files in ONTAP

ONTAP allows you to mount volumes to other volumes in a Storage Virtual Machine, which provides a way for storage administrators to create their own folder structures across multiple nodes in a cluster. This is useful when you want to ensure the workload gets spread across nodes, but you can’t use FlexGroup volumes for whatever reason.
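For example, mounting one volume beneath another is a single CLI command (volume names and junction paths here are placeholders):

```
::> volume mount -vserver DEMO -volume vol2 -junction-path /vol1/vol2
```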

This graphic shows how that can work:


In NAS environments, a client will ask for a file or folder location and ONTAP will re-direct the traffic to wherever that object lives. This is supposed to be transparent to the client, provided they follow standard NAS deployment steps.

However, not all NAS clients are created equal. Sometimes, Linux serves up SMB and will do things differently than Microsoft does. Windows also will do NFS, but it doesn’t entirely follow the NFS RFCs. So, occasionally, ONTAP doesn’t expect how a client handles something and stuff breaks.

Mac Finder

If you’ve ever used a Mac, you’ll know that the Finder can do some things a little differently than the Terminal does. In this particular issue, we’ll focus on how Finder unzips files (when you double-click the file) in volumes that are mounted to other volumes in ONTAP.

One of our customers hit this issue, and after poking around a little bit, I figured out how to work around it.

Here’s what they were doing:

  • SMB to Mac clients
  • Shares at the parent FlexVol level (i.e., /vol1)
  • FlexVols mounted to other FlexVols several levels deep (i.e., /vol1/vol2/vol3)

When a share is accessed at a higher level and the user drills down into other folders (which are actually FlexVols mounted to other FlexVols), unzipping in Finder via double-click fails.

When the shares are mounted at the same level as the FlexVol where the unzip is attempted, unzip works. When the Terminal is used to unzip, it works.

However, when your users refuse to use/are unable to use the Terminal and you don’t want to create hundreds of shares just to work around one issue, it’s an untenable situation.

So, I decided to dig into the issue…

Reproducing the issue

The best way to troubleshoot problems is to set up a lab environment and try to recreate the problem. This gives you the freedom to gather logs, packet traces, etc. without bothering your customer or end user. So, I brought out my trusty old 2011 MacBook running macOS Sierra and mounted the SMB share in question.

These are the volumes and their junction paths:

DEMO inodes /shared/inodes
DEMO shared /shared

This is the share:

 Vserver: DEMO
 Share: shared
 CIFS Server NetBIOS Name: DEMO
 Path: /shared
 Share Properties: oplocks
 Symlink Properties: symlinks
 File Mode Creation Mask: -
 Directory Mode Creation Mask: -
 Share Comment: -
 Share ACL: Everyone / Full Control
 File Attribute Cache Lifetime: -
 Volume Name: shared
 Offline Files: manual
 Vscan File-Operations Profile: standard
 Maximum Tree Connections on Share: 4294967295
 UNIX Group for File Create: -

I turned up debug logging on the cluster (engage NetApp Support if you want to do this), got a packet trace on the Mac and reproduced the issue right away. Lucky me!


I also tried a 3rd party unzip utility (Stuffit Expander) and it unzipped fine. So this was definitely a Finder/ONTAP/NAS interaction problem, which allowed me to focus on that.

Packet traces showed that the Finder was attempting to look for a folder called “.TemporaryItems/folders.501/Cleanup At Startup” but couldn’t find it – and apparently couldn’t create it, either. Instead, it would create folders named “BAH.XXXX”, and they wouldn’t get cleaned up.

So, I thought, why not manually create the folder path, since it wasn’t able to do it on its own?

You can do this through Terminal or via Finder. Keep in mind that the path above has “folders.501” – 501 is my uid, so check your user’s uid on the Mac and make sure the folder path is created using that uid. If you have multiple users that access the share, you may need to create multiple folders.[uid] paths in .TemporaryItems.
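In Terminal, pre-creating that path might look like the following sketch. SHARE is a placeholder for your mounted share path (on a Mac, typically somewhere under /Volumes), and folders.501 assumes a uid of 501 – check yours with “id -u”.

```shell
# SHARE is a placeholder; mktemp is used as a default here only so
# the sketch can run anywhere -- point it at your real mount instead.
SHARE="${SHARE:-$(mktemp -d)}"
mkdir -p "$SHARE/.TemporaryItems/folders.501/Cleanup At Startup"
ls "$SHARE/.TemporaryItems"   # -> folders.501
```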

If you do it via Finder, you may want to enable hidden files. I learned how to do that via this article:

So I did that, and then I unmounted and re-mounted the share to make sure there wasn’t any weird cache issue lingering. You can check CIFS/SMB sessions, versions, etc. with the following command if you want to make sure they are closed:

cluster::*> cifs session show -vserver DEMO -instance

Vserver: DEMO

Node: node1
 Session ID: 16043510722553971076
 Connection ID: 390771549
 Incoming Data LIF IP Address: 10.x.x.x
 Workstation IP Address: 10.x.x.x
 Authentication Mechanism: NTLMv2
 User Authenticated as: domain-user
 Windows User: NTAP\prof1
 UNIX User: prof1
 Open Shares: 1
 Open Files: 1
 Open Other: 0
 Connected Time: 7m 49s
 Idle Time: 6m 2s
 Protocol Version: SMB3
 Continuously Available: No
 Is Session Signed: true
 NetBIOS Name: -
 SMB Encryption Status: unencrypted
 Connection Count: 1

Once I reconnected with the newly created folder path, double-click unzip worked perfectly!

Check it out yourself:

Note: You *may* have to enable the option is-use-junctions-as-reparse-points-enabled on your CIFS server. I haven’t tested with it off and on thoroughly, but I saw some inconsistency when it was disabled. For the record, it’s on by default.

You can check with:

::*> cifs options show -fields is-use-junctions-as-reparse-points-enabled

Give it a try and let me know how it works for you in the comments!

Life hack: Change the UID of all files in NAS volume… without actually changing anything!


There are a ton of hidden gems in ONTAP that go unnoticed because they get added without a lot of fanfare. For example, the volume recovery queue got added in 8.3 and no one really knew what it was, what it did, or why the volumes they deleted didn’t seem to actually get deleted for 24 hours.

I keep my ears open for these features so I can promote them, and I ran across a pretty slick, simple gem at the NetApp Converge (sales kickoff) conference, from an old colleague from my support days who now does SE work. (Shout out to Maarten Lippmann!)

But, features are only as good as their use cases.

Here’s the scenario…

Let’s say you have a Git code repository with millions of files, owned by a number of different people, that one of your developers wants to access and modify. The developer doesn’t have permission to some of those files, but there are way too many to re-permission effectively in a timely manner. Plus, if you change the access on those files, you might break the code repo horribly.

So, how do you:

  • Create a usable copy of the entire code repo in a reasonable amount of time without eating up a ton of space
  • Assign a new owner to all the files in the volume quickly and easily
  • Keep the original repo intact

It’s pretty easy in ONTAP, actually – in fact, it’s a single command. All you need is a FlexClone license, and you can make an instant copy of a volume with a new file owner, without impacting the source volume and without using up any new space. Additionally, if you want to keep those changes, you can split the clone into its own unique volume.

In the following example, I have an existing volume that has a ton of files and folders, all owned by root:

[root@XCP nfs4]# ls -la
total 8012
d------r-x. 102 root root 8192 Apr 11 11:41 .
drwxr-xr-x. 5 root root 4096 Apr 12 17:20 ..
----------. 1 root root 0 Apr 11 11:29 file
d---------. 1002 root root 77824 Apr 11 11:47 topdir_0
d---------. 1002 root root 77824 Apr 11 11:47 topdir_1
d---------. 1002 root root 77824 Apr 11 11:47 topdir_99

I want the new owner of the files in the cloned volume to be a user named “prof1” and the GID to be 1101.

cluster::*> getxxbyyy getpwbyname -node ontap9-tme-8040-01 -vserver DEMO -username prof1
 (vserver services name-service getxxbyyy getpwbyname)
pw_name: prof1
pw_uid: 1100
pw_gid: 1101

So, I do the following:

cluster::*> vol clone create -vserver DEMO -flexclone clone -type RW -parent-vserver DEMO -parent-volume flexvol -junction-active true -foreground true -junction-path /clone -uid 1100 -gid 1101
[Job 12606] Job succeeded: Successful

cluster::*> vol show -vserver DEMO -volume clone -fields clone-volume,clone-parent-name,clone-parent-vserver
vserver volume clone-volume clone-parent-vserver clone-parent-name
------- ------ ------------ -------------------- -----------------
DEMO clone true DEMO flexvol

That command took literally 10 seconds to complete. There are over 1.8 million objects in that volume.

cluster::*> df -i /vol/clone
Filesystem iused ifree %iused Mounted on Vserver
/vol/clone/ 1824430 4401487 29% /clone DEMO

Then, I check the owner of the files:

cluster::*> vserver security file-directory show -vserver DEMO /clone/nfs4

Vserver: DEMO
 File Path: /clone/nfs4
 File Inode Number: 96
 Security Style: unix
 Effective Style: unix
 DOS Attributes: 10
 DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
 UNIX User Id: 1100
 UNIX Group Id: 1101
 UNIX Mode Bits: 5
 UNIX Mode Bits in Text: ------r-x
 ACLs: NFSV4 Security Descriptor

cluster::*> vserver security file-directory show -vserver DEMO /clone/nfs4/topdir_99

Vserver: DEMO
 File Path: /clone/nfs4/topdir_99
 File Inode Number: 3556
 Security Style: unix
 Effective Style: unix
 DOS Attributes: 10
 DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
 UNIX User Id: 1100
 UNIX Group Id: 1101
 UNIX Mode Bits: 0
 UNIX Mode Bits in Text: ---------
 ACLs: NFSV4 Security Descriptor

And from the client:

[root@XCP nfs4]# pwd

[root@XCP nfs4]# ls -la
total 8012
d------r-x. 102 1100 1101 8192 Apr 11 11:41 .
drwxr-xr-x. 5 1100 1101 4096 Apr 12 17:20 ..
----------. 1 1100 1101 0 Apr 11 11:29 file
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_0
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_1
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_10
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_11
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_12

It shouldn’t be that easy, should it?

If I wanted to split the volume off into its own volume (such as when a dev makes changes and wants to keep them, but doesn’t want to change the source volume):

cluster::*> vol clone split
 estimate show start status stop

If I want to delete the clone after I’m done, I just run “volume delete.”

Questions? Hit me up in the comments!

Backing up/restoring ONTAP SMB shares with PowerShell


A while back, I posted an SMB share backup and restore PowerShell script written by one of our SMB developers. Later, Scott Harney added some scripts for NFS exports. You can find those here:

That was back in the ONTAP 8.3.x timeframe. They’ve worked pretty well for the most part, but since then, we’re up to ONTAP 9.3 and I’ve occasionally gotten feedback that the scripts throw errors sometimes.

While the idea of an open script repository is for other people to send updates and make the script a living, breathing, evolving entity, that’s not how this one has ended up. Instead, it’s gotten old and crusty and in need of an update. The inspiration was this Reddit thread:

So, I’ve done that. You can find the updated versions of the script for ONTAP 9.x at the same place as before:

However, other than for testing purposes, updating may not have been strictly necessary. I actually ran the original restore script without changing anything of note (just some comments) and it ran fine. The errors most people see come down to the version of the NetApp PowerShell Toolkit, a syntax error introduced in their copy/paste, or their version of PowerShell. Make sure they’re all up to date, or you’ll run into errors. I used:

  • Windows 2012R2
  • ONTAP 9.4 (yes, I have access to early releases!)
  • PowerShell
  • Latest NetApp PowerShell toolkit (4.5.1 for me)

When should I use these scripts?

These were created as a way to fill the gap that SVM-DR now fills. Basically, before SVM-DR existed, there was no way to backup and restore CIFS configurations. Even with SVM-DR, these scripts offer some nice granular functionality to backup and restore specific configuration areas and can be modified to include other things like CIFS options, SAN configuration, etc.

As for how to run them…

Backing up your shares

1) Download and install the latest PowerShell toolkit from


2) Import the DataONTAP module with “Import-Module DataONTAP”

(be sure that the PowerShell window is closed and re-opened after you install the toolkit; otherwise, Windows won’t find the new module to import)

3) Back up the desired shares as per the usage comments in the script. (see below)

# Usage:
# Run as: .\backupSharesAcls.ps1 -server <mgmt_ip> -user <mgmt_user> -password <mgmt_user_password> -vserver <vserver name> -share <share name or * for all> -shareFile <xml file to store shares> -aclFile <xml file to store acls> -spit <none,less,more depending on info to print>
# Example
# 1. If you want to save only a single share on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share test2 -shareFile C:\share.xml -aclFile C:\acl.xml -spit more 
# 2. If you want to save all the shares on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share * -shareFile C:\share.xml -aclFile C:\acl.xml -spit less
# 3. If you want to save only shares that start with "test" and share1 on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share "test* | share1" -shareFile C:\share.xml -aclFile C:\acl.xml -spit more
# 4. If you want to save shares and ACLs into .csv format for examination.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share * -shareFile C:\shares.csv -aclFile C:\acl.csv -csv true -spit more

If you use “-spit more” you’ll get verbose output:


4) Review the shares/ACLs via the XML files.

That’s it for backup. Pretty straightforward. However, our backups are only as good as our restores…

Restoring the shares using the script

I don’t recommend testing this script the first time on a production system. I’d suggest creating a test SVM, or even leveraging SVM-DR to replicate the SVM to a target location.

In my lab, however… who cares! Let’s blow it all away!


Now, run your restore.


That’s it! Happy backing up/restoring!

Tips for running the script

  • Before running the script, copy and paste it into the “PowerShell ISE” to verify that the syntax is correct. From there, save the script to the local client. Syntax errors can cause problems with the script’s success.
  • Use the latest available NetApp PowerShell Toolkit and ensure the PowerShell version on your client matches what is in the release notes for the toolkit.
  • Test the script on a dummy SVM before running in production.
  • Ensure the DataONTAP module has been imported; if import fails after installing the toolkit, close the PowerShell window and re-open it.


If you have any questions or comments, leave them here. Also, if you customize these at all, please do share with the community! Add them to the Github repository or create your own repo!

Cache Rules Everything Around Me: New Global Name Service Cache in ONTAP 9.3


In an ONTAP cluster made up of individual nodes with individual hardware resources, it’s useful if a storage administrator can manage the entire cluster as a monolithic entity, without having to worry about what lives where.

Prior to ONTAP 9.3, name service caches were largely node-centric. This could create scenarios where a cache became stale on one node while it had recently been populated on another. Thus, a client might get different results depending on which physical node its network connection landed on.

The following is pulled right out of the new name services best practices technical report, which acts as an update to TR-4379. I wrote some of this, but most of what’s written here is by the new NFS/Name Services TME Chris Hurley. (@averageguyx) This is basically a copy/paste, but I thought this was a cool enough feature to highlight on its own.

Global Name Services Cache in ONTAP 9.3

ONTAP 9.3 offers a new caching mechanism that moves name service caches out of memory and into a persistent cache that is replicated asynchronously between all nodes in the cluster. This provides more reliability and resilience in the event of failovers, as well as offering higher limits for name service entries due to being cached on disk rather than in node memory.

The name service cache is enabled by default. If legacy cache commands are attempted in ONTAP 9.3 with name service caching enabled, an error will occur, such as the following:

Error: show failed: As name service caching is enabled, "Netgroups" caches no longer exist. Use the command "vserver services name-service cache netgroups members show" (advanced privilege level) to view the corresponding name service cache entries.

The name service caches are controlled in a centralized location, below the name-service cache command set. This provides easier cache management, from configuring caches to clearing stale entries.

The global name service cache can be disabled for individual caches using vserver services name-service cache commands in advanced privilege, but it is not recommended to do so. For more detailed information, please see later sections in this document.

ONTAP also offers the additional benefit of using the caches while external name services are unavailable. If there is an entry in the cache, regardless of whether its TTL has expired, ONTAP will use that cache entry when external name service servers cannot be reached, thereby providing continued access to data served by the SVM.
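As a rough illustration of that behavior (this is a toy Python sketch of my own, not ONTAP code; the class and function names are made up), a cache that serves stale entries when the external server is unreachable could look like:

```python
import time

class NameServiceCache:
    """Toy model of the behavior described above: serve a cached entry
    even after its TTL expires if the external name server is down."""

    def __init__(self, default_ttl=86400):
        self.default_ttl = default_ttl
        self._entries = {}  # key -> (value, expires_at)

    def lookup(self, key, external_lookup):
        entry = self._entries.get(key)
        if entry and entry[1] > time.time():
            return entry[0]          # fresh cache hit
        try:
            value = external_lookup(key)
        except ConnectionError:
            if entry:
                return entry[0]      # server unreachable: serve the stale entry
            raise
        self._entries[key] = (value, time.time() + self.default_ttl)
        return value

# Example: the name server goes down, but access continues from the stale cache.
cache = NameServiceCache()
cache.lookup("host1", lambda k: "10.0.0.5")   # populates the cache
cache._entries["host1"] = ("10.0.0.5", 0)     # force the TTL to expire
def down(k):
    raise ConnectionError("name server unreachable")
print(cache.lookup("host1", down))            # -> 10.0.0.5 (stale, but usable)
```

The key point is the `except` branch: an expired entry beats no answer at all, which is what keeps NAS access working during a name server outage.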

Hosts Cache

There are two individual host caches (forward-lookup and reverse-lookup), but the hosts cache settings are controlled as a whole. When a record is retrieved from DNS, the TTL of that record is used for the cache TTL; otherwise, the default TTL in the host cache settings is used (24 hours). The default for negative entries (host not found) is 60 seconds. Changing DNS settings does not affect the cache contents in any way.

  • The network ping command does not use the name services hosts cache when using a hostname.

User and Group Cache

The user and group caches consist of three categories: passwd (user), group, and group membership.

  • Cluster RBAC access does not use any of the caches

Passwd (User) Cache

The user cache consists of two caches: passwd and passwd-by-uid. The caches store only the name, UID, and GID aspects of the user data to conserve space, since other data such as home directory and shell are irrelevant for NAS access. When an entry is placed in the passwd cache, the corresponding entry is created in the passwd-by-uid cache. By the same token, when an entry is deleted from one cache, the corresponding entry is deleted from the other. If your environment has disjointed username-to-UID mappings, there is an option to disable this behavior.
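The paired-cache behavior is easy to picture with a small sketch (illustrative Python of my own, not ONTAP internals): inserting or deleting on one side mirrors the other, and only name, UID, and GID are stored.

```python
class PairedUserCache:
    """Sketch of the paired passwd / passwd-by-uid caches described above."""

    def __init__(self):
        self.passwd = {}         # name -> (uid, gid)
        self.passwd_by_uid = {}  # uid  -> (name, gid)

    def add(self, name, uid, gid):
        self.passwd[name] = (uid, gid)
        self.passwd_by_uid[uid] = (name, gid)  # corresponding entry created

    def delete_by_name(self, name):
        uid, _ = self.passwd.pop(name)
        self.passwd_by_uid.pop(uid, None)      # mirrored delete

cache = PairedUserCache()
cache.add("prof1", 1100, 1101)
cache.delete_by_name("prof1")
print(cache.passwd_by_uid)  # -> {} (both sides cleared together)
```

This mirroring is what the disable option mentioned above relaxes for environments with disjointed username-to-UID mappings.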

Group Cache

Like the passwd cache, the group cache consists of two caches: group and group-by-gid. When an entry is placed in the group cache, the corresponding entry is created in the group-by-gid cache. By the same token, when an entry is deleted from one cache, the corresponding entry is deleted from the other. The full group membership is not cached; it is unnecessary for NAS data access and omitting it conserves space, so only the group name and GID are cached. If your environment has disjointed group-name-to-GID mappings, there is an option to disable this behavior.

Group Membership Cache

In file and NIS environments, there is no efficient way to gather a list of groups a particular user is a member of, so for these environments ONTAP has a group membership cache to provide these efficiencies.  The group membership cache consists of a single cache and contains a list of groups a user is a member of.

Netgroup Cache

Beginning in ONTAP 9.3, the various netgroup caches have been consolidated into two caches: netgroup.byhost and netgroup.byname. The netgroup.byhost cache is consulted first for the netgroups a host is part of. If that information is not available, the query falls back to gathering the full netgroup membership and comparing it to the host. If the information is not in the cache either, the same process is performed against the netgroup ns-switch sources. If a host requesting access via a netgroup is found through the membership lookup process, that IP-to-netgroup mapping is always added to the netgroup.byhost cache for faster future access. This also calls for a lower TTL on the members cache, so that changes in netgroup membership are reflected in the ONTAP caches within the TTL timeframe.
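The lookup order above can be sketched as a simple cascade (again, an illustrative Python sketch with made-up names, not ONTAP code): byhost cache first, then the full member list (cached, then from the external source), with a matching host written back into the byhost cache.

```python
def netgroup_access(host, netgroup, byhost_cache, byname_cache, fetch_members):
    """Sketch of the netgroup lookup cascade described above."""
    # 1. netgroup.byhost cache: fastest path
    if netgroup in byhost_cache.get(host, set()):
        return True
    # 2. full member list: members cache first, else the ns-switch source
    members = byname_cache.get(netgroup)
    if members is None:
        members = fetch_members(netgroup)   # e.g., a NIS/LDAP query
        byname_cache[netgroup] = members
    if host in members:
        # 3. record the host-to-netgroup mapping for faster future access
        byhost_cache.setdefault(host, set()).add(netgroup)
        return True
    return False

byhost, byname = {}, {}
ok = netgroup_access("host1", "trusted", byhost, byname,
                     lambda ng: {"host1", "host2"})
print(ok, byhost)  # -> True {'host1': {'trusted'}}
```

Step 3 is why the members cache wants a lower TTL: once a mapping lands in the byhost cache, membership changes only show up after the relevant entries age out.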

Viewing cache entries

Each of the above name service caches can be viewed. This can be used to confirm whether the expected results are being returned by name service servers. Each cache has its own individual options that you can use to filter the results to find what you are looking for. To view a cache, use the name-service cache <cache> <subcache> show command.

Caches are unique per vserver, so it is suggested to view caches on a per-vserver basis.  Below are some examples of the caches and the options.

ontap9-tme-8040::*> name-service cache hosts forward-lookup show ?
  (vserver services name-service cache hosts forward-lookup show)
  [ -instance | -fields <fieldname>, ... ]
  [ -vserver <vserver name> ]                                                   *Vserver
  [[-host] <text>]                                                              *Hostname
  [[-protocol] {Any|ICMP|TCP|UDP}]                                              *Protocol (default: *)
  [[-sock-type] {SOCK_ANY|SOCK_STREAM|SOCK_DGRAM|SOCK_RAW}]                     *Sock Type (default: *)
  [[-family] {Any|Ipv4|Ipv6}]                                                   *Family (default: *)
  [ -canonname <text> ]                                                         *Canonical Name
  [ -ips <IP Address>, ... ]                                                    *IP Addresses
  [ -ip-protocol {Any|ICMP|TCP|UDP}, ... ]                                      *Protocol
  [ -ip-sock-type {SOCK_ANY|SOCK_STREAM|SOCK_DGRAM|SOCK_RAW}, ... ]             *Sock Type
  [ -ip-family {Any|Ipv4|Ipv6}, ... ]                                           *Family
  [ -ip-addr-length <integer>, ... ]                                            *Length
  [ -source {none|files|dns|nis|ldap|netgrp_byname} ]                           *Source of the Entry
  [ -create-time <"MM/DD/YYYY HH:MM:SS"> ]                                      *Create Time
  [ -ttl <integer> ]                                                            *DNS TTL

ontap9-tme-8040::*> name-service cache unix-user user-by-id show
  (vserver services name-service cache unix-user user-by-id show)
Vserver    UID         Name         GID            Source  Create Time
---------- ----------- ------------ -------------- ------- -----------
SVM1       0           root         1              files   1/25/2018 15:07:13
           0           root         1              files   1/24/2018 21:59:47
2 entries were displayed.

If there are no entries in a particular cache, the following message will be shown:

ontap9-tme-8040::*> name-service cache netgroups members show
  (vserver services name-service cache netgroups members show)
This table is currently empty.

There you have it! New cache methodology in ONTAP 9.3. If you’re using NAS and name services in ONTAP, it’s highly recommended to go to ONTAP 9.3 to take advantage of this new feature.

Behind the Scenes: Episode 126 – Komprise

Welcome to Episode 126, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, we bring in Komprise (@Komprise) CEO Kumar Goswami (@KumarKGoswami) to chat about data management and how their software helps get the most out of your NetApp storage systems!


For more information about Komprise, check out!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

I also recently got asked how to leverage RSS for the podcast. You can do that here:

Our YouTube channel (episodes uploaded sporadically) is here:

XCP SMB/CIFS support available!

If you’re not familiar with what XCP is, I covered it in a previous blog post, Migrating to ONTAP – Ludicrous speed! as well as in the XCP podcast. Basically, it’s a super-fast way to scan and migrate data.

One of the downsides of the tool was that it only supported NFSv3 migrations, which also meant it couldn’t handle NTFS-style ACLs. Handling those would require an SMB/CIFS version of XCP. Today, we get that with XCP SMB/CIFS 1.0:

XCP for SMB/CIFS supports the following:

  • “show” – Displays information about the CIFS shares of a system
  • “scan” – Reads all files and directories found on a CIFS share and builds assessment reports
  • “copy” – Recursively copies everything from source to destination
  • “sync” – Performs multiple incremental syncs from source to target
  • “verify” – Verifies that the target state matches the source, including attributes and NTFS ACLs
  • “activate” – Activates the XCP license on Windows hosts
  • “help” – Displays detailed information about XCP commands and options


Right now, it’s CLI only, but be on the lookout for a GUI version.

“Installing” XCP on Windows

XCP on Windows is a simple executable file that runs from a cmd or PowerShell window. One of the prerequisites for the software is the Microsoft Visual C++ Redistributable for Visual Studio 2017. If you don’t install it, trying to run the program results in an error that calls out a specific DLL that isn’t registered.

When I copied the file to my Windows host, I created a new directory called “C:\XCP.” You can put that directory anywhere. To run the utility from cmd, either navigate to the directory and run “xcp” or add the directory to your system PATH so you can run it from anywhere.

For example:



Once that’s done, run XCP from any location:



Licensing XCP

XCP is a licensed feature. That doesn’t mean you have to pay for it; the license is only used for tracking purposes. But you do have to apply a license. In Windows, that’s pretty easy.

  1. Download a license from
  2. Copy the license into the C:\NetApp\XCP folder
  3. Run “xcp activate”


XCP show

The command “xcp show \\server” can give some useful information for an ONTAP SMB/CIFS server, such as:

  • Available shares
  • Capacity (used and available)
  • Current connections
  • Folder path
  • Share attributes and permissions

This output is a good way to get an overall look at what is available on a server.


XCP scan

XCP has a number of useful scanning features. These include:

PS C:\XCP> xcp help scan

usage: xcp scan [-h] [-v] [-parallel <n>] [-match <filter>] [-preserve-atime]
                [-depth <n>] [-stats] [-l] [-ownership] [-du]
                [-fmt <expression>]

positional arguments:

optional arguments:
  -h, --help         show this help message and exit
  -v                 increase debug verbosity
  -parallel <n>      number of concurrent processes (default: <cpu-count>)
  -match <filter>    only process files and directories that match the filter
                     (see `xcp help -match` for details)
  -preserve-atime    restore last accessed date on source
  -depth <n>         limit the search depth
  -stats             print tree statistics report
  -l                 detailed file listing output
  -ownership         retrieve ownership information
  -du                summarize space usage of each directory including
  -fmt <expression>  format file listing according to the python expression
                     (see `xcp help -fmt` for details)

I scanned my “shared” directory with the -stats option; it scanned over 60,000 files in 31 seconds and gave me the following stats:

== Maximum Values ==
 Size Depth Namelen Dirsize
 2.02KiB 5 15 100

== Average Values ==
 Size Depth Namelen Dirsize
 25.6 5 6 6

== Top File Extensions ==
 50003 1

== Number of files ==
 empty <8KiB 8-64KiB 64KiB-1MiB 1-10MiB 10-100MiB >100MiB
 3 50001

== Space used ==
 empty <8KiB 8-64KiB 64KiB-1MiB 1-10MiB 10-100MiB >100MiB
 0 1.22MiB 0 0 0 0 0

== Directory entries ==
 empty 1-10 10-100 100-1K 1K-10K >10k
 2 10004 101

== Depth ==
 0-5 6-10 11-15 16-20 21-100 >100

== Modified ==
 >1 year >1 month 1-31 days 1-24 hrs <1 hour <15 mins future

== Created ==
 >1 year >1 month 1-31 days 1-24 hrs <1 hour <15 mins future

Total count: 60111
Directories: 10107
Regular files: 50004
Symbolic links:
Special files:
Total space for regular files: 1.22MiB
Total space for directories: 0
Total space used: 1.22MiB
60,111 scanned, 0 errors, 31s

When I increased the parallel threads to 8, it finished in 18 seconds:

PS C:\XCP> xcp scan -stats -parallel 8 \\demo\shared

Total count: 60111
Directories: 10107
Regular files: 50004
Symbolic links:
Special files:
Total space for regular files: 1.22MiB
Total space for directories: 0
Total space used: 1.22MiB
60,111 scanned, 0 errors, 18s

XCP copy

With xcp copy, I can copy SMB/CIFS data, with or without ACLs, at a much faster rate than simple robocopy. Keep in mind that this version of XCP doesn’t have BACKUP OPERATOR rights, so you need to run the utility as an admin user on both source and destination.

In the following example, I used robocopy to copy the same dataset as XCP to a NetApp FlexGroup volume.

Robocopy to FlexGroup results (~20-30 minutes)

         Total Copied Skipped Mismatch FAILED Extras
 Dirs :  10107  10106       1        0      0      0
 Files : 50004  50004       0        0      0      0
 Bytes : 1.21m  1.21m       0        0      0      0
 Times : 0:19:01 0:13:11 0:00:00 0:05:50

Speed : 1615 Bytes/sec.
 Speed : 0.092 MegaBytes/min.

UPDATE: Someone asked if the above robocopy run was done with the /MT flag, which would make a fairer apples-to-apples comparison, since XCP is multithreaded. It wasn’t. The syntax used was:

PS C:\XCP> robocopy /S /COPYALL source destination

So, I re-ran it using /MT:8 against an empty FlexGroup after restoring the base snapshot and converting the security style to NTFS to ensure the ACLs came over as well. The multithreading cut robocopy’s time to completion roughly in half.

Robocopy /MT to FlexGroup results (~8-9 minutes)

 PS C:\XCP> robocopy /S /COPYALL /MT:8 \\demo\shared \\demo\flexgroup\robocopyMT

 ROBOCOPY :: Robust File Copy for Windows
Started : Tue Aug 22 20:32:54 2017

Source : \\demo\shared\
 Dest : \\demo\flexgroup\robocopyMT\

Files : *.*

Options : *.* /S /COPYALL /MT:8 /R:1000000 /W:30
Total Copied Skipped Mismatch FAILED Extras
 Dirs : 10107 10106 1 0 0 0
 Files : 50004 50004 0 0 0 0
 Bytes : 1.21 m 1.21 m 0 0 0 0
 Times : 0:35:21 0:06:23 0:00:00 0:01:59

Ended : Tue Aug 22 20:41:18 2017

Then I re-ran the XCP copy to the FlexGroup after restoring the baseline snapshot and making sure the volume’s security style was NTFS. (It was UNIX before, which would have affected ACLs and overall speed.) The run still finished within 4 minutes, so we’re looking at roughly 2x the speed of robocopy with a small 60k file-and-folder workload. In addition, the host I’m using is a Windows 7 client VM with a 1GB network connection and not a ton of power behind it; XCP works best with more robust hardware.


XCP to FlexGroup results – NTFS security style (~4 minutes!)

PS C:\XCP> xcp copy -parallel 8 \\demo\shared \\demo\flexgroup\XCP
1,436 scanned, 0 errors, 0 skipped, 0 copied, 0 (0/s), 5s
4,381 scanned, 0 errors, 0 skipped, 507 copied, 12.4KiB (2.48KiB/s), 10s
5,426 scanned, 0 errors, 0 skipped, 1,882 copied, 40.5KiB (5.64KiB/s), 15s
7,431 scanned, 0 errors, 0 skipped, 3,189 copied, 67.4KiB (5.37KiB/s), 20s
8,451 scanned, 0 errors, 0 skipped, 4,537 copied, 96.1KiB (5.75KiB/s), 25s
9,651 scanned, 0 errors, 0 skipped, 5,867 copied, 123KiB (5.31KiB/s), 30s
10,751 scanned, 0 errors, 0 skipped, 7,184 copied, 150KiB (5.58KiB/s), 35s
12,681 scanned, 0 errors, 0 skipped, 8,507 copied, 178KiB (5.44KiB/s), 40s
13,891 scanned, 0 errors, 0 skipped, 9,796 copied, 204KiB (5.26KiB/s), 45s
14,861 scanned, 0 errors, 0 skipped, 11,136 copied, 232KiB (5.70KiB/s), 50s
15,966 scanned, 0 errors, 0 skipped, 12,464 copied, 259KiB (5.43KiB/s), 55s
18,031 scanned, 0 errors, 0 skipped, 13,784 copied, 287KiB (5.52KiB/s), 1m0s
19,056 scanned, 0 errors, 0 skipped, 15,136 copied, 316KiB (5.80KiB/s), 1m5s
20,261 scanned, 0 errors, 0 skipped, 16,436 copied, 342KiB (5.21KiB/s), 1m10s
21,386 scanned, 0 errors, 0 skipped, 17,775 copied, 370KiB (5.65KiB/s), 1m15s
23,286 scanned, 0 errors, 0 skipped, 19,068 copied, 397KiB (5.36KiB/s), 1m20s
24,481 scanned, 0 errors, 0 skipped, 20,380 copied, 424KiB (5.44KiB/s), 1m25s
25,526 scanned, 0 errors, 0 skipped, 21,683 copied, 451KiB (5.35KiB/s), 1m30s
26,581 scanned, 0 errors, 0 skipped, 23,026 copied, 479KiB (5.62KiB/s), 1m35s
28,421 scanned, 0 errors, 0 skipped, 24,364 copied, 507KiB (5.63KiB/s), 1m40s
29,701 scanned, 0 errors, 0 skipped, 25,713 copied, 536KiB (5.70KiB/s), 1m45s
30,896 scanned, 0 errors, 0 skipped, 26,996 copied, 561KiB (5.15KiB/s), 1m50s
31,911 scanned, 0 errors, 0 skipped, 28,334 copied, 590KiB (5.63KiB/s), 1m55s
33,706 scanned, 0 errors, 0 skipped, 29,669 copied, 617KiB (5.52KiB/s), 2m0s
35,081 scanned, 0 errors, 0 skipped, 30,972 copied, 644KiB (5.44KiB/s), 2m5s
36,116 scanned, 0 errors, 0 skipped, 32,263 copied, 671KiB (5.30KiB/s), 2m10s
37,201 scanned, 0 errors, 0 skipped, 33,579 copied, 698KiB (5.48KiB/s), 2m15s
38,531 scanned, 0 errors, 0 skipped, 34,898 copied, 726KiB (5.65KiB/s), 2m20s
40,206 scanned, 0 errors, 0 skipped, 36,199 copied, 753KiB (5.36KiB/s), 2m25s
41,371 scanned, 0 errors, 0 skipped, 37,507 copied, 780KiB (5.39KiB/s), 2m30s
42,441 scanned, 0 errors, 0 skipped, 38,834 copied, 808KiB (5.63KiB/s), 2m35s
43,591 scanned, 0 errors, 0 skipped, 40,161 copied, 835KiB (5.47KiB/s), 2m40s
45,536 scanned, 0 errors, 0 skipped, 41,445 copied, 862KiB (5.31KiB/s), 2m45s
46,646 scanned, 0 errors, 0 skipped, 42,762 copied, 890KiB (5.56KiB/s), 2m50s
47,691 scanned, 0 errors, 0 skipped, 44,052 copied, 916KiB (5.30KiB/s), 2m55s
48,606 scanned, 0 errors, 0 skipped, 45,371 copied, 943KiB (5.45KiB/s), 3m0s
50,611 scanned, 0 errors, 0 skipped, 46,518 copied, 967KiB (4.84KiB/s), 3m5s
51,721 scanned, 0 errors, 0 skipped, 47,847 copied, 995KiB (5.54KiB/s), 3m10s
52,846 scanned, 0 errors, 0 skipped, 49,138 copied, 1022KiB (5.32KiB/s), 3m15s
53,876 scanned, 0 errors, 0 skipped, 50,448 copied, 1.02MiB (5.53KiB/s), 3m20s
55,871 scanned, 0 errors, 0 skipped, 51,757 copied, 1.05MiB (5.42KiB/s), 3m25s
57,011 scanned, 0 errors, 0 skipped, 53,080 copied, 1.08MiB (5.52KiB/s), 3m30s
58,101 scanned, 0 errors, 0 skipped, 54,384 copied, 1.10MiB (5.39KiB/s), 3m35s
59,156 scanned, 0 errors, 0 skipped, 55,714 copied, 1.13MiB (5.57KiB/s), 3m40s
60,111 scanned, 0 errors, 0 skipped, 57,049 copied, 1.16MiB (5.52KiB/s), 3m45s
60,111 scanned, 0 errors, 0 skipped, 58,483 copied, 1.19MiB (6.02KiB/s), 3m50s
60,111 scanned, 0 errors, 0 skipped, 59,907 copied, 1.22MiB (5.79KiB/s), 3m55s
60,111 scanned, 0 errors, 0 skipped, 60,110 copied, 1.22MiB (5.29KiB/s), 3m56s

XCP sync and verify

Sync and verify can be used during data migrations to ensure the source and target match up before cutting over. These use the same multiprocessing capabilities as copy, so they should also be fast. Keep in mind that sync could also potentially be used to do incremental backups with XCP!