Backing up/restoring ONTAP SMB shares with PowerShell


A while back, I posted an SMB share backup and restore PowerShell script written by one of our SMB developers. Later, Scott Harney added some scripts for NFS exports. You can find those here:

That was back in the ONTAP 8.3.x timeframe. The scripts have worked pretty well for the most part, but we’re up to ONTAP 9.3 now and I’ve occasionally gotten feedback that they throw errors.

While the idea of an open script repository is to have other people send script updates and make it a living, breathing, evolving entity, that’s not how this script has ended up. Instead, it’s gotten old and crusty and in need of an update. The inspiration was this Reddit thread:

So, I’ve done that. You can find the updated versions of the script for ONTAP 9.x at the same place as before:

However, other than for testing purposes, an update may not have been strictly necessary. I actually ran the original restore script without changing anything of note (just some comments) and it ran fine. The errors most people see come down to the version of the NetApp PowerShell Toolkit, a syntax error introduced during copy/paste, or the version of PowerShell itself. Make sure all three are up to date, or you’ll run into errors. I used:

  • Windows 2012R2
  • ONTAP 9.4 (yes, I have access to early releases!)
  • PowerShell
  • Latest NetApp PowerShell toolkit (4.5.1 for me)
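Before anything else, it’s worth confirming the client environment matches. This quick check is mine, not part of the scripts (DataONTAP is the toolkit’s actual module name):

```powershell
# Show the PowerShell version on this client
$psVersion = $PSVersionTable.PSVersion
"PowerShell $psVersion"

# Show the installed NetApp PowerShell Toolkit (DataONTAP module) version, if any;
# this returns nothing when the toolkit is not installed
Get-Module -ListAvailable -Name DataONTAP | Select-Object Name, Version
```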

When should I use these scripts?

These scripts were created to fill a gap that SVM-DR now covers: before SVM-DR existed, there was no way to back up and restore CIFS configurations. Even with SVM-DR, the scripts offer some nice granular functionality to back up and restore specific configuration areas, and they can be modified to cover other things like CIFS options, SAN configuration, etc.

As for how to run them…

Backing up your shares

1) Download and install the latest PowerShell toolkit from


2) Import the DataONTAP module with “Import-Module DataONTAP”

(be sure that the PowerShell window is closed and re-opened after you install the toolkit; otherwise, Windows won’t find the new module to import)

3) Back up the desired shares as per the usage comments in the script. (see below)

# Usage:
# Run as: .\backupSharesAcls.ps1 -server <mgmt_ip> -user <mgmt_user> -password <mgmt_user_password> -vserver <vserver name> -share <share name or * for all> -shareFile <xml file to store shares> -aclFile <xml file to store acls> -spit <none,less,more depending on info to print>
# Example
# 1. If you want to save only a single share on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share test2 -shareFile C:\share.xml -aclFile C:\acl.xml -spit more 
# 2. If you want to save all the shares on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share * -shareFile C:\share.xml -aclFile C:\acl.xml -spit less
# 3. If you want to save only shares that start with "test" and share1 on vserver vs2.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share "test* | share1" -shareFile C:\share.xml -aclFile C:\acl.xml -spit more
# 4. If you want to save shares and ACLs into .csv format for examination.
# Run as: .\backupSharesAcls.ps1 -server -user admin -password netapp1! -vserver vs2 -share * -shareFile C:\shares.csv -aclFile C:\acl.csv -csv true -spit more
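The `"test* | share1"` filter in example 3 suggests the script splits the -share value on `|` and wildcard-matches each pattern against the share names. Here’s a sketch of that matching logic; this is my reading of the parameter, not the script’s actual code, and the share names are invented:

```powershell
# Stand-in share names and the filter string from example 3
$shareNames = @('test1', 'test2', 'share1', 'share2', 'data')
$filter     = 'test* | share1'

# Split on '|', trim whitespace, keep shares matching any resulting pattern
$patterns = $filter -split '\|' | ForEach-Object { $_.Trim() }
$matched  = $shareNames | Where-Object {
    $name = $_
    ($patterns | Where-Object { $name -like $_ }).Count -gt 0
}
$matched   # test1, test2, share1
```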

If you use “-spit more” you’ll get verbose output:


4) Review the shares/ACLs via the XML files.
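If the backup files are standard CLIXML exports, which I’m assuming here since the script writes them from PowerShell, Import-Clixml reads them straight back into objects for review. This round-trips a dummy share object through a temp file to show the pattern:

```powershell
# A stand-in share object; a real C:\share.xml would hold objects like this
$sample  = [pscustomobject]@{ ShareName = 'test2'; Path = '/vol/test2' }
$tmpFile = Join-Path ([System.IO.Path]::GetTempPath()) 'share-review-demo.xml'

$sample | Export-Clixml -Path $tmpFile    # what the backup step writes
$restored = Import-Clixml -Path $tmpFile  # what the review step reads back

$restored | Format-List                   # inspect the backed-up properties
Remove-Item $tmpFile
```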

That’s it for backup. Pretty straightforward. However, our backups are only as good as our restores…

Restoring the shares using the script

I don’t recommend testing this script the first time on a production system. I’d suggest creating a test SVM, or even leveraging SVM-DR to replicate the SVM to a target location.

In my lab, however… who cares! Let’s blow it all away!


Now, run your restore.


That’s it! Happy backing up/restoring!

Tips for running the script

  • Before running the script, copy and paste it into the PowerShell ISE to verify that the syntax is correct, then save the script to the local client. Syntax errors introduced during copy/paste are a common cause of failures.
  • Use the latest available NetApp PowerShell Toolkit and ensure the PowerShell version on your client matches what is in the release notes for the toolkit.
  • Test the script on a dummy SVM before running in production.
  • Ensure the DataONTAP module has been imported; if import fails after installing the toolkit, close the PowerShell window and re-open it.


If you have any questions or comments, leave them here. Also, if you customize these at all, please do share with the community! Add them to the Github repository or create your own repo!


Behind the Scenes: Episode 91 – Learning to Code, with Ashley McNamara

Welcome to Episode 91, part of the continuing series “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, we chat with Ashley McNamara (@ashleymcnamara), developer advocate at Pivotal, about how storage administrators (and pretty much anyone) can learn to code. Ashley also points out resources that help aspiring developers and scripters succeed. Feel free to check out her Git repository here:

And her Gopher work here:


Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

I also recently got asked how to leverage RSS for the podcast. You can do that here:

You can listen here:

You can also now find us on YouTube. (The uploads are sporadic and we don’t go back prior to Episode 85):

NetApp stuff you should be using: NetAppDocs


Sometimes, there are NetApp tools out there that no one really knows about – including people who work at NetApp. And it’s unfortunate, as there are some pretty great tools out there.

One tool in particular – NetAppDocs.

What is it?

NetAppDocs is:

a PowerShell module containing a set of functions that automate the creation of NetApp® site design documentation. NetAppDocs can generate Excel, Word and PDF document types. The data contained in the output documents can be sanitized for use in sites where the data may be sensitive.

The tool/guide was written by NetApp PSC Jason Cole and can be found here (requires a NetApp internal or partner login. No customers yet. Sorry. 😦 ):

What can I use it for?

The intent of the NetAppDocs tool is to automate documentation based on specific storage configurations. The idea is that, while documentation tries to fit all use cases, it’s not perfect and cannot adapt to varying configurations. By using this tool, we can generate a set of docs that cover specific configurations.

Another use case that came up recently on our DLs at NetApp was documenting the default options for ONTAP in an easy-to-find, easy-to-read format. While the man pages contain most of this information, it can be time-consuming to trawl through the pages and pages of docs out there. With this tool, once a cluster is installed, simply run it and get the default option settings right off the bat.

Additionally, the data collected can be useful for support cases where ASUP isn’t sending to NetApp for whatever reason.

This tool works with ONTAP running in 7-Mode or clustered Data ONTAP. You can even use it in secure sites easily and sanitize the data for external consumption!

How to use it

Because this is a PowerShell tool, you’d install it on a server running PowerShell. Refer to the tool’s documentation to find the minimum PowerShell version to use. In the case of NetAppDocs 3.1, the following is recommended:

  1. Microsoft Windows® 32-bit/64-bit computer
  2. Microsoft Windows PowerShell 3.0 or higher
  3. Microsoft .Net Framework 4.0 or higher
  4. NetApp Data ONTAP PowerShell Toolkit (included in the zip file or install package)
  5. NetApp Data ONTAP 7.2.x, 7.3.x, 8.0.x (7-Mode), 8.1.x, 8.2.x and 8.3.x
  6. Internal NetApp connection and SSO login required for ASUP data collection

The installation is simple: just an .msi and a few mouse clicks. This essentially installs the necessary PowerShell cmdlets and scripts.

Then, follow the instructions in the guide to allow PowerShell execution and import the module.

PS C:\> Import-Module NetAppDocs

To view the HTML documentation after the tools are installed:

PS C:\> Show-NtapDocsHelp

In those docs, there are usage examples, functions and other helpful information.

You can also get help via PowerShell:

PS C:\> Get-Command -Module NetAppDocs

If you have a NetApp login, go check it out today and let them know what you think of it at mailto:

TECH::cDOT 8.3 Upgrade Check via PowerShell

In case you aren’t aware, there is an excellent community post out there by NetApp FSE Tim McGue that does a PowerShell check for cDOT 8.3 upgrades.

From the intro:

This script checks a specified cluster for the items in the “Steps for preparing for a major upgrade” section. The items that are covered are the ones that can be addressed prior to the actual software image update. These are outlined roughly on pages 32-68 in the guide. Based upon the output of the script you can make the necessary adjustments in the cluster to ensure a successful upgrade.

Check it out!

How to Check Data ONTAP 8.3 Upgrade Requirements Using a PowerShell Script

TECH::Docker + CIFS/SMB? That’s unpossible!


Recently, I’ve been playing with Docker quite a bit more, trying to educate myself on what it can and cannot do and where it fits into NetApp and file services/NAS.

I wrote a blog on setting up a PaaS container that can do Firefox over VNC (for Twitter, of all things), as well as one on using NFS in Docker. People have asked me (and I have wondered): what about CIFS/SMB? Now, we could totally do this from the Linux container I created, using mount -t cifs or Samba. But I’m talking about Windows-based CIFS/SMB.

Microsoft supports Docker?

Recently, Microsoft announced that it will be integrating Docker into Windows Server and Windows Azure, as well as adding Server container images to Docker Hub. In fact, you can find Microsoft containers in GitHub today. But the content is a bit sparse, as far as I could see. This could be due to newness, or worse, apathy. Time will tell.

As for Server containers, it seems that Windows containers won’t support RDP or local login; only PowerShell and WMI, per this InfoWorld article on Microsoft’s Docker demo. And when I looked for PowerShell images, I found just one:

# docker search powershell

It would be totally valid to connect to a CIFS/SMB share via PowerShell, but it looks like there’s a bit of work to do to get this image running – namely, running it on a Windows server rather than Linux:

# docker run -t -i --privileged
Application tried to create a window, but no driver could be loaded.
Make sure that your X server is running and that $DISPLAY is set correctly.
Encountered a problem reading the registry. Cannot find registry key SOFTWARE\Microsoft\PowerShell.

Registry errors? That sure looks familiar… 🙂
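For reference, the PowerShell side of mapping an SMB share is simple once a working Windows host (or container) is available. The server and share names below are made up and the helper function is mine; New-PSDrive is the stock cmdlet:

```powershell
# Build a UNC path from hypothetical server/share names
function Get-UncPath([string]$Server, [string]$Share) {
    "\\$Server\$Share"
}

$unc = Get-UncPath -Server 'ontap-svm1' -Share 'CIFS'   # \\ontap-svm1\CIFS

# Map the share only if the (hypothetical) server answers a ping; actually
# mapping it requires a reachable SMB server and valid credentials
if (Test-Connection -ComputerName 'ontap-svm1' -Count 1 -Quiet -ErrorAction SilentlyContinue) {
    New-PSDrive -Name Z -PSProvider FileSystem -Root $unc
}
```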

What about Azure?

Microsoft also has Azure containers out there. I installed one of the Azure CLI containers, just to see if we could do anything with it. No dice. The base OS for Azure appears to be Linux:

# docker run -t -i --privileged
root@b23878ec46c4:/# uname -a
Linux b23878ec46c4 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

This is the set of commands I get:

# help
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
These shell commands are defined internally. Type `help' to see this list.
Type `help name' to find out more about the function `name'.
Use `info bash' to find out more about the shell in general.
Use `man -k' or `info' to find out more about commands not in this list.
A star (*) next to a name means that the command is disabled.
job_spec [&] history [-c] [-d offset] [n] or history -anrw [filename] or history -ps arg [arg..>
 (( expression )) if COMMANDS; then COMMANDS; [ elif COMMANDS; then COMMANDS; ]... [ else COMMANDS; >
 . filename [arguments] jobs [-lnprs] [jobspec ...] or jobs -x command [args]
 : kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
 [ arg... ] let arg [arg ...]
 [[ expression ]] local [option] name[=value] ...
 alias [-p] [name[=value] ... ] logout [n]
 bg [job_spec ...] mapfile [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] >
 bind [-lpsvPSVX] [-m keymap] [-f filename] [-q name] [-u name] [-r keyseq] [-x keys> popd [-n] [+N | -N]
 break [n] printf [-v var] format [arguments]
 builtin [shell-builtin [arg ...]] pushd [-n] [+N | -N | dir]
 caller [expr] pwd [-LP]
 case WORD in [PATTERN [| PATTERN]...) COMMANDS ;;]... esac read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [->
 cd [-L|[-P [-e]] [-@]] [dir] readarray [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum>
 command [-pVv] command [arg ...] readonly [-aAf] [name[=value] ...] or readonly -p
 compgen [-abcdefgjksuv] [-o option] [-A action] [-G globpat] [-W wordlist] [-F fu> return [n]
 complete [-abcdefgjksuv] [-pr] [-DE] [-o option] [-A action] [-G globpat] [-W wordl> select NAME [in WORDS ... ;] do COMMANDS; done
 compopt [-o|+o option] [-DE] [name ...] set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
 continue [n] shift [n]
 coproc [NAME] command [redirections] shopt [-pqsu] [-o] [optname ...]
 declare [-aAfFgilnrtux] [-p] [name[=value] ...] source filename [arguments]
 dirs [-clpv] [+N] [-N] suspend [-f]
 disown [-h] [-ar] [jobspec ...] test [expr]
 echo [-neE] [arg ...] time [-p] pipeline
 enable [-a] [-dnps] [-f filename] [name ...] times
 eval [arg ...] trap [-lp] [[arg] signal_spec ...]
 exec [-cl] [-a name] [command [arguments ...]] [redirection ...] true
 exit [n] type [-afptP] name [name ...]
 export [-fn] [name[=value] ...] or export -p typeset [-aAfFgilrtux] [-p] name[=value] ...
 false ulimit [-SHabcdefilmnpqrstuvxT] [limit]
 fc [-e ename] [-lnr] [first] [last] or fc -s [pat=rep] [command] umask [-p] [-S] [mode]
 fg [job_spec] unalias [-a] name [name ...]
 for NAME [in WORDS ... ] ; do COMMANDS; done unset [-f] [-v] [-n] [name ...]
 for (( exp1; exp2; exp3 )); do COMMANDS; done until COMMANDS; do COMMANDS; done
 function name { COMMANDS ; } or name () { COMMANDS ; } variables - Names and meanings of some shell variables
 getopts optstring name [arg] wait [-n] [id ...]
 hash [-lr] [-p pathname] [-dt] [name ...] while COMMANDS; do COMMANDS; done
 help [-dms] [pattern ...] { COMMANDS ; }

There is an Azure command set also, but that seems to connect directly to an Azure cloud instance, which requires an account, etc. I suspect I’d have to pay to use commands like “azure storage,” which is why I haven’t set one up yet. (I’m cheap)


root@b23878ec46c4:/# azure storage share show
info: Executing command storage share show
error: Please set the storage account parameters or one of the following two environment variables to use storage command. 1.AZURE_STORAGE_CONNECTION_STRING, 2. AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY
info: Error information has been recorded to /root/.azure/azure.err
error: storage share show command failed

Whither Windows file services?

The preliminary results of using Docker to connect to CIFS/SMB shares aren’t promising. That isn’t to say it won’t be possible. I still need to install Docker on a Windows server and try that PowerShell container again. Once I do that, I’ll update this blog, so stay tuned!

Plus, it’s entirely possible that more containers will pop up as the Microsoft repository grows. However, I do hope this works or is at least in the plans for Microsoft. While it’s cool to connect to a cloud share via CIFS/SMB and Azure, I’d like to be able to have control over connecting to shares on my private storage, such as NetApp.

TECH::Using PowerShell to back up and restore CIFS shares/NFS exports in NetApp’s clustered Data ONTAP

NOTE: This post covers DR for NAS objects prior to 8.3.1. After 8.3.1, use the new SVM DR functionality if possible.


NetApp’s Data ONTAP operating in 7-mode kept all relevant configuration files in its root volume under /etc. These files get read at boot and are used to set up the filer. This included stuff like DNS configuration (resolv.conf), name service switches (nsswitch.conf), initial config (rc file), hosts and other various configuration files.

Another file that is stored in /etc in 7-mode is the file that builds the filer’s CIFS shares each time it is booted – cifsconfig_share.cfg.

This file is essentially a list of CIFS share and access commands that gets sourced each time the system boots. This is what one of those files looks like in 7-mode:

#Generated automatically by cifs commands
cifs shares -add "ETC$" "/etc" -comment "Remote Administration"
cifs access "ETC$" S-1-5-32-544 Full Control
cifs shares -add "HOME" "/vol/vol0/home" -comment "Default Share"
cifs access "HOME" S-NONE "nosd"
cifs shares -add "C$" "/" -comment "Remote Administration"
cifs access "C$" S-1-5-32-544 Full Control
cifs shares -add "CIFS" "/vol/cifs" -comment "CIFS"
cifs access "CIFS" S-NONE "nosd"
cifs shares -add "mixed" "/vol/mixed" -comment ""
cifs access "mixed" S-NONE "nosd"

7mode> cifs shares
Name Mount Point      Description
---- -----------      -----------
ETC$ /etc             Remote Administration
                 BUILTIN\Administrators / Full Control
HOME /vol/vol0/home   Default Share
                 everyone / Full Control
C$ /                  Remote Administration
                 BUILTIN\Administrators / Full Control
CIFS /vol/cifs        CIFS
                 everyone / Full Control
mixed /vol/mixed
                 everyone / Full Control

One benefit of this file in 7-mode was the ability to copy this file off somewhere to back up and possibly restore the shares at a later date, or even retrieve the file from snapshot.

However, with the newer clustered Data ONTAP, the concept of flat files is gone. Everything gets stored in a replicated database, which helps the cluster act like a cluster. I cover that in some detail in a previous post on NetApp cDOT, RDB, & Epsilon.

Additionally, in clustered Data ONTAP, if a CIFS server gets deleted (such as when removing it from the domain/re-adding it), the CIFS shares get blown away and would need to get re-created one by one.
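The essence of that scripting is simple: pull the share objects with the toolkit and serialize them. Get-NcCifsShare and Add-NcCifsShare are real toolkit cmdlets, but this compressed sketch is mine rather than the actual script, and toolkit parameter names have shifted between versions, so verify with Get-Help before use:

```powershell
# Core pattern: serialize share objects to disk, read them back later
function Save-ShareBackup($Shares, [string]$Path) {
    $Shares | Export-Clixml -Path $Path
}
function Restore-ShareBackup([string]$Path) {
    Import-Clixml -Path $Path
}

# Against a live cluster it would look roughly like this (sketch only;
# requires the DataONTAP module and cluster credentials):
#   Connect-NcController <mgmt_ip> -Credential (Get-Credential)
#   Save-ShareBackup (Get-NcCifsShare -VserverContext vs2) C:\share.xml
#   Restore-ShareBackup C:\share.xml |
#       ForEach-Object { Add-NcCifsShare -VserverContext vs2 -Name $_.ShareName -Path $_.Path }

# Offline demonstration with stand-in share objects
$fakeShares = @(
    [pscustomobject]@{ ShareName = 'CIFS';  Path = '/vol/cifs'  }
    [pscustomobject]@{ ShareName = 'mixed'; Path = '/vol/mixed' }
)
$backupFile = Join-Path ([System.IO.Path]::GetTempPath()) 'cifs-backup-demo.xml'
Save-ShareBackup $fakeShares $backupFile
$restoredShares = Restore-ShareBackup $backupFile
Remove-Item $backupFile
```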

So what do the people who relied on the old 7-mode CIFS share files do?

Script it out, of course! For more information, including where to find pre-written scripts, see the post on!

Requires powershell module for Data ONTAP, which can be found here:


Recently, a consultant named Scott Harney was inspired by the CIFS share script and not only made some improvements to it, but also created one for NFS exports and rules!

Check it out at his blog:

UPDATE #2 (7/6/15):

Tested the scripts with both 8.2.4 and 8.3.1. Had to work out a few kinks/make some improvements. There is an issue in 8.3.1 with Add-NcCifsShare.

The following changes were made:

  • Tested with 8.2.4 and 8.3.1 cDOT releases
  • Changed Import-Module to the generic “DataONTAP” to avoid path issues
  • Added link to DataONTAP PS module download in comments
  • Changed PS commands to replace “-Name” with “-Share”
  • Changed output file of ACLs to $aclFile (was $shareFile)

These changes are up on the github repository now. Feel free to notify me if anything else is broken or needs improvement!

If you’re looking for a way to back up SnapMirror schedules, see this link: