ALL YOUR BASE…

When dealing in storage space/capacity numbers, there are generally two ways of representing them – base 2 (binary) and base 10 (decimal/metric). Originally, storage capacity was always represented in base 2. But, over the years, decimal has crept into storage representation – probably because the math is easier. After all, it’s quicker to multiply by 1000 than by 1024.

What that has done, however, is introduce some confusion, as not all storage vendors follow the same standards when reporting capacity. Seagate has an excellent knowledge base article on this here:

Gibi, Tebi, Pebi

In an attempt to alleviate the confusion, a new set of unit names was created, though it hasn’t been widely adopted. Most people still refer to everything as MB, GB, TB, etc. – but that’s all base 10. Base 2 uses a little “i” in the abbreviation, so we get MiB, GiB, TiB, etc., which represent capacities measured in multiples of 1024 rather than 1000. It gets even more fun when you consider “b” vs. “B” to mean bit versus byte, but I digress.

This handy table on the wiki entry for Tebibyte shows how the math works for decimal vs. binary in storage terms.

[Table: decimal units (KB/MB/GB/TB as powers of 1000) vs. binary units (KiB/MiB/GiB/TiB as powers of 1024)]

What happens when you use decimal vs. binary to measure storage? Well, it can mean that what you thought was 316GB of storage is really only about 294GiB – depending on how the vendor has decided to display it.
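Here’s that conversion in a few lines of Python – just a sketch of the math, not any vendor’s tool (the 316GB figure is from the example above):

GB = 1000 ** 3   # decimal gigabyte: what the box label advertises
GiB = 1024 ** 3  # binary gibibyte: what a base-2 tool reports

advertised = 316 * GB
print(advertised / GiB)  # ~294.3 -> a "316GB" capacity shows up as roughly 294GiB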

What does this mean for ONTAP?

So, some vendors use decimal because it reports more space available. Microsoft actually has a statement on this here:

Although the International Electrotechnical Commission established the term kibibyte for 1024 bytes, with the abbreviation KiB, Windows Explorer continues to use the abbreviation KB. Why doesn’t Explorer get with the program?

Because nobody else is on the program either.

If you look around you, you’ll find that nobody (to within experimental error) uses the terms kibibyte and KiB. When you buy computer memory, the amount is specified in megabytes and gigabytes, not mebibytes and gibibytes. The storage capacity printed on your blank CD is indicated in megabytes. Every document on the Internet (to within experimental error) which talks about memory and storage uses the terms kilobyte/KB, megabyte/MB, gigabyte/GB, etc. You have to go out of your way to find people who use the terms kibibyte/KiB, mebibyte/MiB, gibibyte/GiB, etc.

In other words, the entire computing industry has ignored the guidance of the IEC.

NetApp ONTAP uses binary because it’s closer to how computers actually operate. However, while ONTAP shows the correct *numbers* (computed in base 2), it doesn’t show the correct *units* – by default, ONTAP displays GB, TB, etc., rather than GiB, TiB. Bug 1078123 covers this.

For example, my Tech ONTAP Podcast FlexGroup volume is 10TB:

cluster::*> vol show -fields size -vserver DEMO -volume Tech_ONTAP
vserver volume     size
------- ---------- ----
DEMO    Tech_ONTAP 10TB

OR IS IT???

cluster::*> df /vol/Tech_ONTAP
Filesystem       kbytes      used     avail       capacity Mounted on Vserver
/vol/Tech_ONTAP/ 10200547328 58865160
                                      10141682168       0% /techontap DEMO
/vol/Tech_ONTAP/.snapshot
                 536870912   466168   536404744         0% /techontap/.snapshot DEMO

If we read that number with base-10 prefixes, then 10200547328 + 536870912 (10737418240) kbytes is actually 10.737TB! If we use base 2, then yes, it’s 10TB (or, more precisely, 10TiB).
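A quick sanity check on those numbers – plain Python arithmetic, nothing ONTAP-specific:

kbytes = 10200547328 + 536870912  # data + snapshot reserve from df above
print(kbytes)                     # 10737418240
print(kbytes / 1024 ** 3)         # 10.0   -> exactly 10TiB (1 "T" = 1024^3 kbytes)
print(kbytes / 1000 ** 3)         # 10.737 -> the base-10 reading: 10.737TB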

There is a way to change the unit displayed to “raw,” but that basically just shows the giant number you’d see with “df.” If you’re interested:

cluster::> set -units
 auto raw B KB MB GB TB PB

Why should you care?

Ultimately, you probably don’t care. But it’s good to know when you’re trying to figure out where that extra X number of GB went, as well as how much capacity you’re buying up front. And it’s a good idea to make it a best practice to ask *every* vendor how they measure capacity, so they don’t try to shortchange you.

Behind the Scenes: Episode 143 – NetApp Service Level Manager 1.0GA

Welcome to Episode 143, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week on the podcast, we’re joined by NSLM Product Managers Yossi Weihs (https://www.linkedin.com/in/yossiw/) and Naga Anur (naga@netapp.com) to discuss the latest GA release of NetApp Service Level Manager and how it makes provisioning NetApp storage like a service provider oh-so-easy.

You can find the latest release of NSLM here:

https://mysupport.netapp.com/NOW/cgi-bin/software/?product=NetApp+Service+Level+Manager&platform=Linux

And the official TR:

https://www.netapp.com/us/media/tr-4654.pdf

You can find some handy blogs and Puppet modules here:

https://netapp.io/2017/07/19/provision-storage-like-service-provider-netapp-service-level-manager/

https://github.com/NetApp/Puppet-with-NetApp-Service-Level-Manager

And installation/config videos here:

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

Logical vs. Used Space in ONTAP 9.4

In storage, the race to zero usually takes on two different faces:

  • Performance/latency
  • Space used/storage efficiencies

While it may seem counter-intuitive for a company that sells data storage to want people to have to use less of it, this is what people want. They don’t want to feel like they’re being double-charged for inefficiencies in how the storage filesystem operates or for duplicate files. It’s now gone from a “nice to have” feature to a necessity that every vendor must have.

NetApp has been one of the leaders in storage efficiency since the ONTAP 7.3 days, but they’ve recently stepped up their game by shaving even more off the top for space savings without having to sacrifice storage performance to do it.

Included in the storage efficiencies offered by ONTAP:

  • Volume Deduplication (inline and post-process)
  • Aggregate-level/cross volume deduplication (inline and post-process)
  • Data compaction
  • Compression (inline and post-process)
  • FabricPool

Sometimes, things like clones, snapshots and space guarantees are also factored into storage efficiency, but that depends on your definition.

But I’m not here to tout the values of storage efficiency. I’m here to tell you about a new, relatively unknown feature added in ONTAP 9.4 that was brought to my attention by one of NetApp’s Sales Engineers, Riley Johnson.

So, what is it?

One of the main challenges we’ve seen is that while you, as a storage administrator, like saving space in your cluster, you may not necessarily like your end users/customers *knowing* you’re saving space, because that eats into your overall bottom line. Service providers, for example, might want to charge on data ingested rather than data after reduction, since storage efficiencies can make the actual amount of data being written to a volume wildly unpredictable and hard to track – and if you ever needed to unpack that data, you might not have room.

For example… I can provision a 1TB volume and write 1.2TB to it, but with storage efficiencies, my storage system might report only 800GB used. If that data ever needed to inflate for whatever reason, then I could find myself painted into a corner. Before ONTAP 9.4, if my storage efficiencies saved me 400GB in space, then the client would see only 800GB used despite actually writing 1.2TB. ONTAP 9.4, however, introduced a volume-level option to change how space is reported back to clients when storage efficiencies are in use. (FlexVol only)

 [-is-space-reporting-logical {true|false}] - Logical Space Reporting
 If this parameter is specified, the command displays information only about the volumes that have logical space reporting enabled or disabled as specified. When space is reported logically, ONTAP reports the volume space such that all the physical space saved by the storage efficiency features is also reported as used. This parameter is not supported on FlexGroups or Infinite Volumes.

By default, this option is off. But when you enable it, ONTAP will hide the space savings from clients that run df or other space-reporting utilities, and will instead report back the actual amount of data that was written to the volume.

There are also several other options added to the volume level to help report space used vs. logical space used.

[-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Used Size
 If this parameter is specified, the command displays information only about the volume or volumes that have the specified logical used size. This value includes all the space saved by the storage efficiency features along with the physically used space. This does not include Snapshot reserve but does consider Snapshot spill. This parameter is not supported on FlexGroups or Infinite Volumes.

[-logical-used-percent <percent_no_limit>] - Logical Used Percentage
 If this parameter is specified, the command displays information only about the volume or volumes that have the specified logical used percentage. This parameter is not supported on FlexGroups or Infinite Volumes.

[-logical-available {<integer>[KB|MB|GB|TB|PB]}] - Logical Available Size
 If this parameter is specified, the command displays information only about the volume or volumes that have the specified logical available size. This value is the amount of free space currently available considering space saved by the storage efficiency features as being used. This does not include Snapshot reserve. This parameter is not supported on FlexGroups or Infinite Volumes.

[-logical-used-by-afs {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by Active Filesystem
 If this parameter is specified, the command displays information only about the volume or volumes that have the specified logical size used by the active file system. This value differs from logical-used by the amount of Snapshot spill that exceeds Snapshot reserve. This parameter is not supported on FlexGroups or Infinite Volumes.

[-logical-used-by-snapshots {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by All Snapshots
 If this parameter is specified, the command displays information only about the volume or volumes that have the specified logical size used across all Snapshot copies. This value differs from size-used-by-snapshots by the space saved by the storage efficiency features across the Snapshot copies. This parameter is not supported on FlexGroups or Infinite Volumes.

I proved this out on one of my volumes in a cluster running ONTAP 9.4. As you can see, this volume has a discrepancy in used vs. logical space:

vserver volume               used   logical-used is-space-reporting-logical
------- -------------------- ------ ------------ --------------------------
SVM1    xcp_hardlinks_source 8.97GB 15.89GB      false

From an NFS client, this is what I see with logical space reporting set to false:

[root@XCP /]# df -h
Filesystem                     Size Used Avail Use% Mounted on
10.x.x.x:/xcp_hardlinks_source 9.5G 9.0G 542M  95% /xcphardlink

As you can see, the client is reporting the “used” field from ONTAP – not “logical-used.” That means it’s not taking storage efficiency savings into account. If I’m an end user and I see that, I think “oh goodie! I have more space left!” even though I’ve actually used 15.9GB in logical space.

When “-is-space-reporting-logical” is set to true, the client will see the space as if no storage efficiencies were applied:

cluster ::*> vol modify -vserver SVM1 -volume xcp_hardlinks_source -is-space-reporting-logical true
Volume modify successful on volume xcp_hardlinks_source of Vserver SVM1.
[root@XCP /]# df -h | grep xcphardlink
Filesystem                     Size Used Avail Use% Mounted on
10.x.x.x:/xcp_hardlinks_source 17G  16G  542M  97% /xcphardlink

Fuzzy math

What we also see is that the “size” reported back is essentially logical used + available, so if you provisioned 10GB in a volume and have enabled logical space reporting, your clients will see a larger size than the volume actually has. In the output above, the reported size is 17GB, which is roughly 16GB (used) + 542MB (available), rounded up.
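That math checks out – here’s a rough check in Python, using the numbers from the outputs above:

logical_used_gib = 15.89             # from 'vol show -fields logical-used'
avail_gib = 542.1 / 1024             # 542.1MB available, converted to GiB
print(logical_used_gib + avail_gib)  # ~16.4 -> df -h rounds up and shows 17G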

As a result, reporting the logical space needs to be a decision made based on what you want your end users to see.

So where did all that space go?

On this volume, I had all available storage efficiencies enabled. As a result, there is roughly a 7GB discrepancy between logical space and used space after efficiencies are applied. We can see the difference with the following command:

vol show -fields size,logical-used,used,available,logical-available -is-flexgroup false -is-constituent false -vserver SVM -volume volname

Or with:

cluster::*> volume show-space -vserver SVM1 -volume xcp_hardlinks_source

Vserver: SVM1
 Volume Name: xcp_hardlinks_source
 Volume MSID: 2163226536
 Volume DSID: 1932
 Vserver UUID: 05e7ab78-2d84-11e6-a796-00a098696ec7
 Aggregate Name: aggr1_node1
 Aggregate UUID: 4b7f701e-ee9a-43db-980f-ba6f1ea7a8cb
 Hostname: ontap9-tme-8040-01
 User Data: 8.96GB
 User Data Percent: 90%
 Deduplication: 12KB
 Deduplication Percent: 0%
 Temporary Deduplication: -
 Temporary Deduplication Percent: -
 Filesystem Metadata: 7.02MB
 Filesystem Metadata Percent: 0%
 SnapMirror Metadata: -
 SnapMirror Metadata Percent: -
 Tape Backup Metadata: -
 Tape Backup Metadata Percent: -
 Quota Metadata: -
 Quota Metadata Percent: -
 Inodes: 3.88MB
 Inodes Percent: 0%
 Inodes Upgrade: -
 Inodes Upgrade Percent: -
 Snapshot Reserve: 512MB
 Snapshot Reserve Percent: 5%
 Snapshot Reserve Unusable: -
Snapshot Reserve Unusable Percent: -
 Snapshot Spill: -
 Snapshot Spill Percent: -
 Performance Metadata: 1.74MB
 Performance Metadata Percent: 0%
 Total Used: 9.47GB
 Total Used Percent: 95%
 Total Physical Used Size: 9.03GB
 Physical Used Percentage: 90%
 Logical Used Size: 15.89GB
 Logical Used Percent: 159%
 Logical Available: 542.1MB

We can also see efficiencies at the aggregate level:

cluster::*> aggr show-efficiency -aggregate aggr1_node1

Name of the Aggregate: aggr1_node1
 Node where Aggregate Resides: node1
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.19TB
 Total Physical Used: 140.9GB
 Total Storage Efficiency Ratio: 23.20:1
 Total Data Reduction Logical Used: 268.0GB
 Total Data Reduction Physical Used: 137.0GB
 Total Data Reduction Efficiency Ratio: 1.95:1
 Logical Space Used for All Volumes: 269.8GB
 Physical Space Used for All Volumes: 139.3GB
 Space Saved by Volume Deduplication: 58.33GB
Space Saved by Volume Deduplication and pattern detection: 122.7GB
 Volume Deduplication Savings ratio: 1.83:1
 Space Saved by Volume Compression: 7.85GB
 Volume Compression Savings ratio: 1.03:1
Space Saved by Inline Zero Pattern Detection: 64.33GB
 Volume Data Reduction SE Ratio: 1.94:1
 Logical Space Used by the Aggregate: 149.1GB
 Physical Space Used by the Aggregate: 140.9GB
 Space Saved by Aggregate Data Reduction: 8.21GB
 Aggregate Data Reduction SE Ratio: 1.06:1
 Logical Size Used by Snapshot Copies: 2.93TB
 Physical Size Used by Snapshot Copies: 3.48GB
 Logical Size Used by FlexClone Volumes: 1.77GB
 Physical Sized Used by FlexClone Volumes: 364.5MB
Snapshot And FlexClone Volume Data Reduction SE Ratio: 781.17:1
 Snapshot Volume Data Reduction Ratio: 860.45:1
 FlexClone Volume Data Reduction Ratio: 4.97:1
 Number of Volumes Offline: -
 Number of SIS Disabled Volumes: 7
 Number of SIS Change Log Disabled Volumes: 23

If we want to see the amount of space saved with all efficiencies, use the following:

cluster::*> vol show -vserver SVM1 xcp_hardlinks_source -fields sis-space-saved,sis-space-saved-percent
vserver volume               sis-space-saved sis-space-saved-percent
------- -------------------- --------------- -----------------------
SVM1    xcp_hardlinks_source 6.92GB          44%

If you want that number broken down into specific storage efficiencies, use this:

cluster::*> vol show -vserver SVM1 xcp_hardlinks_source -fields dedupe-space-saved,dedupe-space-saved-percent,compression-space-saved,compression-space-saved-percent
vserver volume               dedupe-space-saved dedupe-space-saved-percent compression-space-saved compression-space-saved-percent
------- -------------------- ------------------ -------------------------- ----------------------- -------------------------------
SVM1    xcp_hardlinks_source 2.14GB             14%                        4.77GB                  30%
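Those numbers hang together if the percentages are computed against logical used. Here’s a quick check in Python – my interpretation of how the fields relate, not official ONTAP math:

dedupe = 2.14                 # dedupe-space-saved, in GB
compression = 4.77            # compression-space-saved, in GB
used = 8.97                   # physical used, from the earlier vol show
saved = dedupe + compression  # 6.91 -> reported as 6.92GB (rounding)
logical = used + saved        # ~15.88 -> matches the 15.89GB logical-used
print(saved / logical)        # ~0.435 -> the 44% sis-space-saved-percent
print(dedupe / logical)       # ~0.135 -> the 14% (rounded)
print(compression / logical)  # ~0.300 -> the 30%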

Or, for a more basic view, use System Manager!

You can see storage efficiencies when you drill down into the actual volumes (Storage -> Volumes on the left menu) and then click on the desired volume and view the “Space Allocation” section:

[Screenshot: the “Space Allocation” section for a volume in System Manager]

If you have questions, comment below!

Behind the Scenes: Episode 142 – Supportability in ONTAP

Welcome to Episode 142, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week on the podcast, we dish out information on one of NetApp’s best kept secrets – the supportability team! Join us as we uncover the work Matt Mercer, Scott Morris, Kunal Raina and Matt Trudewind are doing in the trenches to improve the overall customer experience for support of NetApp products.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

Life hack: Change the UID of all files in a NAS volume… without actually changing anything!

There are a ton of hidden gems in ONTAP that go unnoticed because they get added without a lot of fanfare. For example, the volume recovery queue got added in 8.3 and no one really knew what it was, what it did, or why the volumes they deleted didn’t seem to actually get deleted for 24 hours.

I keep my ears open for these features so I can promote them, and I ran across a pretty slick, simple gem while at the NetApp Converge (sales kickoff) conference, from an old colleague from my support days who now does SE work. (Shout out to Maarten Lippmann!)

But, features are only as good as their use cases.

Here’s the scenario…

Let’s say you have a Git code repository with millions of files, owned by a number of different people, that one of your developers wants to access and make changes to. They don’t have access to some of those files by way of permissions, but there are way too many to re-permission effectively and in a timely manner. Plus, if you change the access to these files, you might break the code repo horribly.

So, how do you:

  • Create a usable copy of the entire code repo in a reasonable amount of time without eating up a ton of space
  • Assign a new owner to all the files in the volume quickly and easily
  • Keep the original repo intact

It’s pretty easy in ONTAP, actually – in fact, it’s a single command. All you need is a FlexClone license, and you can make an instant copy of a volume with a new file owner without impacting the source volume and without using up any new space. Additionally, if you want to keep those changes, you can split the clone into its own unique volume.

In the following example, I have an existing volume that has a ton of files and folders, all owned by root:

[root@XCP nfs4]# ls -la
total 8012
d------r-x. 102 root root 8192 Apr 11 11:41 .
drwxr-xr-x. 5 root root 4096 Apr 12 17:20 ..
----------. 1 root root 0 Apr 11 11:29 file
d---------. 1002 root root 77824 Apr 11 11:47 topdir_0
d---------. 1002 root root 77824 Apr 11 11:47 topdir_1
...
d---------. 1002 root root 77824 Apr 11 11:47 topdir_99

I want the new owner of the files in the cloned volume to be the user “prof1” (UID 1100) and the group to be GID 1101.

cluster::*> getxxbyyy getpwbyname -node ontap9-tme-8040-01 -vserver DEMO -username prof1
 (vserver services name-service getxxbyyy getpwbyname)
pw_name: prof1
pw_passwd:
pw_uid: 1100
pw_gid: 1101
pw_gecos:
pw_dir:
pw_shell:

So, I do the following:

cluster::*> vol clone create -vserver DEMO -flexclone clone -type RW -parent-vserver DEMO -parent-volume flexvol -junction-active true -foreground true -junction-path /clone -uid 1100 -gid 1101
[Job 12606] Job succeeded: Successful

cluster::*> vol show -vserver DEMO -volume clone -fields clone-volume,clone-parent-name,clone-parent-vserver
vserver volume clone-volume clone-parent-vserver clone-parent-name
------- ------ ------------ -------------------- -----------------
DEMO    clone  true         DEMO                 flexvol

That command took literally 10 seconds to complete. There are over 1.8 million objects in that volume.

cluster::*> df -i /vol/clone
Filesystem  iused   ifree   %iused Mounted on Vserver
/vol/clone/ 1824430 4401487    29% /clone     DEMO

Then, I check the owner of the files:

cluster::*> vserver security file-directory show -vserver DEMO /clone/nfs4

Vserver: DEMO
 File Path: /clone/nfs4
 File Inode Number: 96
 Security Style: unix
 Effective Style: unix
 DOS Attributes: 10
 DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
 UNIX User Id: 1100
 UNIX Group Id: 1101
 UNIX Mode Bits: 5
 UNIX Mode Bits in Text: ------r-x
 ACLs: NFSV4 Security Descriptor
 Control:0x8014
 DACL - ACEs
 ALLOW-user-prof1-0x1601ff-FI|DI|IO
 ALLOW-user-student1-0x21-FI|DI|IO
 ALLOW-group-ProfGroup-0x1200a9-FI|DI|IO|IG
 ALLOW-EVERYONE@-0x1200a9

cluster::*> vserver security file-directory show -vserver DEMO /clone/nfs4/topdir_99

Vserver: DEMO
 File Path: /clone/nfs4/topdir_99
 File Inode Number: 3556
 Security Style: unix
 Effective Style: unix
 DOS Attributes: 10
 DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
 UNIX User Id: 1100
 UNIX Group Id: 1101
 UNIX Mode Bits: 0
 UNIX Mode Bits in Text: ---------
 ACLs: NFSV4 Security Descriptor
 Control:0x8004
 DACL - ACEs
 ALLOW-user-prof1-0x1601ff-FI|DI
 ALLOW-user-student1-0x21-FI|DI
 ALLOW-group-ProfGroup-0x1200a9-FI|DI|IG

And from the client:

[root@XCP nfs4]# pwd
/clone/nfs4

[root@XCP nfs4]# ls -la
total 8012
d------r-x. 102 1100 1101 8192 Apr 11 11:41 .
drwxr-xr-x. 5 1100 1101 4096 Apr 12 17:20 ..
----------. 1 1100 1101 0 Apr 11 11:29 file
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_0
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_1
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_10
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_11
d---------. 1002 1100 1101 77824 Apr 11 11:47 topdir_12

It shouldn’t be that easy, should it?

If I wanted to split the volume off into its own volume (such as when a dev makes changes and wants to keep them, but doesn’t want to change the source volume):

cluster::*> vol clone split
 estimate show start status stop

If I want to delete the clone after I’m done, I just run “volume destroy.”

Questions? Hit me up in the comments!

SnapMirror Interoperability in ONTAP

A few releases ago, ONTAP introduced a new SnapMirror engine (XDP) that changed how things replicate a bit, but also enabled the ability to perform SnapMirrors to non-ONTAP NetApp platforms like AltaVault and SolidFire. It also provided a well-received benefit: the ability to perform SnapMirrors between different ONTAP releases, as long as the versions are within a few releases of one another (also known as version-independent SnapMirror).

However, there was also some confusion introduced. Which ONTAP versions could SnapMirror to other ONTAP versions? Could older (DP) SnapMirror relationships be version independent? What about non-disruptive volume moves?

Well, we have answers, courtesy of the SnapMirror product manager, Chris Winter!

This covers both DR (Mirror) and Backup (Vault) versions of SnapMirror/SnapVault.

Before we start, let’s level-set on how ONTAP releases are now handled, because that impacts how SnapMirror interop is handled…

6-Month Cadence

ONTAP has a new release every 6 months, with a new feature payload. This started in ONTAP 9.0 out of a need to be more agile in development and provide more value to customers out of upgrades.

So, what changed?

  • Major ONTAP releases used to occur every 18 months or so.
    That meant it took a year and a half to wait for new features.
  • Release timing wasn’t super predictable, which meant storage admins couldn’t plan accordingly for upgrades.
    They’d usually wait a release or two for upgrades. So, they’d be a year to three years behind.
  • Major releases (e.g., 8.0, 8.1, 8.2) would often have several “maintenance” releases (e.g., 8.1.1, 8.1.2, 8.1.3) where bugs were fixed, but new features were *rarely* added.
    In addition, we’d also have patch releases (P), which did smaller bug fix roll ups and even development releases (D), which were controlled emergency patches for major bugs. This took up a lot of dev time and money and didn’t really offer much more in stability than a faster cadence. Plus, it kind of made things confusing for storage admins…
  • The introduction of long-term and short-term releases. 
    See below for more information…

So, what does the new 6-month cadence offer?

  • More features, faster.
    No more waiting nearly two years for new stuff.
  • Predictable releases every May/June and November/December.
    This helps upgrade planning immensely.
  • Fewer maintenance releases.
    We still have RC and patch releases, but those are fewer in quantity.

Long term/regular releases

For official information on this, see:

https://mysupport.netapp.com/info/web/ECMP1147223.html

In addition to the 6-month cadence, ONTAP releases are now broken up into two categories.

  • Regular releases are the even-numbered releases (e.g., 9.2, 9.4 and so on… Spring releases) and provide 1 year of full support and 2 years of “limited support.” Limited support in this case means that we’ll still offer official support via the regular channels, but won’t be releasing new patch releases after a year.
  • Long term support (LTS) releases are odd-numbered (e.g., 9.1, 9.3 and so on… Fall/NetApp Insight releases). This means you get 3 years of full support and then 2 more years of limited support.

The idea of having long-term releases isn’t unique to NetApp; many other software vendors have the same idea. The value in doing this is in simplicity, cost savings for software dev and support, stability in releases and incentive for storage administrators to upgrade to take advantage of the latest and greatest features ONTAP has to offer.

In addition, this also helps storage admins make educated decisions on which releases they should standardize on.

  • Want to ensure more frequent upgrades? Standardize on a regular release cycle.
  • Want to keep a code base for a longer time period? Standardize on the long-term release cycle.

I mention the long term release cycle in this blog because it directly impacts SnapMirror interoperability.

All new SnapMirror Unified Replication (XDP) releases will support the immediately prior ONTAP release and the two ONTAP LTS releases before that.

That means if you are running, say, ONTAP 9.11 (which doesn’t exist… it’s just an example), then you’d be able to use XDP SnapMirror to/from the following releases:

  • 9.10 (regular release, immediately prior)
  • 9.7, 9.9 (LTS, prior)

And to the following future releases:

  • 9.12 (because 9.11 is the immediately prior release)
  • 9.13 (because 9.12 is the prior and 9.11 is one of the two LTS releases)
  • 9.14 (because 9.13 is the prior and 9.11 is one of the two LTS releases)
  • 9.15 (because 9.14 is the prior, and 9.13 and 9.11 are the two LTS releases)
  • 9.16 (because 9.15 is the prior and 9.13 and 9.11 are the two LTS releases)

Here are two examples.

[Figure: SnapMirror XDP interop example 1]

[Figure: SnapMirror XDP interop example 2]

There are a couple exceptions, though.

Exception #1: If you are running ONTAP 9.3, XDP SnapMirror will work across all prior ONTAP 9.x releases. Any ONTAP release prior to 9.0 will need to be upgraded to ONTAP 9.0.

Exception #2: If you are running ONTAP 9.4, XDP SnapMirror will work across all prior ONTAP 9.x releases except for ONTAP 9.2, which was the first “regular release” in ONTAP. ONTAP 9.0 is treated like an LTS release. Any ONTAP release prior to 9.0 will need to be upgraded to ONTAP 9.0.

This matrix covers up to the latest ONTAP release. I won’t be updating this matrix, because it’s essentially a pattern and you can fill in the blanks based on the general logic stated above.

[Matrix: SnapMirror XDP interoperability across ONTAP releases]
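If you’d rather fill in those blanks programmatically, here’s a rough Python sketch of the general rule – my own encoding of the logic above, not an official support matrix, and it ignores the 9.3/9.4 exceptions:

def xdp_peers(minor):
    """ONTAP 9.x releases that 9.<minor> can XDP SnapMirror to/from:
    the immediately prior release, plus the two LTS (odd-numbered)
    releases before that."""
    prior = minor - 1
    lts = [r for r in range(prior - 1, 0, -1) if r % 2 == 1][:2]
    return sorted(set([prior] + lts))

print(xdp_peers(11))  # [7, 9, 10] -> 9.7, 9.9 and 9.10 can replicate with 9.11
# Future releases that can still replicate with 9.11:
print([x for x in range(12, 18) if 11 in xdp_peers(x)])  # [12, 13, 14, 15, 16]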

What about SnapMirror DP/Volume move support?

SnapMirror DP relationships (the older SnapMirror/block based engine) and volume move (which uses SnapMirror DP) have different restrictions than XDP, as they’re not considered eligible for version-independent SnapMirror replication.

  • As such, to use SnapMirror DP relationships across ONTAP clusters, the ONTAP versions all have to be within two releases of one another. For example, if you’re on ONTAP 9.3 and use DP mirrors, then you can replicate to/from ONTAP 9.3 or ONTAP 9.4.
  • Non-disruptive volume moves have to operate between the same ONTAP versions to be considered supported.

The following charts show a matrix of DP mirror interop support for current ONTAP versions. I won’t be updating these, because they’re essentially patterns and you can fill in the blanks based on the general logic stated above.

[Matrix: SnapMirror DP interoperability across ONTAP releases]

[Matrix: Volume move (DP) interoperability across ONTAP releases]

How do I get my current SnapMirror relationships from DP to a more flexible XDP?

DP SnapMirror definitely has some limitations, particularly in version interoperability. But the good news is, it’s easy and relatively non-disruptive to go from DP to XDP!

https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.pow-dap%2FGUID-898A828D-B69E-4951-98AD-C8BF7D6DB7BA.html

Keep in mind, however, that you can’t go back from XDP to DP without a rebaseline.

If you have any questions, feel free to comment below and I’ll find answers for you!

Behind the Scenes: Episode 141 – FabricPool Enhancements in ONTAP 9.4

Welcome to Episode 141, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week on the podcast, FabricPools Lifeguard/TME John Lantz discusses the latest enhancements to the cloud-connected feature in ONTAP 9.4, as well as the technical details behind how FabricPools work.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Be sure to also check out the FabricPools overview:

And deep dive:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

Behind the Scenes: Episode 140 – Quarterly Security Update: ONTAP 9.4 and GDPR

Welcome to Episode 140, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week on the podcast, we bring in Security PM Juan Mojica (@Juan_M_Mojica) and Security TMEs Andrae Middleton and Dan Tulledge to get ready for GDPR by discussing ONTAP 9.4’s newest security enhancements and what they mean for the new European regulation as the grace period ends. We also discuss best practices and how to best protect your storage systems from breaches.

For our GDPR landing page: https://www.netapp.com/us/info/gdpr.aspx

For the latest ONTAP 9.4 Security blog: https://blog.netapp.com/new-data-security-and-privacy-features-in-ontap-9-4

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

New and updated FlexGroup Technical Reports now available for ONTAP 9.4!

ONTAP 9.4 is now available, so that means the TRs need to get a refresh.

Here’s what I’ve done for FlexGroup in ONTAP 9.4…

New Tech Report!

First, I moved the data protection section of the best practices TR (TR-4571) into its own dedicated backup and data protection TR, which can be found here:

TR-4678: Data Protection and Backup – FlexGroup volumes

Why? Well, that section is going to grow larger and larger as we add more data protection and backup functionality, so it made sense to proactively create a new one.

Updated TRs!

TR-4557 got an update covering mostly what’s new in ONTAP 9.4. That TR is a technical overview, intended to give information on how FlexGroup volumes work. The new feature payload for FlexGroup volumes in ONTAP 9.4 included:

  • QoS minimums and Adaptive QoS
  • FPolicy and file audit
  • SnapDiff support

TR-4571 is the best practices TR and got the brunt of the updates. Aside from details about new features, I added:

  • More detailed information about high file count environments and directory structure
  • More information about maxdirsize limits
  • Information on effects of drive failures
  • Workarounds for lack of NFSv4.x ACL support
  • Member volume count considerations when dealing with small and large files
  • Considerations when deleting FlexGroup volumes (and the volume recovery queue)
  • Clarifications on requirements for available space in an aggregate
  • System Manager support updates

Most of these updates came from feedback and questions I received. If you have something you want to see added to the TRs, let me know!

Behind the Scenes: Episode 139 – NVMe and New Hardware in ONTAP 9.4

Welcome to Episode 139, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here: