Behind the Scenes: Episode 118 – MetroCluster Primer

Welcome to Episode 118, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, we cover everything you want to know about MetroCluster with MetroCluster TME Nabil Fares (@nfares) and Solutions Architect Niels Reker (niels.reker@netapp.com), including the new MetroCluster over IP feature in ONTAP 9.3!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:


Behind the Scenes: Episode 117 – Storage QoS in ONTAP 9.3

Welcome to Episode 117, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, we invited the NTAPFLIGuy, Mike Peppers, to talk about QoS and performance in ONTAP 9.3. Listen for a general overview of QoS maximums and minimums, as well as the new Adaptive QoS feature!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

Behind the Scenes: Episode 115 – Primary Data

Welcome to Episode 115, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, Brendan Wolfe (@bgwolfe) and Douglas Fallstrom (@dfsweden) from NetApp partner Primary Data joined us to discuss what Primary Data does and how it ties into the NetApp Data Fabric. Be sure to check out their booth at NetApp Insight in Berlin!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

ONTAP 9.3RC1 is now available!

ONTAP 9.3 was announced at NetApp Insight 2017 in Las Vegas and was covered at a high level by Jeff Baxter in the following blog:

Announcing NetApp ONTAP 9.3: The Next Step in Modernizing Your Data Management

I also did a brief video summary here:

We also did a podcast with ONTAP Chief Evangelist Jeff Baxter (@baxontap) and ONTAP SVP Octavian Tanase (@octav) here:

ONTAP releases are delivered every 6 months, with the odd-numbered releases landing around the time of Insight. Now, the first release candidate for 9.3 is available here:

http://mysupport.netapp.com/NOW/download/software/ontap/9.3RC1

For info on what a release candidate is, see:

http://mysupport.netapp.com/NOW/products/ontap_releasemodel/

Also, check out the documentation center:

docs.netapp.com/ontap-9/index.jsp

The general theme around ONTAP 9.3 is modernization of the data center. I cover this at Insight in session 30682-2, which is available as a recording from Las Vegas for those with a login. If you’re going to Insight in Berlin, feel free to add it to your schedule builder. Here’s a high-level list of features, with more detail on some of them later in this blog.

Security enhancements

Simplicity innovations

  • MongoDB support added to application provisioning
  • Simplified data protection flows in System Manager
  • Guided cluster setup and expansion
  • Adaptive QoS

Performance and efficiency improvements

  • Up to 30% performance improvement for specific workloads via WAFL improvements, parallelization and flash optimizations
  • Automatic schedules for deduplication
  • Background inline aggregate deduplication (AFF only; automatic schedule only)

NetApp FlexGroup volume features

This is covered in more detail in What’s New for NetApp FlexGroup Volumes in ONTAP 9.3?

  • Qtrees
  • Antivirus
  • Volume autogrow
  • SnapVault/Unified SnapMirror
  • SMB Change/notify
  • QoS Maximums
  • Improved automated load balancing logic

Data Fabric additions

  • SolidFire to ONTAP SnapMirror
  • MetroCluster over IP

Now, let’s look at a few of the features in a bit more detail. If you have things you want covered more, leave a comment.

Multifactor Authentication (MFA)

Traditionally, to log in to an ONTAP system as an admin, all you needed was a username and password and you’d get root-level access to all storage virtual machines in a cluster. If you’re the benevolent storage admin, that’s great! If you’re a hostile actor, great!* (*unless you’re the benevolent storage admin… then, not so great)

ONTAP 9.3 introduces the ability to configure an external Identity Provider (IdP) server to interact with OnCommand System Manager and Unified Manager to require a key to be passed in addition to a username and password. Initial support for IdP will include Microsoft Active Directory Federation Services and Shibboleth.


For the command line, the multifactor portion is currently handled by way of SSH keys.
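If you want to set up the key half of that from the cluster shell, a minimal sketch looks something like the following. The SVM name, username, and key string are placeholders, and exactly how the password and public key get chained together for true MFA may vary by release, so treat this as illustrative rather than definitive:

cluster::> security login create -vserver cluster1 -user-or-group-name admin -application ssh -authentication-method publickey -role admin

cluster::> security login publickey create -vserver cluster1 -username admin -index 1 -publickey "ssh-rsa AAAAB3NzaC1yc2E...example admin@laptop"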

SnapLock Enhancements

SnapLock is a NetApp ONTAP feature that provides data compliance for businesses that need to preserve data for regulatory reasons, such as HIPAA (SnapLock Compliance), or for internal requirements, such as needing to preserve records (SnapLock Enterprise).

ONTAP 9.3 provides a few enhancements to SnapLock, including one that isn’t currently available from any other storage vendor.


Legal hold is useful in the event that a court has ordered specific documents to be preserved for an ongoing case or investigation. This can be applied to multiple files and remains in effect until you choose to remove it.
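From the CLI, a legal hold sketch might look roughly like the following, assuming a SnapLock Compliance volume named slc_vol1 and a made-up litigation name. The syntax here is from memory, so verify it against the SnapLock documentation for your release:

cluster::> snaplock legal-hold begin -litigation-name smith-v-jones-2017 -volume slc_vol1 -path /

cluster::> snaplock legal-hold end -litigation-name smith-v-jones-2017 -volume slc_vol1 -path /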


Event-based retention allows storage administrators to set protections on data based on defined events, such as an employee leaving the company (to avoid disgruntled deletions), or for insurance use cases (such as death of a policy holder).


Volume append mode is the SnapLock feature I alluded to above, the one no other vendor currently offers. Essentially, it’s for media workloads (audio and video): it write-protects the portion of a file that has already been streamed while still allowing appends to the end of that file. It’s kind of like having a CD-R on your storage system.

Performance improvements


Every release of ONTAP strives to improve performance in some way. ONTAP 9.3 introduces performance enhancements (mostly for SAN/block) via the following changes:

  • Read latency reductions via WAFL optimizations for All Flash FAS SAN (block) systems
  • Better parallelization for all workloads on mid-range and high-end systems (FAS and AFF) to deliver more throughput/IOPS at lower latencies
  • Parallelization of the iSCSI layer to allow iSCSI to use more cores (best results on 20 core or higher systems)

The following graphs show some examples of that performance improvement versus ONTAP 9.2.

[Graph: AFF A700 FC performance, ONTAP 9.3 vs. ONTAP 9.2]

[Graph: AFF A700 iSCSI performance, ONTAP 9.3 vs. ONTAP 9.2]

Adaptive Quality of Service (QoS)

Adaptive QoS is a way for storage administrators to allow ONTAP to manage the number of IOPS per TB of volume space without the need to intervene. You simply set a service level class and let ONTAP control the rest.

The graphic below shows how it works.

[Graphic: Adaptive QoS overview]
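To give a feel for what that looks like from the CLI, here’s a minimal sketch of creating an adaptive policy group and assigning it to a volume. The policy group name, SVM, volume, and IOPS/TB figures are all made up for illustration:

cluster::> qos adaptive-policy-group create -policy-group aqos_value -vserver DEMO -expected-iops 512IOPS/TB -peak-iops 1024IOPS/TB

cluster::> volume modify -vserver DEMO -volume datavol1 -qos-adaptive-policy-group aqos_value

Once assigned, ONTAP scales the volume’s IOPS ceiling with its size, so you don’t have to keep revisiting static QoS limits as volumes grow.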

MetroCluster over IP

MetroCluster is a way for clusters to operate in a highly available manner over long distances (hundreds of kilometers). Traditionally, MetroCluster has been done over Fibre Channel networks due to the low-latency requirements needed to guarantee writes can be committed to both sites.

However, now that IP networks are getting more robust, ONTAP is able to support MetroCluster over IP, which provides the following benefits:

  • Reduced CapEx and OpEx (no more dedicated Fibre Channel networks, cards, bridges)
  • Simplicity of management (use existing IP networks)


The ONTAP 9.3 release is going to be a limited release for this feature, with the following caveats:

  • A700, FAS9000 only
  • 100km limit
  • Dedicated ISL with extended VLAN currently required
  • 1 iWARP card per node
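Whether the back end is FC or IP, the operational checks from ONTAP should stay familiar. A quick sketch, assuming an already-configured MetroCluster:

cluster_A::> metrocluster show

cluster_A::> metrocluster check run

cluster_A::> metrocluster check show

The check run command validates the configuration components, and check show summarizes the results.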

SolidFire to ONTAP SnapMirror

A few years back, the concept of a data fabric (where all of your data can be moved anywhere with the click of a button) was introduced.

That vision continued this year with the inclusion of SnapMirror from SolidFire (and NetApp HCI systems) to ONTAP.


ONTAP 9.3 will allow storage administrators to implement a disaster recovery plan for their SolidFire systems.

This includes the following:

  • Baseline and incremental replication using NetApp SnapMirror from SolidFire to ONTAP
  • Failover storage to ONTAP for disaster recovery
  • Failback storage from ONTAP to SolidFire
    • Only for LUNs replicated from SolidFire
    • Replication from ONTAP to SolidFire only for failback

That covers a deeper look at some of the new ONTAP 9.3 features. Feel free to comment if you want to learn more about these features, or any not listed in the overview.

ONTAP 9.3 NFS sneak preview: Mount and security tracing


ONTAP 9.3 is on its way, and with it comes some long-awaited new functionality for NFS debugging, including a way to map volumes to IP addresses!

Mount trace

In ONTAP 7-Mode, you could trace mount requests with an option “nfs.mountd.trace.” That didn’t make its way into ONTAP operating in cluster mode until ONTAP 9.3. I covered a long and convoluted workaround in How to trace NFSv3 mount failures in clustered Data ONTAP.

Now, you can set different levels of debugging for mount traces via the cluster CLI without having to jump through hoops. As a bonus, you can see which client has mounted which volume via which data LIF!

To enable it, you would use the following diag level commands:

::*> debug sktrace tracepoint modify -node [node] -module MntTrace -level [0-20] -enabled true

::*> debug sktrace tracepoint modify -node [node] -module MntDebug -level [0-20] -enabled true

When enabled, ONTAP will log the mount trace modules to the sktrace log file, which is located at /mroot/etc/mlog/sktrace.log. This file can be accessed via systemshell, or via the SPI interface. Here are a few of the logging levels:

4 – Info
5 – Error
8 – Debug

When you set the trace level to 8, you can see successful mounts, as well as failures. This gives volume info, client IP and data LIF IP. For example, this mount was done from client 10.63.150.161 to data LIF 10.193.67.218 of vserverID 10 on the /FGlocal path:

cluster::*> debug log sktrace show -node node2 -module-level MntTrace_8
Time TSC CPU:INT Module_Level
--------------------- ------------------------ ------- -------------------
 LogMountTrace: Mount access granted for Client=10.63.150.161
 VserverID=10 Lif=10.193.67.218 Path=/FGlocal

With that info, we can run the following commands on the cluster to find the SVM and volume:

::*> net int show -address 10.193.67.218 -fields lif
 (network interface show)
vserver lif
------- ---------
DEMO    10g_data1

::*> volume show -junction-path /FGlocal -fields volume
vserver volume
------- ---------------
DEMO    flexgroup_local

The mount trace command can also be used to figure out why mount failures may have occurred from clients. With mount trace enabled, we can also leverage performance information from OnCommand Performance Manager (top clients) and per-client statistics to see which volumes are seeing spikes in activity, then work our way backward to see which clients are mounting which LIFs, nodes, volumes, etc.
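One housekeeping note: since these are diag-level trace points, you’d presumably want to switch them back off once you’re done troubleshooting, mirroring the enable commands above:

::*> debug sktrace tracepoint modify -node [node] -module MntTrace -enabled false

::*> debug sktrace tracepoint modify -node [node] -module MntDebug -enabled false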

Security trace (sectrace)

In ONTAP 9.2 and prior, you could trace CIFS/SMB permission issues only, using “sectrace” commands. Starting in ONTAP 9.3, you can use sectrace on SMB and/or NFS. This is useful for troubleshooting why someone might (or might not) have access to a file or folder inside of a volume.

With the command, you can filter on:

  • Client IP
  • Path
  • Windows or UNIX name

Currently, however, sectrace is not supported on FlexGroup volumes.

cluster::*> sectrace filter create -vserver DEMO -index 1 -protocols nfs -trace-allow yes -enabled enabled -time-enabled 60

Warning: Security tracing for NFS will not be done for the following FlexGroups because Security tracing for NFS is not supported for FlexGroups: TechONTAP,flexgroupDS,flexgroup_16,flexgroup_local.
Do you want to continue? {y|n}: y

Then, I tested a permission issue.

# mkdir testsec
# chown 1301 testsec/
# chmod 700 testsec
# su user
$ cd /mnt/flexvol/testsec
bash: cd: /mnt/flexvol/testsec: Permission denied

And this was the result:

cluster::*> sectrace trace-result show -vserver DEMO

Node            Index Filter Details             Reason
--------------- ----- -------------------------- ------------------------------
node2           1     Security Style: UNIX       Access is allowed because the
                      permissions                user has UNIX root privileges
                                                 while creating the directory.
                                                 Access is granted for:
                                                 "Append"
                      Protocol: nfs
                      Volume: flexvol
                      Share: -
                      Path: /testsec
                      Win-User: -
                      UNIX-User: 0
                      Session-ID: -
node2           1     Security Style: UNIX       Access is allowed because the
                      permissions                user has UNIX root privileges
                                                 while setting attributes.
                      Protocol: nfs
                      Volume: flexvol
                      Share: -
                      Path: /testsec
                      Win-User: -
                      UNIX-User: 0
                      Session-ID: -
node2           1     Security Style: UNIX       Access is allowed because the
                      permissions                user has UNIX root privileges
                                                 while setting attributes.
                      Protocol: nfs
                      Volume: flexvol
                      Share: -
                      Path: /testsec
                      Win-User: -
                      UNIX-User: 0
                      Session-ID: -
node2           1     Security Style: UNIX       Access is not granted for:
                      permissions                "Modify", "Extend", "Delete"
                      Protocol: nfs
                      Volume: flexvol
                      Share: -
                      Path: /
                      Win-User: -
                      UNIX-User: 7041
                      Session-ID: -
node2           1     Security Style: UNIX       Access is not granted for:
                      permissions                "Lookup", "Modify", "Extend",
                                                 "Delete", "Read"
                      Protocol: nfs
                      Volume: flexvol
                      Share: -
                      Path: /testsec
                      Win-User: -
                      UNIX-User: 7041
                      Session-ID: -

As you can see above, the trace output gives a very clear picture of who tried to access the folder, which folder had the error, and why the permission issue occurred.
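One last note on cleanup: the filter I created above used -time-enabled 60, so it expires on its own, but assuming the filter commands follow the same pattern as create, you can also list and remove filters by hand:

cluster::*> sectrace filter show -vserver DEMO

cluster::*> sectrace filter delete -vserver DEMO -index 1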

Bonus Round: Block Size Histograms!

Now, this isn’t really a “new in ONTAP 9.3” thing; in fact, I found it as far back as 9.1. I just hadn’t ever noticed it before. But in ONTAP, you can see the block sizes for NFS and CIFS/SMB operations in the CLI with the following command:

cluster::> statistics-v1 protocol-request-size show -node nodename

When you run this, you’ll see the average request size, the total count and a breakdown of what block sizes are being written to the cluster node. This can help you understand your NAS workloads better.

For example, this node runs mostly a VMware datastore workload:

cluster::> statistics-v1 protocol-request-size show -node node2 -stat-type nfs3_read

Node: node2
Stat Type: nfs3_read
                     Value    Delta
--------------       -------- ----------
Average Size:        30073    -
Total Request Count: 92633    -
0-511:                1950    -
512-1023:                0    -
1K-2047:              1786    -
2K-4095:              1253    -
4K-8191:             18126    -
8K-16383:              268    -
16K-32767:            4412    -
32K-65535:             343    -
64K-131071:           1560    -
128K - :             62935    -

When you run the command again, you get a delta from the last time you ran it.

If you’re interested in more ONTAP 9.3 feature information, check out Jeff Baxter’s blog here:

https://blog.netapp.com/announcing-netapp-ontap-9-3-the-next-step-in-modernizing-your-data-management/

You can also see me dress up all fancy and break down the new features at a high level here:

I’ll also be doing more detailed blogs on new features as we get closer to the release.

Behind the Scenes: Episode 109– ONTAP 9.3 Security Enhancements

Welcome to Episode 109, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

Note: If you’re looking for last week’s podcast (IBM Watson/Elio), then it will be back up soon. It had to be reviewed before it could be officially published. Should be up as Episode 110 in a couple days.


This week on the podcast, we cover the new security enhancements in ONTAP 9.3 with the security super squad, Juan Mojica (@Juan_M_Mojica, http://securitybrutesquad.blogspot.com) and Dan Tulledge (@Dan_Tulledge). Join us as we discuss Multifactor Authentication and NetApp Volume Encryption enhancements.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

This week’s episode is here:

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

NetApp FlexGroup: Crazy fast

This week, the SPEC SFS®2014_swbuild test results for NetApp FlexGroup volumes submitted for file services were approved and published.

TL;DR – NetApp was the cream of the crop.

You can find those results here:

http://spec.org/sfs2014/results/res2017q3/sfs2014-20170908-00021.html

The testing rig was as follows:

  • Four node FAS8200 cluster (not AFF)
  • 72 4TB 7200 RPM 12Gb SAS drives (per HA pair)
  • NFSv3
  • 20 IBM servers/clients
  • 10GbE network (four connections per HA pair)

Below is a graph that consolidates the SPEC SFS®2014_swbuild results from multiple vendors. Notice that the FlexGroup did more IOPS (around 260K) at lower latency (sub-3ms):

[Graph: SPEC SFS2014_swbuild results across vendors, IOPS vs. latency]

In addition, NetApp had the best Overall Response Time (ORT) of the competition:

[Graph: SPEC SFS2014_swbuild Overall Response Time (ORT) comparison]

And had the best MBps/throughput:

[Graph: SPEC SFS2014_swbuild throughput (MB/s) comparison]

Full results here:

http://spec.org/sfs2014/results/sfs2014swbuild.html

For more information on the SPEC SFS®2014_swbuild test, see https://www.spec.org/sfs2014/.

Everything but the kitchen sink…

With a NetApp FlexGroup, the more clients and work you throw at it, the better it will perform. An example of this is seen in TR-4571, with a two-node AFF A700 doing GIT workload testing. Note how increasing the number of jobs only encourages the FlexGroup.

[Graph: Average IOPS as GIT jobs scale]

[Graph: Maximum MB/s during GIT workload testing]

FlexGroup Resources

If you’re interested in learning more, see the following resources:

You can also email us at flexgroups-info@netapp.com.

Tech ONTAP Podcast: Now powered by NetApp FlexGroup volumes!

If you’re not aware, I co-host the Tech ONTAP Podcast. I’m also the TME for NetApp FlexGroup volumes. Inexplicably, we weren’t actually storing our podcast files on NetApp storage – instead, we were using the local Mac SSD, which was problematic for three reasons:

  1. It was eventually going to fill up.
  2. If it failed, bye bye files.
  3. It was close to impossible to access unless we were local to the Mac, for a variety of reasons.

So, it finally dawned on me that I had an AFF8040 in my lab, barely being used for anything except testing and TR writing.

At first, I was going to use a FlexVol, out of habit. But then I realized that a FlexGroup volume would provide a great place to write a bunch of 1-400MB files while leveraging all of my cluster resources. The whole process, from creating the FlexGroup to googling autofs on the Mac to setting up the NFS mount and Audio Hijack, took me all of maybe 30 minutes (most of that was the googling and setting up autofs). Not bad!

The podcast setup

When we record the podcast, we use software called Audio Hijack. This allows us to pipe in sound from applications like WebEx and web browsers, as well as from the in-studio microphones, which all get converted to MP3. This is where the FlexGroup NFS mount comes in – we’ll be pointing Audio Hijack to the FlexGroup volume, where the MP3 files will stream in real time.

I also migrated all the existing data over to the FlexGroup for archival purposes. We do use OneDrive to do podcast sharing and such, but I wanted an extra layer of centralized data access, and the NFS-mounted FlexGroup provides that. Setting it up to stream right from Audio Hijack removes an extra step for me when processing the files. But, before I could point the software at the NFS mount, I had to configure the Mac to automount the FlexGroup volume on boot.

Creating the FlexGroup volume

Normally, a FlexGroup volume is created with 8 member volumes per node for an AFF (as per best practice). However, my FlexGroup volume was going to be around 5TB. That means 16 member volumes would be around 350-400GB each. That would violate the other best practice of no less than 500GB per member, which exists to avoid too much remote allocation. While my file sizes weren’t going to be huge, I wanted to avoid issues as the volume filled, so I met in the middle – 8 member volumes total, 4 per node. To do that, you have to go to the CLI; System Manager doesn’t do customization like that yet. In particular, you need the -aggr-list and -aggr-list-multiplier options with volume create.

ontap9-tme-8040::*> vol create -vserver DEMO -volume TechONTAP -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 4
ontap9-tme-8040::*> vol show -vserver DEMO -volume TechONTAP* -sort-by size -fields size,node
vserver volume size node
------- --------------- ----- ------------------
DEMO TechONTAP__0001 640GB ontap9-tme-8040-01
DEMO TechONTAP__0002 640GB ontap9-tme-8040-02
DEMO TechONTAP__0003 640GB ontap9-tme-8040-01
DEMO TechONTAP__0004 640GB ontap9-tme-8040-02
DEMO TechONTAP__0005 640GB ontap9-tme-8040-01
DEMO TechONTAP__0006 640GB ontap9-tme-8040-02
DEMO TechONTAP__0007 640GB ontap9-tme-8040-01
DEMO TechONTAP__0008 640GB ontap9-tme-8040-02
DEMO TechONTAP 5TB -

Automounting NFS on boot with a Mac

When you mount NFS with a Mac, it doesn’t retain it after you reboot. To get the mount to come back up, you have to configure the autofs service on the Mac. This is different from Linux, where you can simply edit the fstab file. The process is covered very well in this blog post (just be sure to read all the way down to avoid the issue he mentions at the end):

https://coderwall.com/p/fuoa-g/automounting-nfs-share-in-os-x-into-volumes

Here’s my configuration. I disabled “nobrowse” to prevent issues in case Audio Hijack needed to be able to browse. The pieces involved are autofs.conf, the auto_master file, and a custom auto_nfs map.
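For reference, here’s roughly what the auto_master and auto_nfs pieces boil down to. The mount point, data LIF hostname, export path, and mount options below are placeholders, so adjust for your own environment; the post linked above walks through the full setup:

# /etc/auto_master: add one line referencing the custom direct map (nobrowse omitted, per the note above)
/-          auto_nfs

# /etc/auto_nfs: the actual NFS mount definition
/Users/podcast/TechONTAP   -fstype=nfs,rw,resvport,hard   nfs://demo-lif.lab.local:/TechONTAP

# reload the automounter to pick up the changes
sudo automount -cv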

After that was set up, I copied over the existing 50-ish GBs of data into the FlexGroup and cleaned up some space on the Mac.

ontap9-tme-8040::*> vol show -vserver DEMO -volume TechONTAP* -sort-by size -fields size,used
vserver volume size used
------- --------------- ----- -------
DEMO TechONTAP__0001 640GB 5.69GB
DEMO TechONTAP__0002 640GB 8.24GB
DEMO TechONTAP__0003 640GB 5.56GB
DEMO TechONTAP__0004 640GB 6.48GB
DEMO TechONTAP__0005 640GB 6.42GB
DEMO TechONTAP__0006 640GB 8.39GB
DEMO TechONTAP__0007 640GB 6.25GB
DEMO TechONTAP__0008 640GB 6.25GB
DEMO TechONTAP 5TB 53.29GB
9 entries were displayed.

Then, I configured Audio Hijack to pump the recordings to the FlexGroup volume.


After that, we recorded a couple of episodes without an issue!


As you can see from this output, the FlexGroup volume is relatively evenly allocated:

ontap9-tme-8040::*> node run * flexgroup show TechONTAP
2 entries were acted on.

Node: ontap9-tme-8040-01
FlexGroup 0x80F03817
* next snapshot cleanup due in 2886 msec
* next refresh message due in 886 msec (last to member 0x80F0381F)
* spinnp version negotiated as 4.6, capability 0x3
* Ref count is 8

Idx Member L Used Avail Urgc Targ Probabilities D-Ingest Alloc F-Ingest Alloc
--- -------- - --------------- ---------- ---- ---- --------------------- --------- ----- --------- -----
 1 2044 L 1485146 0% 159376256 0% 12% [100% 100% 79% 79%] 0+ 0 0 0+ 0 0
 2 2045 R 2153941 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 3 2046 L 1415120 0% 159339950 0% 12% [100% 100% 76% 76%] 0+ 0 0 0+ 0 0
 4 2047 R 1690392 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 5 2048 L 1675583 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 6 2049 R 2191360 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 7 2050 L 1630946 1% 159376256 0% 12% [100% 100% 87% 87%] 0+ 0 0 0+ 0 0
 8 2051 R 1631429 1% 159376256 0% 12% [100% 100% 87% 87%] 0+ 0 0 0+ 0 0

Node: ontap9-tme-8040-02
FlexGroup 0x80F03817
* next snapshot cleanup due in 3144 msec
* next refresh message due in 144 msec (last to member 0x80F03818)
* spinnp version negotiated as 4.6, capability 0x3
* Ref count is 8

Idx Member L Used Avail Urgc Targ Probabilities D-Ingest Alloc F-Ingest Alloc
--- -------- - --------------- ---------- ---- ---- --------------------- --------- ----- --------- -----
 1 2044 R 1485146 0% 159376256 0% 12% [100% 100% 79% 79%] 0+ 0 0 0+ 0 0
 2 2045 L 2153941 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 3 2046 R 1415120 0% 159339950 0% 12% [100% 100% 76% 76%] 0+ 0 0 0+ 0 0
 4 2047 L 1690392 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 5 2048 R 1675583 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 6 2049 L 2191360 1% 159376256 0% 12% [100% 100% 98% 98%] 0+ 0 0 0+ 0 0
 7 2050 R 1630946 1% 159376256 0% 12% [100% 100% 87% 87%] 0+ 0 0 0+ 0 0
 8 2051 L 1631429 1% 159376256 0% 12% [100% 100% 87% 87%] 0+ 0 0 0+ 0 0

I plan on using this setup when I start writing the new FlexGroup data protection best practice guide, so stay tuned for that…

So, now, the Tech ONTAP podcast is happily drinking the NetApp FlexGroup champagne!

If you’re going to NetApp Insight, check out session 16594-2 on FlexGroup volumes.

For more information on NetApp FlexGroup volumes, see:

Why are there so many P releases in ONTAP lately?

If you’ve been paying any attention, you’ll have noticed that ONTAP 9.1P8 was just released last week. That’s insane, right? I mean, ONTAP 9.1 went GA less than a year ago! And ONTAP 8.2.4 only had 5 or 6 P releases ever! What’s going on???

It’s simple… ONTAP has a different software release cadence.

Starting with ONTAP 9, the release cadence model changed to accelerate the release of new ONTAP features. Now, instead of a major release (think 8.1, 8.2, 8.3, etc.) coming out every year and a half, we ship feature-rich major releases every 6 months. This means that NetApp can be more agile with their development cycles and more aggressive in releasing new features.

This also means no more “maintenance releases.”

What’s a maintenance release?

A maintenance release was one of the “dot” releases you’d see in between major releases. Remember, it was usually 18 months between major releases, so while you were waiting for 8.2 to ship, NetApp was releasing 8.1.1, 8.1.2, 8.1.3, etc. These releases were generally devoid of new features, but instead included bug fixes. That was in addition to the “patch” releases, which were intended to be releases made to fix major bugs faster than a maintenance release could.

So, instead of seeing 9.1.1, 9.1.2, and so on, you’re going to get P releases. And that’s why you’re seeing an uptick in P releases for ONTAP 9.x in a shorter time frame. So, no worries! ONTAP 9.x is still one of the most stable families of releases we’ve seen for clustered ONTAP, regardless of the number of P releases.
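As a quick aside, if you’re not sure which P release a given cluster is running, the version command spells it out, P level included (output trimmed here to the relevant bit):

cluster::> version
NetApp Release 9.1P8: ...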

General P release/upgrade guidance

If you’re trying to determine whether you should upgrade to a P release of ONTAP, here are some helpful tips:

  • P releases are fully production ready and QA tested
  • If you are trying to decide whether to upgrade to a P release, be sure to review the bug fix list on the P release download page to see if you’re exposed to any of the bugs and if you think it’s worth your time to upgrade
  • Make use of the “upgrade recommendation” found in MyAutoSupport.
  • ONTAP provides the ability to perform non-disruptive upgrades, so updating to a P release should involve minimal disruption. This is especially true for ONTAP versions within the same major release family, as there are no version mismatches to worry about during the upgrade (a CLI sketch of the automated update flow follows this list).
  • System Manager now provides automated upgrade utilities to provide for a simpler upgrade process
  • Be sure to review the software version support policy for your release to make the most informed decision you can.
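If you’d rather drive the automated update from the CLI than from System Manager, the flow is roughly as follows. The web server URL and target version are placeholders, and the validate step is worth reading carefully before you commit:

cluster::> cluster image package get -url http://webserver/93RC1_q_image.tgz

cluster::> cluster image validate -version 9.3RC1

cluster::> cluster image update -version 9.3RC1

cluster::> cluster image show-update-progress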

Hopefully this clears up any questions you have about P releases. Ping me in the comments if you need clarifications!

Behind the Scenes: Episode 105 – Converged Systems Advisor

Welcome to Episode 105, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week on the podcast, we chat about converged systems like FlexPod, and how NetApp’s acquisition of Immersive brought us a Config Advisor for your converged infrastructure: Converged Systems Advisor (CSA)! Join us and Keith Barto, Director of Product Management for Converged Infrastructure Management, for everything you need to know about CSA!

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Our YouTube channel (episodes uploaded sporadically) is here:

You can listen to this week’s episode here: