ONTAP 9.2RC1 is available!

Like clockwork, the 6 month cadence is upon us again.


ONTAP 9.2RC1 is available for download here:

http://mysupport.netapp.com/NOW/download/software/ontap/9.2RC1/

If you’re interested in a podcast where we cover the ONTAP 9.2 features, check it out here:

Also out: OnCommand (truly) Unified Manager 7.2:

http://mysupport.netapp.com/documentation/productlibrary/index.html?productID=61373

For now, let’s dive in a bit, shall we?

First of all, I made sure to upgrade my own cluster to show off some of the new stuff. It went off without a hitch:

[Screenshot: cluster upgraded to ONTAP 9.2]

Now, let’s start with one of the most eagerly awaited new features…

Aggregate Inline Deduplication

If you’re not familiar with deduplication, it’s a storage feature that replaces identical blocks with pointers to a single stored block rather than keeping multiple copies of the same data. For example, if I store multiple copies of the same JPEG image on a share (or even inside the same PowerPoint file), deduplication lets me save storage space by storing just one copy of the data. The image below is an 8.4MB photo I took in Point Reyes, California:

[Image: the 8.4MB Point Reyes photo and its file details]

If I store two copies of the file on a share with no deduplication, I use up about 16.8MB.

[Diagram: two full copies of the file stored without deduplication]

If I use deduplication, only one copy of each unique 4KB block is actually stored; the second file is just a set of pointers back to those shared blocks, so it consumes almost no additional space.

[Diagram: deduplicated file pointing back to a single copy of the blocks]

If I have multiple copies of the same image, they all point back to the same blocks:

[Diagram: multiple copies of the image pointing back to the same blocks]

Pretty cool, eh?

Well, there was *one* problem with how ONTAP did deduplication: duplicate blocks were only detected within a single FlexVol volume. That meant that if you had the same file in multiple volumes, you didn’t get the benefits of deduplication across those volumes.

[Diagram: duplicate blocks deduplicated separately within each FlexVol volume]

In ONTAP 9.2, that issue is resolved. You can now take advantage of deduplication when multiple volumes reside in the same physical aggregate.

[Diagram: deduplication across FlexVol volumes in the same aggregate]

This is currently done inline only (as data is ingested), and only on All Flash FAS systems. The space savings come in handy in workloads such as ESXi datastores, where you may be applying the same OS patches across multiple VMs in multiple datastores hosted in multiple FlexVol volumes.
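
If you want to check or control this on your own system, the setting is exposed per volume. Here’s a rough sketch of what that looks like from the CLI (the SVM and volume names are hypothetical, and the field names are from memory, so verify against the 9.2 documentation):

cluster::> volume efficiency show -vserver SVM1 -volume flashvol1 -fields inline-dedupe,cross-volume-inline-dedupe
cluster::> volume efficiency modify -vserver SVM1 -volume flashvol1 -cross-volume-inline-dedupe true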

At a high level, this animation shows how it works:

[Animation: aggregate inline deduplication]

Another place where aggregate inline deduplication would rock? NetApp FlexGroup volumes, where a single container is composed of multiple member FlexVol volumes on the same physical storage. Speaking of FlexGroup volumes, that leads us nicely into some of the other features added in ONTAP 9.2.

Other storage efficiency improvements

In addition to aggregate inline dedupe, ONTAP 9.2 also adds:

  • Advanced Drive Partitioning v2 (ADPv2) support for FAS8xxx and FAS9xxx with spinning drives; previously ADPv2 was only supported on All Flash FAS
  • Increase of the maximum aggregate size to 800TB (was previously 400TB)
  • Automated aggregate provisioning in System Manager for easier aggregate creation

NetApp Volume Encryption on FlexGroup volumes

ONTAP 9.1 introduced volume-level encryption (NVE). We did a podcast on it if you’re interested in learning more, and in ONTAP 9.2, support for NVE was added to NetApp FlexGroup volumes. Now you can apply encryption at the volume level (as opposed to encrypting entire disks with NSE drives) for your large, unstructured NAS workloads.

To apply it, all you need is a volume encryption license. Then, use the same process you would use for a FlexVol volume.

Additionally, NVE can now be used on SnapLock compliance volumes!
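
As a rough illustration (hypothetical SVM, aggregate and volume names; this assumes the volume encryption license is installed and the onboard key manager has been configured with “security key-manager setup”), creating an encrypted FlexGroup looks just like creating an encrypted FlexVol:

cluster::> volume create -vserver SVM1 -volume fg_encrypted -aggr-list aggr1,aggr2 -aggr-list-multiplier 4 -size 400TB -junction-path /fg_encrypted -encrypt true
cluster::> volume show -vserver SVM1 -volume fg_encrypted -fields encrypt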

Quality of Service (QoS) Minimums/Guaranteed QoS

In ONTAP 8.2, NetApp introduced Quality of Service to allow storage administrators to apply policies to volumes – and even to files, such as LUNs or VM disks – to prevent bully workloads from affecting other workloads in a cluster.

Last year, NetApp acquired SolidFire, which has a pretty mean QoS of its own; it approaches QoS from the other end of the spectrum, guaranteeing a performance floor for workloads that require a specific service level.

[Diagram: QoS performance floors (minimums) and ceilings (maximums)]

I’m not 100% sure, but I’m guessing NetApp saw that and said “that’s pretty sweet. Let’s do that.”

So, they have. ONTAP 9.2 now offers both maximum and minimum (guaranteed) QoS for storage administrators and service providers. Check out a video on it here:
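
Here’s a rough sketch of what the CLI side might look like (policy group, SVM and volume names are hypothetical; QoS minimums require AFF):

cluster::> qos policy-group create -policy-group gold -vserver SVM1 -min-throughput 1000iops -max-throughput 5000iops
cluster::> volume modify -vserver SVM1 -volume db_vol1 -qos-policy-group gold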

ONTAP Select enhancements

ONTAP 9.2 also includes some ONTAP Select enhancements, such as:

  • 2-node HA support
  • FlexGroup volume support
  • Improved performance
  • Easier deployment
  • ESXi ROBO (remote office/branch office) licensing
  • Single-node ONTAP Select vNAS with VMware vSAN and iSCSI LUN support
  • Inline deduplication support

Usability enhancements

ONTAP is also continuing its mission to make the deployment and configuration via the System Manager GUI easier and easier. In ONTAP 9.2, we bring:

  • Enhanced upgrade support
  • Application aware data management
  • Simplified cluster expansion
  • Simplified aggregate deployment
  • Guided cluster setup

FabricPools

We covered FabricPools in Episode 63 of the Tech ONTAP podcast. Essentially, FabricPool tiers cold blocks from flash to a cloud or on-premises S3 target, such as StorageGRID WebScale. It’s not a replacement for backup or disaster recovery; it’s more of a way to lower your total cost of ownership for storage by moving data that is not actively in use, freeing up space for other workloads. This is all done automatically via a policy, and it behaves more like an extension of the aggregate, since the pointers to the blocks that moved remain on the local storage device.

[Diagram: FabricPool tiering cold blocks from the performance tier to an object store]

ONTAP 9.2 introduces version 1 of this feature, which will support the following:

  • Tiering to S3 (StorageGRID) or AWS
  • Snapshot-only tiering on primary storage
  • SnapMirror destination tiering on secondary storage
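
At a very high level, setup boils down to defining an object store, attaching it to an all-flash aggregate, and setting a tiering policy on volumes. A rough sketch (every name, endpoint and key below is hypothetical; check the 9.2 documentation for the exact syntax):

cluster::> storage aggregate object-store config create -object-store-name sgws01 -provider-type SGWS -server sgws.example.com -container-name fabricpool-bucket -access-key <key> -secret-password <secret>
cluster::> storage aggregate object-store attach -aggregate ssd_aggr1 -object-store-name sgws01
cluster::> volume modify -vserver SVM1 -volume vol1 -tiering-policy snapshot-only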

Future releases will add more functionality, so stay tuned for that! We’ll also be featuring FabricPools in a deep dive for a future podcast episode.

So there you have it! The latest release of ONTAP! Post your thoughts or questions in the comments below!

ONTAP 9.1 is now generally available (GA)!

Back in October, ONTAP 9.1 RC1 was released. Tons of new features were added, which I covered in ONTAP 9.1 RC1 is now available!


Some of the major features included:

Now, ONTAP 9.1 is officially GA. For information on what GA means:

http://mysupport.netapp.com/NOW/products/ontap_releasemodel/

You can find it here:

http://mysupport.netapp.com/NOW/download/software/ontap/9.1

Also, check out the documentation center:

docs.netapp.com/ontap-9/index.jsp

Happy upgrading!

If you’re interested in building your own ONTAP 9.x simulator, check out:

http://www.flackbox.com/netapp-simulator/

9.1RC2 is now available!

That’s right – release candidate now available. If you have concerns over the “RC” designation, allow me to recap what I mentioned in a previous blog post:

RC versions have completed a rigorous set of internal NetApp tests and are deemed ready for public consumption. Each release candidate provides bug fixes that eventually lead up to the GA release. Keep in mind that all release candidates are fully supported by NetApp, even after a GA version becomes available. However, while RC is perfectly fine to run in production environments, GA is the recommended version of any ONTAP software release.

For a more official take on it, see the NetApp link:

http://mysupport.netapp.com/NOW/products/ontap_releasemodel/post70.shtml

What’s new in ONTAP 9.1?

At a high level, ONTAP 9.1 brings:

9.1RC2 specifically brings (outside of bug fixes):

  • Support for the DS460C shelves
  • Official support for backup of NAS to cloud via AltaVault (SnapMirror)
  • SMB support for NetApp FlexGroup volumes

Happy upgrading!

For info about ONTAP 9.0, see:

ONTAP 9 RC1 is now available!

ONTAP 9.0 is now generally available (GA)!

 

NetApp FlexGroup: An evolution of NAS


Check out the official NetApp version of this blog on the NetApp Newsroom!

I’ve been the NFS TME at NetApp for 3 years now.

I also cover name services (LDAP, NIS, DNS, etc.) and occasionally answer the stray CIFS/SMB question. I look at NAS as a data utility, not unlike water or electricity in your home. You need it, you love it, but you don’t really think about it too much and it doesn’t really excite you.

However, once I heard that NetApp was creating a brand new distributed file system that could evolve how NAS works, I jumped at the opportunity to be a TME for it. So, now, I am the Technical Marketing Engineer for NFS, Name Services and NetApp FlexGroup (and sometimes CIFS/SMB). How’s that for a job title?

We covered NetApp FlexGroup in the NetApp Tech ONTAP Podcast the week of June 30, but I wanted to write up a blog post to expand upon the topic a little more.

Now that ONTAP 9.1 is available, it was time to update the blog here.

For the official Technical Report, check out TR-4557 – NetApp FlexGroup Technical Overview.

For the best practice guide, see TR-4571 – NetApp FlexGroup Best Practices and Implementation Guide.

Here are a couple videos I did at Insight:

I also had a chance to chat with Enrico Signoretti at Insight:

Data is growing.

It’s no secret: we’re leaving behind (some may say we’ve already left) the days when 100TB in a single volume was enough space to accommodate a single file system. Files are getting larger and datasets are increasing. For instance, think about the sheer amount of data needed to keep something like a photo or video repository running. Or a global GPS data structure. Or Electronic Design Automation environments designing the latest computer chipset. Or seismic data analyzing oil and gas locations.

Environments like these require massive amounts of capacity, with billions of files in some cases. Scale-out NAS storage devices are the best way to approach these use cases because of the flexibility, but it’s important to be able to scale the existing architecture in a simple and efficient manner.

For a while, storage systems like ONTAP had a single construct to handle these workloads – the Flexible Volume (or, FlexVol).

FlexVols are great, but…

For most use cases, FlexVol volumes are perfect. They are large enough (up to 100TB) and can handle enough files (up to 2 billion). For NAS workloads, they can do just about anything. Where you start to see issues, though, is when the number of metadata operations in a file system increases: a FlexVol volume serializes those operations and won’t use all of the available CPU threads to process them. I think of it like a traffic jam caused by lane closures; when a lane is closed, everyone has to merge, and traffic slows down.

[Image: traffic jam caused by a lane closure]

When all lanes are open, traffic is free to move normally and concurrently.

[Image: traffic flowing freely with all lanes open]

Additionally, because a FlexVol volume is tied directly to a physical aggregate and node, your NAS operations are also tied to that single aggregate or node. If you have a 10-node cluster, each with multiple aggregates, you might not be getting the most bang for your buck.

That’s where NetApp FlexGroup comes in.

FlexGroup has been designed to solve multiple issues in large-scale NAS workloads.

  • Capacity – Scales to multiple petabytes
  • High file counts – Hundreds of billions of files
  • Performance – parallelized operations in NAS workloads, across CPUs, nodes, aggregates and constituent member FlexVol volumes
  • Simplicity of deployment – Simple-to-use GUI in System Manager allows fast provisioning of massive capacity
  • Load balancing – Use all your cluster resources for a single namespace

With FlexGroup volumes, NAS workloads can now take advantage of every resource available in a cluster. Even with a single node cluster, a FlexGroup can balance workloads across multiple FlexVol constituents and aggregates.

How does a FlexGroup volume work at a high level?

FlexGroup volumes take the already awesome concept of a FlexVol volume and enhance it by stitching multiple FlexVol member constituents together into a single namespace that acts like a single FlexVol volume to clients and storage administrators.

A FlexGroup volume would roughly look like this from an ONTAP perspective:

[Diagram: a FlexGroup volume and its member FlexVol constituents]

Files are not striped, but instead are placed systematically into individual FlexVol member volumes that work together under a single access point. This concept is very similar in function to a multiple FlexVol volume configuration, where volumes are junctioned together to simulate a large bucket.

[Diagram: multiple FlexVol volumes junctioned together under a single namespace]

However, multiple FlexVol volume configurations add complexity via junctions, export policies and manual decisions for volume placement across cluster nodes, as well as needing to re-design applications to point to a filesystem structure that is being defined by the storage rather than by the application.

To a NAS client, a FlexGroup volume would look like a single bucket of storage:

[Diagram: a FlexGroup volume presented to a NAS client as a single bucket of storage]

When a client creates a file in a FlexGroup, ONTAP decides which member FlexVol volume is the best possible container for that write based on a number of factors, such as capacity across members, throughput, and last accessed time… basically, doing all the hard work for you. The idea is to keep the members as balanced as possible without hurting performance predictability, and in fact to increase performance in some workloads.

The creates can arrive on any node in the cluster. Once a request arrives at the cluster, if ONTAP chooses a member volume different from the one where the request landed, a hardlink is created within ONTAP (remote or local, depending on the request) and the create is passed on to the designated member volume. All of this is transparent to clients.

Reads and writes after a file is created operate much like they do in FlexVol volumes today; the system tells the client where the file lives and points that client at that particular member volume. As such, you see the biggest gains during initial file ingest rather than on reads/writes after the files have already been placed.

 

Why is this better?

 

When NAS operations can be allocated across multiple FlexVol volumes, we don’t run into the issue of serialization in the system. Instead, we start spreading the workload across multiple file systems (FlexVol volumes) joined together (the FlexGroup volume). And unlike Infinite Volumes, there is no concept of a single FlexVol volume to handle metadata operations – every member volume in a FlexGroup volume is eligible to process metadata operations. As a result, FlexGroup volumes perform better than Infinite Volumes in most cases.

What kind of performance boost are we potentially seeing?

In preliminary testing of a FlexGroup against a single FlexVol, we’ve seen up to 6x the performance. And that was with simple spinning SAS disk. This was the setup used:

  • Single FAS8080 node
  • SAS drives
  • 16 FlexVol member constituents
  • 2 aggregates
  • 8 members per aggregate

The workload used to test the FlexGroup was a software build using Git. In the graph below, we can see that operations such as checkout and clone show the biggest performance boosts, as they take far less time to run to completion on a FlexGroup than on a single FlexVol.

[Graph: Git workload completion times, single FlexVol vs. FlexGroup]

Adding more nodes and members can improve performance, and adding AFF into the mix can help latency. Here’s a similar test comparison with an AFF system. This test also used Git, but compiled gcc instead of the Linux source code to give us more files.

[Graph: AFF Git workload comparison, single FlexVol vs. junctioned FlexVols vs. FlexGroup]

In this case, we see similar performance between a single FlexVol and FlexGroup. We do see slightly better performance with multiple FlexVols (junctioned), but doing that creates complexity and doesn’t offer a true single namespace of >100TB.

We also did some more recent AFF testing with the gcc-based Git workload, which gave us more files and folders to work with. The systems used were an AFF8080 (4 nodes) and an A700 (2 nodes).

[Graph: gcc compile completion times on the AFF8080 and A700]

Simple management

FlexGroup volumes allow storage administrators to deploy multiple petabytes of storage to clients in a single container in a matter of seconds. This provides capacity, as well as performance gains similar to what you’d see with multiple junctioned FlexVol volumes. (FYI, a junction is essentially just mounting a FlexVol volume to another FlexVol volume.)
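
For example, from the CLI, a single command can carve a FlexGroup volume out of multiple aggregates at once (the names, size and aggregate layout here are hypothetical; System Manager can do the same thing for you):

cluster::> volume create -vserver SVM1 -volume fg1 -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 8 -size 1PB -junction-path /fg1 -security-style unix

That one command builds 16 member constituents (8 per aggregate) behind a single junction path.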

In addition to that, there is compatibility out of the gate with OnCommand products. The OnCommand TME Yuvaraju B has created a video showing this, which you can see here:

Snapshots

This section was added after the blog post was published, per one of the blog comments. I simply forgot to mention it. 🙂

In the first release of NetApp FlexGroup, we’ll have access to snapshot functionality. Essentially, this works the same as regular snapshots in ONTAP – it’s done at the FlexVol level and will capture a point in time of the filesystem and lock blocks into place with pointers. I cover general snapshot technology in the blog post Snapshots and Polaroids: Neither Last Forever.

Because a FlexGroup is a collection of member FlexVols, we want to be sure snapshots are captured at the exact same time for filesystem consistency. As such, FlexGroup snapshots are coordinated by ONTAP to be taken at the same time. If a member FlexVol cannot take a snapshot for any reason, the FlexGroup snapshot fails and ONTAP cleans things up.
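
From an administrator’s point of view, nothing changes; the usual snapshot commands work against the FlexGroup itself (names below are hypothetical), and ONTAP coordinates the member volumes behind the scenes:

cluster::> volume snapshot create -vserver SVM1 -volume fg1 -snapshot pre_patch
cluster::> volume snapshot show -vserver SVM1 -volume fg1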

SnapMirror

FlexGroup supports SnapMirror for disaster recovery. This currently replicates up to 32 member volumes per FlexGroup (100 total per cluster) to a DR site. SnapMirror will take a snapshot of all member volumes at once and then do a concurrent transfer of the members to the DR site.
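
The workflow looks just like FlexVol SnapMirror; here’s a rough sketch with hypothetical SVM and volume names (this assumes the destination FlexGroup has already been created as a data protection volume):

cluster::> snapmirror create -source-path SVM1:fg1 -destination-path SVM1_DR:fg1_dst -type XDP -policy MirrorAllSnapshots
cluster::> snapmirror initialize -destination-path SVM1_DR:fg1_dst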

Automatic Incremental Resiliency

Also included in the FlexGroup feature is a new mechanism that seeks out metadata inconsistencies and fixes them, in real time, when a client requests access. No outages. No interruptions. The entire FlexGroup remains online while this happens, and the clients don’t even notice when a repair takes place. In fact, no one would know if we didn’t log a pesky EMS message in ONTAP to make sure a storage administrator knows we fixed something. It’s a pretty underrated aspect of FlexGroup, if you ask me.

How do you get NetApp FlexGroup?

NetApp FlexGroup is currently available in ONTAP 9.1 for general availability. It can be used by anyone, but should only be used for the specific use cases covered in the FlexGroup TR-4557. I also cover best practices in TR-4571.

In ONTAP 9.1, FlexGroup supports:

  • NFSv3 and SMB 2.x/3.x (RC2 for SMB support; see TR-4571 for feature support)
  • Snapshots
  • SnapMirror
  • Thin Provisioning
  • User and group quota reporting
  • Storage efficiencies (inline deduplication, compression, compaction; post-process deduplication)
  • OnCommand Performance Manager and System Manager support
  • All-flash FAS (incidentally, the *only* all-flash array that currently supports this scale)
  • Sharing SVMs with FlexVols
  • Constituent volume moves

To get more information, please email flexgroups-info@netapp.com.

What other ONTAP 9 features enhance NetApp FlexGroup volumes?

While FlexGroup as a feature is awesome on its own, there are also a number of ONTAP 9 features added that make a FlexGroup even more attractive, in my opinion.

I cover ONTAP 9 in ONTAP 9 RC1 is now available! but the features I think benefit FlexGroup right out of the gate include:

  • 15 TB SSDs – once we support flash, these will be a perfect fit for FlexGroup
  • Per-aggregate CPs – never bottleneck a node on an over-used aggregate again
  • RAID Triple Erasure Coding (RAID-TEC) – triple parity to add extra protection to your large data sets

Be sure to keep an eye out for more news and information regarding FlexGroup. If you have specific questions, I’ll answer them in the comments section (provided they’re not questions I’m not allowed to answer). 🙂

If you missed the NetApp Insight session I did on FlexGroup volumes, you can find session 60411-2 here:

https://www.brainshark.com/go/netapp-sell/insight-library.html?cf=12089#bsk-lightbox

(Requires a login)

Also, check out my blog on XCP, which I think would be a pretty natural fit for migration off existing NAS systems onto FlexGroup.

Behind the Scenes: Episode 57 – Scale Out Networking in ONTAP

Welcome to Episode 57, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week on the podcast, we invite Juan Mojica (@juan_m_mojica), Product Manager at NetApp, for a technical discussion about scale out networking in ONTAP. We cover IP Spaces, broadcast domains and subnets, as well as some other tidbits to help you understand how the network stack works in your cluster.

We originally had plans for another podcast on a new feature in ONTAP 9.1, but then we found out we couldn’t publish it until the week of Insight. So…. stay tuned! 😉

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

You can listen here:

What’s the deal with remote I/O in ONTAP?


I’m sure most of you have seen Seinfeld, so be sure to read the title in your head as if Seinfeld is delivering it.

I used a comedian as a starter because this post is about a question that I get asked – a lot – that is kind of a running joke by now.

The set up…

When Clustered Data ONTAP first came out, there was a pretty big kerfuffle (love that word) about the architecture of the OS. After all, wasn’t it just a bunch of 7-Mode systems stitched together with duct tape?

Actually, no.

It’s a complete re-write of the ONTAP operating system, for one. The NAS stack from 7-Mode was gutted and became a new architecture built for clustering.

Then, in 8.1, the SAN concepts in 7-Mode were re-done for clustering.

So, while a clustered Data ONTAP cluster is, at the hardware level, a series of HA pairs stitched together with a 10Gb cluster network, the operating system has been turned into what I like to call a storage blade center. Your storage system spans a cluster of up to 24 physical hardware nodes, effectively abstracting the hardware away and providing a single management plane for the entire subsystem.

Every node in a cluster is aware of every other node, as well as every other storage object. If a volume lives on node 1, then node 20 knows about it and where it lives via the concept of a replicated database (RDB).

Additionally, the cluster also has a clustered networking stack, where an IP address or WWPN is presented via a logical interface (a LIF). While SAN LIFs have to stay put and leverage host-side pathing for data locality, NAS LIFs have the ability to migrate across any node and any port in the cluster.

However, volumes are still located on physical disks and owned by physical nodes, even though you can move them around via volume move or vol rehost. LIFs are still located on physical ports and nodes, even though you can move them around and load balance connections on them. This raises the question…

What is the deal with remote I/O in ONTAP?

Since you can have multiple nodes in a cluster and a volume can live on only one node (well, unless you want to check out FlexGroups), and since data LIFs live on single or aggregated ports on a single node, you are bound to run into scenarios where data operations traverse the backend cluster network. The alternatives are the headache of making sure every client mounts a specific IP address to guarantee data locality, or leveraging one of the NAS data locality features, such as pNFS or node referrals on initial connection (available for NFSv4.x and CIFS/SMB). I cover some of the NFS-related data locality features in TR-4067, and CIFS autolocation is covered in TR-4191.
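
As an example, pNFS just needs NFSv4.1 enabled on both ends; ONTAP then hands out layouts that steer reads and writes to the node that owns the volume. A rough sketch (SVM, hostname and paths below are hypothetical):

cluster::> vserver nfs modify -vserver SVM1 -v4.1 enabled -v4.1-pnfs enabled
client# mount -t nfs -o vers=4.1 svm1-data.example.com:/fg1 /mnt/fg1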

In SAN, we have ALUA to manage that locality (or optimized paths), but even adding an extra layer of protection in the form of protocol locality can’t avoid scenarios where interfaces go down or volumes move around after a TCP connection has been established.

That backend network? Why, it’s a dedicated 10Gb network with 2-4 dedicated ports per node. No data other than cluster operations is allowed on it. Data I/O traverses the network in a proprietary protocol known as SpinNP, which leverages TCP to guarantee the arrival of packets. And, with the advent of 40Gb Ethernet and other speedier methods of data transfer, I’d be shocked if we didn’t see that backend network improve over the next 5-10 years. The types of operations that traverse the cluster network include:

  • SpinNP for data/local snapmirror
  • ZAPI calls

That’s pretty much it. It’s a beefy, robust backend network that is *extremely* hard to saturate. You’re more likely to bottleneck somewhere else (like your client) before you overload a cluster network.

So now that we’ve established that remote I/O will likely happen, let’s talk about if that matters…

The punchline


Remote I/O absolutely adds overhead to operations. There’s no technical way around saying it; suggesting there is no penalty would be dishonest. The amount of penalty, however, varies depending on the protocol. This is especially true when you consider that NAS operations leverage a fast path when data access is local.

But the question wasn’t “is there a penalty?” The question is “does it matter?”

I’ll answer with some anecdotal evidence – I spent 5 years in support, working on escalations for clustered Data ONTAP for 3 of those years. I closed thousands of cases over that time period. In that time, I *never* fixed a performance issue by making sure a customer used a local data path.  And believe me, it wasn’t for lack of effort. I *wanted* remote traffic to be the root cause, because that was the easy answer.

Sure, locality can help when dealing with really low-latency applications, such as Oracle. But in those cases, you architect the solution with data locality in mind. In the vast majority of other scenarios, the “remote I/O” penalty is pretty much irrelevant and causes more hand-wringing than necessary.

The design of clustered Data ONTAP was intended to help storage administrators stop worrying about the layout of the data. Let’s start allowing it to do its job!

Behind the Scenes: Episode 51 – Guided Problem Solving and Live Chat Support

Welcome to Episode 51, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week, we welcome Ross Ackerman (@TheRossAckerman) to talk about some improvements to the NetApp Support site experience, and how customers can leverage support without having to open cases or pick up the phone.

Guided Problem Solving

The first thing we discuss is a feature called “Guided Problem Solving.” This feature is exactly what it sounds like – a guided problem solver. If you want more information, check out the white paper on Guided Problem Solving and Chat.

When you land on the NetApp support site, you’ll see a green box in the middle of the page:

[Screenshot: Guided Problem Solving box on the NetApp Support site]

Right now, those are the only options. Expect more available products in this feature in the near future…

From there, click on the solution you need to work on. That will open a page with a subset of solutions:

[Screenshot: Guided Problem Solving solution categories]

Since I am the NFS dude, I picked NFS.

When you click on the desired subject, you get a new page. It starts off with the setup and configuration docs, mainly because that’s one of the first things people are trying to find.

However, there are also areas to find KBs, Tech Reports and community posts on the selected subject.

[Screenshot: KBs, technical reports and community posts for the selected subject]

Of course, if the provided information doesn’t help you, click “create a case.”

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

You can listen here:

The Joy of Sec: Realmd

Recently, the esteemed Jonathan Frappier (@jfrappier) posted an article on setting up Kerberos for use with Ansible. My Kerberos senses started to tingle…

kerb-sense

While Jonathan was referring to Ansible, it made me remember that this question comes up a lot when trying to use Kerberos with Linux clients.

Kerberos isn’t necessarily easy

When using Kerberos with Active Directory and Windows clients, it’s generally pretty straightforward, as the GUI does most of the work for you. When you add a Windows box to a domain, the SPN and machine account principal are auto-populated in the AD KDC.

The keytab file gets ported over to the client and, provided you have a valid Windows login, you can start using Kerberos without ever actually knowing you are using it. In fact, most people don’t realize they’re using it until it breaks.

Additionally, even if Kerberos isn’t working in Windows, there is the fallback option of NTLM authentication, so if you can’t get a ticket to access a share, you could always use the less secure auth method (unless you disabled it in the domain).

As a result, in 90% of the cases, you never even have to think about Kerberos in a Windows-only environment, much less know how it works. I know this from experience as a Windows administrator in my earlier IT days. Once I started working for NetApp support, I realized how little I actually knew about how Windows authentication worked.

So, say what you will about Windows, but it is *way* simpler in most cases for daily tasks like authentication.

Linux isn’t necessarily hard

One of the main things I’ve learned about Linux as I transitioned from solely being a “Windows guy” into a hybrid-NAS guy is that Linux isn’t really that hard. It’s just… different.

And by “different,” I mean it in terms of management. The core operating systems of Windows and Linux are essentially identical in terms of functionality:

  • They both boot from a kernel and load configurations via config files
  • They both leverage file system partitions and services
  • They both can be run on hardware or software (virtualized)
  • They both require resources like memory and CPU

The main differences between the two, in my opinion, are the open source aspect and the way you manage them. Naturally, there are a ton of other differences and I’m not interested in debating the merits of the OS. My point is simply this: Linux is only hard if you aren’t familiar with it.

That said, some things in Linux can be very manual processes. Kerberos configuration, for example, used to be a very convoluted process. In older Linux clients, you had to roughly do the following to get it to work:

  • Create a user or machine account in the KDC manually (the Kerberos principal)
  • Assign SPNs manually to the principal
  • Configure the desired enctypes on the principal manually
  • Create the keytab for the principal manually (using something like ktpass)
  • Copy the keytab to the Linux client
  • Install the keytab to the client manually (using something like ktutil)
  • Configure the client to use secure NFS and configure the KDC realm information manually
  • Start the GSSD service manually and configure it to start on boot
  • Configure DNS
  • Ensure the time skew is within 5 minutes/configure NTP
  • Configure LDAP on the NFS client manually

That’s all off the top of my head. I’m sure I’m missing something, mainly because that’s a LONG list. But Linux is getting better at automating these tasks. CentOS 7/RHEL 7 took a big leap in that regard by including realmd.

If you’re looking for the easiest way to configure Kerberos…

Use realmd. It’s brilliant.

It automates most of the Kerberos client configuration tasks I listed above. Sure, you still have to install it and a few other tools (like SSSD, the Kerberos workstation package, etc.) and configure the realm information, NTP and DNS settings, but after that, it’s as simple as running “realm join.”
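
On CentOS 7/RHEL 7, the whole dance looks roughly like this (the package list and domain name are illustrative):

# yum install -y realmd sssd adcli oddjob oddjob-mkhomedir samba-common-tools krb5-workstation
# realm join -U Administrator DOMAIN.EXAMPLE.COM
# realm list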

This acts a lot like a Windows domain join in that it:

  • Creates a machine account for you
  • Creates the SPNs for you
  • Creates the keytab for you
  • Adds the keytab file to the client for you
  • Configures SSSD to use Windows AD for LDAP/Identity management for you

Super simple. I cover it in the next update of TR-4073 (update to that coming soon… stay tuned) as it pertains to NetApp storage systems, but there are plenty of how-to guides for just the client portion out there.

Happy Kerberizing!

Spreading the love: Load balancing NAS connections in ONTAP


I can be a little thick at times.

I’ll get asked a question a number of times, answer the question, and then forget the most important action item – document the question and answer somewhere to refer people to later, when I inevitably get asked the same question.

Some of the questions I get asked about fairly often as the NetApp NFS Technical Marketing Engineer involve DNS, which is only loosely associated with NFS. Go figure.

But, because I know enough about DNS to have written a blog post on it and a Technical Report on our Name Services Best Practices (and I actually respond to emails), I get asked.

These questions include:

  • What’s round robin DNS?
  • What other load balancing options are there?
  • What is on-box DNS in clustered Data ONTAP?
  • How do I ensure data access is local?
  • How do I set it up?
  • When would I use on-box DNS vs DNS round robin?

So, in this blog, I’ll try to answer most of those at a high level. For more detail, see the new TR-4523: DNS Load Balancing in ONTAP.

What’s round robin DNS?

Remember when you were in school and you played “duck duck goose“? If you didn’t, click the link on the term and read about it.

But essentially, the game is: everyone sits in a circle, someone walks around the circle and taps each person and says “duck” and then when they want to initiate the chase, they yell “GOOSE!” and run around the circle to sit before the person catches them.

That’s essentially round robin DNS.

You create multiple A/AAAA records, associate them with the same host name, and away you go! The DNS server delivers a different IP address for each request of that hostname, in ABCD/ABCD fashion. No real rhyme or reason, just first come, first served.
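
In a BIND zone file, that’s nothing more than multiple A records sharing the same name (the hostname and IP addresses below are made up):

; one A record per data LIF, all under the same hostname
nas     IN  A   10.10.10.11
nas     IN  A   10.10.10.12
nas     IN  A   10.10.10.13
nas     IN  A   10.10.10.14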

What other DNS load balancing options are there?

There are third-party load balancing appliances, such as F5 BIG-IP (not an endorsement, just an example). But those cost money and require administration.

In ONTAP, however, there is a not-so-well-known feature for DNS load balancing called “on-box DNS load balancing” that is intended to incorporate intelligent load balancing for DNS requests into a cluster.

What is on-box DNS load balancing?

On-box DNS load balancing in ONTAP uses a patented algorithm to determine the best possible data LIFs on the best possible nodes to return to clients.

Basically, it looks a bit like this:

[Diagram: on-box DNS load balancing request flow]

  1. The client makes a DNS request to the DNS servers in its configuration.
  2. The DNS server notices that the request is for a specific zone and uses its zone forwarder to pass that request to the cluster data LIFs acting as name servers.
  3. The cluster leverages its DNS application process and a weight file to determine which of the IP addresses configured in that DNS zone should be returned.
  4. The algorithm factors in CPU utilization, throughput, etc. when making the determination.
  5. The data LIF IP address is passed back to the DNS server, and then to the client.

Easy peasy.


How do I ensure data locality?

The short answer: With on-box DNS, you can’t. But does it matter?

In clustered Data ONTAP, if you have multiple nodes and multiple data LIFs, you might end up landing on a node’s data LIF that is not local to the volume being requested. That can incur a slight latency penalty as the request traverses the backend cluster network.

In a majority of cases, this penalty is negligible to clients and applications, but with latency-sensitive applications (especially in flash environments), this penalty can hurt a little. Using local network connections to data volumes for NAS uses a concept of “fast path” that bypasses things that the remote connections need to do. I cover this in a little more detail in TR-4067 and in TECH::Data LIF best practices for NAS in cDOT 8.3.

In cases where you absolutely *need* data access to be local to the node, you would need to mount those local data LIFs specifically. Create A/AAAA records with node names incorporated to help discern which LIFs are on which nodes.

But in most cases, it doesn’t hurt to have remote traffic – in my 5 years in support, I never fixed a performance issue by making data access local to the node.

How do I set it up?

It’s pretty straightforward. I cover it in detail in TR-4523: DNS Load Balancing in ONTAP. In that TR, I cover Active Directory and BIND environments.

For a simple summary:

  1. Configure data LIFs in your storage virtual machine to use -dns-zone [zone name]
  2. Select data LIFs in your storage virtual machine that will act as name servers and listen for DNS queries on port 53 with “-listen-for-dns-query true”. I’d recommend multiple LIFs to provide fault tolerance.
  3. Add a DNS forwarding zone (subdomain in BIND, delegation or conditional forwarder in AD) on the DNS server. Use the data LIFs acting as name servers in the configuration and use the zone specified in -dns-zone.
  4. Add PTR records for the LIFs as needed.
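
Steps 1 and 2 translate to something like this on the cluster (SVM, LIF and zone names are hypothetical; steps 3 and 4 happen on the DNS server itself):

cluster::> network interface modify -vserver SVM1 -lif data1 -dns-zone nas.svm1.example.com -listen-for-dns-query true
cluster::> network interface modify -vserver SVM1 -lif data2 -dns-zone nas.svm1.example.com -listen-for-dns-query true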

That’s about it.

When to use on-box DNS vs Round Robin DNS?

This is one of the trickier questions I get, because it’s ultimately due to preference.

However, there are some guidelines…

  • If the cluster is 1 or 2 nodes in size, it probably makes sense from an administration perspective to simply use round robin DNS.
  • If the cluster is larger than 2 nodes or will eventually scale out to more than 2 nodes, it probably makes sense to get the forwarding zones set up and use on-box DNS.
  • If you require data locality or plan on using features such as NFS node referrals, SMB node referrals or pNFS, then the load balance choice doesn’t matter much – the locality features will override the DNS request.

Conclusion

So there you have it – the quick and dirty rundown of using DNS load balancing for NAS connections. I’m personally a big fan of on-box DNS as a feature because of the notion of intelligent calculation of “best available” IP addresses.

If you have any questions about the feature or the new TR-4523, please comment below.

Behind the Scenes: Episode 46 – FlexGroups!


Welcome to Episode 46, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This is yet another in the series of episodes for ONTAP 9 month on the podcast.


Be sure to check out the post on FlexGroups here:

FlexGroups: An evolution of NAS

This week, we get to chat about my newest pet project, FlexGroups. In addition to my work on NFS and Name Services, I am picking up this new and exciting NAS enhancement. Look for more information on this blog soon, as well as at Insight!

We brought in the Product Managers for FlexGroups, Sunitha Rao and Shriya Paramkusam, as well as the principal developer on FlexGroups, Richard Jernigan. Richard is a long time NetApp developer who has worked on previous iterations of distributed filesystems in ONTAP.

FlexGroups are a new distributed NAS filesystem, intended to provide up to 20PB of capacity, 400 billion (!) files and automated load balancing to ensure your cluster gets even distribution of load. I’ll be writing up a new blog post soon about them in more detail.

But for now, check out the podcast…

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

The official blog is here:

http://community.netapp.com/t5/Technology/Tech-ONTAP-Podcast-Episode-46-FlexGroups/ba-p/120858

The podcast episode is here: