TECH:: JLaaS – Justice League as a Service

Being a superhero is tough – it doesn’t pay much (unless you’re Booster Gold), and guys like Batman are actually hemorrhaging money. Crime grows faster than they can handle it, and someone has to pay the bills for that giant space station they call a home base.

So they do what everyone ends up having to do – they get jobs.

Find out which ones they get at DataCenterDude.com!

JLaaS: Justice League as a Service

TECH::Jurassic IT – Is NetApp a dinosaur?

Google “netapp dinosaur” and you get some… interesting articles.

You’ll find quotes like:

“a business in stagnation”

“obsessed with Data ONTAP”

“ONTAP showing its age”

I always find it funny when someone says a software company is overly obsessed with their own OS. I assume Apple is too obsessed with iOS, Microsoft too obsessed with Windows, etc.

But I digress…

Jurassic IT

Calling a company a dinosaur is essentially implying that company is doomed for extinction. It’s suggesting that the company is slow, plodding, unable to get out of its own way.

It’s also an AWFUL analogy, if you know your dinosaurs.

There are essentially two types of people that know dinosaurs better than anyone else: paleontologists and parents of small children.

I have a two year old son. So that makes me an expert*. 🙂

* On the internet, you can call yourself an expert at pretty much anything without repercussion.

Extinction

Dinosaurs lived on the planet for roughly 165 million years. Humans have lived on Earth for around 200,000 years. Data storage? Maybe 70-75 years? (Punch cards count!)

It’s a little silly for a human to mock how long dinosaurs lived on Earth, just like it’s silly for any storage startup to call NetApp a “dinosaur.”

As for extinction, the implication is that dinosaurs were so stupid and plodding, they offed themselves – and it’s supposed to be some sort of analogy to what NetApp is doing to themselves. However, that’s not at all the case with dinosaurs (nor NetApp).

For the entirety of those 165 million years, dinosaurs were at the top of the food chain. They evolved over the course of time to adjust and adapt to their environment. The general consensus from scientists is that a catastrophic world event took the dinosaurs out – an ice age, an asteroid impact, the Deccan Traps eruptions – but it’s not as if the dinosaurs decided they weren’t going to change and avoid extinction.

Plus, let’s think a little more about extinction – dinosaurs aren’t really extinct; their descendants are birds and reptiles. So the whole notion that a dinosaur is destined for extinction, and the use of that as a metaphor for a company, is based on a false premise.

Slow and steady

Another implication of calling a company a dinosaur is that they are slow to innovate (evolve) and so big that they can’t get out of their own way. Which, in the world of dinosaurs, is another fallacy.

Sure, the brontosaurus was massive and slow.

But what about the velociraptor? Or the ornithomimids – small dinos that were the fastest of the bunch, able to outrun Usain Bolt in his prime?

There were definitely dinosaurs out there that were agile, fast and able to maneuver as their surroundings dictated.

Tiny brains?

The notion that dinosaurs were kind of stupid? Yeah, that’s accurate.

But implying that a company that’s been around nearly 30 years, contributing to new SNIA and IETF standards every year, filing hundreds of patents, evolving with products like All Flash FAS and adding value to the storage industry is somehow “stupid” is just intellectually dishonest.

NetApp is a dinosaur

When you think about it for a while, the critics are right – NetApp is a dinosaur. A massive, fast, deadly, top-of-the-food-chain, evolving dinosaur. They face the same challenges the rest of the storage industry faces, just as the original dinos faced whatever world event wiped them out.

And guess what? People kind of love dinosaurs.

Jurassic World is setting box office records (it surpassed Avengers 2 this weekend) and is showing us all just what dinosaurs can do. My son’s favorite animal on Old MacDonald’s farm is a stegosaurus. Yes, that’s ridiculous, but that’s kids for you.

I’ll take being called a dinosaur any day.

TECH::Partial Givebacks during Storage Failovers in NetApp’s Clustered Data ONTAP

If you’ve ever used clustered Data ONTAP and done storage failover tests, you may have noticed something strange when you attempted a giveback.

cluster::> storage failover giveback -ofnode node1
Info: Run the storage failover show-giveback command to check giveback status.

cluster::> storage failover show
                              Takeover 
Node           Partner        Possible State Description 
-------------- -------------- -------- -------------------------------------
node1          node2          true     Connected to node2. Waiting
                                       for cluster applications to come
                                       online on the local node.
node2          node1          true     Connected to node1, Partial giveback
2 entries were displayed.

cluster::> storage failover show
                              Takeover 
Node           Partner        Possible State Description 
-------------- -------------- -------- -------------------------------------
node1          node2          true     Connected to node2
node2          node1          true     Connected to node1, Giveback
                                       of one or more SFO aggregates failed
2 entries were displayed.

Partial Givebacks (No takebacks)

So, what happened?

To better understand that, we need to understand how storage failovers work in clustered Data ONTAP and how they differ from storage failovers in 7-Mode. Find out more on the complete post on DataCenterDude.com!
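In the meantime, here’s a hedged sketch of where to look when a giveback comes back partial (commands from clustered Data ONTAP 8.x; verify the options against your version’s man pages before using them):

cluster::> storage failover show-giveback
(shows per-aggregate giveback status and which subsystem, if any, vetoed the giveback)

cluster::> event log show -messagename gb*
(surfaces giveback veto events in the event log)

cluster::> storage failover giveback -ofnode node1 -override-vetoes true
(retries the giveback and overrides soft vetoes – use with care)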

Partial Givebacks during Storage Failovers in NetApp’s Clustered Data ONTAP

TECH::Storage Virtual Machine (SVM) DR in cDOT

With the release of clustered Data ONTAP 8.3.1 comes a whole new and exciting set of features, such as:

  • Improved inline compression
  • FlashEssentials flash optimizations
  • Online foreign LUN import

But the one I’ll cover here is Storage Virtual Machine DR, which is a key component of the enterprise storage story.

Let’s start off with some terminology definitions:

Clustered Data ONTAP

From TR-3982:

Clustered Data ONTAP is enterprise-capable, unified scale-out storage. It is the basis for virtualized
shared storage infrastructures. Clustered Data ONTAP is architected for nondisruptive operations,
storage and operational efficiency, and scalability over the lifetime of the system.

A Data ONTAP cluster typically consists of fabric-attached storage (FAS) controllers: computers
optimized to run the clustered Data ONTAP operating system. The controllers provide network ports that
clients and hosts use to access storage. These controllers are also connected to each other using a
dedicated, redundant 10-gigabit Ethernet interconnect. The interconnect allows the controllers to act as a
single cluster. Data is stored on shelves attached to the controllers. The drive bays in these shelves can
contain hard disks, flash media, or both.

Storage Virtual Machine (SVM)

From TR-3982:

A cluster provides hardware resources, but clients and hosts access storage in clustered Data ONTAP
through storage virtual machines (SVMs). SVMs exist natively inside clustered Data ONTAP. They define
the storage available to the clients and hosts. SVMs define authentication, network access to the storage
in the form of logical interfaces (LIFs), and the storage itself in the form of SAN LUNs or NAS volumes.
Clients and hosts are aware of SVMs, but they may be unaware of the underlying cluster. The cluster
provides the physical resources the SVMs need in order to serve data. The clients and hosts connect to
an SVM, rather than to a physical storage array.

Like compute virtual machines, SVMs decouple services from hardware. Unlike compute virtual
machines, a single SVM can use the network ports and storage of many controllers, enabling scale-out.
One controller’s physical network ports and physical storage also can be shared by many SVMs, enabling
multi-tenancy.

SnapMirror

NetApp® SnapMirror® technology provides fast, efficient data replication and disaster recovery (DR) for your critical data.

Use a single solution across all NetApp storage arrays and protocols. SnapMirror technology works with any application, in both virtual and traditional environments, and in multiple configurations, including hybrid cloud.

Tune SnapMirror technology to meet recovery-point objectives ranging from minutes to hours. Fail over to a specific point in time in the DR copy to recover at once from mirrored data corruption.

Disaster Recovery (DR)

This is pretty standard; it’s a set of policies and procedures put in place for enterprise IT organizations to recover from a catastrophic loss of service at a primary site. Ideally, the failover will be instantaneous and service will be restored quickly, with as little disruption as possible.

No one needs DR… until they do.

One of the most criminally ignored areas of IT is backup and DR, because it costs money and doesn’t immediately make you any money. The perceived ROI is low, so it becomes a low priority when it should be one of the highest priorities.

Luckily, the cloud is making DR more of a reality (through things like DRaaS, offered by Cloud ONTAP), as cloud storage prices are dropping and allowing companies to start taking DR more seriously. And remember – your data is only as good as your last restore test.

What is SVM DR?

Storage Virtual Machines (SVMs) are essentially blades running Data ONTAP. They act as their own tenants in a cluster and could represent individual divisions, companies or test/prod environments.

However, even with multiple SVMs, you still end up with a single point of failure – the storage system itself. If a meteor hit your datacenter, your cluster would be toast and your clients would be dead in the water, unless you planned for disaster recovery accordingly.

Oops. Did we ever set up DR?

SVM DR allows disaster recovery capability at a granular SVM level, as opposed to having to replicate an entire cluster or filer. This is analogous to the vfiler DR functionality available in 7-Mode.

SVM DR does the following:

  • Leverages NetApp SnapMirror to replicate data to a secondary site.
  • Leverages the new Configuration Replication Service (CRS) application to replicate SVM configuration, including CIFS/SMB shares, network information, NFS exports, etc.
  • Allows two flavors of SVM DR – Identity Preserving and Identity Discarding.

Identity Preserving

This replicates the primary SVM’s configuration and allows us to change to that identity in a failover scenario. One use case for this would be DR on the same physical campus/site (two separate buildings).

The following graphic shows what is (and is not) replicated for SVM DR in Identity Preserve:

svmdr-replicate

Identity Discarding

This allows us to use a different network configuration on a secondary SVM and bring it online as its own identity. A use case for this would be DR to a different geographical location in the world.

The following graphic shows what is (and is not) replicated for SVM DR in Identity Discard:

svmdr-discard

How it works

The flow of operation in SVM DR is essentially:

  • Create SVM DR relationship/schedule
  • Initialize the SnapMirror
  • Ensure updates are successful
  • Test DR
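As a rough CLI sketch, the setup steps above might look like this (cluster and SVM names are made up; check TR-4015 and the Express Guides for the exact syntax in your release):

destination::> vserver create -vserver svm1_dr -subtype dp-destination
destination::> snapmirror create -source-path svm1: -destination-path svm1_dr: -type DP -identity-preserve true -schedule hourly
destination::> snapmirror initialize -destination-path svm1_dr:

Setting -identity-preserve to true or false is what chooses between the Identity Preserving and Identity Discarding flavors.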

When we test (or do a real failover) to DR, the following happens:

  • SnapMirror break; break means we can now do R/W operations
  • SnapMirror goes from snapmirrored to broken-off
  • Depending on identity type, we either preserve or discard old identity
  • SVM DR destination goes from dp-destination to default
  • Once source site is back up, we can do a resync/flip-resync

When the flip resync occurs:

  • Data written to DR destination gets synced back to source to ensure we have current copy of data and config; this uses a new SVM DR relationship
  • After we’re synced up, the original SVM DR relationship is re-established
  • The flip resync SnapMirror gets broken off and removed
  • SVM DR destination changes from default to dp-destination
  • Snapmirror goes from broken-off to snapmirrored
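A hedged sketch of what a failover test and flip resync might look like from the CLI (names are examples, and the real procedure in TR-4015 includes steps such as stopping the source SVM first):

destination::> snapmirror break -destination-path svm1_dr:
destination::> vserver start -vserver svm1_dr

Then, once the source site is back up, the flip resync uses a reverse relationship:

source::> snapmirror create -source-path svm1_dr: -destination-path svm1: -type DP -identity-preserve true
source::> snapmirror resync -destination-path svm1: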

Some things to keep in mind

While SVM DR makes heavy use of SnapMirror functionality, it is not a true SnapMirror in terms of how it is managed.

  • qtrees in the SVM root volume do *not* get replicated.
  • If you mount a qtree under SVM root and then mount a volume below that qtree, SVM DR will fail unless there is a qtree with the same name created in the destination SVM root volume.
  • All non-SVM root volumes (data volumes) are type DP.
  • You cannot manage SVM DR SnapMirrors independently. They must be managed via the SVM level as a single entity.
  • SVM DR snapshots are named with vserverdr….
  • If reverting from 8.3.1, all SVM DR relationships and snapshots must be deleted before revert.
  • Source and destination should be at 8.3.1 or later; source version should never be higher than destination.
  • Source and destination must have SnapMirror licenses.
  • Destination cluster should have at least one non-root aggregate with at least 10GB free space for configuration replication.
  • Destination cluster must have same licenses (ie, CIFS, NFS, FCP, etc.) as source to ensure full functionality as source upon failover.
  • If using NFS mounts, clients must remount the volumes on DR failover, as the FSIDs will change. NOTE: ONTAP 9 now supports FSID preservation on SVM DR!

For more information on SVM DR, be sure to check TR-4015 for updates as 8.3.1 goes to general availability (GA – find out what that is here) and follow the SVM DR/Multi-tenancy TME Doug Moore on Twitter @mooredo21. Doug will also be presenting SVM DR sessions at NetApp Insight 2015 in Las Vegas and Berlin.

I’ll also be presenting some sessions at NetApp Insight 2015, so keep checking back at whyistheinternetbroken.com for updates!

If you’re interested in step by step guides of how to set up SVM DR, check out the Express Guides for your version of ONTAP!

TECH::Clustered Data ONTAP 8.3.1 is now in general availability (GA)!

Looking for cDOT 8.3.2? Check it out here:

https://whyistheinternetbroken.wordpress.com/2015/11/19/clustered-data-ontap-832/

cDOT 8.3.2 is the first release that offers Copy-Free Transition!

UPDATE: Back in June, clustered Data ONTAP (cDOT) 8.3.1 became available as a release candidate (RC). (I cover what a release candidate is in my “What’s the latest 8.3 release?” blog.)

Now, it’s reached General Availability! That means: have at it!

NOTE: Be sure to get the latest patch release of 8.3.1, which is currently:

http://mysupport.netapp.com/NOW/download/software/ontap/8.3.1P2/

I also cover the latest All Flash FAS promotional updates in “NetApp is kicking some flash!”

Despite having the designation of a “minor version,” this release brings significant new features to clustered Data ONTAP, which is why it went through a release candidate (RC) cycle first. If you’re running clustered Data ONTAP (especially for NAS environments), this is the version to be on, hands down.

Each section includes links to documentation. Some of the docs might not be updated until 8.3.1 goes to General Availability (GA), so keep checking back!

New features

The 8.3.1 release brings a number of new features that will greatly improve performance, transition to clustered Data ONTAP and overall cluster resiliency.

Improved inline data compression

Data compression in clustered Data ONTAP allows for greater storage efficiency by compressing data within a FlexVol on primary, secondary and archive storage.

Some of the improvements include:

  • Support for all workload environments
  • Optimization for All Flash FAS (AFF) systems (where compression is enabled by default)
  • Adaptive compression
  • Improved read performance
  • Sub-file clone support on compressed volumes

For more information regarding compression in 8.3.1, see the product documentation.

SnapVault support for inline adaptive data compression

In cDOT 8.3.1, you can leverage the new inline adaptive data compression with SnapVault (provided both source and destination support it)!

For more information on this, see TR-4183.

Foreign LUN Import Improvements

cDOT 8.3.1 brings two major improvements to Foreign LUN Import (FLI), which greatly enhances the transition story:

  • Support for ONLINE FLI!
  • FLI Throttling

This includes the ability to import LUNs from 7-Mode to a cDOT cluster, making transition *that* much easier!

For more information regarding Foreign LUN import in 8.3.1, see the product documentation.

Also, check out the new FLI TR-4442!

2-Node MetroCluster (MCC)

Prior to clustered Data ONTAP 8.3.1, MetroCluster was supported only in 4-node configurations. cDOT 8.3.1 introduces support for 2-node MCC configurations, with a single-node cluster at each site.

For more information on MetroCluster in cDOT, see TR-4375.

Security Audit Log Forwarding

cDOT 8.3.1 now allows forwarding of the command-history logs to a remote server, giving storage administrators more flexibility in security management and auditing.

For more information on events in cDOT, see TR-4303.

Support for cluster peering (SnapMirror) in non-Default IP Spaces

I cover IP Spaces and Broadcast Domains in cDOT in my DataCenterDude blog post. This new feature in cDOT 8.3.1 allows cluster peering – and therefore SnapMirror relationships – to use non-default IP Spaces. The full-mesh network requirement seen in previous versions of cDOT is now required only at the IP Space level.

For more information on networking in cDOT, see TR-4182.
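As a rough example, the peering setup might look like this (the IP Space name and address are made up, and the intercluster LIFs must also live in that IP Space):

cluster1::> network ipspace create -ipspace mirror_ips
cluster1::> cluster peer create -peer-addrs 192.168.1.10 -ipspace mirror_ips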

NAS Improvements

NAS has vastly improved in cDOT 8.3.1 – in fact, this is the recommended clustered Data ONTAP version for all NAS environments. The improvements include:

  • Ability to modify credential cache timeouts
  • New options including cifs.nfs_root_ignore_acl, nfs.ntacl_display_permissive_perms
  • SMBv3 Encryption Support
  • Better export policy rule cache handling
  • Better netgroup cache handling
  • Support for Windows NFS (previously not supported in any 8.3.x release)

For more information on NAS improvements in 8.3.1, see TR-4067 and {need CIFS/SMB TR}

And last, but certainly not least…

Storage Virtual Machine Disaster Recovery!

That’s right! The analog to vfiler DR is now available in cDOT 8.3.1. You can replicate entire SVM configurations and data to remote sites and failover when disaster strikes. For more information on SVM DR, see my blog post SVM DR in cDOT!

Supported platforms

The following shows supported platforms for 8.3.1.

  • FAS2xx0: FAS2220, FAS2240
  • FAS/V 3xx0: FAS/V 32x0 all models except 3210
  • FAS/V 6xx0: FAS/V 62x0 all models
  • FAS 80xx: 8080, 8060, 8040, 8020
  • FAS 25xx: 2554, 2552, 2520

Systems that are NOT supported with AFF in 8.3.1:

  • FAS/V 3xx0: 31x0 all models, 3210
  • FAS/V 6xx0: 6040, 6080
  • IBM n-Series

TECH:: NetApp is kicking some flash!

Picture this…

You’re a storage administrator, and your boss has just told you to go out and find a suitable flash array for your production workloads. The problem is, everything you’ve heard about flash is that there isn’t a flash array that can do unified protocols (SAN and NAS) *and* deliver the top-line performance with built-in data protection at the lowest cost per GB out there.

That changes today.

Customers and partners alike have been asking for an affordable, dependable and reliable high performance, enterprise-ready all flash system – and NetApp listened.

Recently, there has been a lot of talk about flash replacing disk as a primary storage mechanism in a datacenter. Because of the price point, performance and other factors, flash storage has become coveted in IT organizations.

There are plenty of storage vendors out there, but only one has the proven track record in enterprise environments and the ability to offer unified protocol support, flash-to-disk-to-cloud, non-disruptive operations (NDO), and integrated data protection on a single storage system.

NetApp All Flash FAS is stepping up its game with a new, aggressive pricing model that allows enterprise IT departments to implement a scale-out All Flash storage system without needing to sell a kidney to do it.

Now, you can get:

  • Consistent high performance at a low latency
  • Ability to scale up to 4 million IOPS and 16PB of effective capacity
  • The enhanced features, non-disruptive operations and flexibility of clustered Data ONTAP
  • The only all flash array that supports combining hybrid and all flash systems into a unified storage resource!

Best of all, it’s very competitive in the industry – as low as $5/GB!

This includes software license bundles, support, and a 3 year basic warranty. So, for competitive, if not superior pricing, you get world-class, enterprise-ready flash storage from the only vendor that can offer enterprise-ready flash storage!

Think that’ll make your boss happy?

Just think about how happy he/she will be when you tell them that you bought an all flash system that can also incorporate spinning disk for all those workloads that don’t need to live on flash, such as archives, home directories and other things that aren’t so performance hungry.

Because, let’s be honest with ourselves – you don’t need flash for everything.

If anyone tells you otherwise, they’re being flat-out dishonest.

That same unified storage system that has the performance and efficiency benefits of flash also has the ability to non-disruptively move data to spinning disk once those performance needs are met.

Have a seasonal workload that needs top line performance for a short period of time, but doesn’t do anything the rest of the year? Use your All Flash FAS nodes for the performance and then use volume move to migrate the data to spinning disk, freeing up valuable flash storage real estate for your needy workloads.

That gives you a storage system that is ideal for any and all workloads. Want to do VDI, Oracle databases and SQL on flash and archives, home directories and other capacity-based workloads in the same namespace?

NetApp is the only vendor that offers that option.

What about 8.3.1? Does that get me anything?

Heck yes!

With the new inline data compression enhancements in clustered Data ONTAP 8.3.1, you get:

  • Support for all workload environments
  • Performance and efficiency optimization for All Flash FAS (AFF) systems (where compression is enabled by default) known as “FlashEssentials”
  • Adaptive compression 
  • Improved read performance
  • Sub-file clone support on compressed volumes

In fact, in some workload scenarios, using inline compression in clustered Data ONTAP 8.3.1 with AFF brings better performance than not using inline compression in cDOT 8.3!

That’s right! You get faster results using compression than not using it. Faster performance and lower capacity? Yes, please!

For more information on 8.3.1 features, check out my post on specific enhancements in clustered Data ONTAP 8.3.1.

Is it cloud ready?

You’re in luck – clustered Data ONTAP is ideal for the cloud.

Got a private cloud? Cool. Use your volume moves to non-disruptively move data between flash and non-flash storage.

Want to move data to a public cloud like AWS? Awesome. NetApp allows you to replicate and manage your data in the cloud via Cloud ONTAP and NetApp Private Storage.

Want to tie it all together and leverage a hybrid cloud without any constraints across your choice of resources? Then check out the NetApp Data Fabric.

NetApp’s All Flash FAS solution is the only all flash product that provides full flash-to-disk-to-cloud data management within the NetApp Data Fabric.

Still not sure?

No problem. NetApp offers a “Try and Buy” program, where you can get an All Flash FAS system into your datacenter to test various workloads and see the performance gains and advantages with your own eyes!

For some more information on the NetApp Data Fabric, check out Jarett Klum’s blog!

For the Reg article on this offering see NetApp cackles as cheaper FlashRay lurches out of the door.

For Dmitris Krekoukias’s excellent take on this, see NetApp Enterprise Grade Flash.

For ESG’s Lab Review of AFF and 8.3.1 see ESG Lab Review: NetApp Clustered Data ONTAP 8.3.1 and All Flash FAS AFF8080 EX

TECH::cDOT 8.3 Upgrade Check via PowerShell

In case you aren’t aware, there is an excellent community post out there by NetApp FSE Tim McGue that provides a PowerShell check for cDOT 8.3 upgrades.

From the intro:

This script checks a specified cluster for the items in the “Steps for preparing for a major upgrade” section. The items that are covered are the ones that can be addressed prior to the actual software image update. These are outlined roughly on pages 32-68 in the guide. Based upon the output of the script you can make the necessary adjustments in the cluster to ensure a successful upgrade.

Check it out!

How to Check Data ONTAP 8.3 Upgrade Requirements Using a PowerShell Script

TECH::The Underdog Effect

There’s an interesting trend that’s been happening for a while now, but I’ve only recently started paying attention to it.

Hate the big guy and root for the little guy, regardless of logic or evidence to the contrary.

It’s called the “Underdog Effect.”

We introduce the concept of an underdog brand biography (UBB) to describe an emerging trend in branding in which firms author an historical account of their humble origins, lack of resources, and determined struggle against the odds. We identify two essential dimensions of an underdog biography: external disadvantage, and passion and determination. We demonstrate that a UBB can increase purchase intentions, real choice, and brand loyalty. We argue that UBBs are effective because consumers react positively when they see the underdog aspects of their own lives being reflected in branded products. Four studies demonstrate that the UBB effect is driven by identity mechanisms: we show that the effect is 1) mediated by consumers’ identification with the brand, 2) greater for consumers who strongly self-identify as underdogs, 3) stronger when consumers are purchasing for themselves vs. others, and 4) stronger in cultures in which underdog narratives are part of the national identity.

This trend crosses mediums. Sports, politics, music and even tech. I can remember when I was in college, how much pride I took in knowing all the small indie bands and turning my nose up at any artist who had corporate radio airplay, mostly because they were successful and lots of people knew about them and liked them. I never took it as far as some others, though, where they would spurn the band they once raved about and pushed on all their friends once that band became successful. The irony was that they were part of the problem. Their favorite indie band got big because they (and people like them) created a buzz and generated free word of mouth marketing.

Warriors come out to play

WARNING: SPORTBALL REFERENCE

For a more recent example of this, just look at the 2015 NBA Finals.

You have the Golden State Warriors, who won 67 games in the regular season and were favored going into the Finals. And you have a Cleveland Cavaliers team that is hobbled by injuries but has the feel-good story of “hometown boy comes home” going for it.

Except that hometown boy is LeBron James.

James is a perfect example of the underdog effect at work. Most people who hate him can’t verbalize *why* they hate him. They’ll mumble something about “The Decision,” but that point is moot now that he’s come back to Cleveland. The root of the hate is his success. Even though the Golden State Warriors were favored over the Cavaliers *before* Cleveland lost Kyrie Irving for the season, people still consider the Warriors the underdog. They’re the startup (or the upstarts, to use the relevant sportball term).

Tech hate

This is not unlike what happens to tech companies that get large and successful. Microsoft has endured this for decades and, until only recently, has had trouble recapturing some of that “cool” tech company vibe. Before they were hated, Microsoft was generally well liked. But they got too successful. Rumors and FUD spread like wildfire. And, to be honest, they made some very public missteps (looking at you, Windows ME and Zune).

We’re also seeing former tech darlings Apple and Google start to see some of that hate trickle in. People wonder when Apple will “innovate” again and scoff at the iWatch. And iPad. And iPhone.

They still haven’t predicted Apple’s demise correctly.

Sometimes, the hate is somewhat justified, especially when you put a target on your back like Google did with the “Don’t Be Evil” slogan. Those are lofty (and unrealistic) standards to hold up when so much money is at stake. The larger you grow, the louder your critics will get. The warts become obvious and your audience is much larger and diverse, so it’s harder to please everyone.

“You can please some of the people all of the time, all of the people some of the time, but you cannot please all of the people all of the time” – Abe Lincoln

Storage Wars

What really started me on this thought has been the recent anti-NetApp sentiment. Now, I work for NetApp, so I have a vested interest in their success. But I don’t “bleed blue” or have some sort of blind loyalty – I just think we do good stuff. I understand what’s most important in life is not my career, but everything outside of that. But I’m a HUGE proponent of justice and fairness, and I feel like some of the anti-NetApp sentiment has been unfair and unjust – and uninformed.

A lot of the misinformation has been spread by NetApp’s competitors in the form of FUD. It happens – it’s how salespeople without a good story to tell about their own products sell their stuff. But that FUD evolves into some weird bastardized fact that gets repeated ad nauseam and it just gets… old.

It’s now in vogue to bash NetApp, especially in light of their recent struggles. It’s “kicking a man while he’s down,” so to speak. Some of the criticism is certainly justified, and in some cases, completely on point. But there’s never any honest discussion. The criticisms are one-sided podcast monologues or blog posts. Sometimes, they’re in threads soaked in click-bait headlines like OMG IS NETAPP DEAD???

In the meantime, the new guys are reaping the benefits of it. Sure, they have some good products for specific use cases. And it’s nice and new and shiny. But there are warts – there are always warts. Over time, we will start to see them. And for those startups that survive long enough to become successful enough to hate, the warts will become common knowledge and the same people who were singing their praises will be writing the next IS [STARTUP HERE] DEAD??? article.

Everyone loves the underdog… until they aren’t the underdog.

This is the lesson in all of this. If you are looking to purchase from a company, or apply to work for a company, or even write about a company, be sure to do your homework. Read the negative articles. Read the fluff pieces. Then do the math – the truth is somewhere in the middle. And if you feel like you are starting to love or hate a tech company too much, go on vacation. There are better places to focus those emotions.

To drive the point home, I leave you with this video from “The Interview”:

VMWORLD:: A three hour tour? VMWorld US, Here I Come!

This year, I will be making my inaugural trip to VMWorld in San Francisco. I’ll be a booth babe, manning one of the NetApp booths for a few days.

I’ve only previously been to NetApp Insight as far as tech conferences are concerned, and as I understand it, VMWorld is considerably larger. That will be interesting to see…

It’s probably also good that I haven’t been before now, as I’ve only recently become active in the social media-sphere/blog-osphere, so I’ll actually “know” people who will be there. It will be cool to put real faces to names/photos/Twitter handles.

As I am a rookie/n00b, any advice from you seasoned veterans out there?

I plan on documenting my experiences via blog and have created a new blog category called VMWORLD for that.

Feel free to comment on this or hit me up on Twitter @NFSDudeAbides with tips or if you want to meet up!

If you’ve never been to the SF/Bay Area before, I love it out there and will end this blog with a link to photos from my last trip out there, from the Grand Canyon to Vegas to SF.

https://photos.google.com/album/AF1QipP4cbWMAhbdhqBRwveRUwgUKakZOZbydJoCPSJL

LDAP::LDAP servers and clients and bears, oh my! – Part 5

UPDATE: I realized today that I wrote this same topic twice, in two different posts. This one should be considered the older effort. The newer, up-to-date post is here:

https://whyistheinternetbroken.wordpress.com/2015/07/29/ldap-servers-clients/

This post is part 5 of the new LDAP series I’ve started to help people become more aware of what LDAP is and how it works. Part one was posted here:

What the heck is an LDAP anyway?

This post will focus on the bind portion of LDAP.

We’re off to see the Wizard of LDAP!

Clients vs. Servers

LDAP isn’t unlike other protocols, such as HTTP, NFS, etc. There is a client and server relationship. And just like any successful relationship, there needs to be someone talking and someone listening.

I have a two year old son. He talks, but most of it is just practice for him. He’ll say things like “dada, bird truck.” And I’ll say “Bird? Truck? Cool!” It’s mostly a one-sided conversation where I essentially ACK his SYNs. (I need to get out more)

Sometimes, it’s “dada read book.” And I comply.

LDAP client/server conversations aren’t much different. The client is my two year old. The LDAP server is me.

“LDAP, find user information”

“User information? Cool! Here ya go!”

At their very base, LDAP conversations are nothing more than TCP SYNs and ACKs. So, when configuring or troubleshooting, they should be treated as such.

Clients

LDAP clients can be anything running software that can query LDAP via RFC-2307 standards. Windows, Linux, storage system OSes, etc. can all act as clients. Some operating systems (such as Windows) contain built-in LDAP functionality that doesn’t require you to install anything special. Some storage systems, such as NetApp’s clustered Data ONTAP, fully support LDAP that adheres to the RFC-2307 standard. For more information, see TR-4073: Secure Unified Authentication.

Others, like Linux OSes, need you to install packages to run client operations. Some common Linux-based LDAP clients include:

  • SSSD (the System Security Services Daemon, shown later in this post)
  • nss-pam-ldapd (nslcd) and the older pam_ldap/nss_ldap
  • The OpenLDAP client utilities (ldapsearch, ldapmodify, etc.)

And many more!

When you install a client, you are far from done. You now have to configure that client to talk to the LDAP server. This requires gathering a fair amount of information, including:

Basic network information.

Remember, at its base, it’s a simple TCP conversation.

  • LDAP server names, URI or IP addresses (is the server in DNS? Are there SRV records for LDAP?)
  • LDAP port (default ports are 389 for LDAP, 636 for LDAP over SSL, 3268 for Global Catalog; did you change the server port?)
  • Can the client talk to the server (routing?)
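Before touching any LDAP-specific settings, it’s worth ruling out the basic TCP conversation itself. Here’s a minimal sketch in Python of a reachability check for the LDAP port (the host and port values are whatever your environment uses; this proves only that the network path and port are open, not that binds or searches will work):

```python
import socket

def can_reach_ldap(host, port=389, timeout=3.0):
    """Return True if a plain TCP connection to the LDAP port succeeds.

    A failure here means routing, DNS, firewall, or a wrong port --
    not an LDAP problem -- so rule this out first.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Swap in 636 for LDAP over SSL or 3268 for the Global Catalog to check those ports the same way.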

Bind/Login information.

This will depend on the type of bind supported by the server. It can be a username/password, a Kerberos SPN, anonymous, etc. For more detailed info on binds, see part 2 of this series.

LDAP Search Information

This tells the client where to start looking for information in the LDAP server. The format for this information is Distinguished Names (DNs), which I cover in part 4. You can set a base DN and then specific DNs for users, groups, netgroups, etc. You can even specify multiple locations to search. The idea here is to filter searches to speed things up for the clients.

Fun fact: Apple auto-corrects DNs to DNS. Not cool, Apple. Not cool.
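To make the DN structure concrete, here’s a naive Python sketch that splits a DN into its (attribute, value) pairs. It’s illustrative only – a real parser must also handle escaped characters (e.g. a comma inside a cn value) per RFC 4514:

```python
def parse_dn(dn):
    """Split a Distinguished Name into (attribute, value) pairs.

    Naive sketch: does not handle RFC 4514 escaping, so a value
    containing an escaped comma would be split incorrectly.
    """
    return [tuple(part.split("=", 1)) for part in dn.split(",")]

pairs = parse_dn("cn=users,dc=domain,dc=win2k8,dc=netapp,dc=com")
# pairs[0] is ("cn", "users"); the dc= components spell out the domain
```

Reading the pairs left to right takes you from the most specific object (the users container) up to the root of the domain.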

LDAP schema information

I cover schemas in detail in part 3. Many clients know about the default schemas LDAP uses, such as RFC-2307, RFC-2307bis, etc. In most cases, the schemas on the server will not stray from these. But in some instances, such as through manual intervention or 3rd party tools like Dell’s Vintela (formerly Quest) or Centrify, there may be a need to make adjustments. This can be done on the client, which allows the client to ask for the right information from the server, which in turn allows the server to find the information and respond to the client.
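Conceptually, these client-side schema adjustments are just a lookup table of overrides on top of the RFC-2307 defaults. A hedged sketch (the dictionary keys here are made-up labels, not real client option names; the unixHomeDirectory override mirrors the ldap_user_home_directory line in the SSSD config shown later):

```python
# Default RFC-2307 attribute names a client might assume.
RFC2307_ATTRS = {
    "user_name": "uid",
    "user_home": "homeDirectory",
    "group_member": "memberUid",
}

# Hypothetical overrides for an Active Directory server whose schema
# stores UNIX home directories in unixHomeDirectory instead.
AD_OVERRIDES = {
    "user_home": "unixHomeDirectory",
}

def resolve_attr(key, overrides=None):
    """Return the attribute name the client should request from the server."""
    overrides = overrides or {}
    return overrides.get(key, RFC2307_ATTRS[key])
```

If the client asks for the wrong attribute name, the server simply returns nothing for it – which is why a schema mismatch often looks like “missing” users or groups rather than an outright error.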

Client-specific options

Many clients offer specific options like caching of users/groups, credentials, Kerberos configuration, etc. These are generally optional, but should be looked into on a per-client vendor basis.

Sample client configuration

The following is an example of what a clustered Data ONTAP LDAP client would look like:

cluster::*> ldap client show -client-config DOMAIN

                                 Vserver: NAS
               Client Configuration Name: DOMAIN
                        LDAP Server List: 10.228.225.120
                 Active Directory Domain: domain.win2k8.netapp.com
       Preferred Active Directory Servers: -
Bind Using the Vserver's CIFS Credentials: false
                          Schema Template: WinMap
                         LDAP Server Port: 389
                      Query Timeout (sec): 3
        Minimum Bind Authentication Level: sasl
                           Bind DN (User): ldapuser
                                  Base DN: dc=domain,dc=win2k8,dc=netapp,dc=com
                        Base Search Scope: subtree
                                  User DN: cn=users,dc=domain,dc=win2k8,dc=netapp,dc=com
                        User Search Scope: subtree
                                 Group DN: cn=users,dc=domain,dc=win2k8,dc=netapp,dc=com
                       Group Search Scope: subtree
                              Netgroup DN: -
                    Netgroup Search Scope: subtree
               Vserver Owns Configuration: true
      Use start-tls Over LDAP Connections: false
 Allow SSL for the TLS Handshake Protocol: -
           Enable Netgroup-By-Host Lookup: true
                      Netgroup-By-Host DN: -
                   Netgroup-By-Host Scope: subtree

This is what my client configuration running SSSD looks like:

# cat /etc/sssd/sssd.conf
[domain/default]
cache_credentials = True
case_sensitive = False
[sssd]
config_file_version = 2
services = nss, pam
domains = DOMAIN
debug_level = 7
[nss]
filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
filter_groups = root
[pam]
[domain/DOMAIN]
id_provider = ldap
auth_provider = krb5
case_sensitive = false
chpass_provider = krb5
cache_credentials = false
ldap_uri = _srv_,ldap://domain.win2k8.netapp.com
ldap_search_base = dc=domain,dc=win2k8,dc=netapp,dc=com
ldap_schema = rfc2307
ldap_sasl_mech = GSSAPI
ldap_user_object_class = user
ldap_group_object_class = group
ldap_user_home_directory = unixHomeDirectory
ldap_user_principal = userPrincipalName
ldap_group_member = memberUid
ldap_group_name = cn
ldap_account_expire_policy = ad
ldap_force_upper_case_realm = true
ldap_user_search_base = cn=Users,dc=domain,dc=win2k8,dc=netapp,dc=com
ldap_group_search_base = cn=Users,dc=domain,dc=win2k8,dc=netapp,dc=com
ldap_sasl_authid = root/centos64.domain.win2k8.netapp.com@DOMAIN.WIN2K8.NETAPP.COM
krb5_server = domain.win2k8.netapp.com
krb5_realm = DOMAIN.WIN2K8.NETAPP.COM
krb5_kpasswd = domain.win2k8.netapp.com

Servers

If you want somewhere for the clients to ask for information, you need a server. The server needs to have a valid RFC-2307 schema to contain the necessary LDAP objects. If you’re doing UNIX-based LDAP and want to use Microsoft Active Directory to serve UNIX-based authentication, you’d need to ensure the server has UNIX attributes in its schema. While Microsoft Active Directory runs on an LDAP backend, it’s not a true UNIX-capable LDAP server until you extend the schema. I talk a bit about this in my blog post on IDMU.

As mentioned in the client section, you need a bunch of information to configure clients. The server is where this information comes from. Here’s the stuff you need to check on the server to ensure you configure your clients correctly:

  • Server network info (IP address, hostname, DNS entries, SRV records, LDAP server ports, etc.)
  • Supported bind level (use the strongest available, if possible)
  • Valid bind user or SPN
  • DN information
  • Schema type/attributes

LDAP servers can host tons of information: UNIX user creds, Windows creds, netgroups, IP addresses, SPNs, name mapping rules… What you can actually use just depends on what your clients support.

LDAP Referrals

If you are using multiple LDAP servers and a client is not able to find an object in the specified LDAP server’s domain, it may attempt to use an LDAP referral to look in the other servers. Essentially, the client uses information stored in the LDAP server about other known servers and attempts to connect to them via LDAP URIs until it either a) finds the object or b) runs out of servers to try. This can happen with both Windows and non-Windows LDAP servers. Some LDAP clients do not support “referral chasing,” so it’s important to know whether referrals are happening in your environment and whether your client is able to chase them.
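The chase itself boils down to a simple loop. Here’s a minimal, hypothetical sketch – `lookup` stands in for a real per-server LDAP query (which this sketch doesn’t implement), returning the object or None if that server doesn’t hold it:

```python
def chase_referrals(lookup, server_uris, max_hops=10):
    """Try each referred LDAP server URI in turn until the object is found.

    `lookup` is a placeholder for a real per-server LDAP search;
    `max_hops` guards against chasing an unbounded referral chain.
    """
    for uri in server_uris[:max_hops]:
        result = lookup(uri)
        if result is not None:
            return result  # a) found the object
    return None  # b) ran out of servers to try
```

Every extra hop is another TCP connection and bind, which is why referral chasing can make lookups noticeably slower – and why the Global Catalog approach below can be attractive.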

Global Catalog Searches

In Active Directory, it is possible to store a copy of attributes from multiple domains in a forest on local domain controllers acting as Global Catalog servers. By default, UNIX attributes don’t get replicated to the Global Catalog, but you can change that behavior as needed. I cover how to do this in TR-4073. If you need to query multiple domains in the same forest and want to avoid LDAP referrals, you can simply replicate the necessary attributes and change the LDAP port to 3268 to let the servers know to use the Global Catalog instead!

My environment

In my environment, I use Active Directory LDAP with Identity Management. But I’ve been known to use OpenLDAP and RedHat Directory Services. Both are perfectly valid to use. However, if you’re intent on doing multiprotocol NAS (CIFS/SMB and NFS), I strongly suggest using Microsoft Active Directory for authentication for UNIX and Windows users. Makes life infinitely easier.

If you are already using Linux-based LDAP, that’s fine. If possible, try to ensure the UNIX user names (the uid LDAP attribute) match the Windows user names (the sAMAccountName attribute). That way, if you are using multiprotocol NAS, you don’t have to worry about name mapping.
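As a quick sanity check on that advice, the comparison is trivial to script. This is just a hypothetical helper, assuming names match when they’re equal ignoring case (Windows names are case-insensitive); your storage vendor’s actual mapping rules may differ:

```python
def needs_name_mapping(unix_uid, windows_sam):
    """True if a uid/sAMAccountName pair would likely need an explicit
    name-mapping rule on a multiprotocol NAS (assumes a simple
    case-insensitive match is good enough)."""
    return unix_uid.lower() != windows_sam.lower()
```

Running this over a dump of your uid and sAMAccountName attributes is an easy way to find the accounts that would need mapping rules before a multiprotocol rollout.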

If you want to see anything added to this post regarding LDAP servers and clients, feel free to comment or follow me on Twitter @NFSDudeAbides!