TECH::New Security Technical Reports out for Clustered Data ONTAP, PCI-DSS!

If you need information on PCI DSS compliance support in clustered Data ONTAP, there is a new TR available that covers it:

TR-4401: PCI DSS 3.0 and Clustered Data ONTAP 8.3

This technical report provides guidance and information that auditors and system operators will find useful in applying the Payment Card Industry (PCI) Data Security Standard (DSS) requirements to a storage system that runs the clustered Data ONTAP operating system.

http://www.netapp.com/us/system/pdf-reader.aspx?m=tr-4401.pdf

If you want more generalized security recommendations around clustered Data ONTAP, such as RBAC, network isolation, firewalls, logging, keys, etc. then check this TR out:

TR-4393: Clustered Data ONTAP Security Guidance

This document provides a set of practical recommendations to enhance the security of a clustered Data ONTAP system. Note that the protection of user data itself is primarily the responsibility of the appropriate SAN and NAS protocols and configurations, which are secure by default, need little configuration, and provide data confidentiality. Therefore, this document focuses on securing the system itself through its administrative interfaces and services. It is intended to reinforce the integrity and availability of a clustered Data ONTAP system in a typical data center environment.

http://www.netapp.com/us/system/pdf-reader.aspx?m=tr-4393.pdf


TECH::Backing up through the years – The road to NetApp AltaVault


Data protection has been a sore spot for IT organizations for decades, sometimes ignored or neglected until absolutely necessary. Of course, by then, it’s too late. Let’s take a look at how mankind has done its backups and archives throughout the years….

NOTE: I am not a historian and have played very fast and loose with the facts here. For a more serious take on backup history, check this site out.

First backup strategy: Cave drawings


Early cavemen documented history on the walls of caves, using various natural pigments, charcoal and torches. Naturally, this backup strategy was not very agile and was susceptible to the elements. However, these backups have proven to be remarkably resilient, with some lasting 28,000 years!

Hieroglyphs


The ancient Egyptians used a similar form of backup with hieroglyphs. The cavemen proved it could be effective. Why mess with a good thing?

Stone Tablets


In theory, stone tablets were a significant upgrade over cave walls for backups. They were smaller and lighter, so you could move them to a safe location for better disaster recovery. But they were still pretty heavy. What if you dropped one? Or worse, what if you dropped one on your foot?

Pen and paper


Mankind had had enough of the limitations of writing on stone walls and tablets. It was time for a new medium.

With the invention of papyrus in Egypt, people no longer had to back up to cave walls or stone tablets – neither of which were mobile or provided adequate disaster recovery. However, papyrus proved to be a fragile – and costly – backup medium.

Parchment, made from sheepskin, proved to be a more durable backup medium, but also was costly.

Eventually, wood pulp paper was invented. This made writing a cost-effective backup strategy, albeit not terribly efficient.

The printing press


Johannes Gutenberg, likely driven by his desire to rid the world of hand cramps and need for backups in a world of ever-increasing data* (no citation needed… I made that up), invented the printing press.

The printing press made cheap, fast and easy backups of large amounts of data possible. Its first real test? 180 copies of the Gutenberg Bible.

Over the years, the printing press evolved into a much larger entity, running thousands of copies of newspapers a day and archiving the world’s most historic events accurately and efficiently.


The photo copier


With the growing need for enterprises to back up important files on-site, Chester Carlson set out to try to develop his idea for electrophotographic copies of paper. He was turned down by over 20 companies, including IBM and GE. In 1944, 5 years after he started, a nonprofit called Battelle Memorial Institute finally listened. In 1947, a company called Haloid helped Carlson refine the process and renamed it “xerography.” In 1949, the first Xerox copier was introduced.

Now, the enterprise IT admin could make copies of anything – important files, signatures, buttocks. The sky was the limit.

But while paper was relatively inexpensive, it was also not durable. Plus, a new challenge was surfacing – how to back up the digital data stored on computers.

Punch cards


The first computers (such as ENIAC and UNIVAC) were monolithic devices that took up entire rooms and were operated by a series of punch cards. These cards contained computer programs that could store data on a larger scale than normal pen and paper. However, by modern data standards, they didn’t hold much. This Gizmodo article postulates how many punch cards you’d need to store 15 exabytes of data. The answer?

Let’s assume Google has a storage capacity of 15 exabytes, or 15,000,000,000,000,000,000 bytes. A punch card can hold about 80 characters, and a box of cards holds 2000 cards. 15 exabytes of punch cards would be enough to cover my home region, New England, to a depth of about 4.5 kilometers. That’s three times deeper than the ice sheets that covered the region during the last advance of the glaciers.

Yikes.

Magnetic tape


With magnetic tape, we’re now entering the realm that most modern IT admins are familiar with. However, did you know that magnetic tape has existed for well over a century in various fields? And that it was actually around when punch cards were used, but was so terrible that punch cards were preferred?

Over the years, that’s changed. Tape quality, capacity and performance have increased by orders of magnitude, to where we can feel safe storing our enterprise data on them… and then trucking them off somewhere else. That was problematic in and of itself – not only did you have to back things up to tape, you had to pay a company to cart them off to a secure location – and trust your data with a 3rd party. And if you grew up during the time of mixtapes, you remember wearing them out until they were unlistenable. (Or at least recall accidentally pulling the tape out and having to re-wind it with a pencil.)

So, with tape, capacity and mobility were there, but durability and speed still weren’t bringing ease to companies that needed to keep data around a while.

Backup to disk


As hard drive costs came down over the years, it became more and more viable to back data up to disk. Then, with RAID, it became more of a reality – we could now reliably survive hardware failures. Plus, disks were mobile! We could back up, pull the disks out and store them elsewhere for safekeeping. And with the sheer physics of spinning disk, keeping disks powered off theoretically would help them last longer than if they were running, though there is no definitive study to prove or disprove that theory that I could find. Don’t tell that to SSDs, though.

Storage operating systems, like NetApp’s Data ONTAP, also provide backup to disk capability via snapshot, as well as backup over a WAN using SnapMirror. I cover this in my Snapshots & Polaroid blog on DataCenterDude.

The cloud


Today’s push for backup is now towards the cloud. Disks, while cheaper than ever, still cost money. And if you’re going to back up over the WAN, why not do it to a cloud provider with dirt cheap storage that only gets cheaper the less often you access your data?

Sure, you don’t own the disk, but you also don’t own the SLAs or maintenance – someone else does. Someone else has to hire the storage admins to manage the backups and archives you only use when you need them.

Sure, you worry about security – who doesn’t – but how is it any less secure than hiring a company to truck off your tapes to another facility? People did that for years with no concern. You could argue that backing up to the cloud might even be MORE secure than the old ways. Definitely more secure than cave drawings…

Enter AltaVault!

This is where NetApp’s AltaVault (born out of Riverbed SteelStore) comes in. Rachel Dines gives an excellent run down of this cloud-based backup solution in her blog on the NetApp communities.

Introducing AltaVault: The Solution to Your Backup & Restore Challenges

Additionally, NetApp A-Team member Jarett Kulm gives a run down from a non-NetApp perspective in his blog.

Netapp AltaVault – Speeding up, Cutting costs, and Cloudify your backups

Backups and restores are forever evolving and NetApp AltaVault is helping you get to the next step.

TECH::NetApp at OpenStack Summit 2015 – Where do I find session recordings?


OpenStack Summit 2015 in Vancouver came and went last week and NetApp’s OpenStack team was well represented. That’s fitting, as NetApp is one of the key contributors to the OpenStack standard as a Gold member. If you’re interested in the NetApp Github repository, check it out here.

One of the cool things about the OpenStack Summit is that you can find the presentations online after the conference, hosted on YouTube. One of the OpenStack TMEs, Dave Cain, compiled a list of these sessions by NetApp contributors.

Here’s where you can find each session:

Manila: An Update on OpenStack’s Shared File Services Program (Bob Callaway): https://www.youtube.com/watch?v=2xS_P6q3mWI
FlexPod with Red Hat Enterprise Linux OpenStack Platform 6 (Dave Cain, Eric Railine): https://www.youtube.com/watch?v=OrOKm3jjwTQ
Cinder 102: Pouring a solid foundation for block storage services (Jeff Applewhite): https://www.youtube.com/watch?v=HqAb-lwEyUc
Because Data speaking session with Jeff O’Neal & Red Hat, SUSE, Intel and HP in room 211 (Jeff O’Neal): https://www.youtube.com/watch?v=eDvcWLl2J2Q
Private Cloud POC with RHEL OpenStack Platform on FlexPod (Customer): https://www.youtube.com/watch?v=wnvQj086wKw
Do Appliances Violate OpenStack’s Prime Directive? (Rob Esker): https://www.youtube.com/watch?v=6hZgOD8kXxY
Manila: Taking OpenStack Shared File Storage to the Telco Cloud (Thomas Lichtenstein): https://www.youtube.com/watch?v=ikhjeGN8sY4
Couch to OpenStack – Understanding Where to Start Your Learning Journey (Melissa Palmer): https://www.youtube.com/watch?v=5_2zpMp6158
Ask the Experts: Designing Storage for the Enterprise (Alex McDonald): https://www.youtube.com/watch?v=BmEaHyu0-fk
Why Use Enterprise Storage with OpenStack? NetApp’s Portfolio Story (Brendan Wolfe, Rob Esker): https://www.youtube.com/watch?v=cqFL046WKz0
Thomson Reuters: Manila Deployment Customer Story (Greg Loughmiller, Customer): https://www.youtube.com/watch?v=NtrfBUdTudo
Virdata: A Customer Flexpod story (Customer): https://www.youtube.com/watch?v=VF_2SwrHP7M
Object Storage – Building a Reliable, Durable, Scalable Repository for Your Data (Ingo Fuchs): https://www.youtube.com/watch?v=xd8fUEb-A-E
Manila 101 & Use Cases (Greg Loughmiller, Thomas Lichtenstein): https://www.youtube.com/watch?v=8KhXD9v0jKI
POC to DevOps: Accelerating Deployments with Enterprise Storage (Scott Peiffer, Matt Tangvald): https://www.youtube.com/watch?v=aINOeyYN5hQ
Deploying Red Hat Enterprise Linux OpenStack Platform 6 on NetApp Storage (Jeff Applewhite): https://www.youtube.com/watch?v=3nnJhqVLuGE
NetApp: Innovation for the OpenStack Community of Operators and Users (Bob Callaway, Anurag Palsule): https://www.youtube.com/watch?v=IQOxIn9rWCE
SUSE & NetApp Reveal the Secrets of OpenStack Deployment Success (Jeff Applewhite, Rick Ashford): https://www.youtube.com/watch?v=oA8Q8qmJXe4

If you have any questions, feel free to comment or tweet @NFSDudeAbides!

Some other NetApp OpenStack experts to follow on Twitter:

Sanjay Nighojkar (@sanikar)

Looking for OpenStack on NetApp Technical reports?

LDAP::Distinguishing Distinguished Names in LDAP – Part 4

This post is part 4 of the new LDAP series I’ve started to help people become more aware of what LDAP is and how it works. Part one was posted here:

What the heck is an LDAP anyway?

This post will focus on Distinguished Names (DN) in LDAP.


In LDAP, objects are stored in a hierarchical structure, much like folders in a file system. Each object has a unique identifier so that LDAP queries can find them quickly. These identifiers are known as “distinguished names.”


Contrary to how it sounds, these attributes are not fancy by any stretch of the imagination. In fact, they can be downright messy, depending on how the LDAP objects are laid out.

Distinguished Names (DNs)

Distinguished Names are the full paths to objects in LDAP. They are composed of a number of Relative Distinguished Names (RDNs).

Relative Distinguished Names (RDNs)

RDNs are the individual components that make up a Distinguished Name. In the diagram below, the DN CN=Pat, OU=Users, DC=domain, DC=com is built from the RDNs CN=Pat, OU=Users, DC=domain and DC=com.

[Diagram: LDAP directory tree – CN=Pat under OU=Users under DC=domain, DC=com]

How do these get messy?

In very large LDAP environments where objects are laid out in a deep folder structure, a DN can start to become very long. Each time a new object is added as a child to another object, another RDN is added to the DN. As a result, you could see a DN that looks like this:

CN=Pat, OU=Super, OU=Cali, OU=Fragi, OU=Listic, OU=Espi, OU=Alli, OU=Docius, OU=Users, DC=domain, DC=com
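As a quick aside, a DN like that is also how you would look the object up directly. Here's a hedged example with ldapsearch – the server name, credentials and the DN itself are hypothetical:

# Query just that one object by using its full DN as the search base
ldapsearch -H ldap://dc1.domain.com -D "CN=admin,CN=Users,DC=domain,DC=com" -W \
 -s base -b "CN=Pat,OU=Super,OU=Cali,OU=Fragi,OU=Listic,OU=Espi,OU=Alli,OU=Docius,OU=Users,DC=domain,DC=com" \
 "(objectClass=*)"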

To avoid long DNs, you could use a wide folder structure. This is one suggested best practice for designing an LDAP environment, if only to make managing objects easier.

[Diagram: deep vs. wide LDAP folder structures]

Types of Relative Distinguished Names

In the examples above, we see abbreviations like OU (Organizational Unit), DC (Domain Component) and others. These are simply different types of RDNs. There are a slew of these that are used in various LDAP schemas. The following table (from Microsoft’s MSDN) shows a list of typical RDNs used in LDAP.

String Attribute type
DC domainComponent
CN commonName
OU organizationalUnitName
O organizationName
STREET streetAddress
L localityName
ST stateOrProvinceName
C countryName
UID userid

As to what “type” of RDN to use, there really isn’t a wrong answer. These are simply identifiers. If there’s any “best practice” for this, it would be “make sure it makes sense.” For example, don’t use STREET to identify a user. Use something like CN instead. Sure, it’s possible to use STREET for users, but…


Reserved characters

There are rules about which characters can be used in RDNs. If these characters need to be included in the RDN, they have to be "escaped" with a backslash (\). The following characters are reserved for specific functions, so they require an escape (an example follows the table).

Reserved character Description Hex value
space or # character at the beginning of a string
space character at the end of a string
, comma 0x2C
+ plus sign 0x2B
" double quote 0x22
\ backslash 0x5C
< left angle bracket 0x3C
> right angle bracket 0x3E
; semicolon 0x3B
LF line feed 0x0A
CR carriage return 0x0D
= equals sign 0x3D
/ forward slash 0x2F
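For example, a (hypothetical) user whose common name contains a comma would need that comma escaped so it isn't treated as an RDN separator:

CN=Parisi\, Justin,OU=Users,DC=domain,DC=com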

Modifying DNs

In LDAP, it’s always possible to modify a DN. However, the best way to do that is via the automated tools available from the LDAP server provider. For example, in Active Directory LDAP, we can modify DNs via ADSI Edit. But the better way to do this is via the Active Directory Users and Computers interface. ADSI Edit should only be used when necessary. However, keep in mind that native Windows LDAP management tools are going away in the future, so you may have to resort to a 3rd party management tool. In non-Windows LDAP, DN modification can be done manually via LDIF files, or via the provided GUI interface. As always, check with your vendor for the best ways to do this.
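For the non-Windows case, a DN rename/move via LDIF boils down to a modrdn change. Here's a minimal sketch with hypothetical DNs and server name – verify the exact syntax against your LDAP server's documentation:

# Rename CN=Pat and move it under a different OU in one operation
ldapmodify -H ldap://ldap.domain.com -D "CN=admin,DC=domain,DC=com" -W <<'EOF'
dn: CN=Pat,OU=Users,DC=domain,DC=com
changetype: modrdn
newrdn: CN=Pat
deleteoldrdn: 1
newsuperior: OU=Admins,DC=domain,DC=com
EOF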

Distinguished, but often forgotten

Distinguished names are often overlooked when implementing or troubleshooting LDAP environments. It’s important to keep them in mind and avoid not being able to see the forest for the trees…

For more detailed information on LDAP configuration (particularly with NetApp clustered Data ONTAP), see TR-4073: Secure Unified Authentication.

Also, stay tuned for more in the series of LDAP basics on this blog! Links to other sections can be found on the first post in this series:

What the heck is an LDAP anyway?

TECH::DevOp(inion) – Are we asking IT and developers to become Skynet?

When I first heard the term “DevOps” I went to do some research because I wanted to understand what it meant. One place I found a definition was a blog called “the agile admin.”

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

And:

For this purpose, “DevOps” doesn’t differentiate between different sysadmin sub-disciplines – “Ops” is a blanket term for systems engineers, system administrators, operations staff, release engineers, DBAs, network engineers, security professionals, and various other subdisciplines and job titles. “Dev” is used as shorthand for developers in particular, but really in practice it is even wider and means “all the people involved in developing the product,” which can include Product, QA, and other kinds of disciplines.

Granted, I’m quite a bit late to the party – it seems that this concept started around 2010. What spawned this blog was an email I got from a recruiter for a “DevOps Engineer” position. What I had read about DevOps made it sound more like a concept or a philosophy of how to do things in a business; not a thing you could actually engineer. So, essentially, the title sounded a bit like “{Insert Buzzword} Engineer.”

What really resonated with me, however, and what confirmed what I was already thinking about DevOps, was the job description. I’ll post it here (with names/company redacted), but the way it read to me was something that could be summarized in one two-word sentence: Be omniscient.

The dev-ops engineer will provide senior engineering and development operations support and expertise for the US Mortgage Insurance IT organization. The ideal candidate should have a minimum of 5 years of experience as a System Engineer and extensive knowledge of core infrastructure technologies such as virtual desktop (preferably Citrix environments), Microsoft operating systems, Linux/Unix operating systems, Active Directory, and virtual server environments. The candidate should also have a minimum of 2 years of development and orchestration experience. Excellent communication and documentation skills are required.
• 2+ years developer experience (Java, JavaScript, SQL, C++, PHP, Ruby, Python)
• 5+ years system administration hands on experience and preferred experience with the following: MS Exchange, MS SharePoint, VDI, Citrix, Scripting, Active Directory, Windows Server OS, Linux OS, Storage Systems, Avaya Telecom Solutions, Web Methods, SAS Analytics, Configuration Management Database Operations
• Strong proficiency in writing build scripts to automate tasks that can be automated (i.e. Perl, BASH, PowerShell, Shell, Puppet, Chef)
• Experience implementing and administering monitoring tools
• Demonstrate proficiency in writing build scripts to automate tasks that can be automated (i.e. Perl, BASH, PowerShell, Shell)
• Experience administering and maintaining source control systems, including branching and merging strategies with solutions (i.e. Microsoft TFS, Git)
• Experience with Application Lifecycle Management concepts including continuous integration, continuous delivery, reproducibility, traceability, etc.
• Excellent communication and documentation skills
• ITIL V3 Certification (Or equivalent training/experience)
• IT Service Management training/experience
• Preferred Bachelor’s degree in Computer Science, MIS or a related field
• Ability to prioritize and execute tasks in a high-pressure environment, make sound decisions in emergency situations.
• Proven analytical and problem-solving abilities
• Experience working in a team-oriented, collaborative environment.
• Experience evaluating and leveraging 3rd party commercial and open source System Engineering/Dev Ops solutions.
• Proficient in developing system-engineering solutions to help achieve highly available, highly scalable systems.
• Proven solid analytical and problem solving skills.

If you’re familiar with any/all of the above technologies, you’d know that’s a REALLY long and complicated list and that many of the skills it takes to be proficient at just one or two of those don’t always translate to the others. As a former systems administrator, I can tell you that not all admins know how to code/script – the scripts I’ve made were rarely, if ever, written from scratch. Rather, they were grabbed from other scripts and cobbled together.

It’s rare (if not impossible) to find a solid Windows administrator who *also* can be a solid Linux administrator. Toss in VDI, Active Directory, Citrix, SAS Analytics and sprinkle in the need to know how to code, and you are virtually asking for a T-800 IT admin, but one that’s a friendly and collaborative people person.


The initial concept of DevOps, where a developer can manage and QA his or her own environment and be a contributor end to end, is idealistic. But human nature sets in: managers read what DevOps is and they think:

Sweet! I can hire one person instead of 5! CH-ching! $$$$$$

It’s a startup mentality inside of established enterprise IT organizations. However, startups do this out of necessity and lack of resources. DevOps has become a movement to tell developers to row harder, and that’s unfortunate. I’m not alone in this thinking. This blog post sums it up from a developer’s point of view.

How DevOps should be implemented, in an ideal world, is not by hiring one person to do the job of 5 people. It should be hiring 5 people to collaborate to do the job of 5 people. That way, the overall concept of DevOps remains intact and you don’t need 1 person who’s Skynet – you just need 5 good developers with skill sets that overlap to cover gaps.

DevOps should not be a way to reduce headcount, nor a way to squeeze more out of your developers. It should be organic and well-planned. If anything, a DevOps engineer should be someone who comes up with the right way to implement DevOps in an organization.

Do you have an opinion on DevOps? Am I way off base (fully admit I could be)? Comment and let me know, or tweet @NFSDudeAbides!

TECH::Multiprotocol NAS, Locking and You


One question I got today, and have gotten a few times in my years as a NAS guy, was “how does locking work when you are sharing files between CIFS/SMB and NFS?”

Essentially, “can you guarantee my data will be safe?”

Well, the first answer to that question is a question: What is a file lock?

File locks explained

Essentially, a file lock is exactly what it sounds like. We all know what locks are; we have them on our doors. They keep unwanted people and things out. A file lock is no different; it’s a way to keep unwanted people and applications from accessing files while they are in use, which prevents the “c” word – CORRUPTION!


File locks are always issued by the requesting client or application – a NAS will only honor or deny the lock. If the client or application does not issue a lock to the file, all bets are off. This is similar to our door locking analogy – a door will not lock unless you turn the key.

Similarly, a lock will only be safely broken or released when the application or client is done with it. If the client or application dies, the lock needs to be cleaned up. Depending on the NAS protocol/protocol version, this may or may not require manual intervention. For example, if NFSv3 locks are left over by an application crash (such as an Oracle database), then the locks must be manually cleared on the server before the application can be restarted to use the same files. This is because locking in NFSv3 is handled via the Network Lock Manager (NLM), which is an ancillary protocol to NFS and doesn’t always play well with others.

Conversely, NFSv4.x locking is integrated with the protocol and is lease-based. With leases, the locks will live for a pre-determined amount of time. If the client doesn’t renew the lock in that amount of time, the locks will expire. If the client or application crashes, the locks will release on their own after the lease period expires. If the NFS server restarts, the locks will remain intact until either a client reclaims them or the lease expires.

From RFC-5661:

Lease: A lease is an interval of time defined by the server for which the client is irrevocably granted locks. At the end of a lease period, locks may be revoked if the lease has not been extended. A lock must be revoked if a conflicting lock has been granted after the lease interval.

A server grants a client a single lease for all state.
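In clustered Data ONTAP, the NFSv4.x lease length is an NFS server option you can view and tune. Something along these lines should show it – I'm quoting the field name from memory, so double-check it against your release's documentation (it may require advanced privilege):

cluster::> vserver nfs show -vserver NAS -fields v4-lease-seconds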

Simple enough. But what many people don’t know is that there are also different types of file locks.

File Locking in CIFS/SMB

Special thanks to NetApp CIFS/SMB TME Marc Waldrop (@CIFSorSMB on Twitter) for the CIFS/SMB file lock sanity check.

Locks in CIFS/SMB are done either at a share level or a file level. The share-level lock, commonly known as the share access mode, dictates whether additional openers of the file can read or write to it while the original opener has the file open; it can lock the entire file from additional clients doing specific read or write operations. A file-level lock is what most know as a byte-range lock.

File locking in CIFS/SMB is done via oplocks and share locks. When a file is opened for editing, an oplock is applied and the share-level lock is modified to control what access a client or application has to the file. In some cases, classic byte-range locks are used when portions of a file need to be locked.

CIFS/SMB uses three types of “opportunistic locks,” or “oplocks.”

  1. Batch locks
    Created for batch files to help performance issues with files that are opened and closed many times.
  2. Exclusive locks
    Used when an application opens a file in “shared” mode; for example, Microsoft Office. Exclusive locks allow a client to assume they are the only ones using a file and will cache all changes locally. This is where that weird looking ~FILE.doc comes from in Word. If another application requests an exclusive lock on the file, the server will invalidate the original lock and the application will flush the cached changes to the file.
  3. Level 2 Oplocks
    These get issued when multiple clients want to access the same file. When an application gets issued a Level 2 Oplock, multiple clients can read/cache reads of a file. If any client attempts a write, that client gets issued an exclusive lock.

File Locking in NFS

Unlike CIFS/SMB, NFS doesn’t do share-level locking. It only does file locking. NFS locking comes in two flavors (a quick flock demonstration follows the list):

  1. Shared locks
    Shared locks can be used by multiple processes at the same time and can only be issued if there are no exclusive locks on a file. These are intended for read-only work, but can be used for writes (such as with a database). This is similar to the Level 2 Oplock in CIFS/SMB.
  2. Exclusive locks
    These operate the same as exclusive locks in CIFS/SMB – only one process can use the file when there is an exclusive lock. If any other processes have locked the file, an exclusive lock cannot be issued, unless that process was “forked.” However, unlike CIFS/SMB, there isn’t a notion of “opportunistic” locking, where a file will allow access without outside intervention.
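If you want to see the shared vs. exclusive behavior for yourself, the flock utility on a Linux NFS client is an easy way to experiment. A quick sketch – the mount path and file name here are made up:

# Hold a shared lock on the file for 30 seconds in the background
flock --shared /mnt/nfs/testfile -c "sleep 30" &
# A second shared lock is granted right away...
flock --shared --nonblock /mnt/nfs/testfile -c "echo got shared lock"
# ...but an exclusive lock is refused while the shared lock is still held
flock --exclusive --nonblock /mnt/nfs/testfile -c "echo got exclusive lock" || echo "exclusive lock denied"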

One of the best analogies I’ve seen for this is a real-world example on stackoverflow:

I wrote this answer down because I thought this would be a fun (and fitting) analogy:

Think of a lockable object as a blackboard (lockable) in a class room containing a teacher (writer) and many students (readers).

While a teacher is writing something (exclusive lock) on the board:

1. Nobody can read it, because it’s still being written, and she’s blocking your view => If an object is exclusively locked, shared locks cannot be obtained.

2. Other teachers won’t come up and start writing either, or the board becomes unreadable, and confuses students => If an object is exclusively locked, other exclusive locks cannot be obtained.

When the students are reading (shared locks) what is on the board:

1. They all can read what is on it, together => Multiple shared locks can co-exist.

2. The teacher waits for them to finish reading before she clears the board to write more => If one or more shared locks already exist, exclusive locks cannot be obtained.

The reason I prefer the CIFS/SMB notion of locking is that it feels a lot less messy in general and there is less overhead/management involved with the locking. This is especially true when compared with NFSv3, where locking is not integrated into the protocol but instead uses an ancillary process called NLM, as mentioned above.

NFSv4.x addressed this folly by integrating locking into the protocol. Locking is better now in NFS, and applications are starting to adopt NFSv4.x as a standard because of the improved locking mechanisms.

Locking in multiprotocol environments

Multiprotocol in NAS simply means “ability to access the same datasets from multiple protocols.” Thus, CIFS/SMB and NFS can read and write to the same files.

This can be problematic for a few reasons:

  • CIFS/SMB and NFS permissions are not the same
    NFS, especially NFSv3, uses mode bit permissions (such as 777, 755, etc). NFSv4.x implements ACLs, which look and feel an awful lot like Windows NTFS ACLs. They’re so close, in fact, that it solves a lot of the old permissioning issues you would see in multiprotocol environments. But they don’t solve all of our problems.
  • CIFS/SMB and NFS user/group concepts are not the same
    Windows users and groups use a super long Security Identifier (SID) for unique identifiers, which is constructed by leveraging the domain SID and unique user/group Relative ID (RID). NFS users and groups use a numeric UID/GID and/or NFSv4 ID domain string. In order to get a user to leverage the correct security permission structure, a name mapping has to take place, depending on the originating request + the type of ACL on the file or folder. Unified name services, such as LDAP, are often useful in alleviating the pain of this scenario.
  • CIFS/SMB and NFS file locks are not the same
    As described above. Because of this, the underlying file system has to be able to negotiate the file locking. As it so happens, NetApp invented integrated locking in the 1990s.

So, how do NAS vendors (like NetApp) get this to work?

In ONTAP (and, by proxy, WAFL), the file system owns the locks and gets the final say as to who gets what access. When a CIFS/SMB client holds an exclusive lock on a file, an NFS client that tries to get a lock on that same file will not be granted access until the lock has been released. The same goes for NFS clients that hold an exclusive lock on a file – CIFS/SMB can’t do anything with that file until the lock is broken/released. These locks are only released when the original client releases them, either voluntarily or by lease expiration.

The general idea here is, protocols don’t matter – protect the files at all cost. If I buy a deadbolt lock, I should be able to use it on any door I choose and it should keep my house safe.

In ONTAP, you can check to see if a file has a lock via the command line or API calls.

In Data ONTAP operating in 7-Mode, the commands are:

7mode> lock status
lock status -f [file] [-p protocol] [-n]
lock status -h [host [-o owner]] [-f file] [-p protocol] [-n]
lock status -o [-f file] [-p protocol] [-n]
lock status -o owner [-f file] [-p protocol] [-n] (valid for CIFS only)
lock status -p protocol [-n]
lock status -n

In clustered Data ONTAP, the command is:

cluster::> vserver locks show ?
 [ -instance | -smb-attrs | -fields , ... ]
 { [ -vserver  ] Vserver
 [[-volume] ] Volume
 [[-lif] ] Logical Interface
 [[-path] ] Object Path
 | [ -lockid  ] } Lock UUID
 [ -protocol  ] Lock Protocol
 [ -type {byte-range|share-level|op-lock|delegation} ] Lock Type
 [ -node  ] Node Holding Lock State
 [ -lock-state  ] Lock State
 [ -bytelock-offset  ] Bytelock Starting Offset
 [ -bytelock-length  ] Number of Bytes Locked
 [ -bytelock-mandatory {true|false} ] Bytelock is Mandatory
 [ -bytelock-exclusive {true|false} ] Bytelock is Exclusive
 [ -bytelock-super {true|false} ] Bytelock is Superlock
 [ -bytelock-soft {true|false} ] Bytelock is Soft
 [ -oplock-level {exclusive|level2|batch|null|read-batch} ] Oplock Level
 [ -sharelock-mode  ] Shared Lock Access Mode
 [ -sharelock-soft {true|false} ] Shared Lock is Soft
 [ -delegation-type {read|write} ] Delegation Type
 [ -client-address  ] Client Address
 [ -smb-open-type {none|durable|persistent} ] SMB Open Type
 [ -smb-connect-state  ] SMB Connect State
 [ -smb-expiration-time  ] SMB Expiration Time (Secs)
 [ -smb-open-group-id  ] SMB Open Group ID

Testing file locking between protocols

This can be tricky. I’ve seen a lot of people try to test multiprotocol file locking in the following manner:

  • Open a file in a CIFS/SMB share with notepad in Windows.
  • Go to the NFS client. Open the file with vi.
  • Wonder why the heck writes are allowed on both. Bang head repeatedly on desk. Destroy a copier.


The reason why writes are allowed on both clients in this scenario is because NEITHER APPLICATION HAS ISSUED A LOCK. File locks are the responsibility of the client and/or application, not the server. Vi and notepad don’t lock files!

Testing locks from NFS when CIFS/SMB owns the lock

The easiest way to lock a file in Windows? Microsoft Office.

When I open a MS Word file in a CIFS/SMB share on cDOT, I get some share locks and a batch oplock:

cluster::> vserver locks show -vserver NAS -volume unix 

Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_none
         /unix/office.docx                               op-lock     10.62.194.166
               Oplock Level: batch
                                                         share-level 10.62.194.166
               Sharelock Mode: read_write-deny_write_delete

In some cases, I may see an exclusive oplock, especially during edits. I can generate an exclusive oplock if another CIFS/SMB client attempts to access the doc with something like WordPad and gets denied access:

Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_none
         /unix/office.docx                               op-lock     10.62.194.166
               Oplock Level: exclusive
                                                         share-level 10.62.194.166
               Sharelock Mode: read_write-deny_write_delete

If I use another instance of MS Word on a separate client, I can get a level 2 oplock:

Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_none
         /unix/office.docx                               op-lock     10.62.194.166
               Oplock Level: level2
                                                         share-level 10.62.194.166
               Sharelock Mode: read_write-deny_write_delete

With this Office file open, I want to test to see if my NFS client can access/write to the file. When I look at the share with “ls,” I can see that funny ~filename listed.

# ls | grep office
~$office.docx
office.docx

As I mentioned, vi is a terrible way to test locks. However, Linux clients have utilities to test file locking in NFS. In CentOS/RHEL, one utility to use is flock. With flock, I can run a command to lock a file. To be cute, I use vi. 😉

When I try to get an exclusive lock on that file, it hangs:

# flock --exclusive office.docx vi

Since I’m doing NFSv3, I can check to see if any NLM locks have been issued on my cDOT Storage Virtual Machine. I can see that I have been issued a byte-range lock, but since the command hung, I can’t do any damage to the file:

cluster::> vserver locks show -vserver NAS -volume unix -protocol nlm

Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/office.docx         data1       nlm       byte-range  10.228.225.140
                Bytelock Offset(Length): 0 (18446744073709551615)
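For reference, the shared-lock attempt that follows uses the same flock syntax as before, just with --shared instead of --exclusive:

# flock --shared office.docx vi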

When I try to get a shared lock to that file, it allows me in:

~ VIM - Vi IMproved 
~ 
~ version 7.2.411 
~ by Bram Moolenaar et al. 
~ Modified by <bugzilla@redhat.com> 
~ Vim is open source and freely distributable 
~ 
~ Become a registered Vim user! 
~ type :help register for information 
~ 
~ type :q to exit 
~ type :help or  for on-line help 
~ type :help version7 for version info

And my SVM grants a byte-range lock:

cluster::> vserver locks show -vserver NAS -volume unix -protocol nlm

Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/office.docx         data1       nlm       byte-range  10.228.225.140
                Bytelock Offset(Length): 0 (18446744073709551615)

But when I try to write, it fails.

E32: No file name

So, that’s good, right? I close the Word doc out and all the file locks are gone. Only share locks remain:

cluster::> vserver locks show -vserver NAS -volume unix
Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_delete
3 entries were displayed.

Testing locks from CIFS/SMB when NFS owns the lock

From my NFS client, I lock one of the files in the share with an exclusive lock. In this case, I use “newfile,” because vi has no idea what to do with an Office doc.

# flock --exclusive newfile vi

And I can see the new byte-range lock on the SVM:

cluster::> vserver locks show -vserver NAS -volume unix
Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_delete
unix     /unix/newfile             data1       nlm       byte-range  10.228.225.140
                Bytelock Offset(Length): 0 (18446744073709551615)

The expected behavior here? The file will only allow reads. And I am not disappointed!

[Screenshot: the file opens read-only on the Windows client, as expected.]

What happens if I have a stale lock?

If my NFS client dies, for whatever reason, and that byte-range lock is still lingering, I can break it from the storage in advanced privilege (complete with scary/legit warning):

cluster::*> vserver locks break -vserver NAS -volume unix -lif data1 -path /unix/newfile

Warning: Breaking file locks can cause applications to become unsynchronized and may lead to data corruption.
Do you want to continue? {y|n}: y
1 entry was acted on.
cluster::> vserver locks show -vserver NAS -volume unix
Vserver: NAS
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
unix     /unix/                    data1       cifs      share-level 10.228.225.120
               Sharelock Mode: read-deny_none
               Sharelock Mode: read-deny_none
                                                                     10.62.194.166
               Sharelock Mode: read-deny_delete
3 entries were displayed.

Once that happens, I can issue new locks from other clients, whether they are CIFS/SMB or NFS. Doesn’t matter – multiprotocol locking in ONTAP just works!

Where can I find out more?

For more information on NFS in clustered Data ONTAP, see TR-4067: NFS Best Practice and Implementation Guide.

For more information on assorted aspects of multiprotocol NAS access in clustered Data ONTAP, see TR-4073: Secure Unified Authentication.

Additionally, subscribe to Why is the Internet Broken and follow @NFSDudeAbides on Twitter for more NAS-related information!

TECH::Docker + CIFS/SMB? That’s unpossible!


Recently, I’ve been playing with Docker quite a bit more, trying to educate myself on what it can and cannot do and where it fits in to NetApp and file services/NAS.

I wrote a blog on setting up a PaaS container that can do Firefox over VNC (for Twitter, of all things), as well as one on using NFS in Docker. People have asked me (and I have wondered), what about CIFS/SMB? Now, we could totally do this from the Linux container I created, using mount -t cifs or Samba. But I’m talking about Windows-based CIFS/SMB.
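For the record, that Linux-container route would look something like this – the server, share and credentials below are made up, the container has to run privileged for mount to work, and the last two commands run inside the container:

# docker run -it --privileged centos:7 /bin/bash
# yum install -y cifs-utils
# mount -t cifs //10.63.3.68/share /mnt -o username=user1,password=secret,vers=3.0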

Microsoft supports Docker?

Recently, Microsoft issued an announcement that it will be integrating Docker into Windows Server and Windows Azure, as well as adding Server container images in Docker hub. In fact, you can find Microsoft containers in GitHub today. But the content is a bit sparse, as far as I could see. This could be due to new-ness, or worse, apathy. Time will tell.

As far as Server containers go, it seems that Windows containers won’t support RDP or local login – only PowerShell and WMI, as per this Infoworld article on Microsoft doing a Docker demo. And when I looked for PowerShell images, I found just one:

# docker search powershell
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io: docker.io/solarkennedy/powershell

It would be totally valid to connect to a CIFS/SMB share via PowerShell, but it looks like there’s a bit of work to do to get this image running – namely, running it on a Windows server rather than Linux:

# docker run -t -i --privileged docker.io/solarkennedy/powershell:latest
Application tried to create a window, but no driver could be loaded.
Make sure that your X server is running and that $DISPLAY is set correctly.
Encountered a problem reading the registry. Cannot find registry key SOFTWARE\Microsoft\PowerShell.

Registry errors? That sure looks familiar… 🙂

What about Azure?

Microsoft also has Azure containers out there. I installed one of the Azure CLI containers, just to see if we could do anything with it. No dice. The base OS for the Azure CLI container appears to be Linux:

# docker run -t -i --privileged docker.io/microsoft/azure-cli:latest
root@b23878ec46c4:/# uname -a
Linux b23878ec46c4 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

This is the set of commands I get:

# help
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
These shell commands are defined internally. Type `help' to see this list.
Type `help name' to find out more about the function `name'.
Use `info bash' to find out more about the shell in general.
Use `man -k' or `info' to find out more about commands not in this list.
A star (*) next to a name means that the command is disabled.
[standard GNU bash builtin listing follows – job control, history, cd, echo, export, read, set, test, type, ulimit, umask, wait, and so on – nothing Azure- or Windows-specific]

There is an Azure command set also, but that seems to connect directly to an Azure cloud instance, which requires an account, etc. I suspect I’d have to pay to use commands like “azure storage,” which is why I haven’t set one up yet. (I’m cheap)


root@b23878ec46c4:/# azure storage share show
info: Executing command storage share show
error: Please set the storage account parameters or one of the following two environment variables to use storage command. 1.AZURE_STORAGE_CONNECTION_STRING, 2. AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY
info: Error information has been recorded to /root/.azure/azure.err
error: storage share show command failed

Whither Windows file services?

The preliminary results of using Docker to connect to CIFS/SMB shares aren’t promising. That isn’t to say it won’t be possible. I still need to install Docker on a Windows server and try that PowerShell container again. Once I do that, I’ll update this blog, so stay tuned!

Plus, it’s entirely possible that more containers will pop up as the Microsoft repository grows. However, I do hope this works or is at least in the plans for Microsoft. While it’s cool to connect to a cloud share via CIFS/SMB and Azure, I’d like to be able to have control over connecting to shares on my private storage, such as NetApp.

TECH::Using NFS with Docker – Where does it fit in?


NOTE: I wrote this blog nearly 2 years ago, so a lot has changed since then regarding Docker. I will eventually update this blog, but do check out things like the Docker Volume Plugin for NetApp storage in the meantime. 🙂

Recently, I wrote a post on DataCenterDude.com on how to use Docker to create the ultimate subtweet via a container running VNC and Firefox.

That got me thinking… as the NFS TME for NetApp, I have to consider where file services fit into newer technologies like Docker. What use cases might there be? Why would we use it?

In this blog, I will talk about what sorts of things you can do with NFS in Docker. I’ll say upfront that I’m a Docker rookie, so some of the things I do might seem kludgy to Docker experts. But I’m learning as I go and trying to document it here. 🙂

Can I store Docker images on NFS?

When you run a build of a Docker image, it gets stored in /var/lib/docker.

# ls
containers    graph         repositories-devicemapper  vfs
devicemapper  init          tmp                        volumes
execdriver    linkgraph.db  trust

However, the space available there is limited. From the /etc/sysconfig/docker-storage file:

# By default, Docker uses a loopback-mounted sparse file in
# /var/lib/docker. The loopback makes it slower, and there are some
# restrictive defaults, such as 100GB max storage.

We can see that a sparse 100GB file is created in /var/lib/docker/devicemapper/devicemapper, as well as a 2GB metadata file. This seems to be where our images get stored.

# ls -lah /var/lib/docker/devicemapper/devicemapper
total 743M
drwx------. 2 root root 32 May 4 22:59 .
drwx------. 5 root root 50 May 4 22:59 ..
-rw-------. 1 root root 100G May 7 21:16 data
-rw-------. 1 root root 2.0G May 7 21:16 metadata

And we can see that there is actually 743MB in use.

So, we’re limited to 100GB, and that storage is going to be fairly slow. However, the filesystems supported by Docker for this don’t include NFS; instead, Docker uses block-based filesystems, as described here:

Comprehensive Overview of Storage Scalability in Docker

However, you could still use NFS to store the Docker image data and metadata. How?

Mount an NFS export at /var/lib/docker!

When I stop Docker and mount the directory via NFS, I can see that there is nothing in my 1TB volume:

# service docker stop
# mount 10.228.225.142:/docker /var/lib/docker
# df -h | grep docker
10.228.225.142:/docker 973G 384K 973G 1% /var/lib/docker

When I start the Docker service, ~300MB is written to the mount and the folder structure is created:

# service docker start
Redirecting to /bin/systemctl start docker.service
# df -h | grep docker
10.228.225.142:/docker 973G 303M 973G 1% /var/lib/docker
# ls
containers devicemapper execdriver graph init linkgraph.db repositories-devicemapper 
tmp trust volumes

However, there is still a 100GB limited sparse file and we can see where the 300M is coming from:

# ls -lah
total 295M
drwx------ 2 root root 4.0K May 7 21:39 .
drwx------ 4 root root 4.0K May 7 21:39 ..
-rw------- 1 root root 100G May 7 21:39 data
-rw------- 1 root root 2.0G May 7 21:39 metadata

In this particular case, the benefit NFS brings is external storage in case the host ever tanks, or external storage for hosts that don’t have 100GB to spare. It would also be cool if that sparse file could be customized to grow past 100GB if we wanted it to. I actually posted a request for that to happen. Then they replied and I discovered I hadn’t RTFM’d – it actually DOES exist as an option. 😉
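For reference, the knob I'm referring to is a devicemapper storage option. A sketch of what it might look like – the flag names are from memory, so verify them against your Docker version before relying on this:

# /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-opt dm.loopdatasize=500GB"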

So what could we use NFS for in Docker?

When you create a container, you are going to be fairly limited in the amount of space available to that container. This is exacerbated when you run multiple containers on the same host. So, to get around that, you could mount an NFS share at the start of the container. With NFS, my storage limits are only what my storage provider dictates.

Another benefit of NFS with Docker? Access to a unified set of data across all containers.

If you’re using containers to do development, it would make sense that the containers all have access to the same code branches and repositories. What if you had an application that needed access to a shared Oracle database?

How do we do it?

One thing I’ve noticed while learning Docker is that the container OS is nothing like a virtual machine. These containers are essentially thin clients and are missing some functionality by design. From what I can tell, the goal is to have a client that can run applications but is not too heavyweight (for efficiency) and not terribly powerful in what can be done on the client (for security). For instance, the default for Linux-based OSes seems to leave systemd out of the equation. Additionally, these clients all start in unprivileged mode by default, so root is a bit limited in what it can and cannot do.

As a result, doing something as simple as configuring a NFS client can be a challenge.

Why not just use the -v option when running your container?

One approach to serving NFS to your Docker image would be to mount the NFS share to your Docker host and then run your Docker images using the -v option to mount a volume (a quick sketch of that follows the list below). That eliminates the need to do anything wonky on the images and allows easier NFS access. However, I wanted to mount inside of a Docker container for 2 reasons:

  1. My own knowledge – what would it take? I learned a lot about Docker, containers and the Linux OS doing it this way.
  2. Cloud – If we’re mounting NFS on a Docker host, what happens if we want to use those images elsewhere in the world? We’d then have to mount the containers to those Docker hosts. Our end users would have to either run a script on the host or would need to know what to mount. I thought it might scale better to have it built in to the image itself and make it into a true “Platform as a Service” setup. Then again, maybe it *would* make more sense to do it via the -v option… I’m sure there could be use cases made for both.
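For completeness, here's roughly what that -v approach looks like in practice: mount the export on the Docker host, then expose it to the container as a volume (the export, host path and image name here are made up):

# mount -t nfs 10.228.225.142:/docker /mnt/nfs-data
# docker run -it -v /mnt/nfs-data:/data centos:7 /bin/bash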

Step 1: Configure your NFS server

In this example, I am going to use NFS running on clustered Data ONTAP 8.3 with a 1TB volume.

cluster::> vol show -vserver parisi -volume docker -fields size,junction-path,unix-permissions,security-style
vserver volume size security-style unix-permissions junction-path
------- ------ ---- -------------- ---------------- -------------
parisi  docker 1TB  unix           ---rwxr-xr-x     /docker

The NFS server will have NFSv3 and NFSv4.1 (with pNFS) enabled.

cluster::> nfs server show -vserver parisi -fields v3,v4.1,v4.1-pnfs,v4-id-domain
vserver v3      v4-id-domain             v4.1    v4.1-pnfs
------- ------- ------------------------ ------- ---------
parisi  enabled domain.win2k8.netapp.com enabled enabled

I’ll need an export policy and rule. For now, I’ll create one that’s wide open and apply it to the volume.

cluster::> export-policy rule show -vserver parisi -instance
                                    Vserver: parisi
                                Policy Name: default
                                 Rule Index: 1
                            Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
                             RO Access Rule: any
                             RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true

Step 2: Modify your Dockerfile

The Dockerfile is a configuration file that is used to build custom Docker images. It’s essentially a build script.

If you want to use NFS in your Docker container, the appropriate NFS utilities would need to be installed. In this example, I’ll be using CentOS 7. In this case, the Dockerfile would need to include a section with an install of nfs-utils:

# Install NFS tools
RUN yum install -y nfs-utils

To run NFSv3, you have to ensure that the necessary ancillary processes (statd, nlm, etc.) are running. Otherwise, you have to mount with nolock, which is not ideal:

# mount -o nfsvers=3 10.63.3.68:/docker /mnt
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
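
If you don’t want to deal with the lock services at all, the workaround the error message suggests is local-only locking. Something like this mounts fine (same export as above), but anything relying on NFS locks won’t be protected:

# mount -o nfsvers=3,nolock 10.63.3.68:/docker /mnt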

Getting statd and friends running requires systemd to be functioning properly in your Docker container image. This is no small feat, but it is possible. Check out Running systemd within a Docker Container for info on how to do this with RHEL/Fedora/CentOS, or read further in this blog for some of the steps I had to take to get it working. There are containers out there that other people have built that run systemd, but I wanted to learn how to do this on my own and guarantee that I’d get exactly what I wanted from my image.

To run just NFSv4, all you need are the nfs-utils. No need to set up systemd.

# mount 10.63.3.68:/docker /mnt
# mount | grep docker
10.63.3.68:/docker on /mnt type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.17.0.21,local_lock=none,addr=10.63.3.68)

I essentially took the Dockerfile from the systemd post I mentioned, as well as this one for CentOS, and modified them a bit (removed the rm -f entries, created a directory for the mount point, changed the location of dbus.service, copied the dbus.service file to the location of the Dockerfile, and installed nfs-utils and autofs).

I based it on this Dockerfile: https://github.com/maci0/docker-systemd-unpriv/

From what I can tell, mounting NFS in a Docker container requires privileged access, and since there is currently no way to build in privileged mode, we can’t add a mount command to the Dockerfile. So I’d need to run the mount command after the image is built and the container is running. There are ways to run systemd in unprivileged mode, as documented in this other blog.

Using autofs!

On the client, we could also set up the automounter to mount NFS exports on demand, rather than mounting them and leaving them mounted. I cover automounting in NFS (for home directories) with clustered Data ONTAP a bit in TR-4073 on page 160.

Doing this was substantially trickier, as I needed to install/start autofs, which requires privileged mode *and* systemd to be working properly. Plus, I had to do a few other tricky things.

This is what the /etc/auto.master file would look like in the container:

# cat /etc/auto.master
#
# Sample auto.master file
# This is a 'master' automounter map and it has the following format:
# mount-point [map-type[,format]:]map [options]
# For details of the format look at auto.master(5).
#
/misc /etc/auto.misc
#
# NOTE: mounts done from a hosts map will be mounted with the
# "nosuid" and "nodev" options unless the "suid" and "dev"
# options are explicitly given.
#
/net -hosts
#
# Include /etc/auto.master.d/*.autofs
# The included files must conform to the format of this file.
#
+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
/docker-nfs /etc/auto.misc --timeout=50

And the /etc/auto.misc file:

# cat /etc/auto.misc
docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker

The Dockerfile

Here’s my Dockerfile for a container that can run NFSv3 or NFSv4, with manual or automount.

FROM centos:centos7
ENV container docker
MAINTAINER Justin Parisi "whyistheinternetbroken@gmail.com"
RUN yum -y update; yum clean all
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y install nfs-utils; yum clean all
RUN systemctl mask dev-mqueue.mount dev-hugepages.mount \
 systemd-remount-fs.service sys-kernel-config.mount \
 sys-kernel-debug.mount sys-fs-fuse-connections.mount
RUN systemctl mask display-manager.service systemd-logind.service
RUN systemctl disable graphical.target; systemctl enable multi-user.target

# Copy the dbus.service file from systemd to location with Dockerfile
COPY dbus.service /usr/lib/systemd/system/dbus.service

VOLUME ["/sys/fs/cgroup"]
VOLUME ["/run"]

CMD ["/usr/lib/systemd/systemd"]

# Make mount point
RUN mkdir /docker-nfs

# Configure autofs
RUN yum install -y autofs
RUN echo "/docker-nfs /etc/auto.misc --timeout=50" >> /etc/auto.master

###### CONFIGURE THIS PORTION TO YOUR OWN SPECS ######
#RUN echo "docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker" >> /etc/auto.misc
######################################################


# Copy the shell script to finish setup
COPY configure-nfs.sh /configure-nfs.sh
RUN chmod 777 configure-nfs.sh

This was my shell script:

#!/bin/sh

# Start services
service rpcidmapd start
service rpcbind start
service autofs start

# Kill autofs pid and restart, because Linux
ps -ef | grep '/usr/sbin/automount' | awk '{print $2}' | xargs kill -9
service autofs start

The script had to live in the same directory as my Dockerfile. I also had to copy dbus.service to that same directory.

# cp /usr/lib/systemd/system/dbus.service /location_of_Dockerfile/dbus.service

Once I had the files in place, I built the image:

# docker build -t parisi/nfs-client .
Sending build context to Docker daemon 5.12 kB
Sending build context to Docker daemon
Step 0 : FROM centos:centos7
 ---> fd44297e2ddb
Step 1 : ENV container docker
 ---> Using cache
 ---> 2cbdf1a478bc
Step 2 : RUN yum -y update; yum clean all
 ---> Using cache
 ---> d4015989b039
Step 3 : RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
 ---> Using cache
 ---> ec0fbd7641bb
Step 4 : RUN yum -y install nfs-utils; yum clean all
 ---> Using cache
 ---> 485fac2c1733
Step 5 : RUN systemctl mask dev-mqueue.mount dev-hugepages.mount systemd-remount-fs.service sys-kernel-config.mount sys-kernel-debug.mount sys-fs-fuse-connections.mount
 ---> Using cache
 ---> d7f44caa9d8f
Step 6 : RUN systemctl mask display-manager.service systemd-logind.service
 ---> Using cache
 ---> f86f635b6af7
Step 7 : RUN systemctl disable graphical.target; systemctl enable multi-user.target
 ---> Using cache
 ---> a4b7fed3b91d
Step 8 : COPY dbus.service /usr/lib/systemd/system/dbus.service
 ---> Using cache
 ---> f11fa8045437
Step 9 : VOLUME /sys/fs/cgroup
 ---> Using cache
 ---> e042e697636d
Step 10 : VOLUME /run
 ---> Using cache
 ---> 374fc2b247cb
Step 11 : CMD /usr/lib/systemd/systemd
 ---> Using cache
 ---> b797b045d6b7
Step 12 : RUN mkdir /docker-nfs
 ---> Using cache
 ---> 8228a9ca400d
Step 13 : RUN yum install -y autofs
 ---> Using cache
 ---> 01a64d46a737
Step 14 : RUN echo "/docker-nfs /etc/auto.misc --timeout=50" >> /etc/auto.master
 ---> Using cache
 ---> 78b63c672baf
Step 15 : RUN echo "docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker" >> /etc/auto.misc
 ---> Using cache
 ---> a2d99d3e1ba3
Step 16 : COPY configure-nfs.sh /configure-nfs.sh
 ---> Using cache
 ---> 70e71370149d
Step 17 : RUN chmod 777 configure-nfs.sh
 ---> Running in c1e24ab5b643
 ---> 4fb2c5942cbb

Removing intermediate container c1e24ab5b643
Successfully built 4fb2c5942cbb

# docker images parisi/nfs-client
REPOSITORY          TAG      IMAGE ID       CREATED          VIRTUAL SIZE
parisi/nfs-client   latest   4fb2c5942cbb   49 seconds ago   298.2 MB

Now I can start systemd for the container in privileged mode.

# docker run --privileged -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro parisi/nfs-client sh -c "/usr/lib/systemd/systemd"
e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8

# docker ps -a
CONTAINER ID   IMAGE                      COMMAND                 CREATED          STATUS          PORTS   NAMES
e157ecdf7269   parisi/nfs-client:latest   "sh -c /usr/lib/syst    11 seconds ago   Up 10 seconds           sleepy_pike

Then I can go into the container and kick off my script:

# docker exec -t -i e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 /bin/bash
[root@e157ecdf7269 /]# ./configure-nfs.sh
Redirecting to /bin/systemctl start rpcidmapd.service
Failed to issue method call: Unit rpcidmapd.service failed to load: No such file or directory.
Redirecting to /bin/systemctl start rpcbind.service
Redirecting to /bin/systemctl start autofs.service
kill: sending signal to 339 failed: No such process
Redirecting to /bin/systemctl start autofs.service

[root@e157ecdf7269 /]# service autofs status
Redirecting to /bin/systemctl status autofs.service
autofs.service - Automounts filesystems on demand
 Loaded: loaded (/usr/lib/systemd/system/autofs.service; disabled)
 Drop-In: /run/systemd/system/autofs.service.d
 └─00-docker.conf
 Active: active (running) since Mon 2015-05-11 21:03:24 UTC; 21s ago
 Process: 355 ExecStart=/usr/sbin/automount $OPTIONS --pid-file /run/autofs.pid (code=exited, status=0/SUCCESS)
 Main PID: 357 (automount)
 CGroup: /system.slice/docker-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8.scope/system.slice/autofs.service
 └─357 /usr/sbin/automount --pid-file /run/autofs.pid

May 11 21:03:24 e157ecdf7269 systemd[1]: Starting Automounts filesystems on demand...
May 11 21:03:24 e157ecdf7269 automount[357]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
May 11 21:03:24 e157ecdf7269 systemd[1]: Started Automounts filesystems on demand.

Once the script runs, I can start testing my mounts!

First, let’s do NFSv3:

[root@e454fd0728bd /]# mount -o nfsvers=3 10.228.225.142:/docker /mnt
[root@e454fd0728bd /]# mount | grep mnt
10.228.225.142:/docker on /mnt type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.228.225.142,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.228.225.142)
[root@e454fd0728bd /]# cd /mnt
[root@e454fd0728bd mnt]# touch nfsv3file
[root@e454fd0728bd mnt]# ls -la
total 12
drwxrwxrwx 2 root root 4096 May 12 15:14 .
drwxr-xr-x 20 root root 4096 May 12 15:13 ..
-rw-r--r-- 1 root root 0 May 12 15:14 nfsv3file
-rw-r--r-- 1 root root 0 May 11 17:53 nfsv41
-rw-r--r-- 1 root root 0 May 11 18:46 nfsv41-file
drwxrwxrwx 11 root root 4096 May 12 15:05 .snapshot

Now, let’s try autofs!

When I log in, I can see that nothing is mounted:

[root@e157ecdf7269 /]# mount | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 on / type ext4 (rw,relatime,discard,stripe=16,data=ordered)
/etc/auto.misc on /docker-nfs type autofs (rw,relatime,fd=19,pgrp=11664,timeout=50,minproto=5,maxproto=5,indirect)

Then I cd into my automount location and notice that my mount appears:

[root@e157ecdf7269 /]# cd /docker-nfs/docker
[root@e157ecdf7269 docker]# mount | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 on / type ext4 (rw,relatime,discard,stripe=16,data=ordered)
/etc/auto.misc on /docker-nfs type autofs (rw,relatime,fd=19,pgrp=11664,timeout=50,minproto=5,maxproto=5,indirect)
10.228.225.142:/docker on /docker-nfs/docker type nfs4 (rw,nosuid,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=60,retrans=2,sec=sys,clientaddr=172.17.0.134,local_lock=none,addr=10.228.225.142)   <<<< there it is!

[root@e157ecdf7269 docker]# df -h | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 9.8G 313M 8.9G 4% /
10.228.225.142:/docker 973G 1.7M 973G 1% /docker-nfs/docker

What now?

Now that I’ve done all that work to create a Docker image, it’s time to share it with the world.

# docker login
Username (parisi):
Login Succeeded

# docker tag 7cb0f9093319 parisi/centos7-nfs-client-autofs

# docker push parisi/centos7-nfs-client-autofs

Do you really want to push to public registry? [Y/n]: Y
The push refers to a repository [docker.io/parisi/centos7-nfs-client-autofs] (len: 1)
Sending image list
Pushing repository docker.io/parisi/centos7-nfs-client-autofs (1 tags)
6941bfcbbfca: Image already pushed, skipping
41459f052977: Image already pushed, skipping
fd44297e2ddb: Image already pushed, skipping
78b3f0a9afb1: Image successfully pushed
ec8afb93938d: Image successfully pushed
3f5dcd409a0e: Image successfully pushed
231f18a54eee: Image successfully pushed
1bfe1aa6309e: Image successfully pushed
318c908bb3e7: Image successfully pushed
80f71e4c55e8: Image successfully pushed
1ef13e5d4686: Image successfully pushed
5d9999f99007: Image successfully pushed
ae28ae0477aa: Image successfully pushed
438342aef8e1: Image successfully pushed
1069a8beb629: Image successfully pushed
15692893daab: Image successfully pushed
8ce7a0621ca4: Image successfully pushed
4778761cf8bd: Image successfully pushed
7cb0f9093319: Image successfully pushed

Pushing tag for rev [7cb0f9093319] on {https://cdn-registry-1.docker.io/v1/repositories/parisi/centos7-nfs-client-autofs/tags/latest}

You can find the image repository here: https://registry.hub.docker.com/u/parisi/centos7-nfs-client-autofs/

And the files here: https://github.com/whyistheinternetbroken/docker-centos7-nfs-client-autofs

Also, check out this fantastic blog post by Andrew Sullivan on using NetApp snapshot technology to make Docker images persistent!

Installing Docker on a Mac? Check this out if you hit issues:

https://whyistheinternetbroken.wordpress.com/2016/03/25/docker-mac-install-fails/

LDAP::Schemin’ with schemas – Part 3

This post is part 3 of the new LDAP series I’ve started to help people become more aware of what LDAP is and how it works. Part one was posted here:

What the heck is an LDAP anyway?

This post will focus on the schema portion of LDAP.

homer-ldap-schema

What is a schema in LDAP?

A schema, as used by LDAP, is a set of defined rules that govern the kinds of information a server can hold. These include, but are not limited to:

  • Attribute Syntaxes—Provide information about the kind of information that can be stored in an attribute.
  • Matching Rules—Provide information about how to make comparisons against attribute values.
  • Matching Rule Uses—Indicate which attribute types may be used in conjunction with a particular matching rule.
  • Attribute Types—Define an object identifier (OID) and a set of names that may be used to refer to a given attribute, and associates that attribute with a syntax and set of matching rules. These are defined in detail in RFC-2252.
  • Object Classes—Define named collections of attributes and classify them into sets of required and optional attributes.
  • Name Forms—Define rules for the set of attributes that should be included in the RDN for an entry.
  • Content Rules—Define additional constraints about the object classes and attributes that may be used in conjunction with an entry.
  • Structure Rules—Define rules that govern the kinds of subordinate entries that a given entry may have.

When dealing with LDAP schemas, there is a subset of the above that we deal with on a regular basis for name services/identity management.

Attribute syntaxes

Attribute syntaxes define what type of information can be stored as an attribute. These are defined by an object identifier (OID) and control whether an entry can be in Unicode, integer format or other format.

The following table from OpenLDAP.org shows some commonly used attribute syntax OIDs:

Name               OID                             Description
boolean            1.3.6.1.4.1.1466.115.121.1.7    boolean value
directoryString    1.3.6.1.4.1.1466.115.121.1.15   Unicode (UTF-8) string
distinguishedName  1.3.6.1.4.1.1466.115.121.1.12   LDAP DN
integer            1.3.6.1.4.1.1466.115.121.1.27   integer
numericString      1.3.6.1.4.1.1466.115.121.1.36   numeric string
OID                1.3.6.1.4.1.1466.115.121.1.38   object identifier
octetString        1.3.6.1.4.1.1466.115.121.1.40   arbitrary octets

Microsoft Active Directory LDAP uses different OIDs for its attribute syntaxes. These can be seen in the schema via ADSI Edit. As an example, while OpenLDAP uses an OID of 1.3.6.1.4.1.1466.115.121.1.15 for Unicode (UTF-8) strings, Active Directory uses 2.5.5.12:

attr-syntax-unicode

These attribute syntaxes are generally not modifiable.

Object classes

Object classes in LDAP are collections of specific types of attributes that help LDAP searches execute more efficiently. When a search is kicked off, the objectClass is passed in the search. For example:

Filter: (&(objectClass=User)(sAMAccountName=ldapuser))

The above objectClass is defined as “User,” but can also include (but is not limited to):

  • Person
  • organizationalPerson
  • Group
  • Top

For a complete list of objectClass attributes, see: http://oav.net/mirrors/LDAP-ObjectClasses.html

In Active Directory, the objectClass can be viewed and modified in Active Directory Users and Computers (with Advanced Features enabled) via the Attribute Editor tab.

attribute-edit-objectclass

When an LDAP search runs, it passes the objectClass value defined in the client configuration’s schema. In NetApp’s clustered Data ONTAP, it would be passed via one of the object-class attributes:

cluster::*> ldap client schema show -vserver SVM -schema AD-IDMU -fields nis-object-class
 (vserver services ldap client schema show)
vserver schema  nis-object-class
------- ------- ----------------
SVM     AD-IDMU nisObject
cluster::*> ldap client schema show -vserver SVM -schema AD-IDMU -fields nis-netgroup-object-class
 (vserver services ldap client schema show)
vserver schema  nis-netgroup-object-class
------- ------- -------------------------
SVM     AD-IDMU nisNetgroup
cluster::*> ldap client schema show -vserver SVM -schema AD-IDMU -fields posix-group-object-class
 (vserver services ldap client schema show)
vserver schema  posix-group-object-class
------- ------- ------------------------
SVM     AD-IDMU Group
cluster::*> ldap client schema show -vserver SVM -schema AD-IDMU -fields posix-account-object-class
 (vserver services ldap client schema show)
vserver schema  posix-account-object-class
------- ------- --------------------------
SVM     AD-IDMU User

In a Linux client like RedHat running SSSD, it might look like this:

# cat /etc/sssd/sssd.conf | grep class
ldap_user_object_class = user
ldap_group_object_class = group

In general, we can think of objectClasses the way we think of a phone book’s Yellow Pages. We don’t look for things in the Yellow Pages by name, but rather by type of business. If we wanted to find a grocery store, we’d use an objectClass of “groceryStore” to filter our results and speed up our search.

Attributes

The most relevant and necessary parts of an LDAP server’s schema (at least when it comes to client configuration) are the attributes themselves. These are the specific entries in the LDAP schema that define who and what an object is. If we extend our Yellow Pages analogy, after we find the objectClass=groceryStore, these would be the phone number, address and so on of the grocery store. That information is the most critical portion of the object, as it tells us what we need to know about the object we were looking for.

In an LDAP server used for name services/identity management, the attributes tell us about users, groups, hosts, netgroups and so on: phone numbers, email addresses, physical locations, IP addresses, UIDs/GIDs for UNIX-based permissions, SIDs for Windows environments and many, many more.

Attributes are broken down into two parts: the attribute (the name) and the value.

The attribute

With regard to name services/identity management, the attributes that matter to us include (but are not limited to):

  • uid
  • uidNumber
  • gid
  • gidNumber
  • memberOf
  • objectClass
  • primaryGroupID
  • objectSid
  • sAMAccountName

These attributes are defined in the clients’ schema templates and should be RFC-2307 compliant for a valid LDAP server configuration. Common LDAP servers such as Microsoft Active Directory, OpenLDAP, Sun Directory Server, RedHat Directory Server (and IdM), Apple Open Directory, Oracle Internet Directory and Vintela all fall under the RFC standards. For a list of available LDAP software, see this Wiki entry.

The value

This is the portion of the attribute that holds the information we want used in queries. For example, if I have an attribute of uid, I can populate the value with the UNIX user name I want to use in queries.
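
To make that concrete, here’s a purely hypothetical LDIF snippet for a user entry (the DN and values are made up; the attribute names match the ones discussed in this post). Attribute names sit on the left of each colon, and the values we populate sit on the right:

# Hypothetical user entry
dn: cn=ldapuser,cn=Users,dc=domain,dc=win2k8,dc=netapp,dc=com
objectClass: User
sAMAccountName: ldapuser
uid: ldapuser
uidNumber: 10001
gidNumber: 10001
unixHomeDirectory: /home/ldapuser
loginShell: /bin/bash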

When an LDAP search is issued, both the attribute and its value are passed.

Filter: (&(objectClass=User)(sAMAccountName=ldapuser))

In the above, both objectClass and sAMAccountName are attributes. User and ldapuser are values.
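
As a sketch of how that same search might be issued from a Linux client with the standard OpenLDAP tools (the hostname, bind DN and base DN here are assumptions, so substitute your own):

# ldapsearch -H ldap://dc1.domain.win2k8.netapp.com -D "cn=ldap-bind,cn=Users,dc=domain,dc=win2k8,dc=netapp,dc=com" -W -b "dc=domain,dc=win2k8,dc=netapp,dc=com" "(&(objectClass=User)(sAMAccountName=ldapuser))" uid uidNumber gidNumber

The filter does the matching; the trailing attribute list (uid, uidNumber, gidNumber) simply limits which attributes come back in the results.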

How do I change them?

Attribute values are editable in most cases, but sometimes the LDAP schema sets a value to “read only.” This is fairly common in Active Directory as a means of idiot-proofing/CYA. There often are ways to modify the schema to allow editing of these attributes, if desired. Always do this with caution, ideally in a lab first, and with the guidance of your LDAP server vendor.

Attributes themselves are also editable, and we can create custom attributes in LDAP if we choose; third-party LDAP products like Vintela do this for you. But in most cases, it’s best to leave the default schemas alone and instead modify the clients to query the attributes we have populated. Many clients have default schema templates, including NetApp’s clustered Data ONTAP.

 Schema Template: AD-IDMU
 Comment: Schema based on Active Directory Identity Management for UNIX (read-only)
 RFC 2307 posixAccount Object Class: User
 RFC 2307 posixGroup Object Class: Group
 RFC 2307 nisNetgroup Object Class: nisNetgroup
 RFC 2307 uid Attribute: uid
 RFC 2307 uidNumber Attribute: uidNumber
 RFC 2307 gidNumber Attribute: gidNumber
 RFC 2307 cn (for Groups) Attribute: cn
 RFC 2307 cn (for Netgroups) Attribute: name
 RFC 2307 userPassword Attribute: unixUserPassword
 RFC 2307 gecos Attribute: name
 RFC 2307 homeDirectory Attribute: unixHomeDirectory
 RFC 2307 loginShell Attribute: loginShell
 RFC 2307 memberUid Attribute: memberUid
 RFC 2307 memberNisNetgroup Attribute: memberNisNetgroup
 RFC 2307 nisNetgroupTriple Attribute: nisNetgroupTriple
ONTAP Name Mapping windowsAccount Attribute: msDS-PrincipalName
 Vserver Owns Schema: false
 RFC 2307 nisObject Object Class: nisObject
 RFC 2307 nisMapName Attribute: nisMapName
 RFC 2307 nisMapEntry Attribute: nisMapEntry
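
The default templates like AD-IDMU are read-only, but you can copy one and adjust the copy. As a rough sketch (verify the exact command path and field names at the advanced privilege level on your ONTAP version):

cluster::*> ldap client schema copy -vserver SVM -schema AD-IDMU -new-schema-name AD-IDMU-CUSTOM
cluster::*> ldap client schema modify -vserver SVM -schema AD-IDMU-CUSTOM -posix-account-object-class posixAccount

Then point your LDAP client configuration at the new schema name instead of the default.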

Schema extension

By default, Microsoft Active Directory does not contain the UNIX attributes needed to use the server for UNIX identity management. The schema has to be extended to create the necessary attribute types to allow proper UNIX-based authentication. Extending the schema in domains with multiple domain controllers is non-disruptive; only one server needs to have the schema extended, and it then replicates to the other servers. For information on extending schemas or enabling identity management, see the TechNet article on IDMU.

UNIX-based LDAP servers such as OpenLDAP don’t have to do this, as they are intended for use with UNIX authentication by default.

Replication of schemas

One of the reasons I like Microsoft’s Active Directory so much for UNIX identity management (aside from the integration of services and pre-existence in enterprise environments) is the built-in schema replication. Want to create a new LDAP server? Cool. Add it to the domain. You’re done.

Microsoft Active Directory replicates every 15 minutes by default. That means all schemas, attributes, etc. get copied over to all domain controllers in a domain.

In a forest with multiple domains, it’s possible to replicate UNIX attributes across those domains to allow cross-domain UNIX authentication. So if your company acquires another company, you can simply add their domain to your forest and enable replication of their UNIX attributes and keep using your existing set up. I describe how to do this in TR-4073 on page 103.

Non-Windows LDAP servers also have the capability to replicate schemas and attributes, but they need configuration and are not as robust as Windows, in my opinion. This is changing somewhat with RedHat Directory Server/IdM, especially with RedHat 7 and the ability to integrate into Active Directory domains. I’ll cover that set up in a later blog post. 🙂 (thanks to Natxo for pointing out that omission)

What else?

LDAP schemas are the lifeblood of directory services, so it’s important to understand them as best as possible. Hopefully this post has helped clear some things up.

For more detailed information on LDAP configuration (particularly with NetApp clustered Data ONTAP), see TR-4073: Secure Unified Authentication.

Also, stay tuned for more in the series of LDAP basics on this blog! Links to other sections can be found on the first post in this series:

What the heck is an LDAP anyway?