Behind the Scenes: Episode 45 – ONTAP Select and ONTAP 9 SAN Improvements


Welcome to Episode 45, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This is yet another in the series of episodes for ONTAP 9 month on the podcast.


This week, we welcome Director of Product Management Peter Skovrup and SAN TME Mike Peppers (@NTAPFLIGuy) to talk about ONTAP Select and SAN improvements, respectively.

ONTAP Select is the next generation of ONTAP Edge, NetApp’s software-defined storage solution. With Select, you can install ONTAP on pretty much any server platform you want and pay only for the storage you use. Select brings HA functionality, better performance, and 4-node clusters!

If you want to get ONTAP Select 9.0RC1, it’s available here (requires a NetApp support login):

http://mysupport.netapp.com/NOW/cgi-bin/software/?product=ONTAP+Select&platform=Appliance+Install

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

The official blog is here:

http://community.netapp.com/t5/Technology/Tech-ONTAP-Podcast-Episode-45-ONTAP-Select-amp-SAN-Improvements-in-ONTAP-9/ba-p/120575

Check out the podcast episode here:


Setting up BIND to be as insecure as possible in CentOS/RHEL 7

DNS, in general, should be locked down as much as possible. It’s too easy for attackers to abuse an open DNS server in attacks like DDoS amplification unless you set up some security measures.

However, if you’re just trying to set up a simple BIND DNS server in a lab that’s not on a public network and is behind a ton of firewalls, just to test some basic functionality like I’ve been doing, you may want things to just *work* without having to set up all the extra security bells and whistles.

I’m writing this up to help people avoid the hours of head banging, Googling and debugging that always ends up in an Occam’s razor-like scenario: disable your firewall.


Before we start, I want to re-iterate something:

DO NOT CONFIGURE YOUR PRODUCTION DNS SERVERS LIKE THIS, INCLUDING DNS SERVERS YOU RUN AT YOUR HOUSE. IF YOU DO, YOU ARE ASKING FOR TROUBLE.

Now that that’s out of the way…

BIND configuration – named.conf Worst Practices

The general recommendation for securing DNS servers is to disable recursion, lock down the allowed queries, etc. Eff that. We’re going all out and allowing everything.

Here’s the named.conf file I used on my BIND server:

options {
  listen-on port 53 { any; };
  listen-on-v6 port 53 { any; };
  directory "/var/named";
  dump-file "/var/named/data/cache_dump.db";
  statistics-file "/var/named/data/named_stats.txt";
  memstatistics-file "/var/named/data/named_mem_stats.txt";
  allow-transfer { any; };
  allow-query-cache { any; };
  allow-query { any; };
  recursion yes;

  dnssec-enable no;
  dnssec-validation no;

  /* Path to ISC DLV key */
  bindkeys-file "/etc/named.iscdlv.key";

  managed-keys-directory "/var/named/dynamic";

  pid-file "/run/named/named.pid";
  session-keyfile "/run/named/session.key";
};

Hackable as s**t. But it works, dammit.

For good measure, my zones:


zone "bind.parisi.com" IN {
 type master;
 file "bind.parisi.com.zone";
 allow-update {any; };
 allow-query {any;};
};

zone "xx.xx.xx.in-addr.arpa" IN {
 type master;
 file "xx.xx.xx.in-addr.arpa.zone";
 allow-update {any;};
 allow-query {any;};
};
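For reference, the zone files named above would live in /var/named. Here’s a minimal sketch of what a forward zone file might look like – the hostname matches my lab, but the IP, serial, and timer values are just placeholder lab defaults, not what I actually used:

```
$TTL 86400
@    IN  SOA  dns.bind.parisi.com. admin.bind.parisi.com. (
         2016062301  ; serial
         3600        ; refresh
         1800        ; retry
         604800      ; expire
         86400 )     ; minimum TTL
     IN  NS   dns.bind.parisi.com.
dns  IN  A    10.0.0.53   ; placeholder address
```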

Arrrgh. Firewalls!


If you’ve worked with Linux in the past 10 years, I’m sure you’ve run into the problem with Linux firewalls where you just end up turning them off. Historically, it’s been iptables and SELinux. When I was working on my environment, I was seeing the following in a packet trace when attempting remote nslookups:

ICMP 118 Destination unreachable (Host administratively prohibited)

Local lookups worked fine. Pinging the IP worked fine. But dig?

# dig @xx.xx.xx.xx dns.bind.parisi.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @xx.xx.xx.xx dns.bind.parisi.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Ping?

# ping dns.bind.parisi.com
ping: unknown host dns.bind.parisi.com

Everything I read said it was either a config or firewall issue. I had already disabled the usual suspects, SELinux and iptables. But no dice.

Finally, I remembered that CentOS/RHEL 7 is pretty different from previous versions. So I Googled “centos7 security features” and found my answer: THEY ADDED A NEW &*@$ FIREWALL.

Introducing your newest Linux security nemesis…

Firewalld.

Now, I fully understand the need for new security enhancements. And you should totally leave this alone in production environments. But, like the Windows Firewall, it’s the bane of a lab machine’s existence. So, I disabled it.

# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
 Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
 Active: active (running) since Thu 2016-06-23 14:57:47 EDT; 6h ago
 Main PID: 670 (firewalld)
 CGroup: /system.slice/firewalld.service
 └─670 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jun 23 14:57:21 dns.bind.parisi.com systemd[1]: Starting firewalld - dynamic firewall daemon...
Jun 23 14:57:47 dns.bind.parisi.com systemd[1]: Started firewalld - dynamic firewall daemon.

# systemctl stop firewalld


# dig @xx.xx.xx.xx dns.bind.parisi.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @xx.xx.xx.xx dns.bind.parisi.com
; (1 server found)
;; global options: +cmd
;; Got answer:
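That worked. For what it’s worth, if you’d rather leave firewalld running in your lab, the gentler alternative (a sketch, not something I bothered with here) is to punch a hole for DNS instead of stopping the daemon:

```shell
# Allow DNS (53/tcp and 53/udp) through firewalld, persistently
firewall-cmd --permanent --add-service=dns
firewall-cmd --reload

# Or, if you really want the firewall gone across reboots:
systemctl stop firewalld
systemctl disable firewalld
```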

Now, on to fight with BIND some more. Stay tuned for news on TR updates featuring BIND configuration with on-box DNS in ONTAP!

ONTAP 9 is now available!

UPDATED: Changed title and links to 9.0, as it’s now GA. Check for the GA announcement here:

ONTAP 9.0 is now generally available (GA)!

All month long, the Tech ONTAP podcast has been featuring ONTAP 9 on the show to get people hyped up for the impending release.

Well, today’s the day.


ONTAP 9 is here!

That’s right – the next major release of ONTAP is now available. If you have concerns over the “RC” designation, allow me to recap what I mentioned in a previous blog post:

RC versions have completed a rigorous set of internal NetApp tests and are deemed ready for public consumption. Each release candidate provides bug fixes that lead up to the GA edition. Keep in mind that all release candidates are fully supported by NetApp, even if there is a GA version available. However, while RC is perfectly fine to run in production environments, GA is the recommended version of any ONTAP software release.

For a more official take on it, see the NetApp link:

http://mysupport.netapp.com/NOW/products/ontap_releasemodel/post70.shtml

So, why do I need ONTAP 9?

I know what you’re thinking – “Why do I need ONTAP 9? I have a perfectly good working copy of 8.whatever running already.”

Well, honestly, if your version of ONTAP is working for you, super. Don’t change a thing. But, once you take a look at some of the feature enhancements we have in ONTAP 9, you might change your mind…

Features galore!

The list of features and improvements in ONTAP 9 is pretty impressive. It’s so impressive, in fact, that they re-branded ONTAP. (Ok that may be a bit of an exaggeration. We cover why we re-branded to ONTAP 9 in this podcast.)

The list includes:

  • Support for 15TB SSD
  • Inline data compaction
  • SnapLock® software for data compliance
  • RAID-TEC triple-parity protection
  • Headroom for visibility of performance capacity
  • MetroCluster enhancements (8 nodes!)
  • Onboard key manager (Included for FREE)
  • FlexGroups (PVR only in 9.0)
  • Workgroup mode for CIFS/SMB
  • LDAP Signing and Sealing for CIFS/SMB
  • Kerberos 5p support
  • AFF Simplicity templates
  • SAN Optimized factory configurations
  • Performance improvements
  • ONTAP Select (4 nodes, HA failover, software defined storage!)
  • Faster failover times
  • Per-aggregate CPs
  • Filehandle preservation for SVM DR – no more re-mounting after cutover!
  • Volume rehost
  • Global FIPS mode for FIPS compliance
  • TLS 1.1/1.2 support
  • Increased per-node LIF limits (cluster-wide limit remains)

That’s a lot of stuff!

If you have questions about any of the above, leave a comment and I’ll address them in a future blog post.

A few of the NetApp A-Team members wrote up some blogs for the new stuff.

Behind the Scenes: Episode 44 – ONTAP 9 Flash Performance Improvements


Welcome to Episode 44, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This is yet another in the series of episodes for ONTAP 9 month on the podcast.


This week, we welcome the Flash experts Skip Shapiro and Dan Isaacs to discuss what’s new in the world of flash in ONTAP 9, as well as how flash performance is only getting better. We also talk about the new ONTAP feature called compaction.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

The official blog is here:

http://community.netapp.com/t5/Technology/Tech-ONTAP-Podcast-Episode-44-ONTAP-9-Flash-Improvements/ba-p/120318

Check out the podcast episode here:

 

Behind the Scenes: Episode 43 – ONTAP 9 Data Protection Features

Welcome to Episode 43, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This is yet another in the series of episodes for ONTAP 9 month on the podcast.


This week, we invited a couple of our TMEs to talk about Data Protection features in ONTAP 9. Siddharth Agrawal (@siddharth_145) and Mike Worthen (@worthenmichael) discuss MetroCluster, SnapMirror enhancements, and the addition of SnapLock support.

We had Mike Worthen live in the studio, and he’s a bit of a loose cannon. We had to keep telling him to stop banging on the table. Sid was a bit more subdued, but he was also connecting via Skype, so who knows what he was up to over there.

Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Check out the podcast episode here:

ONTAP 9 Feature: Volume rehosting


Clustered Data ONTAP (now known as NetApp ONTAP) is a clustered file system that leverages virtualized storage containers known as Storage Virtual Machines (SVMs) that act as “blades” to create a secure, multi-tenant environment with a unified namespace.

These SVMs own objects such as network interfaces and Flexible Volumes (FlexVols) and act as their own segmented storage systems on shared hardware. In previous releases, the volumes were dedicated to the SVMs and could not be easily moved to another SVM in the cluster. You had to SnapMirror the volume over to the new SVM or copy the data. This process was time consuming and inefficient, so customers, for years, have asked for the ability to easily migrate volumes between SVMs.

In the 8.3.2 release, this functionality was added in limited fashion, for use with the new Copy-Free Transition feature. The volumes could only be migrated if they were marked as “transitioned” volumes from a 7-Mode system.

With ONTAP 9, we now have free rein to move volumes between SVMs without needing to copy anything! We cover ONTAP 9 in our podcast this month, so check it out.

How it works

Every SVM has a unique identifier known as a UUID. If you want to see it, you can run the following command from advanced priv:

ontap9-tme-8040::*> vserver show -vserver parisi,SVM1 -fields uuid
vserver uuid
------- ------------------------------------
SVM1 05e7ab78-2d84-11e6-a796-00a098696ec7
parisi 103879e8-2d84-11e6-a796-00a098696ec7
2 entries were displayed.

When a volume is “owned” by an SVM, that volume gets associated with the UUID of the SVM. It also gets its own file handle, based on the SVM root’s namespace.

In previous releases of cDOT, you could actually move volumes manually if you had a secret decoder ring of commands and enough gall to try it. What volume rehost does is automate the commands you’d need to make the necessary underlying changes to the cluster database tables to ensure nothing blows up.

Prove it!

So, now that I’ve told you about this cool new command, allow me to show you the details. This is the man page entry on an ONTAP 9 system in my lab. Keep in mind that this is a dev build, so things may change slightly when the release goes public.

ontap9-tme-8040::*> man volume rehost
volume rehost Data ONTAP 9.0 volume rehost

NAME
 volume rehost -- Rehost a volume from one Vserver into another Vserver

AVAILABILITY
 This command is available to cluster administrators at the admin privilege
 level.

DESCRIPTION
 The volume rehost command rehosts a volume from source Vserver onto desti-
 nation Vserver. The volume name must be unique among the other volumes on
 the destination Vserver.


PARAMETERS
 -vserver <vserver name> - Source Vserver name
 This specifies the Vserver on which the volume is located.

-volume <volume name> - Target volume name
 This specifies the volume that is to be rehosted.

-destination-vserver <vserver name> - Destination Vserver name
 This specifies the destination Vserver where the volume must be
 located post rehost operation.

{ [-force-unmap-luns {true|false}] - Unmap LUNs in volume
 This specifies whether the rehost operation should unmap LUNs
 present on volume. The default setting is false (the rehost opera-
 tion shall not unmap LUNs). When set to true, the command will unmap
 all mapped LUNs on the volume.

| [-auto-remap-luns {true|false}] } - Automatic Remap of LUNs
 This specifies whether the rehost operation should perform LUN map-
 ping operation at the destination Vserver for the LUNs mapped on the
 volume at the source Vserver. The default setting is false (the
 rehost operation shall not map LUNs at the destination Vserver).
 When set to true, at the destination Vserver the command will create
 initiators groups along with the initiators (if present) with same
 name as that of source Vserver. Then the LUNs on the volume are
 mapped to initiator groups at the destination Vserver as mapped in
 source Vserver.

The volume I intend to move is a volume called “move_me,” which is currently located in SVM1.

ontap9-tme-8040::*> volume show -vserver SVM1 -fields msid,dsid,uuid,vserver -volume move_me
vserver volume dsid msid uuid
------- ------- ---- ---------- ------------------------------------
SVM1 move_me 1027 2163225631 cc691049-2d84-11e6-a796-00a098696ec7

To move this volume, I simply use this command:

ontap9-tme-8040::*> volume rehost -vserver SVM1 -volume move_me -destination-vserver parisi

When I run this, I get this warning:

Warning: Rehosting a volume from one Vserver to another Vserver does not
 change the security information on that volume.
 If the security domains of the Vservers are not identical, unwanted
 access might be permitted, and desired access might be denied. An
 attempt to rehost a volume will disassociate the volume from all
 volume policies and policy rules. The volume must be reconfigured
 after a successful or unsuccessful rehost operation.

Basically, if I use a different AD domain or LDAP configuration in the SVM I am moving to, I could cause access issues.

The command takes a few seconds and I see this:

[Job 42] Job succeeded: Successful

Info: Volume is successfully rehosted on the target Vserver.
Set the desired volume configuration - such as the export policy and QoS policy - on the target Vserver.

Now, I check my volume. No longer on the old SVM:

ontap9-tme-8040::*> volume show -vserver SVM1 -fields msid,dsid,uuid,vserver -volume move_me
There are no entries matching your query.

Now it’s on the new SVM:

ontap9-tme-8040::*> volume show -vserver parisi -fields msid,dsid,uuid,vserver -volume move_me
vserver volume dsid msid uuid
------- ------- ---- ---------- ------------------------------------
parisi move_me 1028 2163225632 cc691049-2d84-11e6-a796-00a098696ec7

Different SVM. Different MSID/DSID. Same UUID for the volume, but belonging to a different UUID for the SVM. For NAS clients, you simply re-mount (as the file handle and IP address will change).

If you have LUNs, you can use the following flags:

{ [-force-unmap-luns {true|false}] - Unmap LUNs in volume
 This specifies whether the rehost operation should unmap LUNs
 present on volume. The default setting is false (the rehost opera-
 tion shall not unmap LUNs). When set to true, the command will unmap
 all mapped LUNs on the volume.

| [-auto-remap-luns {true|false}] } - Automatic Remap of LUNs
 This specifies whether the rehost operation should perform LUN map-
 ping operation at the destination Vserver for the LUNs mapped on the
 volume at the source Vserver. The default setting is false (the
 rehost operation shall not map LUNs at the destination Vserver).
 When set to true, at the destination Vserver the command will create
 initiators groups along with the initiators (if present) with same
 name as that of source Vserver. Then the LUNs on the volume are
 mapped to initiator groups at the destination Vserver as mapped in
 source Vserver.

What about FlexClones?

Sometimes, volumes will have FlexClones attached to them. FlexClones are RW copies of the volume that are backed by snapshots and don’t take any space until you start making changes to them – perfect for dev work!

So, what if I want to re-host a volume with FlexClones? Let’s find out!

ontap9-tme-8040::*> volume clone create -vserver parisi -flexclone clone -type RW -parent-vserver parisi -parent-volume move_me -junction-active true -foreground true
[Job 43] Job succeeded: Successful

ontap9-tme-8040::*> vol rehost -vserver parisi -volume move_me -destination-vserver SVM1 -force-unmap-luns false -allow-native-volumes false

Error: command failed: Cannot rehost volume "move_me" on Vserver "parisi"
 because the volume is a parent of a clone volume.


ontap9-tme-8040::*> vol rehost -vserver parisi -volume clone -destination-vserver SVM1 -force-unmap-luns false -allow-native-volumes false

Error: command failed: Cannot rehost volume "clone" on Vserver "parisi"
 because the volume is a clone volume.

Not supported… yet. 😉

So there you go. ONTAP 9 is bringing all sorts of goodies!

Other cool things

I ran across some other cool features I had not seen before in clustered Data ONTAP while writing this blog.

  • First of all, when you create a new aggregate, the CLI will give you a preview before you commit the change:
ontap9-tme-8040::*> aggr create -aggregate aggr1_node1 -diskcount 14 -node ontap9-tme-8040-01

Info: The layout for aggregate "aggr1_node1" on node "ontap9-tme-8040-01" would
 be:

First Plex

RAID Group rg0, 14 disks (block checksum, raid_dp)
 Position Disk Type Size
 ---------- ------------------------- ---------- ---------------
 dparity 1.1.3 SSD -
 parity 1.1.4 SSD -
 data 1.1.5 SSD 744.9GB
 data 1.1.6 SSD 744.9GB
 data 1.1.7 SSD 744.9GB
 data 1.1.8 SSD 744.9GB
 data 1.1.9 SSD 744.9GB
 data 1.1.10 SSD 744.9GB
 data 1.1.11 SSD 744.9GB
 data 1.1.12 SSD 744.9GB
 data 1.1.13 SSD 744.9GB
 data 1.1.14 SSD 744.9GB
 data 1.1.15 SSD 744.9GB
 data 1.1.16 SSD 744.9GB

Aggregate capacity available for volume use would be 7.86TB.

Do you want to continue? {y|n}:

  • If you create a volume and don’t specify an export policy and the default export policy has no rules, ONTAP will warn you:

ontap9-tme-8040::*> vol create -vserver SVM1 -volume move_me -aggregate aggr1_node1 -size 10g -state online

Warning: The export-policy "default" has no rules in it. The volume will
 therefore be inaccessible.
Do you want to continue? {y|n}: y
[Job 41] Job succeeded: Successful

Oddities

Some things I ran across when setting this cluster up:

  • If you have ports in the Cluster IPSpace that are down (I only connected 2 out of 4) and attempt to run “cluster create” or “cluster join,” the process will fail until you move those ports that are down to the Default IPSpace.
  • If you’re trying to reinit a node and the node has ADP/disk partitions, the reinit may fail and suggest that you don’t have enough disks to create a root aggregate. To fix that, boot into maintenance mode and delete the partitions using “disk unpartition.”

Migrating to ONTAP – Ludicrous speed!

As many of those familiar with NetApp know, the era of clustered Data ONTAP (cDOT) is upon us. 7-Mode is going the way of the dodo, and we’re helping customers (both legacy and new) move to our scale-out storage solution.

There are a variety of ways people have been moving to cDOT.

(Also, stay tuned for more transition goodness coming very, very soon!)

What’s unstructured NAS data?

If you’re not familiar with the term, unstructured NAS data is, more or less, just NAS data. But it’s really messy NAS data.

It’s home directories, file shares, etc. It refers to a dataset that has been growing and growing over time and becoming harder and harder to manage at a granular level due to the directory structure, number of objects and the sheer amount of ACLs.

It’s a sore point for NAS migrations because it’s difficult to move due to the dependencies. If you’re coming from 7-Mode, you can certainly migrate using the 7MTT, which will copy all those folders and ACLs, but you potentially miss out on the opportunity to restructure that NAS data into a more manageable, logical format via copy-based transition (CBT).

When coming from a non-NetApp storage system, it gets trickier because copying the data is the *only* option at that point. Then the complexity of the unstructured NAS data is exacerbated by the fact that it will take a very, very long time to migrate in some cases.

What tools are available to migrate unstructured NAS data?

The arrows in your quiver (so to speak) for migrating NAS data are your typical utilities, such as the tried and true Robocopy for CIFS/SMB data.

There is also the old standby of NDMP, which just about every storage vendor supports. This can migrate all NAS data types, as it’s file-system agnostic.

However, each of the available migration methods has its challenges. They are fairly slow. Some are single-threaded. All are network-dependent. And the challenges only get more apparent as the number of files grows. Remember, you’re not just copying files – you are copying information associated with those files. That adds to the overhead.

One of the favorite tools for migrating NAS data is rsync. Some people swear it’s the best backup tool ever. However, it faces the same challenges mentioned above – it’s slow, especially when dealing with large numbers of objects and wide/deep directory structures.

How has NetApp fixed that?

Thanks to some excellent work by one of NetApp’s architects, Peter Schay, we now have a utility that can help your migrations hit ludicrous speed – without needing rsync.

The tool? XCP.

https://xcp.netapp.com/

Also be on the lookout for some more ONTAP goodness in ONTAP 9 that helps improve performance and capacity with NAS data.

What is XCP?

XCP is a free data migration tool offered by NetApp that promises to accelerate NFSv3 migration for large unstructured NAS datasets, gather statistics about your files, sync, verify… pretty much anything you ever wanted out of a NAS migration tool. Its wheelhouse is high file count environments that use NFSv3, which also happens to be one of the more challenging scenarios for data migration.
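To give a flavor of the workflow, here’s a hedged sketch of a typical scan-then-copy sequence with the 1.0 command set – the host and export names are made up, and exact flags may differ in your version:

```shell
# Scan a source export and gather statistics about the file tree
xcp scan -stats oldnas:/vol/home

# Copy the export to the new cDOT SVM (runs multi-threaded by default)
xcp copy oldnas:/vol/home cdot-svm:/home

# Later, pick up incremental changes and verify the result
xcp sync -id 1
xcp verify oldnas:/vol/home cdot-svm:/home
```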

Now, I can’t tell you something is really, really fast without giving you some empirical data. I won’t name names, because that’s not what I do, but in our test runs, an unspecified NAS vendor’s tool took 20 times longer than XCP to migrate 165 million files. We took an 8-10 day file copy down to twelve hours in our testing.

In another use case, a customer moved 4 BILLION inodes and a petabyte of data from a non-NetApp system to a cDOT system and it was 30x faster than rsync.

That’s INSANE.

However, if you’re migrating a few large files, you won’t see a huge gain in speed. Rsync would be similarly effective.

And data migration isn’t the only use case – XCP can also help with file listing.

Recall what I mentioned before…

Remember, you’re not just copying files – you are copying information associated with those files.

That “information” I mentioned? It’s called metadata. And it has long been the bane of existence for NAS file systems. It’s all those messy bits of file systems – the directory tree locations, filehandles, file permissions, owners – all the things that make file based storage awesome because of the granularity and security also make it not so awesome because of the overhead. It’s a problem that is seen across vendors.

Case in point – that same not-to-be-named, non-NetApp storage vendor? It took 9 days to do a listing of the aforementioned 165 million files. NINE DAYS.

I’ve seen bathroom renovations take less time than that.

With XCP on a cDOT cluster?

That listing took 30 minutes.

That’s a 400x performance improvement with a free, easy to use tool. It takes traditionally slow utilities like du, ls, find and dd and makes them faster. It also does another thing – it makes them useful for storage performance benchmark tests.

I used to work in support – we’d get numerous calls about how “slow” our storage was because dd, du, ls or find were slow. We’d get a perfstat, see hardly any IOPS on the storage, disk utilization near idle, CPUs barely at 25% and say “yeah, you’re using the wrong type of test.”

XCP is now another arrow in the quiver for performance testing.

What else can it do?

XCP can also do some pretty rich reporting of datasets. You can gather information like space utilization, extension types, number of files, directory entries, dates modified/created/accessed, even the top 5 space consumers… and all in manager-friendly graphs and charts.

For example:

(Screenshot: XCP report charts showing space utilization, file types, and top space consumers)

Pretty cool, eh? And did I mention it’s FREE?

How does it work?

XCP, at a high level, is built from the ground up and takes the overall concept of rsync, re-invents it and multi-threads it. Everything is done in parallel, using multiple connections and cores. This ensures the only bottleneck of your data transfer is your pipe. XCP will copy as much data over as many threads as your network (and CPUs) can handle. You can saturate as many 10GbE network links as your storage can handle.
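The core idea – many parallel workers instead of one serial walker – can be illustrated with plain shell tools. This is only a toy analogy, not how XCP is actually implemented:

```shell
# Build a small throwaway dataset
SRC=$(mktemp -d)
DST=$(mktemp -d)
for i in $(seq 1 100); do echo "data $i" > "$SRC/file$i"; done

# Copy files 8 at a time instead of one by one, rsync-style
find "$SRC" -type f -print0 | xargs -0 -P 8 -I{} cp {} "$DST"/

echo "copied $(ls "$DST" | wc -l) files"
# -> copied 100 files
```

With real datasets, the parallelism only pays off when there are lots of small files and enough spare CPU and network to keep the workers busy – which is exactly the high-file-count scenario XCP targets.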

The details relayed to me by the XCP team:

  • Parallelism galore – multitasking, multiprocessing, and multiple links
  • Built-in NFS client that does asynchronous queueing and streaming of all standard NFSv3 requests listed in RFC-1813
  • Typically 5-25 times faster than rsync!

As our German friends say, it’s like the Autobahn – no speed limit (other than the limits of your own vehicle).

If you don’t believe me, try it for yourself. Contact your NetApp sales reps or partners and get a proof of concept going. Keep in mind that all this awesomeness is just in version 1.0 of this software. There are many plans to make this tool even better, including plans for supporting other protocols. Right now, in the lab, we’re looking at S3 (DataFabric, anyone?) and CIFS/SMB support for XCP!

XCP is a breakthrough in data migration, processing and reporting.

Behind the Scenes: Episode 42 –ONTAP 9 Overview


Welcome to Episode 42, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This is yet another in the series of episodes for ONTAP 9 month on the podcast.


This week, we invited the director of product management, Quinn Summers, to give a technical overview of the new features of ONTAP 9. We also interviewed the SolidFire CTO, Val Bercovici (@valb00) to speak about the SolidFire Analyst Day announcements on June 2. Earlier in the week, we released a full episode on the decision process of branding ONTAP 9, as well as a mini-podcast on the SolidFire announcement (for those of you who didn’t want to sit through the ONTAP 9 episode).


Finding the Podcast

The podcast is all finished and up for listening. You can find it on iTunes or SoundCloud or by going to techontappodcast.com.

Also, if you don’t like using iTunes or SoundCloud, we just added the podcast to Stitcher.

You can find it here:

http://www.stitcher.com/podcast/tech-ontap-podcast?refid=stpr

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Check out the podcast episode here:

SolidFire Analyst Day News #BringOnTheFuture

Today, SolidFire announced a few things during their Analyst Day.

First of all, they have new branding – one that’s more in line with the NetApp color scheme.

Secondly, there’s a pretty rad new robot on the solidfire.com site, who totally looks like he’s contemplating his own existence.


They’ve also announced the intent to create a SolidFire FlexPod solution, as well as the new Element 9 OS.

Flash Forward

Most importantly, SolidFire promised that June 2 would change the storage industry forever. And it seems like they might have done just that with a new way to think about how customers purchase storage. The Flash Forward program offers flexibility for customers – you only pay for what you use – and promises to “future proof” your purchases. The concept has promise, but ultimately, customers will decide if it’s the right approach for them. I tend to think they’ll hop on board, and the rest of the storage industry will follow.

How’s that for change?

Some of the NetApp A-Team blogged about the event.

In addition, the Tech ONTAP podcast team spoke with the SolidFire CTO, Val Bercovici (@valb00) about the announcement. There’s a mini-podcast here:

Also, be sure to check out ONTAP 9 month on the podcast. I blogged about it here.