Issues installing Docker on OS X 10.10 or later?

Today I was trying to install the Docker Toolbox on my Mac. It failed. I was able to fix it, and since I didn’t see any other articles that specifically referenced this app or issue, I decided to write it up. Because, community!


The installation appeared to work fine, but once I clicked the “Docker Terminal” icon, the terminal would launch with the following message:

Docker Machine is not installed. Please re-run the Toolbox Installer and try again.

Docker Machine installs by default to the /usr/local/bin directory in OS X. When I tried to change that location in the installer package, I didn’t have any luck.

That directory is locked down pretty tight (700 permissions, my user as the owner).

drwx------  24 parisi  wheel  816 Mar 25 10:52 bin

When I tried to open the directory up a bit and re-install, I hit the same issue. And when I tried to cd directly into that directory, it either threw a permission denied error or failed silently, even though I had granted access:

$ sudo chmod 766 bin

$ ls -la
total 0
drwxr-xr-x   5 root    wheel  170 Mar 25 10:50 .
drwxr-xr-x@ 10 root    wheel  340 May  8  2015 ..
drwxr-xr-x   3 root    wheel  102 Apr 20  2015 Qt
drwxrw-rw-  24 parisi  wheel  816 Mar 25 10:52 bin
drwxr-xr-x   3 root    wheel  102 Mar 25 10:50 share

$ cd bin
-bash: cd: bin: Permission denied

$ sudo cd bin

$ pwd
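That silent sudo cd is actually expected behavior rather than part of the bug: cd is a shell builtin, and sudo runs its command in a separate child process, so any directory change dies with that process. A quick demonstration, no sudo required:

```shell
# cd only changes the directory of the shell that runs it. A child
# process (like the one sudo spawns) can't move its parent shell,
# which is why "sudo cd bin" appears to do nothing.
before=$(pwd)
sh -c 'cd /tmp'          # child changes directory, then exits
after=$(pwd)
[ "$before" = "$after" ] && echo "parent shell unchanged"
```

The working alternative is to do everything inside one privileged shell, e.g. sudo sh -c 'cd /usr/local/bin && ls'.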

And Docker commands failed:

$ docker
-bash: docker: command not found

Color me stumped.

So I turned to Google and found an article on Homebrew installations failing, but nothing specifically about Docker failing. I used the Homebrew workaround from that article and it fixed my issue.

Here are the commands I ran:

$ sudo chown $(whoami):admin /usr/local && sudo chown -R $(whoami):admin /usr/local

Essentially, the command above does a recursive (-R) chown on /usr/local and everything under it, setting the logged-in user (via whoami) as the owner and admin as the group.
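To show the shape of that fix without touching a real /usr/local, here’s the same command pattern run against a scratch directory (the scratch paths are purely illustrative; the real fix needs sudo and the admin group as shown above):

```shell
# Recreate a /usr/local-like tree in a scratch directory, then apply
# the same chown-then-recursive-chown pattern as the workaround.
scratch=$(mktemp -d)
mkdir -p "$scratch/local/bin"
chown "$(whoami)" "$scratch/local" && chown -R "$(whoami)" "$scratch/local"
ls -ld "$scratch/local"   # owner column now shows the logged-in user
rm -rf "$scratch"
```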

Before the change, /usr/local looked like this:

drwxr-xr-x     6 root  wheel    204 Mar 25 10:56 local

After the change:

drwxr-xr-x     6 parisi  admin    204 Mar 25 10:56 local

After that, I could run Docker commands:

$ pwd

$ docker
Usage: docker [OPTIONS] COMMAND [arg...]
       docker [ --help | -v | --version ]
A self-sufficient runtime for containers.

  --config=~/.docker              Location of client config files
  -D, --debug                     Enable debug mode
  -H, --host=[]                   Daemon socket(s) to connect to
  -h, --help                      Print usage
  -l, --log-level=info            Set the logging level
  --tls                           Use TLS; implied by --tlsverify
  --tlscacert=~/.docker/ca.pem    Trust certs signed only by this CA
  --tlscert=~/.docker/cert.pem    Path to TLS certificate file
  --tlskey=~/.docker/key.pem      Path to TLS key file
  --tlsverify                     Use TLS and verify the remote
  -v, --version                   Print version information and quit


And the Docker terminal starts correctly:

Creating CA: /Users/parisi/.docker/machine/certs/ca.pem
Creating client certificate: /Users/parisi/.docker/machine/certs/cert.pem
Running pre-create checks...
Creating machine...
(default) Copying /Users/parisi/.docker/machine/cache/boot2docker.iso to /Users/parisi/.docker/machine/machines/default/boot2docker.iso...
(default) Creating VirtualBox VM...
(default) Creating SSH key...
(default) Starting the VM...
(default) Check network to re-create if needed...
(default) Found a new host-only adapter: "vboxnet0"
(default) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: /usr/local/bin/docker-machine env default

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/

docker is configured to use the default machine with IP X.X.X.X
For help getting started, check out the docs at
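As a side note, the docker-machine env command mentioned above prints shell export statements, which you eval to point the docker client at the VM. The values below are illustrative of the typical output shape, not taken from my machine:

```shell
# In a real session you would run:
#   eval "$(/usr/local/bin/docker-machine env default)"
# which evaluates export lines like these (values illustrative):
machine_env='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"'
eval "$machine_env"
echo "$DOCKER_HOST"   # -> tcp://192.168.99.100:2376
```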

If you’re interested in more detail on the issue, check out the Homebrew blog, as well as this post on System Integrity Protection (SIP):

Hopefully this helps someone else.

Another option, pointed out to me on Twitter, is to use the native Docker apps (still in beta):

If interested, I’ve written a couple other blogs on Docker.

TECH::Using NFS with Docker – Where does it fit in?

TECH::Docker + CIFS/SMB? That’s unpossible!

Behind the Scenes: Episode 32 – SnapCenter 1.1

Welcome to the Episode 32 version of the new series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we invited a previous guest back, John Spinks (@jbspinks), on to the podcast to talk about the new release of SnapCenter. John actually approached us to come on, and will be back again once SnapCenter gets its next update.

We also discussed my trip to Storage Field Day 9, and they made me wear the guest beard.

Recording the podcast

This week was intended to be one where we recorded multiple podcasts, including 3 in one day! However, schedules change and SnapCenter was the only one we ended up recording. But, it’s a good one – Spinks lays it out nicely.

Take a listen here:

Behind the Scenes: Episode 31 – FlexPod with Infrastructure Automation

Welcome to the Episode 31 version of the new series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

If you noticed that I am late in delivering this post, it’s because I wasn’t actually behind the scenes of this episode. I was out in Sunnyvale at Storage Field Day 9, moonlighting as a live blogger/roving reporter.

Thankfully, the podcast was in completely capable hands, as Glenn and Andrew nailed the latest episode on the new FlexPod product, Infrastructure Automation. Glenn sent it to me to edit on Wednesday afternoon and I knocked it out that night to make sure it got released on time. They left me a few language surprises that I had to edit, but overall, it was a solid show. Sounds like FlexPod has its own HCI-like feel to it now. While there’s technically no hypervisor in place, the solution seems to solve the simplicity and manageability aspects nicely.

I’ll be back in the studio this week, as we’re recording a whopping FOUR episodes on Wednesday and Thursday. Be on the lookout for new shows on Storage Service Design and SnapCenter!

Now, check out Episode 31:

Storage Field Day 9 at NetApp! – Live Blog

Check out all the SFD9 presentations at NetApp here:

I was lucky enough to be invited to sit in on Storage Field Day 9 here at NetApp headquarters in Sunnyvale, CA. The idea was to have me here to document the event, as well as roll it all up into a podcast later.

This is NetApp’s first Storage Field Day in several years, and it’s my opinion that it’s long overdue. Tim Waldron, the Strategic Marketing Technical Lead at NetApp (@timwaldron) put this all together, as he feels that having industry influencers in to listen to our story and kick the tires a bit goes a long way towards building a solid perception to go along with what already is a solid technical story. Giving direct access to NetApp leaders offers the chance to eliminate guesswork by influencers and clear up any misconceptions that have been floating around. (Or, FUD.)

For this blog post, I’ll be live blogging the event! Keep checking back throughout the day for updates.

The Roster

The NetApp presenters include:

  • Dave Hitz – Executive VP of NetApp (@DaveHitz)
  • Joe CaraDonna – Sr. Technical Director, Cloud Services Group
  • Dave Wright – VP and General Manager, SolidFire (@jungledave)

In addition to myself and Tim, we also have a number of other NetApp representatives on-hand to answer questions.

  • Val Bercovici – Cloud Czar (@valb00)
  • John Fullbright – Principal Technical Architect
  • Jeramiah Dooley – Systems Engineer, SolidFire (@jdooley_clt)
  • Andy Grimes – Principal Architect for Flash (@Andy_NTAP_Flash)
  • Andy Banta – Storage Janitor, SolidFire (@andybanta)

The list of tech influencers:

Live Blog

3/16/16 – 9AM

I’m here early. I’m still on East Coast time. I flew in yesterday and got here around 6:30PM local time.

They are setting up the table here in the cafeteria.

3/16/16 – 9:45AM

We’re supposed to start soon. People are going to be arriving in a bit. The breakfast table is set. It looks a little last-supper-y. But I’m sure it’ll be fine.

3/16/16 – 9:55AM

UPDATE: I’m an idiot. SFD not in the cafe. It’s in a conference room. Migrated there. All is right in the world.

Foskett is currently laying down the ground rules for SFD9.

3/16/16 – 10AM

We are LIVE! MC Foskett is laying the framework of what we’ll be doing.

If you want to watch live:

A rough agenda:

  • Tim Waldron does intro
  • The Daves (Hitz and Wright) own the room
  • Joe CaraDonna keeps the party going
  • Tim Waldron closes it out

3/16/16 – 10:05AM

Hitz is up.

He promises not to predict the future. Good decision.

Now he’s talking transformation here at NetApp. Part of the transformation is about controlling costs, spending smarter. I’m on board with that.

Drops a nice quote: “The future is already here.”

Video here:

3/16/16 – 10:11AM

Howard Marks spilled his coffee. A bad omen?

3/16/16 – 10:13AM

Hitz is touting the impressive growth rate of NetApp’s flash story. ($600 mil run rate!)

He’s also calling out the notion that SolidFire means the end of this successful business.

3/16/16 – 10:14AM

Chris Evans is asking a nice question regarding the architecture decisions for the NetApp flash story.

Hitz hints that it’s a strategy that’s timing based. Flash prices dropping were the driving factor in the re-direct.

Success blinding us to the reality of the market.

3/16/16 – 10:16AM

Tape hasn’t died. It’s the cockroach of the data center.

Scott Lowe also doesn’t think disk is dead (yet). I agree with him. Still a place for it. I mean, WE STILL HAVE TAPE.

3/16/16 – 10:18AM

We are at the precipice of all flash. Where does it go from here?

Hitz says the E-Series is a hot race car with no windshield. Made for speed. Want a windshield? Get a helmet.

AFF is more of a jack of all trades flash solution.

Commodity hardware, web scaling with SolidFire.

3/16/16 – 10:21AM

Hitz answers another Chris Evans question regarding whether we are really in an all-flash market.

Compares it to manual vs. automatic transmissions. Full disclosure: I only buy manual transmissions.

Howard Marks’ opinion: The parts used to build the storage don’t define the market. The applications do.

3/16/16 – 10:25AM


Cloud fears:

  • Is it safe?
  • Is it legal? (HIPAA, safe harbor)
  • Will I be locked in?

Hitz talks about lunch with Edward Snowden (via robot a la Big Bang Theory):


Asked the hard hitting questions like: Did you have data on NetApp? 🙂

Now talking NetApp Private Storage, AltaVault, Data Fabric story. 5 bucks an hour for ONTAP in the cloud on AWS! Buy or rent the car? Depends on the use case.

3/16/16 – 10:33AM

Data moves in and out of the cloud based on business need. That’s where the Data Fabric story comes in. Need a way to move it easily and as non-disruptively as possible.

3/16/16 – 10:35AM

Hitz has covered what NetApp already does. Joe will come up and talk about the future.

Enrico Signoretti asked about HCI. Fair question. Will be answered in Q&A.

3/16/16 – 10:38AM

Joe CaraDonna has been summoned.

Talks about how Cloud ONTAP started out as a VSIM. The VSIM could be considered the original Software Defined Storage.

Joe CaraDonna doesn’t have a Twitter handle. He got a bit of a ribbing, then Dave Hitz offered Tweeting as a Service with his handle. (@davehitz)

Video of Joe’s presentation here:

3/16/16 – 10:40AM

Data Fabric is all about the data.

  • Control
  • Choice
  • Integration
  • Access
  • Consistency

3/16/16 – 10:46AM

Three clouds currently supported with NetApp Private Storage: Amazon, Microsoft, SoftLayer. Failover between clouds in 15 seconds or less!

3/16/16 – 10:48AM

Chris Evans wanted a demo of this failover capability.

Ask and ye shall receive!


3/16/16 – 10:53AM

De-dupe on AWS = saving $$

Real customers are using this and asking for things like HA, cloud failover, etc. Not every vendor can do that today, nor in the near future.

3/16/16 – 10:55AM

Joe’s moved to the AltaVault story. Back anything up to any cloud, using pretty much any backup product.

No one in the room’s heard of AltaVault. But they’ve heard of SteelStore. So, that’s a win!

3/16/16 – 10:56AM

Joe’s talking SnapMirror to Everything now and why that’s such a boon to people using cloud storage. WAN efficient backups. Fast, saves $$. Moves only deltas.

Multiple platform support in the future for SnapMirror. E-Series, FAS, AFF, SolidFire, Edge.

3/16/16 – 10:58AM

Joe’s talking about an upcoming feature called “composite aggregates” that allows you to do object store data tiering. Hot data gravitates to faster storage. SSDs in the same aggregate as spinning disk!

Dave Wright chimes in: This feature is way more important than a single slide can describe.

3/16/16 – 11:04AM

Joe is being told to do the demo.

So we are getting a demo.

Let’s see this in action!

3/16/16 – 11:05AM

Demo will be real stuff in a lab. Futures. Not mock ups, faked, etc.

Cloud Manager is being used (that’s currently available). Demo is of moving data between clouds. Data Fabric in action.

Demo of moving data between clouds happens in just a few mouse clicks!

Essentially, creating two instances – one in AWS (already created, has data – simply attaching), one in Azure. Next up – restoring to cloud.

Just a few mouse clicks. Restore of 10GB of data = 25ish seconds!

That’s a *full* restore. No incrementals. All done over SnapMirror.

Video of demo here:

3/16/16 – 11:15AM

Val Bercovici updates what’s currently available:

Cloud ONTAP in different regions and Cloud Manager – Available today!

AltaVault – Available today!

SnapMirror to FAS/AFF/NPS – Available today!

Futures are mainly the inter-platform stuff.

3/16/16 – 11:17AM

Dave Wright is up.

Hitz asked whether his time at GameSpy inspired him to invent the storage that eventually became SolidFire. Answer? YES!

People will solve problems that are not solved yet for themselves if they want to survive.


3/16/16 – 11:21AM

Fantastic opening by Dave Wright.

He ignores the “let’s talk about all flash at NetApp” framing to instead uncover the lies that flash storage companies tell.

This oughta be good…

Video recording up here:

3/16/16 – 11:22AM

Lies, lies and damn lies.

Lie #1: Disk architectures can’t be adapted to all-flash

Covers data reduction/complex metadata handling myths and why ONTAP fits. Hard != impossible.

News flash: Dave Hitz wrote WAFL. He sees how well it fits with flash.

NetApp has optimized ONTAP/WAFL for flash. Can’t just throw flash at the problem.

My take: Designed from the ground up only works better if it’s a superior design.

Oh, you want a ground up architecture? SolidFire is your answer.

3/16/16 – 11:42AM

Big vendors are adapting faster than startups can add features.

Lie #2: One flash architecture can cover all primary storage use cases

Modern datacenter is too diverse. No one can design for every use case. That’s where a diverse product portfolio comes in.

Features, flexibility, application speed.

Lie #3: Flash adoption is slow

As prices drop, it’s picking up. Sure, it’s a luxury for some workloads. But when you can get speed *and* disk lifespan near HDD prices per GB, why not?

Would you turn down a Ferrari for the Yugo if it only cost a bit more?

3/16/16 – 11:58AM

2 minute warning! I don’t know if Dave will be able to finish before lunch. The natives are getting restless, I fear…

He did predict that the incumbent storage vendors will take the market from the startups, particularly in the enterprise space. The startups just aren’t adapting fast enough.

3/16/16 – 12:00PM

He hits the shot at the buzzer. March Madness.

Now time for Q&A! (Presented as a Reddit AMA)

Q: Do you drive to work in a Ferrari?

A: No!

Q (@JPWarren): What are thoughts on overall storage market?

A: Reality is that the storage market was overfunded. Unicorns! We’re seeing a normalization now.

Q (@JPWarren): Thoughts on HyperConverged? (HCI)

A: SolidFire does a lot of what people like about HCI. So, it can actually replace HCI in some use cases and still accomplish simplicity, scale out.

Q (@JPWarren): Asks “other Dave” the same question.

A: Agrees with D. Wright. Simplicity and scale out are like chocolate and peanut butter. (my words, not his)

Q (@ESignoretti): Asks about block vs NAS (like NFS of course).

A: Docker containers to handle NFS right now. No current announcements, but touts NetApp’s NFS history. (Boom.)

Hitz wants to make sure they don’t drown SF. We want to be a resource, not a driver. SnapMirror is priority #1.

3/16/16 – 12:08PM

Foskett wrapping up. Over 150 viewers. Video will be edited and posted soon to review.

Cameras going down. Sounds like we’ll be back online at 2PM elsewhere!


One of Clustered Data ONTAP’s Best Features That No One Knows About

Some questions I’ve gotten a few times go like this:

OMG, I deleted my volume. How do I get it back?


I deleted my volume and I’m not seeing the space given back to my aggregate. How do I fix that?

These questions started around clustered Data ONTAP 8.3. This is not a coincidence.

A little backstory

Back in my support days, we’d occasionally get an unfortunate call from a customer who had accidentally deleted a volume (or the wrong volume) and was frantically trying to get it back. Luckily, if you caught it in time, you could power down the filers and have one of our engineering wizards work their magic and recover the volume, since deletes take time as blocks are freed.

This issue came to a head when we had a System Manager design flaw that made deleting a volume *way* too easy and did not prompt the user for confirmation. Something had to be done.

Enter the Volume Recovery Queue

As a way to prevent catastrophe, clustered Data ONTAP 8.3 introduced a safety mechanism called the “volume recovery queue.” This feature is not entirely well known, as it’s buried in diag level, which means it doesn’t get documented in official product docs. However, I feel like it’s a cool feature that people need to know about, and one that should help answer questions like the ones I listed above.

Essentially, the recovery queue will take a deleted volume and keep it in the active file system (renamed and hidden from normal viewing) for a default of 12 hours. That means you have 12 hours to recover the deleted volume. It also means you have 12 hours until that space is reclaimed by the OS.

From the CLI man pages:

cluster::*> man volume recovery-queue
volume recovery-queue            Data ONTAP 8.3            volume recovery-queue

volume recovery-queue -- Manage volume recovery queue

The recovery-queue commands enable you to manage volumes that are deleted and kept in the recovery queue.

  modify      - Modify attributes of volumes in the recovery queue
  purge-all   - Purge all volumes from the recovery queue belonging to a Vserver
  purge       - Purge volumes from the recovery queue belonging to a Vserver
  recover-all - Recover all volumes from the recovery queue belonging to a Vserver
  recover     - Recover volumes from the recovery queue belonging to a Vserver
  show        - Show volumes in the recovery queue

The above commands, naturally, should be used with caution, especially the purge commands. And the modify command should not be used to lower the retention hours so aggressively that deleted volumes vanish before you can recover them. Definitely don’t set it to zero, which disables the recovery queue entirely.
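If you do reach for modify, a safer direction is lengthening the retention window for a volume already in the queue. Based on the man page above, something like this should do it, though the exact parameter names are my assumption, so verify them in your release first:

```
cluster::*> volume recovery-queue modify -vserver nfs -volume testdel_1037 -retention-hours 24
```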

How it works

When a volume is deleted, it gets renamed with the volume’s unique data set ID (DSID) appended, and it is removed from the replicated database (RDB) volume table. Instead, it’s viewable via the recovery queue for the 12-hour default retention period. During that time, the space is not reclaimed, but the volume is still available to be recovered.

For example, my volume called “testdel” has a DSID of 1037:

cluster::*> vol show testdel -fields dsid
vserver volume  dsid
------- ------- ----
nfs     testdel 1037

When I delete the volume, we can’t see it in the volume table, but we can see it in the recovery queue, renamed to testdel_1037 (recall 1037 is the volume DSID):

cluster::*> vol offline testdel
Volume "nfs:testdel" is now offline.
cluster::*> vol delete testdel
Warning: Are you sure you want to delete volume "testdel" in Vserver "nfs" ? {y|n}: y
[Job 490] Job succeeded: Successful
cluster::*> vol show testdel -fields dsid
There are no entries matching your query.
cluster::*> volume recovery-queue show
Vserver   Volume       Deletion Request Time     Retention Hours
--------- ------------ ------------------------- ---------------
nfs       testdel_1037 Fri Mar 11 19:02:40 2016  12
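If that deletion turns out to be a mistake, this is where the feature pays off. Recovering should look something like the following. The -vserver parameter and the rename/online steps afterward are my assumption based on the renamed, offlined volume, so check the man page before relying on this:

```
cluster::*> volume recovery-queue recover -vserver nfs -volume testdel_1037
cluster::*> volume rename -vserver nfs -volume testdel_1037 -newname testdel
cluster::*> volume online -vserver nfs -volume testdel
```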

That volume will be in the system for 12 hours unless I purge it out of the queue. That will free space up immediately, but will also remove the chance of being able to recover the volume. Run this command only if you’re sure the volume should be deleted.

cluster::*> volume recovery-queue purge -volume testdel_1037
cluster::*> volume recovery-queue show
This table is currently empty.

Pretty straightforward, eh?

Pretty cool, too. I am a big fan of this feature, even if it means an extra step to delete a volume quickly. Better safe than sorry and all.

There is also a KB article on this, with a link to a video. It requires a valid NetApp support login to view:

This KB shows how to enable it (if it’s somehow disabled):


Updated NetApp NFS Technical Reports Available for Clustered Data ONTAP 8.3.2!

New NAS related technical reports available for 8.3.2!

Why Is The Internet Broken?

Clustered Data ONTAP 8.3.2 GA is just around the corner. (8.3.2RC2 is already available)

Because of that, the latest updates to the following TRs are now publicly available!

Available now:

TR-4067: Clustered Data ONTAP NFS Implementation and Best Practice Guide

This is essentially the NFS Bible for clustered Data ONTAP. Read it if you ever plan on using NFS with clustered Data ONTAP.

TR-3580: NFSv4 Enhancements and Best Practices Guide for Data ONTAP Implementation

This TR covers NFSv4 in both 7-Mode and cDOT. Think of it as a companion piece to TR-4067.

TR-4379: Name Services Best Practice Guide

This TR covers best practices for DNS, netgroups, LDAP, NIS and other related items to name services in clustered Data ONTAP.

Coming soon:

TR-4073: Secure Unified Authentication

This one is currently being updated and doesn’t yet have a release timetable. Keep checking back here for more information.


View original post 19 more words

Clustered Data ONTAP 8.3.2 is now GA!

Several months ago, I wrote a post describing the new 8.3.2RC1 release and what features it includes. You can read that here.

Now, clustered Data ONTAP 8.3.2 is generally available (GA)! If you’re curious what GA means, check that out here.

You can get the new release here:


Usually, releases don’t add new features between the RC and GA releases, but due to popular demand, 8.3.2 has a few nuggets to add to the GA release, in addition to the things that were added in 8.3.2RC1.

These include:

  • Simplified System Manager workflows for AFF (basically templates for specific NAS workloads)
  • LDAP signing and sealing
  • FLI support for AFF

In addition, a number of bug fixes are included in this release, so it’s a good idea to schedule a window to upgrade your cluster. Remember, upgrades are non-disruptive!

Behind the Scenes: Episode 30 – DevOps and NetApp Private Storage with Glenn Dekhayser

Welcome to the Episode 30 version of the new series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we welcome NetApp A-Team member Glenn Dekhayser of Red8 to talk about DevOps and NetApp Private Storage. The idea for a DevOps show was inspired by some things Glenn said on our NetApp A-Team Slack channel about a recent article by a tech rag/tabloid on the topic of DevOps and how admins must adapt to survive.  To let Glenn further expound, we invited him onto the show to talk about it, as well as NetApp Private Storage.

Glenn’s been a member of the NetApp A-Team for almost as long as it has existed and really knows his stuff. Plus, he can talk. We just wind him up and let him go.

Unfortunately, he works out of NYC, so he had to be remote via Skype. And Glenn Sizemore was also dialing in.

Andrew Sullivan was on his way to Austin, because he’s popular.

Recording the Podcast

The goal this week was to trim the podcast time down to around 45-50 minutes on the good advice of Amy Lewis (@CommsNinja). Success! (We came in at 52 min)

Since both Glenns joined via Skype, I ended up all by myself in the studio…

Check out the new episode below and be sure to send any questions or comments to

Behind the Scenes: Episode 29 – VVols with Pedro Arrow!

Welcome to the Episode 29 version of the new series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we dragged former Tech ONTAP Podcast host Pete Flecha (aka @vPedroArrow) into the studio to talk VVols after our successful episode with SolidFire’s Aaron Patten last week.

Getting Pedro into the studio was easy – we still keep in touch with him and he was happy to come lend his expertise to the podcast. He also happens to have his own podcast that he just started up at VMware: Virtually Speaking.

I was glad to get Pedro into the studio – he was the guy who championed me joining the excellent podcast crew.


Recording the Podcast

The podcast went pretty smoothly – it’s as if Pedro had done this before – but in classic Pedro Arrow fashion, we had to do some re-takes the next day. He’s a bit of a perfectionist.

Check out the new episode below and be sure to send any questions or comments to