A couple of years ago, I wrote up a blog on using NFS with Docker as I was tooling around with containers, in an attempt to wrap my head around them. Then, I never really touched them again and that blog got a bit… stale.
Well, in that blog, I had to create a bunch of kludgy hacks to get NFS to work with Docker, and honestly, it likely wasn't even the best way to do it, given my overall lack of Docker knowledge. More recently, I wrote up a way to Kerberize NFS mounts in Docker containers that was a bit better of an effort.
Luckily, NetApp developers realized that I'm not the only one who wants to use Docker without knowing all the ins and outs, so they created a plugin for Docker that handles all the volume creation, removal, etc. for you. Then, you can leverage the Docker volume options to mount via NFS. That plugin is named "Trident."
Trident + NFS
Trident is an open source storage provisioner and orchestrator for the NetApp portfolio.
You can read more about it here:
You can also read about how we use it for AI/ML here:
When you’re using the Trident plugin, you can create Docker-ready NFS exported volumes in ONTAP to provide storage to all of your containers just by specifying the -v option during your “docker run” commands.
For example, here's an NFS exported volume created using the Trident plugin:
# docker volume create -d netapp --name=foo_justin
# docker volume ls
DRIVER              VOLUME NAME
netapp              foo_justin
Here’s what shows up on the ONTAP system:
::*> vol show -vserver DEMO -volume netappdvp_foo_justin -fields policy
vserver volume policy
------- -------------------- -------
DEMO netappdvp_foo_justin default
Then, I can just start up the container using that volume:
# docker run --rm -it -v foo_justin:/foo alpine ash
/ # mount | grep justin
10.x.x.x:/netappdvp_foo_justin on /foo type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.193.67.237,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.x.x.x)
Having a centralized NFS storage volume for your containers opens up a vast number of use cases: containers can read and write to the same location across the network, backed by a high-performing storage system with all sorts of data protection capabilities to ensure high availability and resiliency.
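As a quick sketch of that shared access (assuming the foo_justin volume created above still exists and your Docker host can reach the NFS data LIF), one container can write a file and a second, separate container can read it back over the same NFS mount:

```shell
# Container 1 writes to the shared NFS-backed volume...
# docker run --rm -v foo_justin:/foo alpine sh -c 'echo "hello from container 1" > /foo/shared.txt'
# ...and container 2 reads the very same file
# docker run --rm -v foo_justin:/foo alpine cat /foo/shared.txt
hello from container 1
```

Any number of containers, on any Docker host configured with the plugin, could mount that same volume and see the same data.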
Customization of Volumes
With the Trident plugin, you can modify the config files to change attributes from the defaults, such as custom names, sizes, export policies and others. See the full list here:
Trident + NFS + FlexGroup Volumes
Starting in Trident 18.07, a new Trident NAS driver was added that supports creation of FlexGroup volumes with Docker.
To change the plugin, change the /etc/netappdvp/config.json file to use the FlexGroup driver.
Then, create your FlexGroup volume. It's that simple!
A word of advice, though: the FlexGroup driver defaults to a 1GB volume size and creates 8 member volumes across your aggregates, which results in 128MB member volumes. That's problematic for a couple of reasons:
- FlexGroup volumes should have members that are no less than 100GB in size (as per TR-4571) – small members will affect performance due to member volumes doing more remote allocation than normal
- Files that get written to the FlexGroup will fill up those 128MB members pretty fast, causing the FlexGroup to appear to be out of space.
You can fix this either by setting the config.json file to use larger sizes, or specifying the size up front in the Docker volume command. I’d recommend using the config file and overriding the defaults.
To set this in the config file, just specify "size" as a variable (the full list of options can be found here: https://netapp-trident.readthedocs.io/en/latest/kubernetes/operations/tasks/backends/ontap.html):
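Here's a hedged sketch of what /etc/netappdvp/config.json might look like with the FlexGroup driver selected and a larger default size set (the LIF addresses, SVM name, and credentials below are placeholders for your own environment):

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas-flexgroup",
    "managementLIF": "10.x.x.x",
    "dataLIF": "10.x.x.x",
    "svm": "DEMO",
    "username": "admin",
    "password": "secret",
    "defaults": {
        "size": "1t"
    }
}
```

With that in place, any volume created without an explicit size gets the 1TB default instead of 1GB.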
Since the volumes are thin provisioned by default, you shouldn't worry too much about storage space unless you think your clients will actually fill up 800GB. If that's the case, you can apply quotas to the volumes to limit how much space can be used. (For FlexGroup volumes, quota enforcement will be available in an upcoming release; FlexVols can use quota enforcement today.)
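For the FlexVol case, applying a tree quota might look something like this in ONTAP (a sketch only; the 800GB limit and the volume name are assumptions, and exact options can vary by ONTAP release):

```
::*> volume quota policy rule create -vserver DEMO -policy-name default -volume netappdvp_foo_justin -type tree -target "" -disk-limit 800GB
::*> volume quota on -vserver DEMO -volume netappdvp_foo_justin
```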
# docker volume create -d netapp --name=foo_justin_fg -o size=1t
And this is what the volume looks like in ONTAP:
::*> vol show -vserver DEMO -volume netappdvp_foo_justin* -fields policy,is-flexgroup,aggr-list,size,space-guarantee
vserver volume aggr-list size policy space-guarantee is-flexgroup
------- ----------------------- ----------------------- ---- ------- --------------- ------------
DEMO netappdvp_foo_justin_fg aggr1_node1,aggr1_node2 1TB default none true
Since the FlexGroup is 1TB in size, the member volumes will be 128GB, which fulfills the 100GB minimum. Future releases will enforce this without you having to worry about it.
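The sizing math above is simple enough to sanity-check yourself (assuming, as this driver does, 8 member volumes per FlexGroup):

```shell
# Member size = FlexGroup size / member count
members=8
flexgroup_gb=1024                   # the 1TB FlexGroup created above
echo $(( flexgroup_gb / members ))  # member size in GB -> 128

# Minimum FlexGroup size to keep members at the TR-4571 floor of 100GB each
echo $(( members * 100 ))           # -> 800 (GB)
```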
::*> vol show -vserver DEMO -volume netappdvp_foo_justin_fg_* -fields aggr-list,size -sort-by aggr-list
vserver volume aggr-list size
------- ----------------------------- ----------- -----
DEMO netappdvp_foo_justin_fg__0001 aggr1_node1 128GB
DEMO netappdvp_foo_justin_fg__0003 aggr1_node1 128GB
DEMO netappdvp_foo_justin_fg__0005 aggr1_node1 128GB
DEMO netappdvp_foo_justin_fg__0007 aggr1_node1 128GB
DEMO netappdvp_foo_justin_fg__0002 aggr1_node2 128GB
DEMO netappdvp_foo_justin_fg__0004 aggr1_node2 128GB
DEMO netappdvp_foo_justin_fg__0006 aggr1_node2 128GB
DEMO netappdvp_foo_justin_fg__0008 aggr1_node2 128GB
8 entries were displayed.
Practical uses for FlexGroups with containers
It’s cool that we *can* provision FlexGroup volumes with Trident for use with containers, but does that mean we should?
Well, consider this…
In an ONTAP cluster that uses FlexVol volumes for NFS storage presented to containers, I'm bound to a single node's resources, as per the design of a FlexVol. This means that even though I bought a 4-node cluster, I can only use one node's RAM, CPU, network, capacity, etc. If I have a use case where thousands of containers spin up at any given moment and attach themselves to an NFS volume, then I might see performance bottlenecks due to the increased load. In most cases, that's fine – but if you could get more out of your storage, wouldn't you want to do that?
You could add layers of automation into the mix to add more FlexVols to the solution, but then you have new mount points/folders. And what if those containers all need to access the same data?
With a FlexGroup volume presented to those same Docker instances, the containers can leverage all of the nodes in the cluster, share a single namespace, and simplify the overall automation structure.
The benefits become even more evident when those containers are constantly writing new files to the NFS mount, such as in an Artificial Intelligence/Machine Learning use case. FlexGroups were designed to handle massive amounts of file creations and can provide 2-6x the performance over a FlexVol in use cases where we’re constantly creating new files.
Stay tuned for some more information on how FlexGroups and Trident can bring even more capability to the table for AI/ML workloads. In the meantime, you can learn more about NetApp solutions for AI/ML here: