NOTE: I wrote this blog nearly 2 years ago, so a lot has changed since then regarding Docker.
Check out this newer blog on NFS/Docker for NetApp here:
Docker + NFS + FlexGroup volumes = Magic!
Also, check out how to Kerberize NFS in a container!
Securing NFS mounts in a Docker container
That got me thinking… as the NFS TME for NetApp, I have to consider where file services fit into newer technologies like Docker. What use cases might there be? Why would we use it?
In this blog, I will talk about what sorts of things you can do with NFS in Docker. I’ll say upfront that I’m a Docker rookie, so some of the things I do might seem kludgy to Docker experts. But I’m learning as I go and trying to document it here. 🙂
Can I store Docker images on NFS?
When you build a Docker image, it gets stored in /var/lib/docker:
# ls
containers    graph         repositories-devicemapper  vfs
devicemapper  init          tmp                        volumes
execdriver    linkgraph.db  trust
However, that backing file is limited in size. From the /etc/sysconfig/docker-storage file:
# By default, Docker uses a loopback-mounted sparse file in
# /var/lib/docker. The loopback makes it slower, and there are some
# restrictive defaults, such as 100GB max storage.
We can see that a sparse 100GB file is created in /var/lib/docker/devicemapper/devicemapper, as well as a 2GB metadata file. This seems to be where our images get stored.
# ls -lah /var/lib/docker/devicemapper/devicemapper
total 743M
drwx------. 2 root root   32 May  4 22:59 .
drwx------. 5 root root   50 May  4 22:59 ..
-rw-------. 1 root root 100G May  7 21:16 data
-rw-------. 1 root root 2.0G May  7 21:16 metadata
And we can see that only 743MB of that space is actually in use.
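If you want to verify that yourself, du can show the difference between the blocks actually allocated and the apparent sparse size (standard GNU coreutils flags):

# Blocks actually allocated on disk:
du -h /var/lib/docker/devicemapper/devicemapper/data
# Apparent size, which reports the full sparse 100G:
du -h --apparent-size /var/lib/docker/devicemapper/devicemapper/data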
So we’re limited to 100GB, and that storage is going to be fairly slow. The filesystems Docker supports for this don’t include NFS; instead, it uses block-based filesystems, as described here:
Comprehensive Overview of Storage Scalability in Docker
However, you could still use NFS to store the Docker image data and metadata. How?
Mount an NFS export at /var/lib/docker!
When I stop Docker and mount the directory via NFS, I can see that there is nothing in my 1TB volume:
# service docker stop
# mount 10.228.225.142:/docker /var/lib/docker
# df -h | grep docker
10.228.225.142:/docker  973G  384K  973G  1% /var/lib/docker
When I start the Docker service, ~300MB is written to the mount and the folder structure is created:
# service docker start
Redirecting to /bin/systemctl start docker.service
# df -h | grep docker
10.228.225.142:/docker  973G  303M  973G  1% /var/lib/docker
# ls
containers  devicemapper  execdriver  graph  init  linkgraph.db  repositories-devicemapper  tmp  trust  volumes
However, there is still a sparse file with a 100GB limit, and we can see where the 300MB is coming from:
# ls -lah
total 295M
drwx------ 2 root root 4.0K May  7 21:39 .
drwx------ 4 root root 4.0K May  7 21:39 ..
-rw------- 1 root root 100G May  7 21:39 data
-rw------- 1 root root 2.0G May  7 21:39 metadata
In this particular case, the benefit NFS brings is external storage: protection in case the host ever tanks, and capacity for hosts that don’t have 100GB to spare. It would also be cool if that sparse file could be customized to grow past 100GB if we wanted it to. I actually posted a request for that to happen. Then they replied, and I discovered I hadn’t RTFM: the option DOES already exist. 😉
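For reference, here’s roughly what that looks like. It lives in the devicemapper storage options, so this is a sketch for /etc/sysconfig/docker-storage (the sizes are examples; verify the dm.loopdatasize/dm.basesize option names against your Docker version’s docs):

# Grow the loopback data file beyond the 100GB default and raise the base device size
DOCKER_STORAGE_OPTIONS="--storage-opt dm.loopdatasize=500GB --storage-opt dm.basesize=20GB"

Restart the Docker service afterward for the change to take effect.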
So what could we use NFS for in Docker?
When you create a container, you are going to be fairly limited in the amount of space available to it. This is exacerbated when you run multiple containers on the same host. To get around that, you could mount an NFS share at the start of the container. With NFS, my storage limits are only what my storage provider dictates.
Another benefit of NFS with Docker? Access to a unified set of data across all containers.
If you’re using containers to do development, it would make sense that the containers all have access to the same code branches and repositories. What if you had an application that needed access to a shared Oracle database?
How do we do it?
One thing I’ve noticed while learning Docker is that the container OS is nothing like a virtual machine. These containers are essentially thin clients and are missing some functionality by design. From what I can tell, the goal is to have a client that can run applications but is not too heavyweight (for efficiency) and not terribly powerful in what can be done on the client (for security). For instance, the default for Linux-based OSes seems to leave systemd out of the equation. Additionally, these clients all start in unprivileged mode by default, so root is a bit limited in what it can and cannot do.
As a result, doing something as simple as configuring an NFS client can be a challenge.
Why not just use the -v option when running your container?
One approach to serving NFS to your Docker containers would be to mount the NFS share on your Docker host and then run your Docker images with the -v option to map that path in as a volume (see the sketch after this list). That eliminates the need to do anything wonky inside the images and allows easier NFS access. However, I wanted to mount inside of a Docker container for two reasons:
- My own knowledge – what would it take? I learned a lot about Docker, containers and the Linux OS doing it this way.
- Cloud – If we’re mounting NFS on a Docker host, what happens if we want to use those images elsewhere in the world? We’d then have to mount the shares on those Docker hosts, too. Our end users would have to either run a script on the host or would need to know what to mount. I thought it might scale better to have it built into the image itself and make it a true “Platform as a Service” setup. Then again, maybe it *would* make more sense to do it via the -v option… I’m sure there are use cases for both.
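For completeness, the host-mount approach looks roughly like this (a sketch; the IP and paths are from my lab, so substitute your own):

# On the Docker host: mount the NFS export, then map it into a container
mount -t nfs 10.228.225.142:/docker /mnt/nfs
docker run -it -v /mnt/nfs:/data centos:centos7 /bin/bash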
Step 1: Configure your NFS server
In this example, I am going to use NFS running on clustered Data ONTAP 8.3 with a 1TB volume.
cluster::> vol show -vserver parisi -volume docker -fields size,junction-path,unix-permissions,security-style
vserver volume size security-style unix-permissions junction-path
------- ------ ---- -------------- ---------------- -------------
parisi  docker 1TB  unix           ---rwxr-xr-x     /docker
The NFS server will have NFSv3 and NFSv4.1 (with pNFS) enabled.
cluster::> nfs server show -vserver parisi -fields v3,v4.1,v4.1-pnfs,v4-id-domain
vserver v3      v4-id-domain             v4.1    v4.1-pnfs
------- ------- ------------------------ ------- ---------
parisi  enabled domain.win2k8.netapp.com enabled enabled
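If any of those versions were disabled, a modify along these lines would turn them on (a sketch mirroring the fields above; exact option names can vary by ONTAP release):

cluster::> nfs server modify -vserver parisi -v3 enabled -v4.1 enabled -v4.1-pnfs enabled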
I’ll need an export policy and rule. For now, I’ll create one that’s wide open and apply it to the volume.
cluster::> export-policy rule show -vserver parisi -instance
                                    Vserver: parisi
                                Policy Name: default
                                 Rule Index: 1
                            Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
                             RO Access Rule: any
                             RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true
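For reference, a wide-open rule like that can be created with something along these lines (a sketch; tighten -clientmatch to specific subnets for anything outside a lab):

cluster::> export-policy rule create -vserver parisi -policyname default -ruleindex 1 -protocol any -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any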
Step 2: Modify your Dockerfile
The Dockerfile is a configuration file that is used to build custom Docker images. It is essentially a build script for the image.
If you want to use NFS in your Docker container, the appropriate NFS utilities need to be installed. In this example, I’ll be using CentOS 7, so the Dockerfile needs to include an install of nfs-utils:
# Install NFS tools
RUN yum install -y nfs-utils
To run NFSv3, you have to ensure that the necessary ancillary processes (statd, NLM, etc.) are running. Otherwise, you have to mount with nolock, which is not ideal:
# mount -o nfsvers=3 10.63.3.68:/docker /mnt
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
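If you can live without remote locking, the workaround the error message suggests looks like this (again, not ideal):

# mount -o nfsvers=3,nolock 10.63.3.68:/docker /mnt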
Getting those lock services running requires systemd to be functioning properly in your Docker container image. This is no small feat, but it is possible. Check out Running systemd within a Docker Container for info on how to do this with RHEL/Fedora/CentOS, or read further in this blog for some of the steps I had to take to get it working. There are containers out there that other people have built that run systemd, but I wanted to learn how to do this on my own and guarantee that I’d get exactly what I wanted from my image.
To run just NFSv4, all you need are the nfs-utils. No need to set up systemd.
# mount 10.63.3.68:/docker /mnt
# mount | grep docker
10.63.3.68:/docker on /mnt type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.17.0.21,local_lock=none,addr=10.63.3.68)
I essentially took the Dockerfile from the systemd post I mentioned, as well as this one for CentOS, and modified them a bit: removed the rm -f entries, created a directory for the mount point, changed the location of dbus.service, copied the dbus.service file to the location of the Dockerfile, and installed nfs-utils and autofs.
I based it on this Dockerfile: https://github.com/maci0/docker-systemd-unpriv/
From what I can tell, mounting NFS in a Docker container requires privileged access, and since there is currently no way to build in privileged mode, we can’t add a mount command to the Dockerfile. So I’d need to run the mount command after the image is built. There are ways to run systemd in unprivileged mode, as documented in this other blog.
Using autofs!
On the client, we could also set up the automounter to mount the NFS shares when we need them, rather than mounting them and leaving them mounted. I cover automounting NFS (for home directories) with clustered Data ONTAP a bit in TR-4073 on page 160.
Doing this was substantially trickier, as I needed to install/start autofs, which requires privileged mode *and* systemd to be working properly. Plus, I had to do a few other tricky things.
This is what the /etc/auto.master file would look like in the container:
# cat /etc/auto.master
#
# Sample auto.master file
# This is a 'master' automounter map and it has the following format:
# mount-point [map-type[,format]:]map [options]
# For details of the format look at auto.master(5).
#
/misc   /etc/auto.misc
#
# NOTE: mounts done from a hosts map will be mounted with the
#       "nosuid" and "nodev" options unless the "suid" and "dev"
#       options are explicitly given.
#
/net    -hosts
#
# Include /etc/auto.master.d/*.autofs
# The included files must conform to the format of this file.
#
+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
/docker-nfs /etc/auto.misc --timeout=50
And the /etc/auto.misc file:
# cat /etc/auto.misc
docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker
The Dockerfile
Here’s my Dockerfile for a container that can run NFSv3 or NFSv4, with manual or automount.
FROM centos:centos7
ENV container docker
MAINTAINER Justin Parisi "whyistheinternetbroken@gmail.com"
RUN yum -y update; yum clean all
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y install nfs-utils; yum clean all
RUN systemctl mask dev-mqueue.mount dev-hugepages.mount \
    systemd-remount-fs.service sys-kernel-config.mount \
    sys-kernel-debug.mount sys-fs-fuse-connections.mount
RUN systemctl mask display-manager.service systemd-logind.service
RUN systemctl disable graphical.target; systemctl enable multi-user.target

# Copy the dbus.service file from systemd to location with Dockerfile
COPY dbus.service /usr/lib/systemd/system/dbus.service

VOLUME ["/sys/fs/cgroup"]
VOLUME ["/run"]

CMD ["/usr/lib/systemd/systemd"]

# Make mount point
RUN mkdir /docker-nfs

# Configure autofs
RUN yum install -y autofs
RUN echo "/docker-nfs /etc/auto.misc --timeout=50" >> /etc/auto.master

###### CONFIGURE THIS PORTION TO YOUR OWN SPECS ######
#RUN echo "docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker" >> /etc/auto.misc
######################################################

# Copy the shell script to finish setup
COPY configure-nfs.sh /configure-nfs.sh
RUN chmod 777 configure-nfs.sh
This was my shell script:
#!/bin/sh

# Start services
service rpcidmapd start
service rpcbind start
service autofs start

# Kill autofs pid and restart, because Linux
ps -ef | grep '/usr/sbin/automount' | awk '{print $2}' | xargs kill -9
service autofs start
The script had to live in the same directory as my Dockerfile. I also had to copy dbus.service to that same directory.
# cp /usr/lib/systemd/system/dbus.service /location_of_Dockerfile/dbus.service
Once I had the files in place, I built the image:
# docker build -t parisi/nfs-client .
Sending build context to Docker daemon 5.12 kB
Sending build context to Docker daemon
Step 0 : FROM centos:centos7
 ---> fd44297e2ddb
Step 1 : ENV container docker
 ---> Using cache
 ---> 2cbdf1a478bc
Step 2 : RUN yum -y update; yum clean all
 ---> Using cache
 ---> d4015989b039
Step 3 : RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
 ---> Using cache
 ---> ec0fbd7641bb
Step 4 : RUN yum -y install nfs-utils; yum clean all
 ---> Using cache
 ---> 485fac2c1733
Step 5 : RUN systemctl mask dev-mqueue.mount dev-hugepages.mount systemd-remount-fs.service sys-kernel-config.mount sys-kernel-debug.mount sys-fs-fuse-connections.mount
 ---> Using cache
 ---> d7f44caa9d8f
Step 6 : RUN systemctl mask display-manager.service systemd-logind.service
 ---> Using cache
 ---> f86f635b6af7
Step 7 : RUN systemctl disable graphical.target; systemctl enable multi-user.target
 ---> Using cache
 ---> a4b7fed3b91d
Step 8 : COPY dbus.service /usr/lib/systemd/system/dbus.service
 ---> Using cache
 ---> f11fa8045437
Step 9 : VOLUME /sys/fs/cgroup
 ---> Using cache
 ---> e042e697636d
Step 10 : VOLUME /run
 ---> Using cache
 ---> 374fc2b247cb
Step 11 : CMD /usr/lib/systemd/systemd
 ---> Using cache
 ---> b797b045d6b7
Step 12 : RUN mkdir /docker-nfs
 ---> Using cache
 ---> 8228a9ca400d
Step 13 : RUN yum install -y autofs
 ---> Using cache
 ---> 01a64d46a737
Step 14 : RUN echo "/docker-nfs /etc/auto.misc --timeout=50" >> /etc/auto.master
 ---> Using cache
 ---> 78b63c672baf
Step 15 : RUN echo "docker -fstype=nfs4,minorversion=1,rw,nosuid,hard,tcp,timeo=60 10.228.225.142:/docker" >> /etc/auto.misc
 ---> Using cache
 ---> a2d99d3e1ba3
Step 16 : COPY configure-nfs.sh /configure-nfs.sh
 ---> Using cache
 ---> 70e71370149d
Step 17 : RUN chmod 777 configure-nfs.sh
 ---> Running in c1e24ab5b643
 ---> 4fb2c5942cbb
Removing intermediate container c1e24ab5b643
Successfully built 4fb2c5942cbb

# docker images parisi/nfs-client
REPOSITORY          TAG     IMAGE ID      CREATED         VIRTUAL SIZE
parisi/nfs-client   latest  4fb2c5942cbb  49 seconds ago  298.2 MB
Now I can start systemd for the container in privileged mode.
# docker run --privileged -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro parisi/nfs-client sh -c "/usr/lib/systemd/systemd"
e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8
# docker ps -a
CONTAINER ID  IMAGE                     COMMAND                CREATED         STATUS         PORTS  NAMES
e157ecdf7269  parisi/nfs-client:latest  "sh -c /usr/lib/syst   11 seconds ago  Up 10 seconds         sleepy_pike
Then I can go into the container and kick off my script:
# docker exec -t -i e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 /bin/bash
[root@e157ecdf7269 /]# ./configure-nfs.sh
Redirecting to /bin/systemctl start rpcidmapd.service
Failed to issue method call: Unit rpcidmapd.service failed to load: No such file or directory.
Redirecting to /bin/systemctl start rpcbind.service
Redirecting to /bin/systemctl start autofs.service
kill: sending signal to 339 failed: No such process
Redirecting to /bin/systemctl start autofs.service
[root@e157ecdf7269 /]# service autofs status
Redirecting to /bin/systemctl status autofs.service
autofs.service - Automounts filesystems on demand
   Loaded: loaded (/usr/lib/systemd/system/autofs.service; disabled)
  Drop-In: /run/systemd/system/autofs.service.d
           └─00-docker.conf
   Active: active (running) since Mon 2015-05-11 21:03:24 UTC; 21s ago
  Process: 355 ExecStart=/usr/sbin/automount $OPTIONS --pid-file /run/autofs.pid (code=exited, status=0/SUCCESS)
 Main PID: 357 (automount)
   CGroup: /system.slice/docker-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8.scope/system.slice/autofs.service
           └─357 /usr/sbin/automount --pid-file /run/autofs.pid

May 11 21:03:24 e157ecdf7269 systemd[1]: Starting Automounts filesystems on demand...
May 11 21:03:24 e157ecdf7269 automount[357]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
May 11 21:03:24 e157ecdf7269 systemd[1]: Started Automounts filesystems on demand.
Once the script runs, I can start testing my mounts!
First, let’s do NFSv3:
[root@e454fd0728bd /]# mount -o nfsvers=3 10.228.225.142:/docker /mnt
[root@e454fd0728bd /]# mount | grep mnt
10.228.225.142:/docker on /mnt type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.228.225.142,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.228.225.142)
[root@e454fd0728bd /]# cd /mnt
[root@e454fd0728bd mnt]# touch nfsv3file
[root@e454fd0728bd mnt]# ls -la
total 12
drwxrwxrwx  2 root root 4096 May 12 15:14 .
drwxr-xr-x 20 root root 4096 May 12 15:13 ..
-rw-r--r--  1 root root    0 May 12 15:14 nfsv3file
-rw-r--r--  1 root root    0 May 11 17:53 nfsv41
-rw-r--r--  1 root root    0 May 11 18:46 nfsv41-file
drwxrwxrwx 11 root root 4096 May 12 15:05 .snapshot
Now, let’s try autofs!
When I log in, I can see that nothing is mounted:
[root@e157ecdf7269 /]# mount | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 on / type ext4 (rw,relatime,discard,stripe=16,data=ordered)
/etc/auto.misc on /docker-nfs type autofs (rw,relatime,fd=19,pgrp=11664,timeout=50,minproto=5,maxproto=5,indirect)
Then I cd into my automount location and notice that my mount appears:
[root@e157ecdf7269 /]# cd /docker-nfs/docker
[root@e157ecdf7269 docker]# mount | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8 on / type ext4 (rw,relatime,discard,stripe=16,data=ordered)
/etc/auto.misc on /docker-nfs type autofs (rw,relatime,fd=19,pgrp=11664,timeout=50,minproto=5,maxproto=5,indirect)
10.228.225.142:/docker on /docker-nfs/docker type nfs4 (rw,nosuid,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=60,retrans=2,sec=sys,clientaddr=172.17.0.134,local_lock=none,addr=10.228.225.142) <<<< there it is!
[root@e157ecdf7269 docker]# df -h | grep docker
/dev/mapper/docker-253:1-50476183-e157ecdf7269dce8178cffd54c74abef2a242cbfe522298ad410c292896991e8  9.8G  313M  8.9G  4% /
10.228.225.142:/docker  973G  1.7M  973G  1% /docker-nfs/docker
What now?
Now that I’ve done all that work to create a Docker image, it’s time to share it with the world.
# docker login
Username (parisi):
Login Succeeded
# docker tag 7cb0f9093319 parisi/centos7-nfs-client-autofs
# docker push parisi/centos7-nfs-client-autofs
Do you really want to push to public registry? [Y/n]: Y
The push refers to a repository [docker.io/parisi/centos7-nfs-client-autofs] (len: 1)
Sending image list
Pushing repository docker.io/parisi/centos7-nfs-client-autofs (1 tags)
6941bfcbbfca: Image already pushed, skipping
41459f052977: Image already pushed, skipping
fd44297e2ddb: Image already pushed, skipping
78b3f0a9afb1: Image successfully pushed
ec8afb93938d: Image successfully pushed
3f5dcd409a0e: Image successfully pushed
231f18a54eee: Image successfully pushed
1bfe1aa6309e: Image successfully pushed
318c908bb3e7: Image successfully pushed
80f71e4c55e8: Image successfully pushed
1ef13e5d4686: Image successfully pushed
5d9999f99007: Image successfully pushed
ae28ae0477aa: Image successfully pushed
438342aef8e1: Image successfully pushed
1069a8beb629: Image successfully pushed
15692893daab: Image successfully pushed
8ce7a0621ca4: Image successfully pushed
4778761cf8bd: Image successfully pushed
7cb0f9093319: Image successfully pushed
Pushing tag for rev [7cb0f9093319] on {https://cdn-registry-1.docker.io/v1/repositories/parisi/centos7-nfs-client-autofs/tags/latest}
You can find the image repository here: https://registry.hub.docker.com/u/parisi/centos7-nfs-client-autofs/
And the files here: https://github.com/whyistheinternetbroken/docker-centos7-nfs-client-autofs
Installing Docker on a Mac? Check this out if you hit issues:
https://whyistheinternetbroken.wordpress.com/2016/03/25/docker-mac-install-fails/
Wouldn’t it be easier to mount the NFS share on the docker host and then attach it to a container as a volume? 🙂
Actually, yes. And I meant to cover that scenario in the blog, but forgot. Adding it now. 🙂
Great writeup. Now that Docker supports volume plugins, I ended up writing one that will auto-mount an NFS, AWS EFS, or CIFS filesystem directly in the container. You can find it at github.com/gondor/docker-volume-netshare
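For example, once the plugin is running, a container can mount an NFS export directly (per the project README; the host and export path here are placeholders):

# docker run -it --volume-driver=nfs -v 10.228.225.142/docker:/data centos /bin/bash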
Thanks! Good to know!
Pingback: Using Docker to Run Twitter in a Firefox Container - Datacenter Dude
Pingback: TECH::Docker + CIFS/SMB? That’s unpossible! | Why Is The Internet Broken?
Pingback: Why Is the Internet Broken: Greatest Hits | Why Is The Internet Broken?
Pingback: How to Start a Career in IT - Datacenter Dude
If your OS supports cloud-config init, then you can mount an NFS share on the host during provisioning. That way every host will be preconfigured with NFS, and docker containers can simply run with the `-v` option.
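For example, something like this in the cloud-config (a sketch in CoreOS cloud-config syntax; the server and paths are placeholders, and the unit name must match the mount path):

#cloud-config
coreos:
  units:
    - name: mnt-nfs.mount
      command: start
      content: |
        [Mount]
        What=10.228.225.142:/docker
        Where=/mnt/nfs
        Type=nfs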
Great article!
Good point and thanks for the feedback!
Pingback: A breakdown of layers and tools within the container and microservices ecosystem | au courant technology
Nice article Justin! Thanks for sharing the steps and the container image.
Dear Justin,
thanks for the article. I’m new to CoreOS and Docker. Can you please help me with the steps to mount an NFS share for use with Docker images in CoreOS? I’m not clear on this article:
https://github.com/docker/docker/tree/master/daemon/graphdriver/devmapper
I have opened a question on ServerFault here:
http://serverfault.com/questions/763805/how-to-place-docker-images-ontop-of-an-nfs-share-with-coreos
Thanks
No idea how to get it working in CoreOS outside what I’ve posted in this blog. If I find cycles at some point, I’ll give it a shot.
Pingback: Issues installing Docker on OS 10.10 or later? | Why Is The Internet Broken?
What great thinking!
Thank you for your article.
best wishes:)
Pingback: Is this blog a Top vBlog 2016? | Why Is The Internet Broken?
Do you maybe know how to set up an environment with Docker that could act as an NFS server?
We are thinking about replacing NetApp.
So the idea is that each client would have its own Docker container.
thanks
Not sure if you’re trolling me, but I’ll answer anyway. 😉
There is a difference in “can” and “should.” Technically, you probably can use Docker containers as NFS servers. But should you? What are you trying to accomplish?
The point of an NFS server is centralized access to a single dataset. That means not only access to that data, but management of locks, reads, writes, etc. If you serve data to multiple clients using multiple NFS servers, you’re asking for trouble.
If you’re looking to serve unique data to each client, why use NFS? Why not just store it locally?
If you’re looking for a cheaper way to serve NFS, you have better options in RHEL/CentOS. But you’d still pay for support – either in $$ via a support agreement or via time and outage if you plan on supporting it yourself.
If I were running an enterprise, I’d bet on the established and reliable NFS server rather than trying something new that wasn’t really designed for the use case.
If you want a docker volume plugin that uses NFS, take a look at the one NetApp designed:
http://netapp.github.io/openstack/2016/04/19/announcing-the-netapp-docker-volume-plugin/
A Docker container as an NFS server can be useful when client Docker apps need visibility into a large amount of data but only access a small portion of it, read-only. In that setup, the dataset is too big to make available locally in each client Docker app (without NFS), and since the data is accessed read-only, no lock handling is required.
Hi
I am not trying to troll, just looking for a solution.
I managed to:
– create multiple docker containers with separate data volumes from the host
– set up OVS and connect the docker containers to an OVS bridge so that each container has a separate IP visible outside
– on an outside server mount the share
What I would like to know and understand is: what is the problem with this setup?
You stated:
“If you serve data to multiple clients using multiple NFS servers, you’re asking for trouble.”
Can you explain the problem in more detail?
thanks,
If you have separate data volumes, you should be fine; I was referring to multiple NFS servers referencing the same volumes. The locking mechanisms wouldn’t be able to help prevent data corruption in that case. But if you are dedicating volumes to each server, there shouldn’t be an issue.
OK, thank you.
Of course, docker containers are run with a bunch of parameters. When setting a restart policy, would these parameters also be in effect when a container restarts? And does the name stay the same?
It is important because after starting a container I am also giving an ip address with the ovs-docker command.
So to make the question simple: how would I make sure a container is in the same state after restarting?
what do you think?
You could always put that information into the dockerfile when you build the container. Basically, keep the files on the Docker host and have the dockerfile source those files. Perhaps the COPY command: https://docs.docker.com/engine/reference/builder/
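For example, something like this in the dockerfile (hypothetical file name, just to illustrate):

# Bake the network setup script into the image
COPY ovs-setup.sh /ovs-setup.sh
RUN chmod +x /ovs-setup.sh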
Hi, thanks for this tutorial.
I’ve tried to mount my NFSv3 share to /var/lib/docker, but docker fails to start.
I’ve checked the docker logs and it doesn’t give out any details about the problem. Would you be able to share your nfs export or perhaps point me to where I could debug this?
When I run service docker start, the command hangs and I need to cancel it.
many thanks
I haven’t played with this since I wrote the blog, so I can’t help too much right now, unfortunately.
Have you looked into the NetApp Docker Volume plugin?
I have the same issue. Have you solved it? Appreciated!
Pingback: Home Server Architecture with Docker (part 2: configuring FreeNAS 11 and linux NFS clients) – OpenCoder
Pingback: Docker + NFS + FlexGroup volumes = Magic! | Why Is The Internet Broken?
Hi, nice article. Have you considered an NFS volume driver to deploy volumes over NFS?
Take a look to this article that I wrote: https://ociotec.com/docker-volume-with-nfs-driver/
Yeah, I’m aware. This article is super old. I should probably update it at some point.
Pingback: How I got started in IT without a CS degree | Why Is The Internet Broken?