As an Infrastructure engineer, I’ve had my fair share of experiences with containerized environments and the challenges of managing data persistence. One of the most significant problems I’ve faced is ensuring that data generated by Docker containers persists even when the container is stopped or deleted. That’s when I discovered the power of leveraging the Network File System (NFS) as volumes in Docker.
It all started while working on a complex, multi-container application that required efficient data management. I knew that Docker volumes were the way to go, but I still needed a solution that would provide centralised storage management, scalability, and seamless file sharing across different containers and hosts. That’s when I turned to NFS.
Let’s understand what a Docker volume is
In containerised environments, Docker volumes enable the persistent storage of data generated by Docker containers. Within the Docker context, a volume is considered a specially designated directory within one or more containers outside the typical Union File System. This means that data stored in volumes persists even when the container is stopped or deleted, making volumes indispensable for scenarios where data persistence is required.
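To see this persistence in action, here’s a minimal sketch using a throwaway volume (the name demo-data is just an example):

docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/test.txt'
docker run --rm -v demo-data:/data alpine cat /data/test.txt

The second run prints hello even though the container that wrote the file is long gone: the data lives in the volume, not the container.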
What does Network File System mean?
NFS, or Network File System, is a distributed file system protocol that allows clients to access files and directories stored on remote servers as if they were local. Operating over a network, NFS provides a shared file system that multiple clients can access simultaneously, offering benefits such as centralised storage management, improved data availability, and seamless file sharing across heterogeneous environments.
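To make this concrete, here is roughly how a client would mount an NFS share by hand, assuming the server exports /srv/nfs and the NFS client utilities (nfs-common on Debian/Ubuntu) are installed:

sudo mount -t nfs <ip-address-of-nfs-server>:/srv/nfs /mnt
ls /mnt  # the remote directory now behaves like a local one

Docker’s local volume driver automates exactly this kind of mount, as we’ll see below.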
Prerequisites
We need the following tools preinstalled before proceeding.
- NFS server
- Docker and Docker Compose
My journey began with setting up an NFS server. I created a directory at /srv/nfs where the NFS server would store the shared files. I then added an entry to the /etc/exports file, specifying that the directory /srv/nfs could be accessed by any client with read-write permissions. I also set the permissions for the NFS directory to 755, ensuring that the owner had full read-write-execute permissions while others had read-and-execute permissions.
Use the following script to set up the NFS server (it assumes the NFS server package, e.g. nfs-kernel-server on Ubuntu, is already installed, as noted in the prerequisites).
#!/bin/bash

# Step 1: Create a directory for NFS
NFS_DIR="/srv/nfs"
echo "Creating NFS directory at $NFS_DIR..."
sudo mkdir -p "$NFS_DIR"

# Step 2: Add export entry to /etc/exports
EXPORTS_ENTRY="$NFS_DIR *(rw,sync,no_subtree_check)"
echo "Adding the following entry to /etc/exports:"
echo "$EXPORTS_ENTRY"
echo "$EXPORTS_ENTRY" | sudo tee -a /etc/exports

# Step 3: Grant necessary permissions to the NFS directory
echo "Setting permissions for $NFS_DIR..."
sudo chmod -R 755 "$NFS_DIR"

# Step 4: Re-export the shares and restart the NFS service to apply changes
echo "Restarting NFS service..."
sudo exportfs -ra
sudo systemctl restart nfs-kernel-server

echo "NFS server setup complete!"
Run the above script using the following commands:
chmod +x setup_nfs.sh
sudo ./setup_nfs.sh
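Before pointing Docker at the share, it’s worth confirming the export is visible. Assuming the showmount utility (shipped with the NFS packages on most distributions) is available, a quick check looks like this:

showmount -e localhost
# Expected output, something along the lines of:
# Export list for localhost:
# /srv/nfs *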
With the NFS server setup complete, I created an NFS Docker volume using the docker volume create command. I specified the NFS server’s IP address, the mount point, and the device path.
To create a Docker volume with NFS configuration, you can use the following command:
docker volume create --driver local --opt type=nfs --opt o=addr=<ip-address-of-nfs-server>,rw --opt device=:/srv/nfs nfs-volume
Output:
$ docker volume create --driver local \
--opt type=nfs --opt o=addr=192.168.1.7,rw \
--opt device=:/srv/nfs nfs-volume
nfs-volume
To verify that the volume was created successfully, I used this command.
docker volume ls
Output:
$ docker volume ls
DRIVER    VOLUME NAME
local     nfs-volume
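As an extra check, docker volume inspect shows the driver options stored with the volume; the Options section of its JSON output should echo back the type, o, and device values passed at creation time:

docker volume inspect nfs-volume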
Mounting NFS in a Container: A Moment of Truth
Next, I mounted the NFS volume in a container using the docker run command, specifying the volume and the mount point with the --mount flag.
To mount the NFS volume in a container, you can use the following command:
docker run -dit --name nfs_mounted_node_container \
  --mount source=nfs-volume,target=/opt alpine
Output:
$ docker run -dit --name nfs_mounted_node_container \
  --mount source=nfs-volume,target=/opt \
  alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
5e6ec7f28fbf: Pull complete
Digest: sha256:185518070891758909c9f839cf4ca393ee977ac378609f700f60a6718c2b6d6a
Status: Downloaded newer image for alpine:latest
1d7e2e2b8e2e0d1
To verify, we can use the following command with docker exec:
docker exec -it nfs_mounted_node_container sh
mount | grep /opt
It will display something similar to this:
$ docker exec -it nfs_mounted_node_container sh
# mount | grep /opt
:/srv/nfs on /opt type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576)
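To be confident that writes from the container actually land on the NFS server, a quick end-to-end test helps. Here is a sketch of that check, using the container and paths set up earlier (hello.txt is just an example file name):

# Inside the container: write a file to the NFS-backed mount
docker exec nfs_mounted_node_container sh -c 'echo "written from container" > /opt/hello.txt'

# On the NFS server: the same file should appear in the exported directory
cat /srv/nfs/hello.txt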
Docker Compose and NFS: Expanding Horizons
But the journey didn’t stop there. I needed to ensure that my NFS volumes could be easily managed across multiple containers. This is where Docker Compose came into play. By creating a docker-compose.yml file, I could define services that utilised NFS volumes:
Use the command below to create a new file:
nano docker-compose.yml
Write the following contents inside the Docker Compose file.
version: '3.8'

services:
  mongo:
    image: mongo
    container_name: nfs_mounted_container
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
    driver_opts:
      type: nfs4
      o: addr=<ip-address-of-nfs-server>,rw
      device: ":/srv/nfs"
Advantages I Discovered with NFS for Docker Storage
Looking back at this, the benefits of using NFS for persistent volumes in Docker are clear:
- Centralized Storage Management: NFS provides a centralised storage solution where files and directories are stored on remote servers. This centralised approach simplifies storage management as administrators can manage and maintain data from a single location.
- Scalability: NFS allows for seamless scaling of storage capacity by adding more NFS servers or expanding existing ones. This scalability ensures that Docker containers can access sufficient storage resources as application demands grow.
- Improved Data Availability: With NFS, data stored on remote servers is readily available to Docker containers across the network. This improved availability ensures that applications have continuous access to critical data, minimising downtime and improving overall reliability.
- Seamless File Sharing: NFS enables seamless file sharing across multiple Docker containers and hosts. Containers can mount NFS volumes and access shared files and directories, facilitating collaboration and data sharing among different application parts.
- Data Consistency and Reliability: NFS ensures data consistency and reliability by maintaining a single source of truth for files and directories stored on remote servers. This reliability ensures that Docker containers retrieve accurate and up-to-date information from NFS volumes, enhancing application stability. That said, NFS is generally a poor fit for latency-sensitive, write-heavy database workloads, where file locking and network latency can cause problems; treat the MongoDB example above as a demonstration rather than a production pattern.
- Compatibility and Interoperability: NFS is supported by a wide range of operating systems and platforms, making it highly compatible and interoperable with various Docker environments. This compatibility allows Docker containers running on different systems to access NFS volumes seamlessly.
Conclusion
In short, using NFS with Docker offers many benefits; it helps manage storage in one place, scales easily, ensures data is always available, allows easy file sharing, maintains data accuracy, and works well with different systems. Overall, it boosts application performance and reliability by handling data efficiently across different setups. This makes it a strong choice for managing data effectively in modern applications.
Question
Have you tried using NFS with Docker yet? If you have, what did you like, and what didn’t you like?