Feature spotlight: Persist volumes forever
Published on Jul 8, 2020
One of the biggest challenges of working with containers is persisting state. Containers are great for booting applications in a consistent way. However, nearly every application relies on some kind of database, and it's a pain to have to reinitialize it every day when you start developing.
Blimp has supported both bind and named volumes since the first release, but these volumes used to get deleted when `blimp down` was run.

We're excited to announce that with release 0.13.11, Blimp now preserves volumes even after `blimp down`.
Try It Out
There’s no change necessary to start taking advantage of this change if you’re already using volumes.
If you’re not familiar with the behavior of Docker volumes, this is how you can try out the new behavior.
Create a `docker-compose.yml` with the following contents:
```yaml
version: '3'
services:
  mounter:
    image: ubuntu
    command: tail -f /dev/null
    volumes:
      - 'vol:/vol'
volumes:
  vol:
```
This `docker-compose.yml` creates a volume named `vol` and mounts it at `/vol`. Any files added to `/vol` will persist across container restarts, and now across `blimp down` as well.
Once you’ve created the file, run the following in your terminal.
```shell
# Boot the docker-compose.yml.
blimp up -d

# Once `mounter` has started, run the following to create a file in /vol.
blimp exec mounter touch /vol/persistme

# Ctrl-C out of `blimp up`.
# Then, delete your sandbox.
blimp down

# Recreate your sandbox.
blimp up -d

# Check that the file you created in the volume still exists.
blimp exec mounter ls /vol
```
The most challenging thing about implementing this feature was making sure that volumes could be shared between services, so that we could preserve full compatibility with Docker Compose.
Initially, we hoped that we could just convert each volume to a Kubernetes PersistentVolumeClaim. However, most PersistentVolume implementations only support binding volumes to a single node. To work around this, we use pod affinity rules so that all the pods in a sandbox are scheduled onto the same node. This is also helpful from a security perspective to isolate sandboxes.
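As a rough illustration of the kind of affinity rule we mean, the pod spec fragment below co-schedules all pods that share a sandbox label onto one node. The `sandbox` label key and its value are hypothetical, for illustration only, and not Blimp's actual configuration:

```yaml
# Sketch: require this pod to land on the same node as other pods
# carrying the same (illustrative) sandbox label.
apiVersion: v1
kind: Pod
metadata:
  name: mounter
  labels:
    sandbox: user-1234          # hypothetical sandbox label
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              sandbox: user-1234
          topologyKey: kubernetes.io/hostname   # "same node"
  containers:
    - name: mounter
      image: ubuntu
      command: ["tail", "-f", "/dev/null"]
```

With `topologyKey: kubernetes.io/hostname`, the scheduler treats each node as its own topology domain, so every pod matching the selector ends up on the same host as the others.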
Another challenge was that creating and deleting PersistentVolumeClaims is the only way to manage the backing PersistentVolumes without interacting with the underlying volume API directly. Therefore, we had to carefully design the volume allocation code so that every volume is labelled with the user’s sandbox, and so that a previously allocated volume is never reassigned to another user.
At a high level, we ended up with this design:
- When a user runs `blimp up` for the first time, a PersistentVolume is created by creating a PersistentVolumeClaim.
- Once the backing PersistentVolume is created, we permanently associate it with the user's sandbox by labeling it. We also set the PersistentVolume's reclaim policy to `Retain` so that when the associated PersistentVolumeClaim is deleted, the volume is retained.
- All future `blimp up`s see that the user has a PersistentVolume allocated, and explicitly bind the PersistentVolumeClaim directly to the volume.
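A minimal sketch of what the resulting objects could look like, assuming a hypothetical `sandbox` label and illustrative names (not Blimp's actual resources):

```yaml
# Sketch: the PersistentVolume after it's labeled with its owning
# sandbox and switched to Retain, so deleting the claim keeps the data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-abc123                 # illustrative name
  labels:
    sandbox: user-1234            # hypothetical ownership label
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes: [ReadWriteOnce]
  # backing storage details omitted
---
# Sketch: on a later `blimp up`, the new claim binds directly to the
# retained volume via volumeName, skipping dynamic provisioning.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol-claim                 # illustrative name
spec:
  volumeName: pv-abc123           # explicit binding to the retained volume
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```

Setting `volumeName` on the claim is what makes the binding explicit: Kubernetes binds that claim only to the named volume rather than provisioning a fresh one.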
What’s Coming Next?
Right now, there’s no way to delete a subset of your volumes other than to
mount the volumes you want to clear, and
rm their contents directly. We’re
planning to add a
blimp volume command to manage volumes individually, just
This feature also lays the groundwork for migrating containers between nodes, which will let us efficiently autoscale VMs to reduce cluster costs.
Outside of volumes, we’re excited about building new development workflows that are made possible by running your development environment in the cloud. We’re currently designing environment sharing, which will let you share a link to your environment with your coworkers so that they can easily see what you’re working on.
Shoot us a message if you’d like early access to environment sharing. Your feedback will help shape the final design!
See how you can use volumes with data containers to speed up boot time.
Try an example with Blimp to see how local development can be much faster.