Docker Data Volume for SBT Dependencies

I am using Docker for continuous integration of a Scala project. Inside the container I build the project and create a distribution with “sbt dist”.

This takes ages pulling down all the dependencies, and I would like to cache them in a Docker data volume.

However, I don’t understand how I could get SBT to put the jar files in the volume, or how SBT would know how to read them from that volume.

2 Solutions

    SBT uses Ivy to resolve project dependencies. Ivy caches downloaded artifacts locally; every time it is asked to resolve something, it checks that cache first and only downloads from a remote repository if nothing is found there. By default the cache lives in ~/.ivy2, but its location is configurable. So just mount a volume, point Ivy at it (or mount the volume so that it sits at the default location), and enjoy the cache.
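
    For example, a rough sketch of this on the CI host (the volume name "ivy-cache" and image name "my-build-image" are made up, and the sbt.ivy.home system property is only needed if the cache is not mounted at the default ~/.ivy2 path):

    # create a named volume once; the name is just an example
    docker volume create ivy-cache

    # mount it at Ivy's default location inside the container
    docker run -v ivy-cache:/root/.ivy2 my-build-image sbt dist

    # or mount it elsewhere and tell sbt/Ivy where it is
    docker run -v ivy-cache:/cache/ivy2 my-build-image sbt -Dsbt.ivy.home=/cache/ivy2 dist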

    Not sure if this makes sense on an integration server, but when developing on localhost, I’m mapping my host’s .ivy2/ and .sbt/ directories to volumes in the container, like so:

    docker run ...  -v ~/.ivy2:/root/.ivy2  -v ~/.sbt:/root/.sbt  ...

    (Apparently, inside the container, .ivy2/ and .sbt/ are placed in /root/, since we’re logging in to the container as the root user.)
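
    On a CI server, the same mapping can be done with named volumes instead of host directories, so the cache survives between build containers. A rough sketch (the volume names are made up):

    docker volume create sbt-ivy-cache
    docker volume create sbt-global-cache
    docker run ...  -v sbt-ivy-cache:/root/.ivy2  -v sbt-global-cache:/root/.sbt  ...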
