Getting docker working in daemon mode for a tahoe-lafs storage node?

Right now, for my workflow (a personal project, but I would like to simplify it for sharing with friends), I have these steps:
1. docker run -it -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /bin/bash
2. Inside the container, run the script tahoe-run.sh, which will either do first-time configuration or, if it is not the first run, just start the tahoe service.
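For context, a hypothetical sketch of what such a tahoe-run.sh could look like — the node directory path and the tahoe subcommands here are assumptions, not the actual script (it is guarded so it also runs where tahoe is not installed):

```shell
#!/bin/sh
# Sketch of a first-run-or-start script; paths/flags are illustrative.
NODE_DIR="${NODE_DIR:-$HOME/.tahoe}"

if [ ! -d "$NODE_DIR" ]; then
    # First run: create and configure the node (real flags omitted).
    mkdir -p "$NODE_DIR"
    command -v tahoe >/dev/null 2>&1 && tahoe create-node "$NODE_DIR"
fi

# Subsequent runs: start the service in the foreground so the
# container's main process does not exit.
if command -v tahoe >/dev/null 2>&1; then
    exec tahoe run "$NODE_DIR"
fi
```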

My question is how I would simplify this. A few specific points below:

  • I would like to tell Docker to just run the container and let it do its thing in the background. I understand that replacing -it with -d would get me closer, but it never seems to leave anything running in docker ps, the way the workflow above does when I detach with CTRL+P, CTRL+Q.

    Do I need to -p those ports? I EXPOSE them in the Dockerfile. Also, since they are being mapped directly to the same port in the container, do I need to write each twice? Those ports are what the container is using AND what the outside world is expecting to be able to use to connect to the tahoe “server”.

    How could I get this working so that instead of running /bin/bash I run /tahoe-run.sh right away?

    My perfect command would look something like docker run -d maccam912/tahoe-node /tahoe-run.sh if the port mapping stuff is not necessary. The end goal of this would be to let the container run in the background, with tahoe-run.sh kicking off a server that just runs “forever”.

    Let me know if I can clarify anything or if there is a simpler way to do this. Feel free to take a look at my Dockerfile and make any suggestions if it would simplify things there, but note that this project intentionally favours ease of use (hopefully getting things down to one command) over the usually recommended separation of duties.
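One way to get down to a single command is to bake the script in as the image's default command. A minimal sketch of what the end of the Dockerfile could look like, assuming the script has already been copied to /tahoe-run.sh:

```dockerfile
# Document the ports the node listens on (EXPOSE alone does not publish them).
EXPOSE 3456 39499

# Run the script by default, so `docker run -d ...` needs no explicit command.
CMD ["/tahoe-run.sh"]
```

With that in place, `docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node` would be the whole invocation.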

  • One solution for “Getting docker working in daemon mode for a tahoe-lafs storage node?”

    I do believe I figured it out:

    The problem with having just -d specified is that as soon as the given command returns or exits, the container stops running. My solution was to change the script so that it never returns: a command backgrounded with & needs to be followed by wait (or the service has to be kept in the foreground), otherwise the script — and with it the container — exits immediately.
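A tiny runnable illustration of that point, with sleep standing in for the tahoe daemon — backgrounding alone lets the script return, so the script must block on the background job:

```shell
#!/bin/sh
# `sleep` stands in for the long-running tahoe process.
sleep 1 &      # background the daemon...
wait           # ...then block until it exits, so this script
               # (the container's main process) does not return early
status=$?
echo "daemon exited with status $status"
```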

    As for the ports, EXPOSE in the Dockerfile does nothing more than document the ports and make them available to linked containers; it does not publish them to the host, let alone to remote machines like a server would need, so the -p is still necessary. The doubling of the ports (host:container) is needed because specifying only the container port will map it to a random high-numbered port on the host.
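For reference, the two -p forms behave like this (illustrative commands using the image name from the question, not run here):

```shell
# Publish container ports on the same-numbered host ports (host:container):
docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node

# Publish container port 3456 on a random high host port instead:
docker run -d -p 3456 maccam912/tahoe-node
docker port <container-id> 3456   # shows which host port was chosen
```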

    Please correct me if I’m wrong on any of this, but in this case the best solution was to modify my script; the command becomes docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /tahoe-run.sh, and the new script keeps the container running.
