Getting docker working in daemon mode for a tahoe-lafs storage node?

Right now, for my workflow (a personal project, but I would like to simplify it for sharing with friends), the steps are:
1. docker run -it -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /bin/bash
2. In docker, run the script which will either do a first-time configuration, or just run the tahoe service if it is not the first run.

My question is how I would simplify this. A few specific points below:

I would like to tell Docker to just run the container and let it do its thing in the background. I understand that replacing the -it with -d would get me closer, but when I do that, docker ps never shows anything running, unlike the workflow above where I detach with CTRL+P, CTRL+Q.

Do I need to -p those ports? I already EXPOSE them in the Dockerfile. Also, since they map directly to the same port numbers in the container, do I need to write each one twice? Those ports are both what the container uses AND what the outside world expects to use to connect to the tahoe "server".

How could I get this working so that it runs / right away instead of /bin/bash?

My perfect command would look something like docker run -d maccam912/tahoe-node /, if the port mapping is not necessary. The end goal is to let the container run in the background, kicking off a server that just runs "forever".

Let me know if I can clarify anything or if there is a simpler way to do this. Feel free to take a look at my Dockerfile and suggest anything that would simplify it, but note that this project intentionally favors ease of use (hopefully getting things down to one command) over the recommended separation of duties.
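In case it helps, this is the kind of one-command setup I am picturing (a sketch only, not my actual Dockerfile; start.sh is a placeholder name for my real startup script):

```dockerfile
# Sketch only: start.sh stands in for the real startup script
FROM ubuntu
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Document the ports the node listens on (run time still needs -p)
EXPOSE 3456 39499
# Run the script directly instead of dropping into /bin/bash
CMD ["/start.sh"]
```

With a CMD like that, docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node would be the whole command, with no argument needed after the image name.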

One Solution:

    I do believe I figured it out:

    The problem with having just -d specified is that the container stops running as soon as the command it was given returns/exits. The fix was to keep my startup script from exiting: it now launches the service and then blocks instead of returning (a trailing & alone is not enough, because the script would then exit immediately and take the container down with it).

    As for the ports, EXPOSE in the Dockerfile is essentially documentation: it records which ports the image listens on (and lets -P publish them all), but it does not publish anything to the host or to remote machines, so the -p is still necessary. The doubling of the ports (host:container) is needed because specifying only the container port publishes it on a random high-numbered port on the host.
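To illustrate, these are the standard docker run publishing forms, shown against the same image:

```shell
docker run -d -p 3456:3456 maccam912/tahoe-node  # host port 3456 -> container port 3456
docker run -d -p 3456 maccam912/tahoe-node       # random high host port -> container port 3456
docker run -d -P maccam912/tahoe-node            # publish every EXPOSEd port on random host ports
docker port <container-id> 3456                  # show which host port was actually chosen
```

Only the first form gives remote machines the fixed, well-known port numbers a tahoe node needs.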

    Please correct me if I'm wrong on any of this, but in this case the best solution was to modify my script, and the command will be docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /; the new script keeps the service process alive so the container stays running.
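As a minimal sketch of the script pattern (sleep stands in here for the real tahoe daemon):

```shell
#!/bin/sh
# Start the long-running service in the background...
sleep 30 &     # stand-in for the real tahoe daemon
# ...then block on it, so this script (the container's main
# process) does not exit and take the container down with it.
wait $!
```

Running tahoe in the foreground instead (tahoe run rather than tahoe start) should achieve the same thing without the & / wait pair.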
