Can we run multi-process program in docker?

I have some code using multi-process like this:

from multiprocessing import Pool

pool = Pool(processes=100)
result = []

for job in job_list:
    result.append(pool.apply_async(handle_job, (job,)))

pool.close()
pool.join()

This program does heavy calculation on a very big data set, so we need multiple processes to handle the jobs concurrently and improve performance.

I have been told that, to the host system, one Docker container is just one process, so I am wondering how my multiple processes will be handled in Docker.

    Below are my concerns:

    1. Since the container is just one process, will my multi-process code become multi-threading inside that process?

    2. Will performance suffer? The reason I use multiprocessing is to get jobs done concurrently for better performance.

  • 2 Solutions collected from the web for “Can we run multi-process program in docker?”

    Docker includes namespaced process IDs and full kernel support for running multiple processes. Inside a container, you can run ps to see the isolated process list (which will often just be your shell and the ps command itself).

    Docker’s guidance to run a single application per container is meant to distinguish application-isolation technology from the more familiar OS virtualization tools, where you’d launch the web server, mail server, SSH daemon, etc. in the background.

    A few words of caution:

    • Once pid 1 exits, your container ends, regardless of whether your forked processes are still running.
    • Without an init process, exited children that are not reaped by their parent will remain as zombies (the host’s init cannot reach across the namespace isolation to reap them). There’s a tini application you can run as your entrypoint to clean these up if this is an issue (tini github repo).
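    If zombie reaping matters for your workload, a minimal Dockerfile sketch using tini as the entrypoint could look like this (worker.py is a hypothetical script; the package name assumes a Debian-based image):

```dockerfile
FROM python:3.11-slim
# Install tini so PID 1 reaps orphaned/zombie processes.
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
COPY worker.py .
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["python", "worker.py"]
```

    Alternatively, running the container with docker run --init injects an init process without changing the image at all.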