Can we run a multi-process program in Docker?

I have some code that uses multiprocessing, like this:

from multiprocessing import Pool

pool = Pool(processes=100)
result = []

for job in job_list:
    result.append(
        pool.apply_async(handle_job, (job,))  # args must be a tuple
    )
pool.close()
pool.join()

This program does heavy computation on a very big data set, so we use multiple processes to handle the jobs concurrently and improve performance.
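
For reference, here is a minimal, self-contained sketch of the same pattern; the body of handle_job and the contents of job_list are invented for illustration, and each worker prints its PID so you can confirm (inside or outside a container) that separate OS processes are doing the work:

import os
from multiprocessing import Pool

def handle_job(job):
    # Hypothetical CPU-bound worker; printing the PID shows that each
    # task runs in its own OS process rather than a thread.
    print("job %s handled by pid %d" % (job, os.getpid()))
    return job * job

if __name__ == "__main__":
    job_list = range(10)                # placeholder input data
    pool = Pool(processes=4)
    result = [pool.apply_async(handle_job, (job,)) for job in job_list]
    pool.close()
    pool.join()
    print([r.get() for r in result])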

I have been told that, to the hosting system, one Docker container is just one process. So I am wondering how my multiple processes will be handled in Docker.

Below are my concerns:

1. Since the container is just one process, will my multi-process code become multi-threading inside that process?

2. Will performance suffer? The reason I use multiple processes is to get the jobs done concurrently for better performance.

2 Solutions for "Can we run a multi-process program in Docker?"

Docker uses namespaced process IDs, and the kernel fully supports running multiple processes inside a container. Inside a container, you can run ps to see the isolated process list (which will often be just your shell and the ps command itself).
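
As a quick sanity check (a sketch, assuming a Linux container with the usual /proc mount), you can list the PIDs visible inside the container's namespace from Python instead of installing ps:

import os

# Every numeric entry in /proc is a process visible in the current PID
# namespace. In a freshly started container this is typically just
# PID 1 (the entrypoint) plus this Python process itself.
pids = sorted(int(name) for name in os.listdir("/proc") if name.isdigit())
print(pids)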

Docker is described as running a single application in order to separate this application-isolation technology from the more familiar OS virtualization tools, where you would launch a web server, mail server, ssh daemon, etc., in the background of a full virtual machine.

A few words of caution:

• Once PID 1 exits, your container ends, regardless of whether your forked processes are still running.
• Without an init process, exited children that are not reaped by their parent remain as zombies (they do not cross the namespace isolation to be reaped by the host's init). There's a tini application you can run as your entrypoint to clean these up if this becomes an issue (tini github repo); a minimal sketch of that reaping behavior follows this list.
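
As a rough illustration of what an init such as tini (or docker run --init) does, the sketch below shows a PID 1-style process reaping exited children from a SIGCHLD handler; this is only a toy under those assumptions, not a substitute for tini:

import os
import signal
import time

def reap(signum, frame):
    # Collect every exited child so none of them linger as zombies.
    while True:
        try:
            pid, _ = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break              # no children left at all
        if pid == 0:
            break              # remaining children are still running

if __name__ == "__main__":
    signal.signal(signal.SIGCHLD, reap)
    # Hypothetical short-lived child; without the handler above (and
    # without an init), it would sit in the process table as a zombie
    # until its parent exits.
    if os.fork() == 0:
        os._exit(0)            # child exits immediately
    time.sleep(2)              # parent lingers; the handler reaps the child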