Pros and cons of multi-threading vs containers for a consumer program [closed]

Consider a RabbitMQ consumer program. You can write a multi-threaded one where each thread consumes its portion of the queue's items. Or you can write a simple single-threaded CLI program that connects to the RabbitMQ broker and consumes the queue's items in a loop, and then run a couple of instances of it in different containers!

Which one do you think I should implement? What are the pros and cons of multi-threading vs containers in this scenario?
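For reference, here is a minimal sketch of the multi-threaded variant, assuming Python. A `queue.Queue` stands in for the RabbitMQ queue so the example is self-contained; a real consumer would instead receive deliveries from the broker through a client library such as pika.

```python
# Sketch of the multi-threaded consumer. queue.Queue is a stand-in for
# the RabbitMQ queue; the worker logic (item * 2) is purely illustrative.
import queue
import threading

NUM_THREADS = 4

def consume(q: queue.Queue, results: list, lock: threading.Lock) -> None:
    """Each thread drains its share of the queue until a sentinel arrives."""
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work for this thread
            q.task_done()
            return
        with lock:                # shared state must be synchronized
            results.append(item * 2)
        q.task_done()

def run() -> list:
    q: queue.Queue = queue.Queue()
    results: list = []
    lock = threading.Lock()
    threads = [threading.Thread(target=consume, args=(q, results, lock))
               for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for item in range(10):
        q.put(item)
    for _ in threads:
        q.put(None)               # one sentinel per thread
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    print(sorted(run()))
```

Note that the lock and the sentinel handling are exactly the kind of "threading issues" the single-threaded design avoids: each containerized instance would just run the loop body directly.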

2 Answers

    Personally, I find multi-threading to almost always be a bad idea in my applications. I'm far more comfortable with a single-threaded program and multiple instances.

    If for no other reason than to allow one of your instances to crash while the rest continue working. With a single multi-threaded app, you have to deal with threading issues first, and then if something crashes, everything stops working.

    Clearly there are valid uses of multi-threading; I'm just not usually in a scenario where they apply.
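The crash-isolation point can be simulated with plain OS processes standing in for containers (all names here are illustrative, and the `"poison"` item is just a stand-in for a message that triggers a fatal bug): one worker dies mid-run, and the remaining single-threaded workers still drain the queue.

```python
# Several single-threaded worker processes consume from a shared queue.
# One of them hits a poison message and crashes; the others keep working.
import multiprocessing as mp
import queue  # only for the queue.Empty exception

def worker(jobs, done) -> None:
    while True:
        item = jobs.get()
        if item is None:
            return
        if item == "poison":
            raise RuntimeError("this worker crashes")  # kills only this process
        done.put(item)

def run(num_workers: int = 3) -> int:
    ctx = mp.get_context("fork")  # assumes a POSIX host with fork support
    jobs = ctx.Queue()
    done = ctx.Queue()
    for i in range(9):
        jobs.put(i)
    jobs.put("poison")            # exactly one worker will hit this and die
    for _ in range(num_workers):
        jobs.put(None)            # one shutdown sentinel per worker
    procs = [ctx.Process(target=worker, args=(jobs, done))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    count = 0
    try:
        while True:
            done.get(timeout=1)
            count += 1
    except queue.Empty:
        pass
    return count                  # 9 items processed despite one crash

if __name__ == "__main__":
    print(run())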

    You are talking about a cluster scenario, where a broker would trigger workers in multiple containers.
    That would involve Docker Swarm and a discovery agent; you can find an example in "Running a Distributed Docker Swarm on AWS".

    The pros revolve mainly around high availability and scalability: if a node's resources (CPU/memory) start to get scarce, Swarm will spawn a new node as requested.
    This assumes you have the infrastructure (like AWS) to support acquiring new nodes.
    The cons are, of course, its complexity and management overhead.

    This is as opposed to a single-container (multi-threaded) model, which you can spawn anywhere, but which will be limited by a single host's resources.
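Between the two extremes, note that you don't need a full Swarm cluster just to "run a couple of them in different containers": on a single host, Docker Compose can scale a service to several identical single-threaded consumers. A minimal sketch (service and image names are illustrative):

```yaml
# docker-compose.yml -- names are hypothetical, not from the question
services:
  consumer:
    image: my-consumer:latest   # the single-threaded CLI consumer
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3
```

Then `docker compose up --scale consumer=4` starts four instances of the consumer against the same broker, with the crash-isolation benefit of the multi-instance approach but still bounded by one host's resources.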
