Containers are spawned, but the issued command fails when launching a large number of Docker containers

We have a custom Chrome extension that performs a certain set of web-related tasks. The tasks need to be exclusive, so we settled on the concept of running each task in an individual Docker container running Chrome.

We currently have 4 EC2 instances (c4.2xlarge) running Ubuntu, with Chrome running inside Docker containers.

We also have one more, smaller EC2 instance that reads tasks from MongoDB and allocates them to the servers, filling them one after the other. When a task is added, it is assigned to one of the instances, which then picks it up, creates a container, performs the task, and destroys the container. Each of these tasks takes roughly 2 to 6 minutes to complete. We are using dockerode to dynamically create and destroy containers on the fly within Node.js.

We ran a lot of benchmarks and came to the conclusion that each server can run a maximum of 55 tasks, so we set a threshold of 45 tasks per server at any given time to be on the safe side.
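
The fill-one-after-the-other allocation with the 45-task cap could be sketched like this (a hypothetical helper, `pick_server`, not the actual scheduler code):

```javascript
// Hypothetical sketch of the allocator on the small instance: assign the
// new task to the first server that is still below the per-server threshold.
function pick_server(servers, threshold) {
    for (var i = 0; i < servers.length; i++) {
        if (servers[i].running_tasks < threshold) {
            return servers[i].instance_id;
        }
    }
    return null; // all servers are full; the task stays queued
}

var servers = [
    { instance_id: 'i-aaa', running_tasks: 45 },
    { instance_id: 'i-bbb', running_tasks: 12 }
];
console.log(pick_server(servers, 45)); // 'i-bbb'
```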

This is part of the code that runs on all 4 instances:

    function start_polling_mongo() {
        // find pending tasks (status 1) assigned to this instance;
        // `Task` stands in for the collection/model name lost in the snippet
        Task.find({
            server_id: instance_id,
            task_status: 1
        }, function(err, all_tasks) {
            // mark the tasks as picked up (status 2)
            Task.update({
                _id: {
                    "$in": all_tasks
                }
            }, {
                task_status: 2
            }, {
                multi: true
            }, function(err) {
                // perform the tasks
                all_tasks.forEach(function(task) {
                    setTimeout(function() {
                        run_task_in_container(task._id);
                    }, 1000);
                });
            });
        });
        // poll again in 5 seconds
        setTimeout(start_polling_mongo, 5000);
    }

    function run_task_in_container(task_id) {
        console.log('Running Task id : ' + task_id);
        // startup command: virtual display, VNC on the task's port, then Chrome
        var commands = '';
        commands += 'Xvfb :99 & ';
        commands += 'export DISPLAY=:99 & ';
        commands += 'x11vnc -rfbport ' + task_id + ' -display :99 -bg -q & ';
        commands += 'google-chrome --no-first-run --load-extension=/home/chrome_extension' + task_id;
        // expose the task's port so the container is reachable over VNC
        var exposed_ports = {};
        exposed_ports[task_id + '/tcp'] = {};
        var port_bindings = {};
        port_bindings[task_id + '/tcp'] = [{
            HostPort: task_id.toString()
        }];
        docker.createContainer({
            Image: 'docker-chrome-final',
            ExposedPorts: exposed_ports,
            Cmd: ['bash', '-c', commands],
            Labels: {
                "task_id": task_id.toString()
            }
        }, function(err, container) {
            if (err) {
                console.log('error1 :');
                console.log(err);
                return;
            }
            container.start({
                Privileged: true,
                PortBindings: port_bindings
            }, function(err, data) {
                if (err) {
                    console.log('error2 :');
                    console.log(err);
                }
            });
        });
    }

The problem is that in every batch of 150 to 180 tasks, there are random containers (usually 2-3 per server) that are created and started, yet Chrome is not running inside them. I confirmed this by successfully VNC'ing into the server, only to be presented with a blank screen (Xvfb).

Any input is highly appreciated!
