exited: scrapy (exit status 0; not expected)

I am trying to run a bash script that launches several spiders in my Docker container.
My supervisor.conf, placed in "/etc/supervisor/conf.d/", looks like this:
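The config file itself did not survive extraction; a minimal sketch of what a supervisor.conf for this setup typically looks like (the program name, log paths, and option values below are assumptions — only the `/tmp/start_spider.sh` path appears elsewhere in this question):

```ini
; Hypothetical reconstruction -- the actual file was not preserved.
[program:scrapy]
command=/tmp/start_spider.sh
autostart=true
autorestart=true
stderr_logfile=/var/log/scrapy.err.log
stdout_logfile=/var/log/scrapy.out.log
```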


but supervisor returns these errors:

    2015-08-21 10:50:30,466 CRIT Supervisor running as root (no user in
    config file)

    2015-08-21 10:50:30,466 WARN Included extra file
    "/etc/supervisor/conf.d/tor.conf" during parsing

    2015-08-21 10:50:30,478 INFO RPC interface 'supervisor' initialized

    2015-08-21 10:50:30,478 CRIT Server 'unix_http_server' running without
    any HTTP authentication checking

    2015-08-21 10:50:30,478 INFO supervisord started with pid 5

    2015-08-21 10:50:31,481 INFO spawned: 'scrapy' with pid 8

    2015-08-21 10:50:31,555 INFO exited: scrapy (exit status 0; not
    expected)

    2015-08-21 10:50:32,557 INFO gave up: scrapy entered FATAL state, too
    many start retries too quickly

    And my program stops running. But if I run the program manually, it works fine.

    How can I resolve this? Any ideas?
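For context on the log above: supervisord counts a process that exits before `startsecs` seconds (default 1) as a failed start, whatever its exit code; after `startretries` failed starts it gives up and enters FATAL, which matches the timestamps shown (spawned at 10:50:31.481, exited at 10:50:31.555). If a supervised program is genuinely expected to exit quickly, that can be declared in the config — a hedged sketch, with illustrative values:

```ini
; Tell supervisord a quick exit is acceptable (illustrative values).
[program:scrapy]
command=/tmp/start_spider.sh
startsecs=0
autorestart=false
```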

  • 2 Solutions for "exited: scrapy (exit status 0; not expected)"

    I found the solution to my problem. In supervisor.conf, change the command line to:

    command=/bin/bash -c "exec /tmp/start_spider.sh > /dev/null 2>&1 -DFOREGROUND"

    here is my start_spider.sh:

    #!/bin/bash
    # list letter
    parseLetter=('a' 'b')
    tLen=${#parseLetter[@]}
    # change path
    cd $path/scrapy/scrapyTodo/scrapyTodo
    for (( i=0; i<${tLen}; i++ )); do
        scrapy crawl root -a alpha=${parseLetter[$i]} &
    done
    wait  # keep the script in the foreground so supervisor can track it
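The FATAL state is consistent with the script backgrounding every spider with `&` and then exiting immediately, so supervisor sees its child die within `startsecs`. A minimal, self-contained sketch of the fix, using `sleep` as a stand-in for `scrapy crawl`:

```shell
#!/bin/bash
# Launch two background jobs, as start_spider.sh does with its spiders.
sleep 0.2 &
sleep 0.1 &
# Without this 'wait', the script exits immediately and supervisor
# reports "exited ... not expected". 'wait' blocks until all
# background children finish, keeping the supervised process alive.
wait
echo "all spiders finished"
```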

    here is my scrapy code:

    #!/usr/bin/python -tt
    # -*- coding: utf-8 -*-
    from scrapy.selector import Selector
    from elasticsearch import Elasticsearch
    from scrapy.contrib.spiders import CrawlSpider
    from scrapy.http import Request
    from urlparse import urljoin
    from bs4 import BeautifulSoup
    from scrapy.spider import BaseSpider
    from tools import sendEmail
    from tools import ElasticAction
    from tools import runlog
    from scrapy import signals
    from scrapy.xlib.pydispatch import dispatcher
    from datetime import datetime
    import re

    class studentCrawler(BaseSpider):
        # Crawling start time
        started_on = datetime.now()
        name = "root"
        DOWNLOAD_DELAY = 0
        allowed_domains = ['website.com']
        ES_Index = "website"
        ES_Type = "root"
        ES_Ip = ""
        child_type = "level1"
        handle_httpstatus_list = [404, 302, 503, 999, 200]  # add any other code you need
        es = ElasticAction(ES_Index, ES_Type, ES_Ip)

        # Init
        def __init__(self, alpha=''):
            base_domain = 'https://www.website.com/directory/student-' + str(alpha) + "/"
            self.start_urls = [base_domain]
            super(studentCrawler, self).__init__()

        def is_empty(self, any_structure):
            """Check whether the given data structure is empty."""
            if any_structure:
                return 1
            return 0

        def parse(self, response):
            """Main method that parses the web page."""
            if response.status in (404, 503, 999):
                self.es.insertIntoES(response.url, "False")
            if response.status == 200:
                # Selector
                sel = Selector(response)
                self.es.insertIntoES(response.url, "True")
                body = self.getAllTheUrl(u''.join(sel.xpath(".//*[@id='seo-dir']/div/div[3]").extract()).strip(), response.url)

        def getAllTheUrl(self, data, parent_id):
            soup = BeautifulSoup(data, 'html.parser')
            for a in soup.find_all('a', href=True):
                self.es.insertChildAndParent(self.child_type, str(a['href']), "False", parent_id)
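For what it's worth, the link extraction in getAllTheUrl boils down to the following BeautifulSoup pattern; the HTML fragment here is made up for illustration, standing in for the block the spider extracts with XPath:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment of the directory page the spider hands to BeautifulSoup.
html = '<div><a href="/student-a/1">One</a> <a href="/student-a/2">Two</a></div>'

soup = BeautifulSoup(html, 'html.parser')
# Collect the href of every anchor that actually has one, as getAllTheUrl does.
links = [a['href'] for a in soup.find_all('a', href=True)]
print(links)  # -> ['/student-a/1', '/student-a/2']
```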

    I discovered that BeautifulSoup was not working when the spiders were launched by supervisor.
