Docker container cannot resolve hostnames

I am creating a docker-compose.yaml file that has to create a local PyPI repository and a container with a dev application I am writing. The problem is that, in spite of the fact that I have created a custom network and specified hostnames, the containers cannot see each other.

More specifically, pypi has to be up and running; during my tests I ran it manually, and msalembic should use the local PyPI repository to load eggs. But msalembic can't see the pypi host.

  • version: '3'
    services:
      # Alembic MS
      msalembic:
        build:
          context: .
          dockerfile: AlembicMSDockerfile
        ports:
          - "5432:5432"
        hostname: alembicms
        volumes:
          - "${PWD}/msalembic/postgres/psql_data:/var/lib/postgresql/data"
        environment:
          POSTGRES_USER: ${PGUSER}
          POSTGRES_PASSWORD: ${PGPASSWORD}
          POSTGRES_DB: goodboy
          ENVIRONMENT: ${ENVIRONMENT}
        networks:
          - custom_network
        depends_on:
          - pypi
      # Private internal Pypi repository
      pypi:
        build:
          context: pypi
          dockerfile: Dockerfile
          args:
            HTACCESS: ${HTACCESS}
        hostname: pypi
        volumes:
          - "${PWD}/pypi/:/srv/pypi:rw"
        ports:
          - "9090:80"
        container_name: pypi
        networks:
          - custom_network
    networks:
      custom_network:
    

    Contents of the AlembicMSDockerfile:

    FROM python:3.6
    MAINTAINER Bruno Ripa <XXX>
    #RUN pip install -f http://pypi:9090 --trusted-host pypi alembicms
    RUN ping -c 3 pypi
    ENTRYPOINT ["alembicms"]
    

    Of course, I can browse and publish packages in the local PyPI repository.

    Thanks.

  • One solution for “Docker container cannot resolve hostnames”

    The port mapping you specified is only relevant when the ports of a container are exposed to the host network (e.g. if you want to access them from your host via http://localhost:9090).

    When containers talk to each other inside a Docker network, each service is reached on the ports the container actually listens on (the ones exposed in its Dockerfile). So, according to your port mapping `9090:80`, the pypi container listens on port 80, and that is the port you should use when accessing it from another container in the same Docker network.
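
    Concretely, the commented-out pip line in your Dockerfile would then point at port 80 instead of the host-mapped 9090 — a sketch, assuming a pypiserver-style `/simple/` index (that path is an assumption, adjust it to your repository):

    ```dockerfile
    # Inside the compose network, reach pypi on its container port (80),
    # not on the host-mapped port (9090).
    RUN pip install --index-url http://pypi:80/simple/ --trusted-host pypi alembicms
    ```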

    So, when you run the built image of AlembicMSDockerfile, you can access your pypi container on port 80 via the Docker network. In your particular case, however, you want to access the pypi container already at build time of the alembic ms image, and that is currently not supported by docker-compose, as you can read from the issues here.
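
    To convince yourself that this is a build-time limitation rather than a broken network, you can check that name resolution does work once the containers are running (service and hostname taken from your compose file):

    ```shell
    # Runs a one-off msalembic container attached to custom_network;
    # at run time the embedded DNS resolves the pypi service name.
    docker-compose run --rm --entrypoint "ping -c 1" msalembic pypi
    ```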

    Until that is released, you could run the pypi container on its own and build with `docker build --network=host --add-host=pypi_on_host:<yourhostip> --file=AlembicMSDockerfile .` (note that `docker build` spells the flag `--network`, not `--net`), and then modify the RUN command to use `http://pypi_on_host:9090` (here you need the host-mapped port, as you are no longer going through a Docker-internal network). Not really elegant, but at least there is no direct reference to your host IP in the Dockerfile then…
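
    Put together, the workaround might look like the sketch below; the image names are placeholders, and `<yourhostip>` must be filled in with your actual host address:

    ```shell
    # Run the pypi container on its own, publishing port 80 on host port 9090.
    docker run -d --name pypi -p 9090:80 my-pypi-image   # image name is a placeholder

    # Build the alembic ms image on the host network, mapping a neutral
    # hostname to the host's IP so the Dockerfile never hard-codes it.
    docker build --network=host --add-host=pypi_on_host:<yourhostip> \
        --file=AlembicMSDockerfile --tag msalembic .
    ```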
