How to use GitLab CI in combination with a JUnit Rule that launches Docker containers?

I have a set of integration tests that rely on a postgres database being available. In order for the tests to be independent, I am using this project to start a postgres docker container before the tests:

    @Rule
    public DockerRule postgresDockerRule = DockerRule
            .builder()
            .imageName("postgres:9")
            .expose(databaseConfig.port().toString(), "5432")
            .env("POSTGRES_PASSWORD", databaseConfig.password())
            .waitForMessage("PostgreSQL init process complete; ready for start up.", 60)
            .keepContainer(false)
            .build();

This works fine locally. The rule starts up the container, the tests are run and after the tests, the container is deleted.
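
For reference, a test in this setup looks roughly like the sketch below (an illustration only: databaseConfig is the same configuration object used by the rule above, and "postgres" is the default user and database name of the official image):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.Test;

    import static org.junit.Assert.assertTrue;

    public class PostgresIntegrationTest {

        // ... the @Rule DockerRule postgresDockerRule shown above ...

        @Test
        public void canQueryTheContainerizedDatabase() throws Exception {
            // The rule maps container port 5432 to databaseConfig.port() on the
            // Docker host, so with a local daemon the database is reachable via
            // localhost (with docker-machine it would be the machine's IP instead).
            String url = "jdbc:postgresql://localhost:" + databaseConfig.port() + "/postgres";
            try (Connection connection = DriverManager.getConnection(url, "postgres", databaseConfig.password());
                 Statement statement = connection.createStatement();
                 ResultSet resultSet = statement.executeQuery("SELECT 1")) {
                assertTrue(resultSet.next());
            }
        }
    }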

However, I am having trouble getting these tests to run on gitlab.com.

The tests always fail with the following exception (this is the end of a longer stack trace):

    Caused by: java.io.IOException: No such file or directory
    at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:94)
    at jnr.unixsocket.UnixSocketChannel.connect(UnixSocketChannel.java:102)
    at com.spotify.docker.client.ApacheUnixSocket.connect(ApacheUnixSocket.java:73)
    at com.spotify.docker.client.UnixConnectionSocketFactory.connectSocket(UnixConnectionSocketFactory.java:74)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
    at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
    at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
    at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
    at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:435)
    ... 21 more
    

The project providing the DockerRule uses the Spotify docker-client to connect to the remote API of the Docker daemon. (That is why it throws an IOException stating “No such file or directory” – it cannot find the socket.)
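
To illustrate where the failing connection comes from, here is a minimal sketch that uses the Spotify docker-client directly (my own illustration, not code from the DockerRule project): fromEnv() honours the DOCKER_HOST environment variable and otherwise falls back to the local unix socket, which is exactly what is missing on the shared runner.

    import com.spotify.docker.client.DefaultDockerClient;

    public class DockerDaemonCheck {
        public static void main(String[] args) throws Exception {
            // fromEnv() reads DOCKER_HOST (and DOCKER_CERT_PATH) and otherwise
            // defaults to unix:///var/run/docker.sock. If neither the variable
            // nor the socket is available, connecting fails with the
            // "No such file or directory" IOException shown above.
            DefaultDockerClient client = DefaultDockerClient.fromEnv().build();
            try {
                System.out.println(client.ping()); // "OK" when a daemon is reachable
            } finally {
                client.close();
            }
        }
    }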

My .gitlab-ci.yml file looks like this:

    stages:
      - build
      - deploy
    
    build_rest-api:
      image: openjdk:8
      stage: build
      script:
        - ./gradlew clean build -Dorg.gradle.parallel=true
      artifacts:
        when: always
        paths:
          - 'rest-api/build/distributions/*.zip'
          - '*/build/reports/*'
    
    deploy_on_development:
      image: governmentpaas/cf-cli
      stage: deploy
      before_script:
        - cf api ...
        - cf auth ...
        - cf target -o XXX -s development
      script:
        - cf push ....
      only:
        - master
    

What I would like to achieve is:

• Integration tests are run locally and during the CI process
• Integration tests connect to a real database
• No difference between local and CI test configuration

I thought about providing the postgres database as a service during the CI process using the services section of .gitlab-ci.yml. But that would mean I have to start a postgres database manually before I can run my integration tests locally. What I liked about the JUnit rule approach was that I could run my integration tests like any other tests, simply by having Docker running in the background.

It would be nice if someone could come up with a solution that allows me to connect to a Docker daemon during the CI process, but I am also happy about ideas on how to change my overall integration-testing setup to make this work.
