How to connect a Docker container with a pipeline

I would like to have a custom server listening inside a Docker container (e.g. on TCP 192.168.0.1:4000). How can I send data in and out from outside the container? I don't want to use host ports for bridging; I would rather use pipelines or something that does not take up host network resources. Please show me a full example of the docker command.

3 Solutions collected from the web for "How to connect a Docker container with a pipeline"

    You can use Docker volumes.

    You start your container as

    docker run -v /host/path:/container/path ...
    

    and then you can pipe data to files in /host/path and they will be visible in /container/path and vice versa.
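    For the "pipeline" part specifically, here is a hedged sketch: create a named pipe (FIFO) inside the bind-mounted directory and have the process in the container read from it. The busybox image and its cat command below are stand-ins for your actual server.

    # On the host: create the shared directory and a named pipe in it
    mkdir -p /host/path
    mkfifo /host/path/in

    # In the container: a stand-in "server" that reads the FIFO
    # through the bind mount
    docker run -d --name reader -v /host/path:/container/path busybox \
        cat /container/path/in

    # On the host: data written to the FIFO reaches the reader inside
    # the container, with no host port involved
    echo "Lots of data" > /host/path/in
    docker logs reader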

    As long as your server’s clients are docker containers as well, you don’t need to expose any host ports:

    docker run --name s1 -d --expose 3000 myserver
    
    docker run -d --link s1:serverName client
    

    Now you can reach your server from the client container at serverName:3000.
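    As a minimal end-to-end sketch, busybox's netcat can stand in for both the server and the client (the busybox image and the exact nc flags are illustrative and depend on the busybox build; they are not the asker's actual server):

    # Stand-in server: listen on port 3000 and print whatever arrives
    docker run -d --name s1 --expose 3000 busybox nc -l -p 3000

    # Stand-in client: resolve the link alias and send a line
    docker run --rm --link s1:serverName busybox \
        sh -c 'echo hello | nc serverName 3000'

    # The listener wrote the received data to its stdout
    docker logs s1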

    UPDATE: I just saw that you want to be able to send data from outside any container. You can still use the same approach, depending on your use case and data volume. Every time you want to send data, create a container that sends it. Using the CLI it might look like:

    echo "Lots of data" | docker run --rm --link s1:serverName client
    

    The client would have to read from stdin and send the data to serverName:3000. After it finishes, the container is automatically removed.
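    If you don't want to build a dedicated client image, something as small as netcat can play that role. A hedged sketch, with busybox's nc standing in for the hypothetical client image above:

    echo "Lots of data" | docker run --rm -i --link s1:serverName busybox \
        nc serverName 3000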

    I don’t think what you’re asking for makes sense. Let’s say you use a UNIX pipe to capture standard output from a docker container.

    $ docker run --rm -t busybox dd if=/dev/urandom count=1 > junk
    $ du -hs junk
    4.0K    junk
    

    If your docker client is connected to the docker host via TCP, of course that traffic uses the host's networking stack. Docker uses a method called hijacking to transport data on the same socket as the HTTP-ish connection between the client and the host.

    If your docker client is connected to the host via a Unix socket, then your client is on the host, and that pipeline is not using the TCP stack. But you still can't transport that data off the host without using the host's networking.

    So using the networking stack is unavoidable if you want to get data off the host. That said, if your criterion is just to avoid allocating additional ports, pipelines do allow you to reuse the original docker host socket instead of creating new ports. But pipelines aren't the same as a TCP socket, so your application needs to be designed to read standard input and write standard output.
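    As a concrete illustration of reusing the existing Docker socket as the transport, a hedged sketch (the container name s1 and the path /tmp/in are placeholders, and it assumes the container image ships a shell):

    # Pipe data into a running container over the Docker API socket,
    # without publishing any additional host port
    echo "Lots of data" | docker exec -i s1 sh -c 'cat >> /tmp/in'

    # Read data back out over the same socket
    docker exec s1 cat /tmp/in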

    One approach that grants you access to an already-created container’s internally-exposed ports is the ambassador container-linking pattern.
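    A hedged sketch of that pattern, assuming socat is available in some image (alpine/socat is used here purely as an example packaging, and s1 is the already-running server container):

    # The ambassador links to the real server and forwards its port
    docker run -d --name s1-ambassador --link s1:target \
        alpine/socat TCP-LISTEN:3000,fork,reuseaddr TCP:target:3000

    # Other containers link to the ambassador instead of s1
    docker run --rm -i --link s1-ambassador:serverName busybox \
        nc serverName 3000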
