Scaling Azure Container Service with private ports on containers

In our organization, we are currently trying out Azure Container Service with Docker Swarm. We have developed a Web API project based on .NET Core and built containers from it. The web API is exposed on the container's private port (3000). We want to scale this to, say, 15 containers across the three agent nodes while still reaching the web API through a single Azure load balancer URL on public port 8080.

I believe we would need an internal load balancer to do this, but there is no documentation around it. I have seen an article on this for DC/OS, but we are using Docker Swarm here. Any help?
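For reference, the setup described above could be sketched in a Compose file like the following. The image name is a placeholder, and the `deploy` section assumes Swarm mode; this is an illustrative sketch of the intended topology, not our exact file.

```yaml
version: "3"
services:
  webapi:
    image: myregistry/webapi:latest   # placeholder image name
    ports:
      - "8080:3000"   # public port 8080 -> container's private port 3000
    deploy:
      replicas: 15    # spread across the three agent nodes
```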

Answer:

    Azure Container Service uses vanilla Docker Swarm, so any load-balancing solution for Swarm will work in ACS, e.g.

    The same is true for DC/OS, but in that case it is documented in “Load balance containers in an Azure Container Service cluster” –
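As a hedged sketch of one such solution, assuming Swarm mode and its built-in routing mesh (the image name is a placeholder; a classic standalone Swarm, as ACS deployed at the time, would instead need a proxy such as HAProxy or nginx in front of the containers):

```shell
# Create the service: 15 replicas, ingress port 8080 routed to container port 3000.
# The routing mesh accepts traffic on 8080 on every node and balances it
# across all replicas, so a single Azure LB rule on 8080 is enough.
docker service create \
  --name webapi \
  --replicas 15 \
  --publish 8080:3000 \
  myregistry/webapi:latest   # placeholder image

# Scale up or down later without touching the published port
docker service scale webapi=20
```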
