Scaling Azure Container Service with private ports on containers

In our organization, we are currently trying out Azure Container Service with Docker Swarm. We have developed a Web API project based on .NET Core and built containers from it. The Web API is exposed on the container's private port (3000). We want to scale this to, say, 15 containers across three agent nodes while still accessing the Web API through a single Azure load balancer URL on public port 8080.

I believe we would need an internal load balancer to do this, but there is no documentation around it. I have seen an article describing this for DC/OS, but we are using Docker Swarm here. Any help?
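For context, here is a sketch of the kind of Compose definition involved (v1 syntax, which the ACS Swarm tooling of that era used; the image name is hypothetical). The container's private port 3000 is exposed but not bound to a fixed host port, so multiple replicas can coexist on one agent node:

```yaml
# docker-compose.yml sketch -- image name is a placeholder, not our real registry
webapi:
  image: ourregistry/webapi:latest   # hypothetical .NET Core Web API image
  expose:
    - "3000"                         # private port only; no host binding
```

With `DOCKER_HOST` pointed at the Swarm master (e.g. through an SSH tunnel), something like `docker-compose up -d` followed by `docker-compose scale webapi=15` would spread the replicas across the agent nodes; the open question is how public port 8080 reaches them.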

One solution for “Scaling Azure Container Service with private ports on containers”

Azure Container Service uses vanilla Docker Swarm, so any load-balancing solution that works with Swarm will also work in ACS, for example an NGINX or HAProxy reverse proxy running on the agent nodes.
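As a minimal sketch of that approach: an NGINX proxy container published on port 8080 on each agent node, with the Azure load balancer forwarding public 8080 to the agents. This assumes each API container publishes its private port 3000 on its host; the upstream IPs are hypothetical agent-node private addresses, and in practice a discovery tool would regenerate this list as Swarm starts and stops containers.

```nginx
# nginx.conf sketch for the front-end proxy (IPs are example values)
events {}

http {
    upstream webapi {
        server 10.0.0.4:3000;   # agent node 1 (hypothetical private IP)
        server 10.0.0.5:3000;   # agent node 2 (hypothetical private IP)
        server 10.0.0.6:3000;   # agent node 3 (hypothetical private IP)
    }

    server {
        listen 8080;            # matches the Azure LB's back-end port

        location / {
            proxy_pass http://webapi;
            proxy_set_header Host $host;
        }
    }
}
```

NGINX round-robins requests across the upstream servers by default, so the single public URL fans out over all API containers.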

The same is true for DC/OS, but in that case it is documented in “Load balance containers in an Azure Container Service cluster”.
