Which commands of the defined Linux Distribution are available in a Docker container?

I’m new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don’t really understand how deeply Docker emulates a specific Linux distribution. Let’s say we have a simple Dockerfile like this:

FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx

It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment. So I should be able to use apt-get as the default package manager of Ubuntu. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available as in a full VM with Ubuntu 16.10 installed?
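
One way I can think of to check this is to start a throwaway container and look for a specific command (just a sketch, I don’t know yet what is actually included):

docker run --rm ubuntu:16.10 bash -c 'command -v lsb_release || echo "lsb_release not found"'
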

The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container based on Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).

An Ubuntu image in Docker is about 150 MB, so I assume it does not include all the tools of a real installation. But how can I know which commands I can rely on being there?
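
Of course I can list what the image actually ships, for example with dpkg (which should be present, since apt-get is):

docker run --rm ubuntu:16.10 dpkg -l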

2 Solutions

    Base OS images for Docker are deliberately stripped down, and for Ubuntu more commands are removed with each new release. The image is meant as the base for a dedicated application to run; you wouldn’t typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.

    There isn’t a list of commands in each image version that I know of; you’ll only know by building your image. But when images are tagged, you can assume a future minor update will not break downstream images, which is a good argument for explicitly specifying a tag in your Dockerfile.

    For example, this Dockerfile builds correctly:

    FROM ubuntu:trusty
    RUN ping -c 1 127.0.0.1 
    

    This one fails:

    FROM ubuntu:xenial
    RUN ping -c 1 127.0.0.1
    

    That’s because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
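
    If you do need one of the removed tools, you can simply install it yourself. A minimal sketch that brings ping back on xenial (on Ubuntu the command comes from the iputils-ping package):

    FROM ubuntu:xenial
    RUN apt-get update && apt-get install -y iputils-ping
    RUN ping -c 1 127.0.0.1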

    A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
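
    You can verify this yourself: on any host that runs Docker (a CentOS box, say), the container still reports Ubuntu. A rough sketch:

    cat /etc/os-release                                # on the host: e.g. CentOS Linux 7
    docker run --rm ubuntu:16.10 cat /etc/os-release   # in the container: Ubuntu 16.10 (Yakkety Yak)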

    If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
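
    For example, to make sure the lsb_release command from the question is present, install the package that provides it (on Ubuntu that is lsb-release); a minimal sketch:

    FROM ubuntu:16.10
    RUN apt-get update && \
        apt-get install -y nginx lsb-release && \
        rm -rf /var/lib/apt/lists/*
    RUN lsb_release -a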

    One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
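
    With nginx, for instance, you would not run service nginx start; instead you run nginx as the container’s foreground process. A rough sketch:

    FROM ubuntu:16.10
    RUN apt-get update && apt-get install -y nginx
    # run nginx in the foreground as the container's main process, instead of "service nginx start"
    CMD ["nginx", "-g", "daemon off;"]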
