What is the need for Docker Daemon?

This is the Docker architecture:
[Docker architecture diagram: Docker client communicating with the Docker daemon]
I am not able to figure out why one needs the Docker daemon. The client seems good enough: it simply talks to the daemon over a Unix socket. It can also use TCP, but what I notice is that the client and daemon are usually on the same machine. So why two separate entities?
As mentioned above, the client can use TCP to communicate with the daemon. So what is the preferred way to work in a team? One daemon for the whole team on a separate server, with each dev running a client? Or should each dev have their own daemon process?

2 Solutions for "What is the need for Docker Daemon?"

The Docker client provides a CLI only; it is just an HTTP API wrapper, like the AWS CLI.

The Docker daemon is the brain behind the whole operation, like AWS itself. When you use the docker run command to start a container, the Docker client translates that command into an HTTP API call and sends it to the Docker daemon. The daemon then evaluates the request, talks to the underlying OS, and provisions your container.
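You can see this translation yourself: every CLI command corresponds to an HTTP call that can be made by hand with curl against the daemon's Unix socket. A minimal sketch, assuming the default socket path and an API version of v1.41 (your daemon may expose a different version; run docker version to check):

```shell
# Equivalent of `docker ps`: list running containers by calling the
# daemon's HTTP API directly over its Unix socket.
curl --unix-socket /var/run/docker.sock \
     http://localhost/v1.41/containers/json

# Equivalent of `docker images`: list local images.
curl --unix-socket /var/run/docker.sock \
     http://localhost/v1.41/images/json
```

The Docker CLI is doing exactly this, plus argument parsing and pretty-printing of the JSON replies.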

Please note the Docker client can connect to a remote Docker daemon, and you can configure the daemon to listen on TCP.
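A sketch of that remote setup, with a placeholder hostname (build-server.example.com) standing in for your server; note that port 2375 is unencrypted by convention, so anything beyond a trusted lab network should use 2376 with TLS:

```shell
# On the server: start the daemon listening on TCP in addition to the
# local Unix socket.
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On a developer workstation: point the client at the remote daemon.
export DOCKER_HOST=tcp://build-server.example.com:2375
docker ps    # now lists containers running on the remote daemon
```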

Q in my mind was: what is the preferred way to work in a team? One daemon for the whole team on a separate server, with each dev running a client? Or each dev has his own daemon.

This is up to you, but most of the time developers have a local Docker daemon and client and build images using Dockerfiles. If they need to share Docker images, you can run a local Docker registry or use a public one. This way, taking advantage of Docker, every developer has the exact same dev environment at their disposal, and that development environment will closely resemble the production environment.
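The local-registry workflow can be sketched as follows, assuming the official registry:2 image and a placeholder registry host (registry.internal:5000) and image name (myapp):

```shell
# Run a private registry for the team (the official registry image).
docker run -d -p 5000:5000 --name registry registry:2

# One developer builds from the shared Dockerfile and pushes the image.
docker build -t registry.internal:5000/myapp:1.0 .
docker push registry.internal:5000/myapp:1.0

# Teammates pull the identical image into their own local daemons.
docker pull registry.internal:5000/myapp:1.0
```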

Q in my mind was: what is the preferred way to work in a team? One daemon for the whole team on a separate server, with each dev running a client? Or each dev has his own daemon.

Each dev works with their own Docker daemon and containers: the idea with Docker is to be able to specify (in a Dockerfile) a container that each developer can rebuild and use locally, with the assurance that docker build will produce the exact same image.
Or they can docker push an image and reuse it on their own local Docker daemon instance.

But in any case, a Docker daemon runs per server, meaning you would share one only if the whole team worked against a common server. If not, each developer installs Docker on their own workstation, in which case each one has their own Docker daemon.
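If developers do want to switch between their local daemon and a shared one from a single client, docker context manages that; a sketch, again with a placeholder hostname:

```shell
# List the daemons this client knows about.
docker context ls

# Register a shared daemon (hostname is a placeholder).
docker context create shared \
    --docker host=tcp://build-server.example.com:2375

docker context use shared    # subsequent commands hit the shared daemon
docker context use default   # back to the local daemon
```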
