Cron in the same container works locally but fails in cluster

I have a super simple container with a cron job scheduled:

* * * * * root /bin/bash /alive.sh

alive.sh:

    #!/bin/bash
    
    /bin/echo "I'm alive"
    /bin/echo $(/bin/date) >> /tmp/alive.log
    

    I build the docker image locally and run it:

    docker build -t orian/crondemo:v0 .
    docker run --rm -it --name crondemo orian/crondemo:v0
    

    And after a minute or so I can check that a new file is being created:

    docker exec crondemo ls /tmp
    

    Then I tag and push the image to Google Container Registry:

    TAG=eu.gcr.io/<PROJECT_ID>/crondemo:v0
    docker tag orian/crondemo:v0 $TAG
    gcloud docker -- push $TAG
    

    Starting a pod manually:

    kubectl run crondemo --image=$TAG --replicas=1 --restart=Never
    

    And verifying that it works:

    kubectl exec crondemo ls /tmp
    

    And here is the problem: /tmp/alive.log is not being written. Where is the problem?

    I’ve prepared a repo with a sample: https://github.com/orian/k8s-cron-demo

    Notes:

    • I’ve also tried overwriting /var/spool/cron/crontabs/root, but it didn’t solve the problem.
    • I’m using the Docker image openjdk:8-jre. Before switching I used Alpine with crond, and it seemed to work then.

    Edit 2 – what I found (this is crazy):

    • https://forums.docker.com/t/running-cronjob-in-debian-jessie-container/17527/2
    • Issues running cron in Docker on different hosts
    • Why is it needed to set `pam_loginuid` to its `optional` value with docker?

  • Solution

    I followed https://stackoverflow.com/a/21928878/436754 to enable logging.

    /var/log/syslog then shows:

    May  4 12:33:05 crondemo rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="14" x-info="http://www.rsyslog.com"] start
    May  4 12:33:05 crondemo rsyslogd: imklog: cannot open kernel log(/proc/kmsg): Operation not permitted.
    May  4 12:33:05 crondemo rsyslogd-2145: activation of module imklog failed [try http://www.rsyslog.com/e/2145 ]
    May  4 12:33:08 crondemo cron[38]: (CRON) INFO (pidfile fd = 3)
    May  4 12:33:08 crondemo cron[39]: (CRON) STARTUP (fork ok)
    May  4 12:33:08 crondemo cron[39]: (*system*) NUMBER OF HARD LINKS > 1 (/etc/crontab)
    May  4 12:33:08 crondemo cron[39]: (*system*crondemo) NUMBER OF HARD LINKS > 1 (/etc/cron.d/crondemo)
    May  4 12:33:08 crondemo cron[39]: (CRON) INFO (Running @reboot jobs)
    May  4 12:34:01 crondemo cron[39]: (*system*) NUMBER OF HARD LINKS > 1 (/etc/crontab)
    May  4 12:34:01 crondemo cron[39]: (*system*crondemo) NUMBER OF HARD LINKS > 1 (/etc/cron.d/crondemo)
    

    This made me Google for cron "NUMBER OF HARD LINKS > 1", and I found: https://github.com/phusion/baseimage-docker/issues/198
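    For context, the check cron performs is easy to reproduce with plain files (the paths below are throwaway temp files, not part of the repo): Debian cron skips any crontab file whose inode has more than one hard link, which is exactly the condition behind the syslog message above.

    ```shell
    # Demonstrate the link-count check that cron performs on crontab files.
    tmpdir=$(mktemp -d)
    touch "$tmpdir/crondemo"
    before=$(stat -c %h "$tmpdir/crondemo")   # fresh file: link count 1, cron accepts it
    ln "$tmpdir/crondemo" "$tmpdir/extra-link"
    after=$(stat -c %h "$tmpdir/crondemo")    # now 2: cron logs NUMBER OF HARD LINKS > 1
    echo "hard links: before=$before after=$after"
    rm -r "$tmpdir"
    ```

    In the cluster, the image's storage driver gave /etc/crontab and /etc/cron.d/crondemo a link count greater than 1, so cron refused to run them.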

    The workaround is to modify the Dockerfile so the cron file is copied into /etc/cron.d at container start, rather than baked into that path by the image build (where the storage driver can give it a hard-link count greater than 1):

    • Dockerfile: COPY cronfile /cronfile
    • docker-entrypoint.sh: cp /cronfile /etc/cron.d/crondemo
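
    A minimal sketch of that workaround (the base image matches the question; the apt-get line, file modes, and `cron -f` invocation are assumptions, not taken from the repo):

    ```dockerfile
    FROM openjdk:8-jre
    RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

    # Stage the cron file OUTSIDE /etc/cron.d; the entrypoint copies it into
    # place at runtime, producing a fresh inode with a hard-link count of 1.
    COPY cronfile /cronfile
    COPY docker-entrypoint.sh /docker-entrypoint.sh
    RUN chmod +x /docker-entrypoint.sh

    ENTRYPOINT ["/docker-entrypoint.sh"]
    ```

    with a docker-entrypoint.sh along the lines of:

    ```bash
    #!/bin/bash
    set -e
    # cp creates a new inode, so cron no longer sees NUMBER OF HARD LINKS > 1
    cp /cronfile /etc/cron.d/crondemo
    chmod 0644 /etc/cron.d/crondemo
    exec cron -f
    ```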

    A branch with workaround: https://github.com/orian/k8s-cron-demo/tree/with-rsyslog
