Tag: kubernetes

Unable to start a pod in minikube by pulling an image from an external private registry

I have Ubuntu installed on my laptop. I started a private Docker registry (SSL enabled + htpasswd secured) and added it to an overlay network (so it can be accessed from other hosts/VMs). Here is the code (docker-compose.yaml): version: "3" services: registry: restart: always image: registry:2 ports: - 5000:5000 environment: REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt REGISTRY_HTTP_TLS_KEY: /certs/domain.key REGISTRY_AUTH: […]
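
For context, a TLS + htpasswd registry like this is usually configured along these lines, and Kubernetes then needs an image pull secret before a pod can use it. The sketch below is illustrative only; the hostname, credentials, secret name, and mount paths are placeholders, not values taken from the question.

    # docker-compose.yaml (illustrative registry:2 configuration)
    version: "3"
    services:
      registry:
        restart: always
        image: registry:2
        ports:
          - 5000:5000
        environment:
          REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
          REGISTRY_HTTP_TLS_KEY: /certs/domain.key
          REGISTRY_AUTH: htpasswd
          REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
          REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
        volumes:
          - ./certs:/certs
          - ./auth:/auth

    # On the Kubernetes side, a pull secret for the registry (placeholder values)
    kubectl create secret docker-registry my-registry-cred \
      --docker-server=myregistry.example.com:5000 \
      --docker-username=myuser --docker-password=mypass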

Using Kerberos in a container inside of OpenShift / Kubernetes

I was able to get Kerberos authentication to function inside a standalone Docker installation for our webapp. Our webapp is NodeJS. However, when I try to deploy the pod inside of OpenShift, it ceases to function. It appears the authentication headers get lost in translation. The exact same containerized app gives us this error […]

How do I restore a dump file from mysqldump using Kubernetes?

I know how to restore a dump file from mysqldump. Now I am attempting to do that using Kubernetes and a Docker container. The database files are on a persistent (NFS) mount. The container cannot be accessed from outside the cluster, as there is no need for anything external to touch it. I tried: kubectl run […]
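
Streaming the dump through kubectl exec is one common way to restore into a database that is only reachable inside the cluster. This is a minimal sketch; the pod name (mysql-0), database name (mydb), and the MYSQL_ROOT_PASSWORD variable are assumptions, not details from the question.

    # Pipe a local dump file into the mysql client running inside the pod
    kubectl exec -i mysql-0 -- \
      sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" mydb' < dump.sql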

How to debug Kubernetes system pod elections on masters?

I'm trying to make SSL work with Kubernetes, but am stuck on a leader election problem. I think I should be seeing scheduler and controller system pods somewhere, while all I have is this: kubectl get po --namespace=kube-system NAME READY STATUS RESTARTS AGE kube-apiserver-10.255.12.200 1/1 Running 0 18h kube-apiserver-10.255.16.111 1/1 Running 0 20h kube-apiserver-10.255.17.12 1/1 […]
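
On clusters of this vintage the scheduler and controller-manager record their leader lease as an annotation on Endpoints objects in kube-system, so inspecting those objects (plus the component logs on each master) usually shows which node holds the lock and whether lease renewal is failing. A sketch, assuming static-pod or self-hosted masters:

    # Show the current leader lease for each component
    kubectl get endpoints kube-scheduler -n kube-system -o yaml
    kubectl get endpoints kube-controller-manager -n kube-system -o yaml

    # Then check the scheduler/controller-manager logs on the master that holds
    # (or fails to acquire) the lease, e.g. via journalctl or docker logs.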

Why do kubernetes-scheduler and controller-manager sometimes stop on the etcd master (three nodes)?

I have built a master cluster (k8s) with three nodes, but there are two problems. The etcd log on every node reports two warnings: (1) apply entries took too long [11.167451ms for 1 entries] (2) failed to send out heartbeat on time. From googling I suspect the disk is too slow, but I can't resolve […]
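
Both warnings typically point at fsync latency on the etcd data disk. One way to confirm that, and to relax etcd's timing if the disk genuinely cannot keep up, is sketched below; the flag values are examples only, not recommendations for this cluster.

    # Check fsync/commit latency from etcd's metrics endpoint
    curl -s http://127.0.0.1:2379/metrics | \
      grep -E 'etcd_disk_wal_fsync_duration|etcd_disk_backend_commit_duration'

    # If needed, loosen the heartbeat/election timing (real etcd flags, example values)
    etcd --heartbeat-interval=250 --election-timeout=2500 ...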

Kubernetes Pod fails with CrashLoopBackOff

I'm following this guide in order to set up a pod using minikube and pull an image from a private repository hosted at hub.docker.com. When trying to set up the pod to pull the image, I see "CrashLoopBackOff". Pod config: apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: ha/prod:latest imagePullSecrets: […]
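
Two things are worth separating here: the secret referenced by imagePullSecrets must exist in the pod's namespace, and CrashLoopBackOff (unlike ImagePullBackOff) means the image was pulled and the container itself exited. A sketch of both checks; the secret name and credentials are placeholders:

    # Create the pull secret referenced by imagePullSecrets
    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<password> --docker-email=<email>

    # Inspect why the container exits
    kubectl describe pod private-reg
    kubectl logs private-reg --previous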

Automate adding new Node Exporters to the targets array of prometheus.yml

I have a basic prometheus.yml file in my environment, i.e.: apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: prometheus-core data: prometheus.yml: | global: scrape_interval: 10s scrape_timeout: 10s evaluation_interval: 10s rule_files: - '/etc/prometheus-rules/*.rules' scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' […]
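
Rather than regenerating a static targets list every time an exporter appears, Prometheus can discover node-exporter endpoints through Kubernetes service discovery. A minimal sketch of an extra scrape job, assuming the exporters sit behind a Service labelled app=node-exporter (that label is an assumption):

    # added under scrape_configs:
    - job_name: 'node-exporter'
      kubernetes_sd_configs:
        - role: endpoints
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_label_app]
          regex: node-exporter
          action: keep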

Kubernetes NFS PersistentVolumeClaim has status Pending

I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes. I set up the PersistentVolume as follows: apiVersion: v1 kind: PersistentVolume metadata: name: hq-storage-u4 namespace: my-ns spec: capacity: storage: 10Ti accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: path: /data/u4 server: 10.30.136.79 readOnly: false The PV looks OK in kubectl […]
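
A claim stays Pending until the cluster finds a PV whose access modes, capacity, and storageClassName all satisfy it (and, for NFS, the nfs client utilities must be installed on the nodes). A claim that should bind to the PV above might look like this sketch; the claim name is a placeholder:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hq-storage-u4-claim
      namespace: my-ns
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Ti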

Kubernetes ACS engine: containers (pods) do not have internet access

I'm using a Kubernetes cluster deployed on Azure using the ACS-engine. My cluster is composed of 5 nodes: 1 master (unix VM) (v1.6.2), 2 unix agents (v1.6.2), 2 windows agents (v1.6.0-alpha.1.2959+451473d43a2072). I have created a unix pod defined by the following YAML: Name: ping-with-unix Node: k8s-linuxpool1-25103419-0/10.240.0.5 Start Time: Fri, 30 Jun 2017 14:27:28 +0200 Status: […]
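
When pods lack internet access, it helps to separate DNS resolution from outbound routing before digging into the Azure/ACS network configuration. A quick sketch using a throwaway pod (busybox is just an example image):

    kubectl run net-test --image=busybox --restart=Never -it --rm -- sh
    # inside the pod:
    nslookup kubernetes.default     # in-cluster DNS
    nslookup www.google.com         # external DNS
    wget -qO- http://www.google.com > /dev/null && echo "outbound OK"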

CrashLoopBackOff in Spark cluster in Kubernetes: nohup: can't execute '--': No such file or directory

Dockerfile: FROM openjdk:8-alpine RUN apk update && \ apk add curl bash procps ENV SPARK_VER 2.1.1 ENV HADOOP_VER 2.7 ENV SPARK_HOME /opt/spark # Get Spark from US Apache mirror. RUN mkdir -p /opt && \ cd /opt && \ curl http://www.us.apache.org/dist/spark/spark-${SPARK_VER}/spark-${SPARK_VER}-bin-hadoop${HADOOP_VER}.tgz | \ tar -zx && \ ln -s spark-${SPARK_VER}-bin-hadoop${HADOOP_VER} spark && \ echo Spark […]
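
The "can't execute '--'" message is characteristic of BusyBox's nohup, which does not understand the -- separator that Spark's sbin/spark-daemon.sh passes; on an Alpine base image, installing GNU coreutils is one common workaround. A sketch of the change (not a confirmed fix for this exact image):

    FROM openjdk:8-alpine
    # coreutils provides GNU nohup, which accepts the "--" separator
    RUN apk update && \
        apk add curl bash procps coreutils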
