KUBERNETES INSTALLATION

1. Docker Swarm, Kubernetes and CoreOS Fleet Course (Updated 2017)

2. Kubernetes

3. Kubernetes Installation

Kubernetes requires a networking layer in which each node has a dedicated IP. The
Kubernetes network model requires that every pod and node be able to communicate
directly with every other pod and node in the cluster.
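As an illustration (not in the original notes): if the network layer has assigned a pod on another node the hypothetical IP 10.2.45.3, the model guarantees it is reachable directly, without NAT:

# run from any node or pod in the cluster; 10.2.45.3 is a made-up pod IP
ping -c 3 10.2.45.3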

Components:

Key/value database that stores the cluster information (etcd); a quick health check is sketched below.
Controllers: api server, controller manager and scheduler.
Workers: kubelet, proxy, and docker or rkt.
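For example, the state of the key/value store can be inspected with etcdctl (a sketch; it assumes etcd2 is listening on its default client port, as in the configs later in this section):

etcdctl cluster-health     # reports whether each etcd member is healthy
etcdctl ls / --recursive   # lists the keys currently stored in the cluster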

Different stages in the installation of Kubernetes:

Creating the cluster
Configuring the network layer
Deploying the Kubernetes services and other tools

A Kubernetes cluster consists of a key/value database and a collection of
controller and worker services.

Simple creation of a cluster

Using cloud-config:
The file used by this system initialization program is called a cloud-config file.

update: configures the update (reboot) strategy.
ssh_authorized_keys: passes the SSH keys for authorized users.
hostname: specifies the machine's hostname.
users: adds/removes users.
write_files: writes files with env variables or configuration content.
manage_etc_hosts: configures the contents of /etc/hosts.
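A minimal cloud-config combining the directives above might look like this (a sketch; all values are illustrative):

#cloud-config

hostname: "node-01"
manage_etc_hosts: "localhost"
ssh_authorized_keys:
  - "ssh-rsa AAAA...example user@host"
users:
  - name: "deploy"
    groups:
      - "sudo"
write_files:
  - path: "/etc/environment"
    content: |
      ENVIRONMENT=production
coreos:
  update:
    reboot-strategy: "etcd-lock"

A cloud-config can also declare systemd units, as in the following example, which starts a Redis container at boot: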

#cloud-config

coreos:
  units:
    - name: "docker-redis.service"
      command: "start"
      content: |
        [Unit]
        Description=Redis container
        Author=Me
        After=docker.service

        [Service]
        Restart=always
        ExecStart=/usr/bin/docker start -a redis_server
        ExecStop=/usr/bin/docker stop -t 2 redis_server
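Before booting a machine with this file, it can be sanity-checked locally with the coreos-cloudinit tool that ships with CoreOS (a sketch; the file path is illustrative):

coreos-cloudinit -validate -from-file=cloud-config.yml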

Using Inago:
With Inago, we can deploy a Kubernetes cluster using fleet with systemd units. For
example, this systemd unit creates an API server:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=etcd2.service
After=etcd2.service

[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=-/usr/bin/rm /opt/bin/kube-apiserver
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--allow_privileged=true \
--insecure_bind_address=0.0.0.0 \
--insecure_port=9090 \
--kubelet_https=true \
--service-cluster-ip-range=172.18.0.0/16 \
--etcd_servers=http://127.0.0.1:4001 \
--logtostderr=true
Restart=always
RestartSec=10
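Assuming the unit above is saved as kube-apiserver.service, it can be scheduled on the cluster with the usual fleet workflow (a sketch):

fleetctl submit kube-apiserver.service   # register the unit with fleet
fleetctl start kube-apiserver.service    # schedule and start it on a machine
fleetctl list-units                      # check where it ended up running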

Using Ignition:

Kubernetes master:

ignition_version: 1
systemd:
units:
- name: etcd2.service
enable: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_NAME={{.etcd_name}}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.ipv4_address}}:237
9"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.ipv4_address
}}:2380"
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
Environment="ETCD_LISTEN_PEER_URLS=http://{{.ipv4_address}}:2380"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
- name: fleet.service
enable: true
dropins:
- name: 40-fleet-metadata.conf
contents: |
[Service]
Environment="FLEET_METADATA={{.fleet_metadata}}"
- name: flanneld.service
dropins:
- name: 40-ExecStartPre-symlink.conf
contents: |
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
ExecStartPre=/opt/init-flannel
- name: docker.service
dropins:
- name: 40-flannel.conf
contents: |
[Unit]
Requires=flanneld.service
After=flanneld.service
- name: k8s-certs@.service
contents: |
[Unit]
Description=Fetch Kubernetes certificate assets
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
ExecStart=/usr/bin/bash -c "[ -f {{.k8s_cert_endpoint}}/tls/%i ] || curl {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
- name: k8s-assets.target
contents: |
[Unit]
Description=Load Kubernetes Assets
Requires=k8s-certs@apiserver.pem.service
After=k8s-certs@apiserver.pem.service
Requires=k8s-certs@apiserver-key.pem.service
After=k8s-certs@apiserver-key.pem.service
Requires=k8s-certs@ca.pem.service
After=k8s-certs@ca.pem.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=flanneld.service
After=flanneld.service
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION={{.k8s_version}}
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=false \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override={{.ipv4_address}} \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: k8s-addons.service
enable: true
contents: |
[Unit]
Description=Kubernetes Addons
Requires=kubelet.service
After=kubelet.service
[Service]
Type=oneshot
ExecStart=/opt/k8s-addons
[Install]
WantedBy=multi-user.target

storage:
{{ if .pxe }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{else}}
filesystems:
- device: "/dev/disk/by-label/ROOT"
format: "ext4"
{{end}}
files:
- path: /etc/kubernetes/manifests/kube-proxy.yaml
contents: |
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:{{.k8s_version}}
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
- --proxy-mode=iptables
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- path: /etc/kubernetes/manifests/kube-apiserver.yaml
contents: |
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:{{.k8s_version}}
command:
- /hyperkube
- apiserver
- --bind-address=0.0.0.0
- --etcd-servers={{.k8s_etcd_endpoints}}
- --allow-privileged=true
- --service-cluster-ip-range={{.k8s_service_ip_range}}
- --secure-port=443
- --advertise-address={{.ipv4_address}}
- --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
ports:
- containerPort: 443
hostPort: 443
name: https
- containerPort: 8080
hostPort: 8080
name: local
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- path: /etc/flannel/options.env
contents: |
FLANNELD_IFACE={{.ipv4_address}}
FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
- path: /etc/kubernetes/manifests/kube-controller-manager.yaml
contents: |
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:{{.k8s_version}}
command:
- /hyperkube
- controller-manager
- --master=http://127.0.0.1:8080
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
initialDelaySeconds: 15
timeoutSeconds: 1
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- path: /etc/kubernetes/manifests/kube-scheduler.yaml
contents: |
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:{{.k8s_version}}
command:
- /hyperkube
- scheduler
- --master=http://127.0.0.1:8080
- --leader-elect=true
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 1
- path: /srv/kubernetes/manifests/kube-dns-rc.json
contents: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v11"
},
"name": "kube-dns-v11",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "kube-dns",
"version": "v11"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v11"
}
},
"spec": {
"containers": [
{
"command": [
"/usr/local/bin/etcd",
"-data-dir",
"/var/etcd/data",
"-listen-client-urls",
"http://127.0.0.1:2379,http://127.0.0.1:4001",
"-advertise-client-urls",
"http://127.0.0.1:2379,http://127.0.0.1:4001",
"-initial-cluster-token",
"skydns-etcd"
],
"image": "gcr.io/google_containers/etcd-amd64:2.2.1",
"name": "etcd",
"resources": {
"limits": {
"cpu": "100m",
"memory": "500Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"mountPath": "/var/etcd/data",
"name": "etcd-storage"
}
]
},
{
"args": [
"--domain=cluster.local"
],
"image": "gcr.io/google_containers/kube2sky:1.14",
"livenessProbe": {
"failureThreshold": 5,
"httpGet": {
"path": "/healthz",
"port": 8080,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "kube2sky",
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 30,
"timeoutSeconds": 5
},
"resources": {
"limits": {
"cpu": "100m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
},
{
"args": [
"-machines=http://127.0.0.1:4001",
"-addr=0.0.0.0:53",
"-ns-rotate=false",
"-domain=cluster.local."
],
"image": "gcr.io/google_containers/skydns:2015-10-13-8
c72f8c",
"name": "skydns",
"ports": [
{
"containerPort": 53,
"name": "dns",
"protocol": "UDP"
},
{
"containerPort": 53,
"name": "dns-tcp",
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "100m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
},
{
"args": [
"-cmd=nslookup kubernetes.default.svc.cluster.local
127.0.0.1 >/dev/null",
"-port=8080"
],
"image": "gcr.io/google_containers/exechealthz:1.0",
"name": "healthz",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "10m",
"memory": "20Mi"
},
"requests": {
"cpu": "10m",
"memory": "20Mi"
}
}
}
],
"dnsPolicy": "Default",
"volumes": [
{
"emptyDir": {},
"name": "etcd-storage"
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/kube-dns-svc.json
contents: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "kube-dns",
"namespace": "kube-system",
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/name": "KubeDNS",
"kubernetes.io/cluster-service": "true"
}
},
"spec": {
"clusterIP": "{{.k8s_dns_service_ip}}",
"ports": [
{
"protocol": "UDP",
"name": "dns",
"port": 53
},
{
"protocol": "TCP",
"name": "dns-tcp",
"port": 53
}
],
"selector": {
"k8s-app": "kube-dns"
}
}
}
- path: /srv/kubernetes/manifests/heapster-rc.json
contents: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "heapster",
"kubernetes.io/cluster-service": "true"
},
"name": "heapster-v1.0.2",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "heapster"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "heapster",
"kubernetes.io/cluster-service": "true"
}
},
"spec": {
"containers": [
{
"command": [
"/heapster",
"--source=kubernetes.summary_api:''",
"--metric_resolution=60s"
],
"image": "gcr.io/google_containers/heapster:v1.0.2",
"name": "heapster",
"resources": {
"limits": {
"cpu": "100m",
"memory": "208Mi"
},
"requests": {
"cpu": "100m",
"memory": "208Mi"
}
}
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/heapster-svc.json
contents: |
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heapster",
"namespace": "kube-system",
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Heapster"
}
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": 8082
}
],
"selector": {
"k8s-app": "heapster"
}
}
}
- path: /srv/kubernetes/manifests/kube-system.json
contents: |
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "kube-system"
}
}
- path: /opt/init-flannel
mode: 0544
contents: |
#!/bin/bash
function init_flannel {
echo "Waiting for etcd..."
while true
do
IFS=',' read -ra ES <<< "{{.k8s_etcd_endpoints}}"
for ETCD in "${ES[@]}"; do
echo "Trying: $ETCD"
if [ -n "$(curl --silent "$ETCD/v2/machines")" ]; then
local ACTIVE_ETCD=$ETCD
break
fi
sleep 1
done
if [ -n "$ACTIVE_ETCD" ]; then
break
fi
done
RES=$(curl --silent -X PUT -d "value={\"Network\":\"{{.k8s_pod_network}}\",\"Backend\":{\"Type\":\"vxlan\"}}" "$ACTIVE_ETCD/v2/keys/coreos.com/network/config?prevExist=false")
if [ -z "$(echo $RES | grep '"action":"create"')" ] && [ -z "$(echo $RES | grep 'Key already exists')" ]; then
echo "Unexpected error configuring flannel pod network: $RES"
fi
}
init_flannel
- path: /opt/k8s-addons
mode: 0544
contents: |
#!/bin/bash
echo "Waiting for Kubernetes API..."
until curl --silent "http://127.0.0.1:8080/version"
do
sleep 5
done
echo "K8S: kube-system namespace"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /s
rv/kubernetes/manifests/kube-system.json)" "http://127.0.0.1:8080/api/v1/namespa
ces" > /dev/null
echo "K8S: DNS addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /s
rv/kubernetes/manifests/kube-dns-rc.json)" "http://127.0.0.1:8080/api/v1/namespa
ces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /s
rv/kubernetes/manifests/kube-dns-svc.json)" "http://127.0.0.1:8080/api/v1/namesp
aces/kube-system/services" > /dev/null
echo "K8S: Heapster addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /s
rv/kubernetes/manifests/heapster-rc.json)" "http://127.0.0.1:8080/api/v1/namespa
ces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /s
rv/kubernetes/manifests/heapster-svc.json)" "http://127.0.0.1:8080/api/v1/namesp
aces/kube-system/services" > /dev/null

networkd:
units:
- name: 00-{{.networkd_name}}.network
contents: |
[Match]
Name={{.networkd_name}}
[Network]
Gateway={{.networkd_gateway}}
DNS={{.networkd_dns}}
DNS=8.8.8.8
Address={{.networkd_address}}

{{ if .ssh_authorized_keys }}
passwd:
users:
- name: core
ssh_authorized_keys:
{{ range $element := .ssh_authorized_keys }}
- {{$element}}
{{end}}
{{end}}
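The {{.name}} placeholders in this config are Go-template variables that the provisioning tool substitutes before the node boots. Once the master is up, a quick check that the API server answers on the insecure local port configured above (the same endpoint the k8s-addons script polls) is:

curl http://127.0.0.1:8080/version   # should return the Kubernetes version JSON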

Kubernetes worker:

---
ignition_version: 1
systemd:
units:
- name: etcd2.service
enable: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_NAME={{.etcd_name}}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.ipv4_address}}:237
9"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.ipv4_address
}}:2380"
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
Environment="ETCD_LISTEN_PEER_URLS=http://{{.ipv4_address}}:2380"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
- name: fleet.service
enable: true
dropins:
- name: 40-fleet-metadata.conf
contents: |
[Service]
Environment="FLEET_METADATA={{.fleet_metadata}}"
- name: flanneld.service
dropins:
- name: 40-ExecStartPre-symlink.conf
contents: |
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
- name: docker.service
dropins:
- name: 40-flannel.conf
contents: |
[Unit]
Requires=flanneld.service
After=flanneld.service
- name: k8s-certs@.service
contents: |
[Unit]
Description=Fetch Kubernetes certificate assets
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
ExecStart=/usr/bin/bash -c "[ -f {{.k8s_cert_endpoint}}/tls/%i ] || curl {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
- name: k8s-assets.target
contents: |
[Unit]
Description=Load Kubernetes Assets
Requires=k8s-certs@worker.pem.service
After=k8s-certs@worker.pem.service
Requires=k8s-certs@worker-key.pem.service
After=k8s-certs@worker-key.pem.service
Requires=k8s-certs@ca.pem.service
After=k8s-certs@ca.pem.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION={{.k8s_version}}
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.k8s_controller_endpoint}} \
--register-node=true \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override={{.ipv4_address}} \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

storage:
{{ if .pxe }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{else}}
filesystems:
- device: "/dev/disk/by-label/ROOT"
format: "ext4"
{{end}}
files:
- path: /etc/kubernetes/worker-kubeconfig.yaml
contents: |
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
user:
client-certificate: /etc/kubernetes/ssl/worker.pem
client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-context
current-context: kubelet-context
- path: /etc/kubernetes/manifests/kube-proxy.yaml
contents: |
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:{{.k8s_version}}
command:
- /hyperkube
- proxy
- --master={{.k8s_controller_endpoint}}
- --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
- --proxy-mode=iptables
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/worker-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
volumes:
- name: "ssl-certs"
hostPath:
path: "/usr/share/ca-certificates"
- name: "kubeconfig"
hostPath:
path: "/etc/kubernetes/worker-kubeconfig.yaml"
- name: "etc-kube-ssl"
hostPath:
path: "/etc/kubernetes/ssl"
- path: /etc/flannel/options.env
contents: |
FLANNELD_IFACE={{.ipv4_address}}
FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}

networkd:
units:
- name: 00-{{.networkd_name}}.network
contents: |
[Match]
Name={{.networkd_name}}
[Network]
Gateway={{.networkd_gateway}}
DNS={{.networkd_dns}}
DNS=8.8.8.8
Address={{.networkd_address}}

{{ if .ssh_authorized_keys }}
passwd:
users:
- name: core
ssh_authorized_keys:
{{ range $element := .ssh_authorized_keys }}
- {{$element}}
{{end}}
{{end}}
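Once a worker boots with this config, its kubelet registers itself against the controller endpoint. From the master, registration can be verified through the API (a sketch, using the insecure local port from the master config):

curl http://127.0.0.1:8080/api/v1/nodes   # each registered worker appears here
# or, with kubectl configured against the master:
kubectl get nodes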

Other examples: a Kubernetes cluster on CoreOS using Docker
