Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year
on Fedora Magazine in the article How to turn on an LED with Fedora IoT. Since then, it has continued
to improve together with Fedora Silverblue to provide an immutable base operating system aimed at
container-focused workflows.
Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used
on powerful hardware handling huge workloads. However, it can also be used on lightweight devices
such as the Raspberry Pi 3. Read on to find out how.
Why Kubernetes?
While Kubernetes is all the rage in the cloud, it may not be immediately obvious why you would run it on a small
single board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn
and get familiar with Kubernetes without the need for expensive hardware. Second, because of its
popularity, there are tons of applications that come pre-packaged for running in Kubernetes clusters.
Not to mention the large community that can provide help if you ever get stuck.
Last but not least, container orchestration may actually make things easier, even at the small scale in a
home lab. This may not be apparent when tackling the learning curve, but these skills will help
when dealing with any cluster in the future. It doesn’t matter if it’s a single node Raspberry Pi cluster or
a large scale machine learning farm.
On the machine that will act as the server, install k3s with its one-line installation script:

curl -sfL https://get.k3s.io | sh -

This will download, install and start up k3s. After installation, get a list of nodes from the server by
running the following command:
kubectl get nodes
Note that there are several options that can be passed to the installation script through environment
variables. These can be found in the documentation. And of course, there is nothing stopping you from
installing k3s manually by downloading the binary directly.
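For example, two options documented for the installation script are INSTALL_K3S_VERSION and K3S_KUBECONFIG_MODE. A rough sketch of how they are passed (check the k3s documentation for the authoritative list and current values):

```shell
# Pin a specific k3s release instead of installing the latest one
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.0 sh -

# Make the generated kubeconfig readable by non-root users
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -
```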
While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily,
adding another node is no harder than setting up the first one. Just pass two environment variables to
the installation script to make it find the first node and avoid running the server part of k3s:
curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
K3S_TOKEN=XXX sh -
The example-url above should be replaced by the IP address or fully qualified domain name of the first
node. On that node the token (represented by XXX) is found in the file
/var/lib/rancher/k3s/server/node-token.
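Put together, joining a second node might look like the following sketch, where node-1 is a placeholder host name for the first node:

```shell
# On the first node: print the join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new node: join the cluster, pointing K3S_URL at the first node
# and pasting the token printed above in place of the placeholder
curl -sfL https://get.k3s.io | K3S_URL=https://node-1:6443 \
    K3S_TOKEN=<token-from-first-node> sh -

# Back on the first node: both nodes should now be listed
kubectl get nodes
```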
To test the cluster, start a simple web server:

kubectl create deployment my-server --image nginx

This will create a Deployment named “my-server” from the container image “nginx” (defaulting to
Docker Hub as registry and the latest tag). You can see the Pod created by running the following
command.
kubectl get pods
In order to access the nginx server running in the pod, first expose the Deployment through a Service.
The following command will create a Service with the same name as the deployment.
kubectl expose deployment my-server --port 80
The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running
a second Pod, we will be able to curl the nginx server just by specifying my-server (the name of the
Service). See the example below for how to do this.
# Start a pod and run bash interactively in it
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
# Wait for the bash prompt to appear
curl my-server
# You should get the "Welcome to nginx!" page as output
The ingress controller is already exposed with this load balancer service. You can find the IP address
that it is using with the following command.
$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                      33d
default       my-server    ClusterIP      10.43.174.38    <none>        80/TCP                       30m
kube-system   kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       33d
kube-system   traefik      LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d
Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
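The Ingress snippet itself is not reproduced here, but based on the names used in this example it would look roughly like the following reconstruction, using the networking.k8s.io/v1beta1 Ingress API that was current at the time of writing and the host name from the example:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-server
spec:
  rules:
  - host: my-server.10.0.0.8.xip.io
    http:
      paths:
      - backend:
          serviceName: my-server
          servicePort: 80
```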
Save the above snippet in a file named my-ingress.yaml and add it to the cluster by running this
command:
kubectl apply -f my-ingress.yaml
You should now be able to reach the default nginx welcome page on the fully qualified domain name
you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the
requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to
the Service and port defined as backend in the Ingress (my-server and 80 in this case).
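Because the routing decision is based purely on the Host header, you can also verify it without relying on DNS, by sending the header explicitly to the Traefik IP (10.0.0.8 in this example):

```shell
# Ask the ingress controller directly; the Host header is what it routes on
curl -H "Host: my-server.10.0.0.8.xip.io" http://10.0.0.8/
```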
Once they are labeled, it is easy to select suitable nodes for your workload with nodeSelectors. The
final piece of the puzzle, if you want to run your Pods on all suitable nodes, is to use DaemonSets
instead of Deployments. In other words, create one DaemonSet for each data-collecting application that
uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper
hardware.
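As a sketch, a node with a temperature sensor could be labeled and then targeted like this; the node name, label key/value, and image name below are made up for illustration:

```yaml
# First label the node (hypothetical node name and label):
#   kubectl label nodes node-2 sensor=temperature
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: temperature-reader
spec:
  selector:
    matchLabels:
      app: temperature-reader
  template:
    metadata:
      labels:
        app: temperature-reader
    spec:
      nodeSelector:
        sensor: temperature        # only schedule on nodes labeled above
      containers:
      - name: reader
        image: example/temperature-reader:latest   # hypothetical image
```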
The service discovery feature that allows Pods to find each other simply by Service name makes it
quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP
addresses or custom ports for the applications. Instead, they can easily find each other through named
Services in the cluster.
Summary
While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it
certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems.
Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but
it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare
resources.
Container technology made it possible to build applications that could “run anywhere”. Now
Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on,
we have Fedora IoT.