Access Kubernetes at scale
Once an application is deployed, it is accessible within the cluster. You may want to allow external consumers, such as a web service, to access it as part of the overall business logic, which means we need to open it to the outside world. There are three different ways you can expose your app with Kubernetes:
- NodePort
- LoadBalancer
- Ingress
What is a NodePort and how does it work?
After an application is deployed you can see it as a running service using kubectl get svc. The CLUSTER-IP shown is the service IP, which by default is internal only. The first option for turning it into an externally reachable service is a NodePort. So what is a NodePort and how does it work? When you create a service for a pod, the pod is accessible by other pods using the service IP and port. When you create a NodePort service, Kubernetes creates the internal service and, in addition, opens a port on the node, and yes, on each worker node in the cluster, to make the service accessible from the outside. These ports are called NodePorts. Note that there is a dedicated port range for NodePorts, 30000–32767, that must be open in your security group.
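A minimal sketch of such a service, assuming an nginx Deployment whose pods are labeled app: nginx (the names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx          # assumes pods labeled app: nginx
  ports:
    - port: 80          # internal service port
      targetPort: 80    # container port
      nodePort: 30080   # must fall within the 30000-32767 range
```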
Once the above YAML is deployed, the result is:
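Something like the following illustrative kubectl get svc output (the IP, port, and age values will differ in your cluster):

```
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.96.120.15   <none>        80:30080/TCP   1m
```

The app is now reachable from outside at http://<node-ip>:30080.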
The nginx service is now accessible outside the cluster. Great, now we can access the app, but using a raw IP address is not the right way to consume an application. Also, in an environment with many applications, using NodePort is not manageable: it opens many ports on the worker nodes that need to be protected and monitored, so it is neither scalable nor manageable. Using NodePort is OK for testing but definitely not for production. The solution is another service type, a LoadBalancer.
LoadBalancer service type
Let’s understand how a load balancer (LB) works in Kubernetes. The LB is created in front of the cluster nodes, accepts requests from outside the cluster as an entry point, and balances each request to a NodePort on one of the worker nodes, which then forwards the request to the internal service IP. One thing to remember: NodePort and LoadBalancer do not replace each other, they are built on top of each other. Note that the LB itself is created outside the cluster, meaning it is an external entity that needs to be created by the cloud admin. That is why you may see the external IP stuck in pending: kubectl get service will show the output below, where only the Kubernetes LoadBalancer service is deployed, and it stays pending until the external LB is created. The external IP address is not configured by Kubernetes.
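Illustrative output while the external LB has not yet been provisioned (values will differ in your cluster):

```
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.96.120.15   <pending>     80:30080/TCP   1m
```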
Deploying a LoadBalancer service is very straightforward: the same YAML file that deploys the service requires a single change, spec.type: LoadBalancer. All other parameters stay the same.
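A minimal sketch of the same illustrative service with only the type changed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer    # was NodePort; everything else is unchanged
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # still allocated; the external LB forwards to it
```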
So the question is, who is responsible for creating the load balancer? Load balancers are created by the cloud platforms as a service. If you plan to run Kubernetes on bare metal, you will have to deploy an on-prem LB yourself. If you are using a managed service, the cloud platform will create the LB for you automatically. AWS creates one LB by default while Azure creates two, and those are different perspectives from each cloud: AWS wants to start small and scale, while Azure assumes that you are here to scale and saves you the later labor by adding an additional LB up front. When setting up the LB, only worker nodes should be registered, and the port it routes to should be the service’s NodePort in the 30000–32767 range. Accessing the app via the LB DNS name should then work.
Why a 3rd access option?
A LoadBalancer is a great solution that simplifies access to and management of an app deployed on Kubernetes, but if we follow this best practice we will end up with several LBs, since each app requires its own. That is not scalable: you may find yourself with tens of LBs, each an entry point with its own domain name, while users are looking for a single entry point. We also need to configure those domain names, and each LB still exposes its own NodePort, so there are multiple issues to solve; it is both a FinOps problem and a management problem. One workaround is a hierarchy of LBs in which the one at the top handles subdomain names or URL paths and distributes traffic to the LB that serves each app. Looking at all the management complexity and cost involved in running such an environment, we need a more elegant solution that will both load balance and secure the connection. That solution is Ingress.
Ingress
Ingress lets us configure routing to different services and applications using a Kubernetes-native component, as well as configure a secure connection to any of your applications. Since Ingress resides inside the cluster, exposing it externally still requires a load balancer, which becomes the single entry point to the cluster. This simplifies the architecture, its management, and its scaling, as we need only one LB instead of one LB per app as described above.
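A minimal sketch of an Ingress, with an illustrative host and service name matching the discussion below (this uses the older networking.k8s.io/v1beta1 backend style with serviceName; newer clusters use the networking.k8s.io/v1 API, where the backend is written as service.name and service.port):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-internal-service
              servicePort: 8080
```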
Looking at the Ingress YAML, you can see that Ingress is its own kind, not a Service, and that in the spec we can set rules. These are routing rules defining that all requests to the host (myapp.com) must be forwarded to the internal service (serviceName: myapp-internal-service). The paths are URL paths you can define as part of your web configuration. Do not get confused by the http key you see in the YAML: it is not the external protocol that interacts with the end user’s browser; that external connection is handled by the LB. The Ingress backend rule must match the internal service’s YAML metadata (its name) and the port it should use.
To make it all work you must deploy an Ingress controller, a pod that runs on a node in the Kubernetes cluster, evaluates and processes the Ingress rules, and manages redirections. That makes the Ingress controller the Kubernetes cluster’s entry point. There are many third-party implementations you can select from; there is an NGINX Ingress controller from the Kubernetes project, but there are others as well. Here are a few more use cases for more fine-grained routing to applications inside the Kubernetes cluster. The first one is defining multiple paths on the same host; the YAML should declare those paths in the following structure:
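A sketch of that structure, with hypothetical paths and backend service names:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /shopping               # hypothetical application paths
            backend:
              serviceName: shopping-service
              servicePort: 8080
          - path: /analytics
            backend:
              serviceName: analytics-service
              servicePort: 3000
```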
Another use case is using a subdomain per application instead of a URL path per application, as shown in the YAML below.
Instead of having one host, you will have several hosts in your YAML.
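A sketch with hypothetical subdomains, one host per application:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: shopping.myapp.com            # one host per application
      http:
        paths:
          - backend:
              serviceName: shopping-service
              servicePort: 8080
    - host: analytics.myapp.com
      http:
        paths:
          - backend:
              serviceName: analytics-service
              servicePort: 3000
```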
TLS certificates
To configure HTTPS in Ingress you need to define a tls attribute above the rules section, add your host, and set secretName, which references a Secret you must create within the cluster to hold the TLS certificate. The Secret configuration will look like this:
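(A sketch; the Secret name is illustrative and the data values are placeholders, not real certificate material.)

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default                      # must be the same namespace as the Ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # actual file contents, not a file path
  tls.key: <base64-encoded key>
```

And the matching tls section in the Ingress spec, above the rules:

```yaml
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-secret-tls        # matches the Secret's metadata.name
  rules:
    # ... routing rules as shown earlier
```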
Note how the Secret’s metadata name matches the secretName in the Ingress YAML. Also, tls.crt and tls.key must contain the actual base64-encoded file contents, not file paths. The namespace must be set as well, and the Secret must be created in that namespace. You cannot reference Secrets from another namespace, so each namespace needs its own Secret and therefore its own certificate.