Deploying and Managing Applications with GKE, Helm, and Kubernetes Services & Persistent Volumes

Google Kubernetes Engine (GKE) is a powerful, managed Kubernetes service that simplifies container orchestration in the cloud. With Kubernetes, you can easily manage and scale containerized applications. One of the most efficient ways to deploy applications on GKE is using Helm, a Kubernetes package manager that streamlines deploying complex applications through reusable charts.
In this blog post, we’ll walk through how to use GKE, Helm, Kubernetes Services, and Persistent Volume Claims (PVCs) to deploy, expose, and persist applications. This tutorial will give you a comprehensive understanding of how to streamline your deployments and manage your infrastructure effectively.
What You Need
- A Google Cloud account with GKE enabled.
- Helm installed on your local machine.
- kubectl configured to interact with your GKE cluster.
- A Kubernetes cluster running on GKE.
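If kubectl isn’t pointed at your GKE cluster yet, you can fetch credentials with gcloud (the cluster name and zone below are placeholders — substitute your own):
gcloud container clusters get-credentials my-cluster --zone us-central1-a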
Let’s dive into the steps of setting up a simple web application using Helm, with a focus on deploying a containerized app, exposing it through a Kubernetes Service, and configuring Persistent Volumes for data storage.
Step 1: Setting Up Helm
First, ensure Helm is installed on your machine. You can install Helm by following the official installation guide at https://helm.sh/docs/intro/install/.
Once Helm is installed, add a chart repository and update the local index so Helm can find public charts (with Helm 3 there is no separate init step):
helm repo add stable https://charts.helm.sh/stable
helm repo update
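You can also quickly verify that the Helm client is working before moving on:
helm version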
Step 2: Creating the Helm Chart for Your Application
Helm charts provide a convenient way to deploy complex applications. A chart typically includes several Kubernetes resource definitions, such as deployments, services, config maps, secrets, and persistent volume claims. For this example, let’s assume we want to deploy a simple FastAPI application.
Create a Helm chart for your application:
helm create fastapi-app
This command will generate a directory structure with all the necessary files for your Helm chart. You can now modify the deployment, service, and PVC configurations within this chart to suit your application’s needs.
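The generated layout looks roughly like this (abridged — helm create also adds helper templates, test hooks, and a NOTES.txt, and the exact files vary by Helm version):
fastapi-app/
  Chart.yaml
  values.yaml
  templates/
    deployment.yaml
    service.yaml
    _helpers.tpl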
Step 3: Defining the Deployment in Helm
A Deployment in Kubernetes manages the desired state for a set of pods. For our FastAPI application, we’ll define the necessary configuration in the deployment.yaml file, located under the templates/ directory of the Helm chart.
Modify templates/deployment.yaml to look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "fastapi-app.name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "fastapi-app.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "fastapi-app.name" . }}
    spec:
      containers:
        - name: fastapi-app
          image: "my-fastapi-app:latest"
          ports:
            - containerPort: 80
This deployment.yaml defines the FastAPI application container, including the image and port configuration. It also specifies one replica for simplicity; you can adjust this to scale your app later.
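Before installing anything, it can help to render the chart locally and inspect the manifests Helm will apply — this catches template errors early:
helm template fastapi-app ./fastapi-app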
Step 4: Defining the Service in Helm
In Kubernetes, a Service exposes a set of pods behind a stable network endpoint. We’ll define a ClusterIP service to expose the FastAPI app within the GKE cluster.
Modify templates/service.yaml to look like this:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "fastapi-app.name" . }}
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: {{ include "fastapi-app.name" . }}
This YAML file defines a ClusterIP service for the FastAPI app. It uses the same label selector as the deployment to connect the service with the appropriate pods.
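Because a ClusterIP service is only reachable from inside the cluster, a quick way to test it from your machine is port forwarding (assuming the rendered service is named fastapi-app):
kubectl port-forward svc/fastapi-app 8080:80
You can then hit the app at http://localhost:8080.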
Step 5: Setting Up Persistent Storage with PersistentVolumeClaims (PVC)
In a production environment, most applications require persistent storage for data such as databases or logs. Kubernetes provides Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage storage. GKE supports dynamic provisioning of persistent disks, making it easy to create PVCs.
To add a PVC to your application, create templates/pvc.yaml (helm create does not generate this file by default) with the following contents:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
This PVC requests 1Gi of storage with the default standard storage class for GKE, which dynamically provisions a Google Cloud Persistent Disk.
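If you’re unsure which storage classes your cluster offers, you can list them — on GKE, standard is typically present and marked as the default:
kubectl get storageclass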
Step 6: Mounting the PVC in the Deployment
Now, let’s mount the PVC into the FastAPI container so it can access the persistent storage.
Modify templates/deployment.yaml to add the volume and volume mount configuration:
spec:
  containers:
    - name: fastapi-app
      image: "my-fastapi-app:latest"
      ports:
        - containerPort: 80
      volumeMounts:
        - name: my-storage
          mountPath: /path/to/storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
In this configuration:
- We added a volumeMount to the FastAPI container that mounts the PVC at /path/to/storage.
- The volumes section references the PVC we defined earlier.
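Once the pod is running, you can sanity-check the mount by writing and reading a file through the deployment (this assumes the rendered deployment is named fastapi-app and uses the mount path from the example above):
kubectl exec deploy/fastapi-app -- sh -c 'echo hello > /path/to/storage/test.txt'
kubectl exec deploy/fastapi-app -- cat /path/to/storage/test.txt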
Step 7: Deploying the Application with Helm
Now that everything is configured, it’s time to deploy your application to GKE. Use the following Helm command:
helm install fastapi-app ./fastapi-app
This command will deploy your application to the GKE cluster, create the necessary services, and provision a persistent volume for your app.
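After the release installs, you can confirm that the release and its resources came up as expected:
helm list
kubectl get pods,svc,pvc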
Step 8: Accessing the Application
If you want to access the FastAPI app externally, you can either use a LoadBalancer service type or set up an Ingress. For example, here’s how you can change your service type to LoadBalancer:
spec:
  type: LoadBalancer
This will automatically provision an external IP in GKE, allowing you to access your application.
You can retrieve the external IP with:
kubectl get svc fastapi-app
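The external IP may show as <pending> for a minute or two while GKE provisions the load balancer; you can watch until it’s assigned:
kubectl get svc fastapi-app --watch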
Conclusion
In this blog post, we’ve learned how to use GKE, Helm, and Kubernetes Services & Persistent Volumes to deploy and manage a containerized application. We covered how to:
- Set up a Helm chart for deploying an application.
- Expose the application using Kubernetes Services.
- Use PersistentVolumeClaims to persist data for your application.
Thanks for reading! Feel free to ask any questions.