Kubernetes Site Reliability Engineers (SREs) often find themselves in challenging situations where they need to debug and troubleshoot complex cluster issues quickly. Traditional methods involve manually inspecting logs, events, and configurations, which can be time-consuming. However, with the rise of AI-powered assistants, tools like k8sgpt and DeepSeek have emerged as powerful copilots for Kubernetes SREs.
What is Groq?
Groq refers to Groq Cloud, a platform providing fast inference APIs for powerful Large Language Models (LLMs), similar to OpenAI or Anthropic. Groq offers access to state-of-the-art models such as Meta’s Llama-3 series and other open-source foundation models, optimized for high-speed inference, often at lower latency and cost compared to traditional cloud AI providers.
Key Highlights:
- LLM Inference APIs: Access models like Llama-3-70B, Llama-3-8B, Mixtral, Gemma, and others.
- Competitive Advantage: Extremely fast model inference speeds, competitive pricing, and simpler integration.
- Target Users: Developers, enterprises, and startups needing quick, scalable, and cost-effective AI inference.
Groq follows the OpenAI API format, which allows us to use the DeepSeek LLM inside k8sgpt under the backend named openai while leveraging Groq’s high-performance inference capabilities.
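Because the API is OpenAI-compatible, any OpenAI-style client can talk to Groq simply by swapping the base URL. As a minimal sketch (the model name and the GROQ_API_KEY variable are assumptions, not something k8sgpt requires), the request body looks like this; the curl call is left commented so you can review it before sending:

```shell
# Build an OpenAI-style chat-completions payload for Groq.
# The model name is an assumption; pick any model listed in your Groq console.
payload='{"model":"deepseek-r1-distill-llama-70b","messages":[{"role":"user","content":"Reply with OK"}]}'
# Validate that the payload is well-formed JSON before sending it anywhere.
echo "$payload" | python3 -m json.tool >/dev/null && echo "payload OK"
# Once GROQ_API_KEY is exported, send it to the OpenAI-compatible endpoint:
# curl -s https://api.groq.com/openai/v1/chat/completions \
#   -H "Authorization: Bearer $GROQ_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```

The same base URL swap is exactly what we will do for k8sgpt later in this guide.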
In this article, we will explore how k8sgpt, integrated with DeepSeek using Groq API, can help troubleshoot a Kubernetes cluster in real time. We will cover:
- Setting up a Kubernetes cluster using KIND
- Installing and configuring k8sgpt
- Obtaining Groq API keys
- Setting up k8sgpt authentication with Groq to use DeepSeek
- Using k8sgpt in interactive mode for live troubleshooting
By the end of this guide, you’ll have a fully operational AI-powered Kubernetes troubleshooting agent (a Kubernetes SRE copilot) at your disposal.
1. Setting up a Kubernetes Cluster using KIND
Before we start troubleshooting, let’s set up a local Kubernetes cluster using KIND (Kubernetes IN Docker).
Step 1: Install KIND
Ensure you have Docker installed, then install KIND:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.26.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
Step 2: Create a Cluster
kind create cluster --name k8s-demo
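The single command above creates a one-node cluster, which is enough for this guide. If you later want to reproduce multi-node issues (scheduling pressure, node affinity), KIND also accepts a declarative config file; the example below is an illustrative sketch, not required for this walkthrough:

```shell
# Write an optional multi-node KIND config (illustrative only; the default
# single-node cluster is sufficient for this guide).
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
# Then create the cluster from the config:
# kind create cluster --name k8s-demo --config kind-config.yaml
```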
Verify the cluster setup:
kubectl cluster-info --context kind-k8s-demo
Now that we have our cluster running, we can move on to setting up k8sgpt.
2. Installing and Configuring k8sgpt
Step 1: Install k8sgpt
curl -s https://raw.githubusercontent.com/k8sgpt-ai/k8sgpt/main/install.sh | bash
Verify installation:
k8sgpt version
Step 2: Configure k8sgpt to Connect to the Cluster
kubectl config use-context kind-k8s-demo
k8sgpt version
At this point, k8sgpt is installed and ready to analyze Kubernetes issues. However, we need an AI backend to process and explain the errors. Let’s set up DeepSeek via the Groq API for this.
3. Obtaining Groq API Keys
To use DeepSeek via Groq, we need an API key from Groq.
- Go to the GroqCloud console
- Sign in or create an account
- Navigate to the API section and generate an API key
- Copy the API key securely
Once we have the API key, we can configure k8sgpt to use it.
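Treat the key like any other secret. One illustrative pattern (the file path and placeholder value below are assumptions, not Groq requirements) is to load it from a permission-restricted file rather than pasting it inline, so it stays out of your shell history:

```shell
# Store the key in a file only your user can read, then export it.
# The path and the placeholder value are examples; substitute your real key.
install -m 600 /dev/null "$HOME/.groq_api_key"
echo "gsk_example_key" > "$HOME/.groq_api_key"
export GROQ_API_KEY="$(cat "$HOME/.groq_api_key")"
echo "loaded key of length ${#GROQ_API_KEY}"
```

You can then reference $GROQ_API_KEY in later commands instead of the raw key.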
4. Setting up k8sgpt Authentication with Groq
We will configure k8sgpt to use the openai backend, but point the base URL at the Groq API and set the model to a DeepSeek variant.
k8sgpt auth update -b openai --baseurl https://api.groq.com/openai/v1 --model deepseek-r1-distill-llama-70b -p <YOUR_GROQ_API_KEY>
Verify authentication:
k8sgpt auth list
If the credentials are correct, you should see openai as an available backend.
5. Deploying a Sample Application in the Weather Namespace
Let’s deploy a sample weather application in a weather namespace to test troubleshooting.
kubectl create namespace weather
kubectl apply -f https://raw.githubusercontent.com/brainupgrade-in/obs-graf/refs/heads/main/prometheus/apps/weather/weather.yaml -n weather
Check if the pods are running:
kubectl get pods -n weather
If there are errors, we can analyze them using k8sgpt.
6. Using k8sgpt in Interactive Mode for Live Troubleshooting
We can now use k8sgpt to analyze and fix issues interactively. Let us scale the weather deployment down to 0 replicas (kubectl scale --replicas 0 deploy weather -n weather) and see if k8sgpt can detect the issue and help troubleshoot.
k8sgpt analyze -n weather --explain -i
This command will scan logs, events, and configurations to identify potential issues and provide AI-assisted troubleshooting steps. See the video below demonstrating how this k8sgpt RAG AI agent, acting as an SRE copilot, helps with live troubleshooting!
Kubernetes SRE Copilot using k8sgpt and DeepSeek
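Interactive mode is great for exploration, but findings can also be captured for scripting via JSON output (k8sgpt analyze -n weather -o json). The snippet below post-processes a sample report; the field names are an assumption based on k8sgpt's JSON format, so verify them against your version before relying on this:

```shell
# A sample report standing in for `k8sgpt analyze -n weather -o json` output.
# Field names here are assumptions; check your k8sgpt version's actual schema.
cat > report.json <<'EOF'
{
  "status": "ProblemDetected",
  "problems": 1,
  "results": [
    {
      "kind": "Deployment",
      "name": "weather/weather",
      "error": [{"Text": "Deployment weather/weather has 0 replicas"}]
    }
  ]
}
EOF
# Print one line per finding, e.g. for a CI gate or a chat notification.
python3 - <<'EOF'
import json
with open("report.json") as f:
    report = json.load(f)
for result in report.get("results", []):
    print(f"{result['kind']} {result['name']}: {result['error'][0]['Text']}")
EOF
```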
Conclusion
With k8sgpt and DeepSeek via Groq, Kubernetes SREs now have a powerful AI-driven copilot that dramatically simplifies and accelerates troubleshooting. This innovative solution automates the complex and tedious processes of issue identification and root cause analysis, delivering precise insights rapidly. Furthermore, the interactive CLI offers step-by-step guidance, enabling engineers to apply accurate fixes confidently and efficiently, significantly reducing the time typically spent on manual diagnostics and repairs.
The integration of AI with Kubernetes operations is undeniably transforming the future of site reliability engineering. Tools like k8sgpt and DeepSeek not only streamline cluster management but also substantially enhance reliability, resilience, and overall operational effectiveness. Embracing this technology empowers Kubernetes SREs to proactively address issues, maintain continuous availability, and optimize infrastructure with ease. Experience the remarkable efficiency of AI-driven troubleshooting by integrating k8sgpt into your Kubernetes workflows today!