Phase 4 of my "Evolution of Todo" hackathon project is now complete. In this phase, I took the AI-powered Nexa Todo chatbot from Phase 3 and made it cloud-native by deploying it on a local Kubernetes cluster using Minikube.

What I implemented in this phase:
- Containerized both the frontend (Next.js + OpenAI ChatKit) and the backend (FastAPI + OpenAI Agents + MCP) using Docker
- Used Docker AI Agent (Gordon) to generate Dockerfiles, multi-stage builds, and Compose files
- Created Helm charts for the application (frontend, backend, and a PostgreSQL dependency)
- Leveraged kubectl-ai and kagent for AI-assisted Helm chart generation, deployment commands, scaling, and cluster health checks
- Successfully deployed everything on Minikube and verified the full chatbot working via port-forward

The entire deployment process was spec-driven and relied heavily on AI tools for infrastructure automation: Gordon for Docker, kubectl-ai and kagent for Kubernetes operations. This sped the work up considerably and showed how much AI can help with DevOps tasks.

The chatbot continues to work perfectly on Kubernetes: natural-language commands for adding, listing, updating, completing, and deleting tasks, all with conversation persistence backed by the database.

Live demo from Phase 3: https://nexa-todo-app.vercel.app
Previous phases: Phase 1 (CLI) → Phase 2 (Full-stack web) → Phase 3 (AI Chatbot)
Next and final: Phase 5 – advanced cloud deployment on DigitalOcean Kubernetes (DOKS).

If you've worked with Kubernetes, Helm, Minikube, or AI-assisted DevOps, I'd love to hear your thoughts or suggestions in the comments.
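For readers curious what a Gordon-style multi-stage build looks like, here is a minimal sketch for a FastAPI backend. This is not the project's actual Dockerfile: the file names (`requirements.txt`), module path (`main:app`), Python version, and port are all assumptions for illustration.

```dockerfile
# Hypothetical multi-stage Dockerfile for a FastAPI backend.
# File names, module path, and port are illustrative assumptions.

# Stage 1: install dependencies into an isolated prefix
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install --no-cache-dir -r requirements.txt

# Stage 2: copy only the installed packages and source into a slim runtime image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

The payoff of the two-stage split is that build-time artifacts (pip caches, compilers) never reach the final image, which keeps it small and reduces the attack surface.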
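The Minikube deploy-and-verify loop described above might look roughly like the commands below. This is a sketch under assumptions: the chart path, release name, image tags, deployment and service names, and ports are hypothetical, not the project's real values.

```shell
# Hypothetical Minikube deployment flow -- chart path, release name,
# service/deployment names, and ports are assumptions.

# Build images against Minikube's Docker daemon so the cluster can use them locally
eval $(minikube docker-env)
docker build -t nexa-backend:local ./backend
docker build -t nexa-frontend:local ./frontend

# Fetch the PostgreSQL chart dependency, then install the umbrella chart
helm dependency update ./charts/nexa-todo
helm install nexa-todo ./charts/nexa-todo

# Check rollout and cluster health
kubectl get pods
kubectl rollout status deployment/nexa-todo-backend

# Verify the chatbot end to end via port-forward
kubectl port-forward svc/nexa-todo-frontend 3000:3000
```

Building inside Minikube's Docker daemon (rather than pushing to a registry) is a common local-dev shortcut; on a real cluster like DOKS in Phase 5, images would be pushed to a container registry instead.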