Get the full step-by-step project guide here: 👉 https://learn.nextwork.org/projects/ai-devops-api?track=high

Welcome to Project 1 of the DevOps × AI Series! Today we’re using FastAPI, Chroma, and a local Ollama model to build a production-ready RAG API with interactive Swagger docs. You’ll learn how to turn your own documents into an intelligent Q&A API right on your laptop, just like the pros.

What you’ll learn today:
✔️ Stand up a FastAPI service with automatic Swagger UI docs
✔️ Build a RAG pipeline that searches your docs, then answers with AI
✔️ Ingest and embed your knowledge base using Chroma as a vector store
✔️ Run a local LLM with Ollama for private, fast responses
✔️ Test endpoints end-to-end with live API calls
✔️ Prep your API for the next projects in the series (Docker, CI/CD, monitoring)

🗓️ The DevOps × AI Series
Part 1: Build the RAG API (this video) - https://learn.nextwork.org/projects/ai-devops-api
Part 2: Containerize with Docker - https://learn.nextwork.org/projects/ai-devops-docker
Part 3: Automate with GitHub Actions - https://learn.nextwork.org/projects/ai-devops-ci
Part 4: Monitor and detect issues - https://learn.nextwork.org/projects/ai-devops-observability

#ragapi #ollama #retrievalaugmentedgeneration #fastapi #devopsengineering #aiengineering
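
To give a feel for the pattern before you open the guide, here is a minimal sketch of a retrieve-then-generate endpoint using FastAPI, Chroma, and Ollama. It is not the project's exact code: the collection name, model name ("llama3"), and `/ask` route are illustrative assumptions, and it assumes the `fastapi`, `chromadb`, and `ollama` Python packages are installed with an Ollama server running locally.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import chromadb
import ollama

app = FastAPI(title="RAG API")  # FastAPI serves Swagger UI automatically at /docs

# In-memory vector store for the sketch; the guide may use a persistent client.
chroma_client = chromadb.Client()
collection = chroma_client.get_or_create_collection("docs")

class Question(BaseModel):
    text: str

@app.post("/ask")
def ask(question: Question):
    # 1. Retrieve: pull the most relevant document chunks from Chroma.
    results = collection.query(query_texts=[question.text], n_results=3)
    context = "\n".join(results["documents"][0]) if results["documents"] else ""

    # 2. Generate: ask the local Ollama model, grounded in the retrieved context.
    response = ollama.chat(
        model="llama3",  # assumed model name; use whatever model you have pulled
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question.text}",
        }],
    )
    return {"answer": response["message"]["content"], "context": context}
```

Run it with `uvicorn main:app --reload` and try the `/ask` endpoint from the Swagger UI; the full project walks through ingesting your own documents into the collection first.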