Gemma 4 can now be used in OpenCode (via llama.cpp). We'll take it for a test drive and see how well it does at coding a local RAG in Python.

GitHub repo for the project: https://github.com/mlexpertio/gemma-rag
Blog: https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/
Llama.cpp models: https://huggingface.co/collections/ggml-org/gemma-4
OpenCode: https://opencode.ai/

AI Academy: https://mlexpert.io/
Work with me: https://mlexpert.io/consulting
LinkedIn: https://www.linkedin.com/in/venelin-valkov/
Follow me on X: https://twitter.com/venelin_valkov
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/AI-Bootcamp

Don't forget to like, comment, and subscribe for more tutorials!
Join this channel to get access to the perks and support my work: https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join
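Since the video is about coding a local RAG pipeline in Python, here is a minimal sketch of the retrieve-then-prompt step. This is an illustrative stand-in, not the code from the gemma-rag repo: it uses a simple bag-of-words cosine similarity for retrieval (the actual project presumably uses proper embeddings and sends the prompt to a Gemma model served by llama.cpp). The corpus and query below are made up for the example.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase and split into word tokens, dropping punctuation."""
    return re.findall(r"\w+", text.lower())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(tokenize(query))
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(tokenize(d))), reverse=True)
    return ranked[:k]


# Toy corpus standing in for real indexed documents.
docs = [
    "Gemma is an open model family from Google.",
    "llama.cpp runs GGUF models locally on CPU or GPU.",
    "OpenCode is a terminal-based AI coding agent.",
]

query = "run GGUF models locally on CPU"
context = retrieve(query, docs, k=1)[0]

# The retrieved context is stuffed into the prompt sent to the local model.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

In a real pipeline the bag-of-words scorer would be replaced by an embedding model, and `prompt` would be sent to the llama.cpp server's completion endpoint instead of printed.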