<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Gemini on The Coders Blog</title><link>https://thecodersblog.com/tag/gemini/</link><description>Recent content in Gemini on The Coders Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 22:26:28 +0000</lastBuildDate><atom:link href="https://thecodersblog.com/tag/gemini/index.xml" rel="self" type="application/rss+xml"/><item><title>Google Dev: Subagents Arrive in Gemini CLI</title><link>https://thecodersblog.com/gemini-cli-subagents-2026/</link><pubDate>Wed, 06 May 2026 22:26:28 +0000</pubDate><guid>https://thecodersblog.com/gemini-cli-subagents-2026/</guid><description>&lt;p&gt;Ever felt like your AI assistant is juggling too many tasks, dropping the ball on context, and delivering subpar results? That’s precisely the pain point Gemini CLI’s new subagents aim to eliminate. The struggle of managing complex, repetitive, or high-volume commands within a single AI interaction is finally being addressed, and it’s a game-changer for developers.&lt;/p&gt;
&lt;h3 id="the-context-rot-problem"&gt;The Context Rot Problem&lt;/h3&gt;
&lt;p&gt;Traditional AI CLIs often suffer from &amp;ldquo;context rot.&amp;rdquo; As you feed it more information, more commands, and more complex instructions, the AI&amp;rsquo;s ability to recall and correctly act upon early parts of the conversation degrades. This leads to redundant explanations, missed details, and, ultimately, wasted developer time. Imagine asking your AI to refactor a codebase, then add new features, then write tests: without proper delegation, the AI quickly gets overwhelmed.&lt;/p&gt;</description></item><item><title>Building with Gemini Embedding 2: Agentic Multimodal RAG</title><link>https://thecodersblog.com/gemini-embedding-2-for-multimodal-rag-2026/</link><pubDate>Wed, 06 May 2026 22:22:02 +0000</pubDate><guid>https://thecodersblog.com/gemini-embedding-2-for-multimodal-rag-2026/</guid><description>&lt;p&gt;Forget stitching together disparate models for text, image, and audio. The era of fragmented multimodal AI is over, thanks to Gemini Embedding 2. If you&amp;rsquo;re building retrieval-augmented generation (RAG) systems that need to truly &lt;em&gt;understand&lt;/em&gt; the world, not just read it, this is the game-changer you&amp;rsquo;ve been waiting for.&lt;/p&gt;
&lt;h2 id="the-problem-data-is-messy-ai-needs-to-be-unified"&gt;The Problem: Data is Messy, AI Needs to be Unified&lt;/h2&gt;
&lt;p&gt;Traditional RAG pipelines excel at text. But what happens when your knowledge base includes product manuals with diagrams, video tutorials explaining complex procedures, or audio recordings of customer feedback? Historically, this meant separate embedding models, complex feature-extraction pipelines, and a constant struggle to find relevant information across different modalities. The result? Higher latency, reduced accuracy, and a development nightmare.&lt;/p&gt;</description></item></channel></rss>