<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agentic AI on The Coders Blog</title><link>https://thecodersblog.com/tag/agentic-ai/</link><description>Recent content in Agentic AI on The Coders Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://thecodersblog.com/tag/agentic-ai/index.xml" rel="self" type="application/rss+xml"/><item><title>GLM-5V-Turbo: Native Multimodal Foundation Model</title><link>https://thecodersblog.com/glm-5v-turbo-native-multimodal-foundation-model/</link><pubDate>Wed, 06 May 2026 00:00:00 +0000</pubDate><guid>https://thecodersblog.com/glm-5v-turbo-native-multimodal-foundation-model/</guid><description>&lt;p&gt;The blinking cursor on a blank canvas, a pixel-perfect design, a complex UI flow – how do we translate that visual blueprint directly into functional code? For years, the AI community has grappled with the chasm between perception and action, between seeing and doing. Today, Z.ai attempts to bridge that gap with GLM-5V-Turbo, a native multimodal foundation model promising to revolutionize agentic workflows and vision-based coding.&lt;/p&gt;
&lt;h3 id="the-core-problem-bridging-sight-and-code"&gt;The Core Problem: Bridging Sight and Code&lt;/h3&gt;
&lt;p&gt;Traditional AI models excel at specific tasks: text in, text out for language generation; image in, text out for captioning. But truly intelligent agents need to process and act upon a confluence of data types. Imagine an agent that can interpret a user&amp;rsquo;s hand-drawn mockup, understand the desired user flow, and then generate the corresponding web code. This requires a deep, &lt;em&gt;native&lt;/em&gt; understanding of how visual information translates into structured, actionable outputs, not just a bolted-on vision layer. This is the problem GLM-5V-Turbo aims to solve.&lt;/p&gt;</description></item><item><title>Mistral Medium 3.5: The Agentic Future of LLMs Is Remote, Not Just Local (2026)</title><link>https://thecodersblog.com/mistral-medium-3-5-and-remote-ai-agents-2026/</link><pubDate>Wed, 29 Apr 2026 16:51:18 +0000</pubDate><guid>https://thecodersblog.com/mistral-medium-3-5-and-remote-ai-agents-2026/</guid><description>&lt;p&gt;Engineers, forget everything you thought you knew about integrating LLMs. Mistral Medium 3.5 isn&amp;rsquo;t just a powerful new model; it&amp;rsquo;s the tip of an iceberg, revealing a fundamental architectural shift: the agentic future of AI is decidedly remote, demanding a complete re-evaluation of how we design and build scalable AI systems. This isn&amp;rsquo;t a suggestion; it&amp;rsquo;s a &lt;strong&gt;mandate for architectural foresight&lt;/strong&gt; that will separate resilient, intelligent applications from brittle, outdated ones by 2027.&lt;/p&gt;</description></item></channel></rss>