<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Production on The Coders Blog</title><link>https://thecodersblog.com/tag/production/</link><description>Recent content in Production on The Coders Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 22:26:38 +0000</lastBuildDate><atom:link href="https://thecodersblog.com/tag/production/index.xml" rel="self" type="application/rss+xml"/><item><title>Microsoft Dev: Azure Cosmos DB Conf 2026 Recap: Lessons from Production</title><link>https://thecodersblog.com/azure-cosmos-db-production-lessons-2026-2026/</link><pubDate>Wed, 06 May 2026 22:26:38 +0000</pubDate><guid>https://thecodersblog.com/azure-cosmos-db-production-lessons-2026-2026/</guid><description>&lt;p&gt;You provisioned Azure Cosmos DB with ample Request Units (RUs), yet your application&amp;rsquo;s P99 latency is creeping up and throttling errors are becoming more frequent. Sound familiar? This isn&amp;rsquo;t a capacity problem; it&amp;rsquo;s a design problem. The Azure Cosmos DB Conference 2026 made one thing brutally clear: the platform puts your data modeling and partition key choices under a harsh spotlight.&lt;/p&gt;
&lt;h2 id="the-unseen-bottleneck-partition-keys-and-skewed-distribution"&gt;The Unseen Bottleneck: Partition Keys and Skewed Distribution&lt;/h2&gt;
&lt;p&gt;The single most impactful decision you make for Cosmos DB is the partition key. Forget throwing more RUs at the problem; if your partition key leads to skewed distribution, you&amp;rsquo;re battling hot partitions. This results in 100% RU utilization on some physical partitions while others languish, leading to relentless throttling and unacceptable latency spikes, even if your aggregate RU usage appears low.&lt;/p&gt;</description></item><item><title>Google Dev: Agents CLI for Production AI Creation</title><link>https://thecodersblog.com/google-agents-cli-for-production-ai-2026/</link><pubDate>Wed, 06 May 2026 22:26:07 +0000</pubDate><guid>https://thecodersblog.com/google-agents-cli-for-production-ai-2026/</guid><description>&lt;p&gt;The AI agent development lifecycle is a fragmented mess of custom scripts, ad-hoc deployments, and manual evaluations. Until now. Google&amp;rsquo;s new Agents CLI promises to bring order to chaos, offering a unified command-line interface for building, testing, and deploying AI agents directly to Google Cloud. This could finally accelerate your time to market, but it&amp;rsquo;s not without its caveats.&lt;/p&gt;
&lt;h3 id="the-deployment-gap-in-ai-agent-development"&gt;The &amp;ldquo;Deployment Gap&amp;rdquo; in AI Agent Development&lt;/h3&gt;
&lt;p&gt;Developing sophisticated AI agents often involves multiple stages: scaffolding, local iteration, rigorous evaluation, and finally, robust production deployment. Each stage typically requires different tools and approaches, leading to a &amp;ldquo;deployment gap.&amp;rdquo; Teams spend valuable time stitching together disparate services, wrestling with environment inconsistencies, and manually verifying agent performance. This friction slows innovation and delays the realization of AI’s true potential. Google&amp;rsquo;s Agents CLI directly targets this pain point, aiming to streamline the entire Agent Development Lifecycle (ADLC) within a single, opinionated framework.&lt;/p&gt;</description></item><item><title>Google Dev: Production-Ready AI Agents: 5 Lessons from Monolith Refactoring</title><link>https://thecodersblog.com/refactoring-monoliths-for-production-ai-agents-2026/</link><pubDate>Wed, 06 May 2026 22:26:05 +0000</pubDate><guid>https://thecodersblog.com/refactoring-monoliths-for-production-ai-agents-2026/</guid><description>&lt;p&gt;The dream of seamless AI automation is often sold as a flick of a switch. But the reality of deploying AI agents in production, especially when migrating from legacy monoliths, is a complex dance of architecture, resilience, and rigorous oversight. Forget brittle prototypes; we&amp;rsquo;re talking about robust, scalable systems. Google&amp;rsquo;s recent experiences, particularly from their &amp;ldquo;AI Agent Clinic,&amp;rdquo; offer a hard-won blueprint. 
Here are five critical lessons learned from refactoring monoliths to truly power production-ready AI agents.&lt;/p&gt;</description></item><item><title>Docker Compose in Production 2026: Is It Still Viable?</title><link>https://thecodersblog.com/production-readiness-of-plain-docker-compose-in-2026-2026/</link><pubDate>Tue, 05 May 2026 16:28:32 +0000</pubDate><guid>https://thecodersblog.com/production-readiness-of-plain-docker-compose-in-2026-2026/</guid><description>&lt;p&gt;The simple &lt;code&gt;docker-compose up&lt;/code&gt; command. It&amp;rsquo;s the gateway from local development to something more. But as we look towards 2026, is this humble tool still a realistic option for production deployments? The answer is a resounding, but heavily qualified, &lt;strong&gt;yes&lt;/strong&gt;. For a specific set of use cases, plain Docker Compose can indeed be production-ready, provided you’re willing to invest in rigorous configuration and operational discipline.&lt;/p&gt;
&lt;h2 id="the-persistent-allure-and-peril-of-simplicity"&gt;The Persistent Allure and Peril of Simplicity&lt;/h2&gt;
&lt;p&gt;Docker Compose’s enduring appeal lies in its straightforward syntax and ease of use. It elegantly defines multi-container Docker applications, making the transition from a developer&amp;rsquo;s laptop to a single server feel almost seamless. This simplicity is its greatest strength, but also its most significant vulnerability when pushed beyond its intended scope. For complex, highly available, or dynamically scaling distributed systems, its limitations become glaringly obvious.&lt;/p&gt;</description></item></channel></rss>