<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Models on The Coders Blog</title><link>https://thecodersblog.com/tag/ai-models/</link><description>Recent content in AI Models on The Coders Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 22:26:25 +0000</lastBuildDate><atom:link href="https://thecodersblog.com/tag/ai-models/index.xml" rel="self" type="application/rss+xml"/><item><title>Google Dev: MaxText Expands Post-Training with SFT Introduction</title><link>https://thecodersblog.com/maxtext-post-training-capabilities-with-sft-2026/</link><pubDate>Wed, 06 May 2026 22:26:25 +0000</pubDate><guid>https://thecodersblog.com/maxtext-post-training-capabilities-with-sft-2026/</guid><description>&lt;p&gt;So, you&amp;rsquo;ve trained your massive LLM, and now you need to make it &lt;em&gt;yours&lt;/em&gt;. You&amp;rsquo;re looking for that killer fine-tuning solution that doesn&amp;rsquo;t break the bank or demand a supercomputer cluster. Well, Google&amp;rsquo;s MaxText just made a significant play with its introduction of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) capabilities, specifically targeting single-host TPU configurations like v5p-8 and v6e-8. This move aims to democratize advanced LLM customization, leveraging the power of JAX and the Tunix library for high-performance post-training.&lt;/p&gt;</description></item><item><title>Grok 4.3: Is x.ai's Latest LLM a Real Leap or Just More Hype? [2026]</title><link>https://thecodersblog.com/grok-4-3-x-ai-s-latest-ai-model-release-2026/</link><pubDate>Fri, 01 May 2026 11:18:14 +0000</pubDate><guid>https://thecodersblog.com/grok-4-3-x-ai-s-latest-ai-model-release-2026/</guid><description>&lt;p&gt;Grok 4.3 is live, promising enhanced agentic performance and cost efficiencies. 
But for engineers on the front lines, the question isn&amp;rsquo;t about the marketing pitch; it&amp;rsquo;s whether x.ai&amp;rsquo;s latest delivers genuine utility or just more hype to cut through. We&amp;rsquo;re here to find out.&lt;/p&gt;
&lt;h2 id="core-problem-beyond-the-soft-launch--why-we-need-to-dig-deeper"&gt;Core Problem: Beyond the Soft Launch – Why We Need to Dig Deeper&lt;/h2&gt;
&lt;p&gt;xAI&amp;rsquo;s silent soft launch of &lt;strong&gt;Grok 4.3&lt;/strong&gt; for SuperGrok Heavy subscribers, confirmed by Elon Musk, immediately raises questions about its true capabilities and xAI&amp;rsquo;s confidence in the release. This wasn&amp;rsquo;t a grand unveiling; it was a quiet push to a select group, the kind of move that prompts more skepticism than excitement among seasoned developers.&lt;/p&gt;</description></item><item><title>OpenAI's Hypocrisy: Why API Restrictions Choke Developer Innovation [2026]</title><link>https://thecodersblog.com/openai-s-api-restrictions-and-developer-control-2026/</link><pubDate>Fri, 01 May 2026 11:12:30 +0000</pubDate><guid>https://thecodersblog.com/openai-s-api-restrictions-and-developer-control-2026/</guid><description>&lt;p&gt;After years of championing openness, OpenAI&amp;rsquo;s tightening grip on its APIs is now actively suffocating the very innovation it once promised to unleash, leaving developers scrambling for alternatives in a centralized AI landscape.&lt;/p&gt;
&lt;h2 id="the-centralization-trap-openais-hypocrisy-undermining-developer-freedom"&gt;The Centralization Trap: OpenAI&amp;rsquo;s Hypocrisy Undermining Developer Freedom&lt;/h2&gt;
&lt;p&gt;OpenAI burst onto the scene with a bold promise: to democratize AI and foster an open, collaborative ecosystem. Its initial ethos resonated deeply with developers, offering a vision of powerful models accessible to all, driving unprecedented innovation. Fast forward to &lt;strong&gt;2026&lt;/strong&gt;, and that vision feels like a distant memory.&lt;/p&gt;</description></item></channel></rss>