<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Subquadratic on The Coders Blog</title>
    <link>https://thecodersblog.com/tag/subquadratic/</link>
    <description>Recent content in Subquadratic on The Coders Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 09 May 2026 15:57:32 +0000</lastBuildDate>
    <atom:link href="https://thecodersblog.com/tag/subquadratic/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>LLM Context Windows Shattered: Subquadratic Efficiency Unveiled</title>
      <link>https://thecodersblog.com/subquadratic-context-window-for-llms-2026/</link>
      <pubDate>Sat, 09 May 2026 15:57:32 +0000</pubDate>
      <guid>https://thecodersblog.com/subquadratic-context-window-for-llms-2026/</guid>
      <description>&lt;p&gt;The insatiable hunger of AI for more data has, for years, been bottlenecked by a fundamental architectural constraint: the quadratic complexity of the Transformer&amp;rsquo;s self-attention mechanism. This has relegated even frontier LLMs to relatively paltry context windows, forcing developers into a constant dance of summarization, chunking, and sophisticated retrieval strategies to handle anything beyond a few tens of thousands of tokens. Now, the landscape is shifting dramatically with the emergence of &amp;ldquo;subquadratic&amp;rdquo; approaches, promising not just incremental improvements but a seismic leap in how LLMs perceive and process information. This isn&amp;rsquo;t just about fitting more text; it&amp;rsquo;s about unlocking entirely new classes of AI applications previously confined to the realm of science fiction.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>