<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Transformers on The Coders Blog</title>
    <link>https://thecodersblog.com/tag/transformers/</link>
    <description>Recent content in Transformers on The Coders Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 08 May 2026 06:55:05 +0000</lastBuildDate>
    <atom:link href="https://thecodersblog.com/tag/transformers/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Polynomial Autoencoders Outperform PCA on Transformer Embeddings</title>
      <link>https://thecodersblog.com/polynomial-autoencoder-beats-pca-on-transformer-embeddings-2026/</link>
      <pubDate>Fri, 08 May 2026 06:55:05 +0000</pubDate>
      <guid>https://thecodersblog.com/polynomial-autoencoder-beats-pca-on-transformer-embeddings-2026/</guid>
      <description>&lt;p&gt;Forget linear assumptions: Transformer embeddings exhibit a distinct &lt;strong&gt;&amp;ldquo;cone effect,&amp;rdquo;&lt;/strong&gt; a non-linear tail of variance that traditional linear dimensionality-reduction methods like PCA simply miss. This isn&amp;rsquo;t just a theoretical quirk; it&amp;rsquo;s a practical bottleneck for model compression and analysis. Recent work, drawing on established &amp;ldquo;quadratic manifold&amp;rdquo; techniques, introduces a &lt;strong&gt;Polynomial Autoencoder&lt;/strong&gt; (specifically, a linear PCA encoder paired with a quadratic decoder) that demonstrably outperforms PCA in capturing this elusive non-linear structure. No SGD hyperparameter tuning is involved: the decoder admits a computationally elegant, closed-form fit that unlocks richer representations.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
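The feed item names the architecture concretely: a linear PCA encoder paired with a quadratic decoder, fit in closed form rather than by SGD. Below is a minimal sketch of that quadratic-manifold construction, assuming the common formulation in which the quadratic decoder is a least-squares fit to the PCA reconstruction residual; the linked post's exact method may differ, and the function names (`fit_polynomial_autoencoder`, `reconstruct`) are illustrative, not from the source.

```python
import numpy as np

def fit_polynomial_autoencoder(X, k):
    """Closed-form polynomial autoencoder: PCA encoder + quadratic decoder.

    X : (n, d) matrix of embeddings; k : latent dimension.
    Reconstruction is x_hat = mean + W z + V^T q(z), where q(z) stacks the
    k(k+1)/2 monomials z_i * z_j (i <= j) and V is a least-squares fit to
    the PCA residual; no gradient descent is involved.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Linear PCA encoder: top-k principal directions via SVD.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                          # (d, k) basis
    Z = Xc @ W                                            # (n, k) codes
    # Quadratic features of the code (upper-triangular monomials).
    iu = np.triu_indices(k)
    Q = (Z[:, :, None] * Z[:, None, :])[:, iu[0], iu[1]]  # (n, k(k+1)/2)
    # Closed-form quadratic decoder: least squares on the PCA residual.
    residual = Xc - Z @ W.T
    V, *_ = np.linalg.lstsq(Q, residual, rcond=None)      # (k(k+1)/2, d)
    return mean, W, V

def reconstruct(X, mean, W, V):
    """Encode with the PCA basis, then decode linearly plus the quadratic term."""
    Z = (X - mean) @ W
    iu = np.triu_indices(W.shape[1])
    Q = (Z[:, :, None] * Z[:, None, :])[:, iu[0], iu[1]]
    return mean + Z @ W.T + Q @ V
```

Comparing the reconstruction error of this model against plain rank-k PCA (i.e. `mean + Z @ W.T` alone) on a matrix of transformer embeddings is the kind of experiment the teaser alludes to; since the quadratic term only adds a linear least-squares solve on top of the SVD, the whole fit stays closed-form.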