Author Topic: La burbuja de la IA ("The AI bubble")  (Read 80096 times)


muyuu

  • Inmoindultado
  • el mercado es tu amigo ("the market is your friend") ☜(゚ヮ゚☜)
Re: La burbuja de la IA
« Reply #120 on: March 10, 2026, 21:17:45 »
The "relicensing" issue is very serious - I'm not sure whether it fits better here or in the end-of-work thread. I'll post it here and quote it there.

What's happening is that people are quite seriously taking code and "rewriting" it with the help of Claude or ChatGPT, or entirely through those systems, and then redistributing it under different licenses.

This page does it as a semi-parody: https://malus.sh

A more concrete example, laid out here by an apologist: https://lucumr.pocoo.org/2026/3/5/theseus/

Direct link to the thread in the chardet repository where the library's original author (Mark Pilgrim) disputes the current maintainers' claim that they may change the license because they supposedly rewrote the library with Claude Code: https://github.com/chardet/chardet/issues/327#issuecomment-4005195078

Personally, I hope he takes them to court, because it is outrageous that they try to appropriate other people's work so brazenly. As of today they have not restored the original license, and Mark Pilgrim asked them to on March 4.


tomasjos

  • Administrator
  • Netocrata
Re: La burbuja de la IA
« Reply #121 on: March 30, 2026, 15:21:38 »
The first school without teachers, powered by artificial intelligence, opens in Chicago - Infobae https://share.google/i8p5oea4cHyo3M3tm
In a reasonably healthy human society, the role of the most capable is to care for and protect the less capable, not to take advantage of them.

Ceterum censeo Anglosphaeram esse delendam

Cadavre Exquis

  • Sabe de economía
Re: La burbuja de la IA
« Reply #122 on: March 30, 2026, 18:33:45 »
Quote
How the AI bubble bursts

Martín Volpe · 2026.03.30

Picture by Bloomberg

The catalysts for a crash are already laid out, and it could happen sooner than most expect. AI is here to stay. Used right, chances are it will make us all more productive. That, however, does not mean it will be a good investment.

Big tech doesn’t need to win, just outspend

The Magnificent 7 companies are raising capex to record levels to differentiate their tech from each other and from the big AI labs, but the key realization is that they don’t actually have to spend it to win. It’s a defensive move: if they commit $50B, OpenAI and Anthropic each need to go raise $100B to stay competitive, which makes them reliant on investors’ money. As the numbers get bigger, the pool of funds that can write checks of the required size gets smaller. And many of them are now getting bombed in the Gulf.

This is why there’s a push for IPOs: it’s the only option left to keep the funding coming.

Taking this into account, Google is extremely well positioned to weather the storm. When they announce capex, they don’t spend it overnight. They can simply deploy month by month until their competitors struggle to raise and are forced to capitulate. At that point they can just ramp down the spending and declare victory in a cornered market. They don’t need the capex; they just need to make it very clear to everyone that nobody can outspend them. It is hard to picture as the numbers get so big, but Alphabet (Google’s parent) is ten times more valuable than the biggest military company [1].

This also has a big implication for the Mag 7, especially Google: their capex will in practice be a lot smaller than projected, and since investors hate to see high capex in tech, the market will probably reward that if it materializes.

Apple didn’t even have to pretend: their strategy of waiting on the sidelines, selling Mac Minis, for someone to come up with a good-enough model and then just buying it when it’s done seems to be working. They may not even do that; they are now hinting at charging models for being available on Siri. Amazon is hedged with its Anthropic investment, and Meta is spending like there’s no tomorrow.

The catalyst

We’re hitting the worst-case scenarios for the big AI labs: energy, their biggest expense, is at multi-year highs; capital from the Gulf is not available for obvious reasons; there are serious concerns about a rate hike; and RAM prices are crashing because new models won’t need as much, yet the labs already bought at sky-high prices. And that last innovation came from their biggest competitor, Google.

Anthropic is already pushing to reduce costs and increase revenue. If investor money dries up, they will be forced to cut their losses and pass the true costs on to their users. The question now is whether customers will be willing to pay up. Independent reports state that Claude’s metered models are priced 5x higher than what subscribers pay [2], and nobody is sure whether even the metered pricing is profitable. In investing, stories are far more exciting than reality: a company losing money but growing like crazy is an easier sell than a huge company losing money or running on tight margins. Raising prices will for sure decrease demand, and that risks killing the growth story. And even if revenue keeps growing, it doesn’t matter if there are no margins — growing revenue without profits just means burning cash faster, especially when competing against companies that can offer the same product as a loss leader bundled into their cloud platforms.

It’s also worth mentioning that Claude’s most expensive subscription plans (Max and Max 5x, priced at $100 and $200 respectively) do not allow yearly payments, hinting that prices will go up [3].

OpenAI’s endgame

OpenAI is struggling to monetize. They have turned to showing ads in ChatGPT, something Sam Altman once called a “last resort”, while Anthropic is crushing them among the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora; both were supposed to be revenue drivers.

I wouldn’t be surprised at all if in the next couple of quarters we see OpenAI looking for an exit. It will be interesting because the sizes are now so big that we will probably learn all the details. The most likely buyer is Microsoft: they already own a lot of it, and because of that they are the most interested in showing a win. Sam Altman managed to get Microsoft so involved in OpenAI that making sure it lands on its feet is a Microsoft problem to solve. But would shareholders vote to spend 22% of an established company’s market cap to rescue a money-burning AI lab that has lost most of its differentiators? [4]

And independent of whether Microsoft makes money on its OpenAI endeavor, it kills the story: they were betting the whole growth story on AI, and if that doesn’t work out, what’s left to justify a high stock price? They lose a big customer for their cloud services. Even worse, using the AI they helped fund, everyone can now compete with their sub-par products. GitHub is a good candidate for disruption, and that’d be just the start.

How this all affects you

You may think that you’re not affected by the big labs struggling. Hell, you may even be happy that they won’t be replacing your job after all. But that is far from reality.

Investments are now so big that writing them off would certainly hurt public companies’ balance sheets and their growth prospects. This will drag down the whole market, reducing valuations and slowing M&A, which further dries up VC money and slows down investment. Just as happened in 2022.

And this has even more ramifications: pension funds around the world will take a hit. Datacenters that were built with the expectation of growth will now sit under capacity, because training is the most compute-intensive part of a model, and if there’s no capital to train a new one, they won’t be needed. GPUs then sit idle while their value drops because there’s no demand. Some committed GPUs may never get delivered, or even manufactured. Investment drying up is a disaster for Nvidia, now the biggest company in the world.

It could also happen that datacenters end up not underused but charging their customers a far lower rate than they projected before building, so everyone benefits from AI except them.

Building a datacenter is supposed to be a “safe” investment in normal times, so banks extend private credit and mortgages to finance them. A write-off of those assets means banks start realizing losses, hurting their capacity to lend, and some may even be forced to liquidate, just as we saw in 2023. And all this assumes no disruptions to manufacturing in Taiwan or to global supply chains.

Of course, the content of this article is highly speculative; it may turn out that demand for models is so high it offsets every other problem I lay out. But almost all innovations go through a boom-and-bust cycle and I don’t see a reason this one is an exception.

Thanks to Javier Silveira and Augusto Gesualdi for reviewing drafts of this post.



Footnotes
  1. As of March 2026, Alphabet’s market cap is ~$2T while Lockheed Martin’s is ~$120B.
  2. It’s actually difficult to estimate the true cost of the subscription, as not everyone uses all their credits, and some enterprise customers may not even use it at all.
  3. They could also hint that clients will renew at higher rates, but given the cash crunch described in this post that seems a far less realistic scenario. Yearly subscriptions are a way to get (almost) a year’s worth of revenue now.
  4. Microsoft is worth ~$2.8T and owns ~27% of OpenAI, which was last valued at ~$840B. Acquiring the remaining ~73% would cost ~$613B, roughly 22% of Microsoft’s market cap.
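The arithmetic in that last footnote is easy to verify; a minimal sketch, using only the round figures as cited in the footnote (the post's assumptions, not audited numbers):

```python
# Sanity check of the acquisition arithmetic (valuations as cited above).
msft_market_cap = 2.8e12    # Microsoft, ~$2.8T
openai_valuation = 840e9    # OpenAI's last valuation, ~$840B
msft_stake = 0.27           # Microsoft's existing ~27% stake

remaining_stake = 1 - msft_stake             # ~73% left to acquire
cost = remaining_stake * openai_valuation    # price of the rest of OpenAI
share_of_msft = cost / msft_market_cap       # as a fraction of Microsoft's cap

print(f"cost of remaining stake: ${cost / 1e9:.0f}B")           # ~$613B
print(f"share of Microsoft's market cap: {share_of_msft:.0%}")  # ~22%
```

The numbers line up: 73% of an $840B valuation is about $613B, which is roughly 22% of a $2.8T market cap.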
Regards.

senslev

  • Inmoindultado
Re: La burbuja de la IA
« Reply #123 on: March 30, 2026, 22:31:44 »
Individual responsibility, critical thinking, collective action, and historical memory are the weapons with which we can fight the banality of evil and build a fairer, more humane world.

Whoever has lived according to their principles fears neither death nor failure.

muyuu

Re: La burbuja de la IA
« Reply #124 on: April 07, 2026, 04:13:00 »
https://www.perplexity.ai/page/openai-anthropic-financial-doc-v_BdVVhVRLapfCfd9SNLVw

Quote
OpenAI and Anthropic Financial Documents Reveal Neither Is Profitable Ahead of Planned IPOs

The Wall Street Journal published an inside look at both companies' finances, showing OpenAI expects to break even by 2030 whilst Anthropic targets 2028.

OpenAI closed a record $122 billion funding round at an $852 billion valuation in March; Anthropic raised $30 billion at $380 billion in February.

OpenAI's CFO Sarah Friar said the company is "making very tough trades" and passing on projects due to a compute shortage in 2026, according to Business Insider.

muyuu

Re: La burbuja de la IA
« Reply #125 on: April 09, 2026, 11:37:52 »

tomasjos

Re: La burbuja de la IA
« Reply #126 on: April 11, 2026, 00:46:54 »
OpenAI employees say Sam Altman barely knows how to code and confuses basic concepts https://share.google/rT0DHTmRxRPTk5Ihu

Several engineers who have worked directly with the ChatGPT CEO say the executive lacks real experience in programming and in machine-learning systems

The rise of artificial intelligence has turned Sam Altman into one of the most influential figures in the technology sector. As chief executive of OpenAI and the public face of tools such as ChatGPT, his image has become associated with that of a visionary capable of defining the trajectory of the AI revolution. That reputation, however, may be tied more to his strategic ability than to technical knowledge.

A recent report in The New Yorker, picked up and analyzed by the tech outlet Futurism, paints a rather different picture from the one usually projected in public. According to multiple OpenAI employees interviewed for the investigation, Altman is no expert in programming or machine learning, and at times even confuses basic concepts related to artificial intelligence.


A leader more strategic than technical
According to The New Yorker's reporting, several engineers who have worked directly with Altman say the executive lacks real experience in programming and in machine-learning systems. In some technical conversations, they note, the CEO showed a surprisingly superficial knowledge of the technologies his own company develops.

Altman did in fact drop out of Stanford's computer-science program after just two years. While such paths are not unusual in Silicon Valley, where self-taught entrepreneurs abound, the situation is striking given that OpenAI is considered one of the world's most advanced artificial-intelligence companies.

The myth surrounding Altman as a technological genius may be tied more to his business acumen than to his technical knowledge

According to the US media reports, the myth of Altman as a "technological genius" may be linked more to his political and business skill than to his technical knowledge. That reputation has allowed him to consolidate an influential position both within the industry and in politics, even taking part in debates in Washington on the future of artificial intelligence.

Former OpenAI researcher Carroll Wainwright summed up this view: "He creates structures that in theory constrain him in the future. But when the future arrives and he is forced to adapt to those constraints, he discards whatever structure he has created." The remark points to a knack for maneuvering that some regard as strategic skill and others as a sign of opportunism.

Financial pressure and a change of course at OpenAI
The debate over Altman's leadership also coincides with a moment of change in OpenAI's strategy. Just over a year ago, the company starred in one of the most ambitious announcements in recent AI history. After Donald Trump's second inauguration, several tech leaders gathered in the Oval Office to present the Stargate project, a gigantic AI infrastructure plan initially valued at $500 billion.

During the event, Altman went so far as to say that developing artificial general intelligence in the US would not be possible "without you, Mr. President," referring to Trump. The company then announced an initial investment of $100 billion to build the infrastructure needed to train ever more powerful models.



muyuu

Re: La burbuja de la IA
« Reply #127 on: April 11, 2026, 18:39:28 »

muyuu

Re: La burbuja de la IA
« Reply #128 on: Today at 09:45:31 »
https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/

a very long article with a lot to unpack - I'm mainly keeping this excerpt, although I disagree with many of its points:


Quote
Anthropic and OpenAI (and Other AI Startups) Have Trained Their Users To Use Their Unsustainable Products In Unsustainable Ways, And Their Users Are Intolerant of Rate Limits and Price Increases

While it’s really easy to make fun of people obsessed with LLMs, I want to be clear that Anthropic and OpenAI are inherently abusive companies that have built businesses on theft, deception and exploitation.

Anybody who’s spent more than a few minutes in one of the many AI subreddits has read story after story of models mysteriously “becoming dumb,” or rate limits that seem to expand and contract at random. Even the concept of “rate limits” only serves to further deceive the customer. Outside of intentionally asking the model, users are entirely unaware of their “token burn,” or at the very least have built habits around rate limits that, as of right now, are entirely different from what they were even a month ago.

A user who bought a $200-a-month Claude Pro subscription in December 2025, a mere three months later, now very likely cannot do the same things they did on Claude Code when they decided to subscribe, and those who use these subscriptions for their day jobs are now having to sit on their hands waiting for the rate limits to pass, and have no clarity into whether they’ll be able to work at the same rate they did even a month ago, let alone when they subscribed.

All of this is a direct result of Anthropic, OpenAI, and other AI startups intentionally deceiving customers through obtuse pricing so that people would subscribe believing that the product would continue providing the same value, and I’d argue that annual subscriptions to these services amount to, if not fraud, a level of consumer deception that deserves legal action and regulatory involvement.

To be clear, no AI company should have ever sold a monthly subscription, as there was never a point at which the economics made sense. Yet had these companies actually charged their real costs, nobody would have bothered with AI, because even with these highly-subsidized subscriptions, AI still hasn’t delivered meaningful productivity benefits, other than a legion of people who email me saying “it’s changed my life as a programmer!” without explaining to me what that means or why it matters or what the actual result is at the end.

Isn’t it kind of weird that we have these LLM subscriptions to products that arbitrarily become less-accessible or less-performant in a way that’s impossible to really measure, and labs never seem to address? We don’t know the actual rate limits on Claude (other than via CCusage or Shellac’s research), or ChatGPT, or any of these products by design, because if we did, it would be blatantly obvious how unsustainable and ridiculous these products were.

And the magical part about Large Language Models is that your most engaged customers are also your most-expensive, and the more-intensive the work, the more expensive the outputs become.

If you’re about to say “well they’ll just raise the prices,” perhaps you should check Twitter or Reddit, and notice that Anthropic’s customers are screaming like they’re being stung to death by bees because of new rate limits that only let them burn $10 of compute in five hours. Do you think these people would be comfortable with a $130-a-month, $1,300-a-month or $2,500-a-month subscription? One that performs the same way (if not worse) as their $20, $100 or $200-a-month subscription did?

Or do you think they’ll do Aaron Sorkin speeches about Anthropic’s greed and immediately jump to ChatGPT in the hopes that the exact same thing doesn’t happen a few months later?

Much as homeowners were assured that they’d simply be able to refinance their homes before the adjustable rates hit, AI fans repeatedly switch subscriptions to whichever provider is currently offering the best deal, in some cases paying for multiple subscriptions under the explicit knowledge that rate limits existed and would become increasingly-punishing.

Based on the reactions of their users, I don’t really see how the AI labs — or AI startups, for that matter — fix this problem.

On one hand, AI subscribers are acting like babies, crying that their product won’t let them use $2500 of tokens for $200. This was an obvious con, a blatant subsidy, and a party that wouldn’t last forever.

On the other, AI labs and AI startups have never, ever acted with any degree of honesty or clarity with regards to their costs, instead choosing to add “exciting” new features that often burn more tokens without charging the end user more, which sounds nice until you remember that things cost money and money is not unlimited.

The very foundation of every AI startup is economically broken. The majority of them sell some sort of “deep research” report feature that costs several dollars to generate at a time, and many sell some form of expensive coding or “computer use” product, tool-based web search features, and many other products that exist to keep a user engaged while burning tokens, all without explaining to the user “yeah, we’re spending way more than we make off of you, this is an introductory rate.”

This intentional, blatant and industry-wide deception set the terms for the Subprime AI Crisis. By selling AI services at $20 or $50 or even $200-a-month, AI startups and labs created the terms for their own destruction, with users trained for years to expect relatively unlimited access sold at a flat rate for a service powered by Large Language Models that burn tokens at arbitrary rates based on their inference of the user’s prompt, making costs near-impossible to moderate. 

And when these companies make changes to slightly bring costs under control, their users act with revulsion, because rate limits aren’t price increases, but direct changes to the functionality of the product. Imagine if a subscription to a car service was $200-a-month, and let you go 50 miles, or 25 miles, or 100 miles, or 4 miles, or 12 miles depending on the day, and never at any point told you how many miles you had left beyond a percentage-based rate limit. To make matters worse, sometimes the car would arbitrarily take a different route, driving you five miles in the opposite direction, or decide to park on the side of the curb, charging you for every mile.

This is the reality of using an AI product in the year of our lord 2026. A Claude Code or OpenAI Codex user cannot with any clarity say that in three months their current workload or workflow will be possible based on their current subscription. Somebody buying an annual subscription to any AI product is immediately sacrificing themselves to the whims of startup CEOs that intentionally decided to deceive users for years as a means of juicing growth.

And when these limits decay, does it eventually make the ways in which some of these users work with Claude Code impossible? At what point do these rate limit shifts start changing how reliable the experience is and how much one can get done in a day? What use is a tool that gets more unreliable to access and expensive over time? Even if this week’s rate limits are an overcorrection, one has to imagine they resemble the future of Anthropic’s products, and are indicative of a larger pattern of decay in the value of its subscriptions. 

I’m going to be as blunt as possible: every bit of AI demand — and barely $65 billion of it existed in 2025 — that exists only exists due to subsidies, and if these companies were to charge a sustainable rate, said demand would evaporate.

There is no righting this ship. There is no pricing that makes sense that customers will pay at scale, nor is there a magical technological breakthrough waiting in the wings that will reduce costs. Vera Rubin will not save AI, nor will some sort of “too big to fail” scenario, because “too big to fail” was based on the fact that banks would have stopped providing dollars to people and insurance companies would have stopped issuing insurance.

Despite NVIDIA’s load-bearing valuation and the constant discussion of companies like OpenAI and Anthropic, their actual economic footprint is quite small in comparison to the trillions of dollars of CDOs and trillion-plus dollars of mortgages involved in the great financial crisis. The death of the AI industry would be cataclysmic to venture capitalists, bring about the end of the hypergrowth era for the Magnificent Seven, and may very well kill Oracle, but — seriously — that is nothing in comparison to the scale of the Great Financial Crisis. This isn’t me minimizing the chaos to follow, but trying to express how thoroughly fucked everything was in 2008.

On Friday I’m going to get into this more in the premium. This wasn’t an intentional ad, I just realized as I wrote that sentence that that was what I have to do.

Anyway, I’ll close with a grim thought.

What’s funny about the comparison to the subprime mortgage crisis is that there are, in all honesty, multiple different versions of the Stripper With Five Houses from The Big Short:

The AI companies that only have customers because they spend $3 to $10 for every dollar of revenue.

The venture capitalists that are ultra-rich on paper, heavily leveraging their firms in companies like Harvey (worth “$11 billion”) and Cursor (worth “$29.3 billion”) that burn hundreds of millions or billions of dollars and are now both too large to sell to another company and too shitty a company to take public.

The AI labs that have built massive businesses on selling heavily-subsidized subscriptions to customers who don’t want to pay for them and API calls to AI startups that can only pay them if infinite resources exist.

The AI data center companies that, thanks to readily-available debt, have started 200GW of projects (and only started building 5GW of them) for AI demand that doesn’t exist, entirely based on the theoretical sense that maybe it will in the future.

Oracle, which is building hundreds of billions of dollars of data centers for OpenAI (which needs infinite resources to be able to pay its compute costs), is taking on equally large amounts of debt, all because it assumes that nothing bad will ever happen.

The customers of AI startups that are building lifestyles, identities and workflows around them believing that we’re “just at the beginning” on top of unsustainable AI subscriptions.

All of these entities are acting based on a misplaced belief that the world will cater to them, and that nothing will ever change. While there might be different levels of cynicism — people that know there’re subsidies but assume they’ll be fine once they arrive, or people like Sam Altman that are already rich and don’t give a shit — I think everybody in the AI industry has deluded themselves into believing they have the mandate of Heaven.
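The flat-rate-versus-metered mismatch the excerpt describes can be made concrete with a toy model. Every figure below is a hypothetical illustration (the metered price, daily token burn, and usage pattern are assumptions for the sketch, not Anthropic's actual numbers):

```python
# Toy model: what a heavy flat-rate user would cost at metered prices.
# All figures are assumed illustrations, not a real price sheet.
subscription_price = 200.0    # $/month flat-rate plan
metered_price_mtok = 15.0     # assumed $ per million tokens at API rates
tokens_per_day_m = 4.0        # assumed million tokens/day for a heavy coder
days_per_month = 30

metered_cost = metered_price_mtok * tokens_per_day_m * days_per_month
subsidy = metered_cost - subscription_price

print(f"metered cost of that usage: ${metered_cost:,.0f}/month")           # $1,800
print(f"provider subsidy per user:  ${subsidy:,.0f}/month")                # $1,600
print(f"cost-to-revenue ratio: {metered_cost / subscription_price:.0f}x")  # 9x
```

Under these assumed figures the provider burns $9 of compute for every $1 of subscription revenue on such a user, which is the dynamic behind both the "$2500 of tokens for $200" complaint and the ever-tightening rate limits: the only levers are raising the price or shrinking the usable tokens.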
