https://www.expansion.com/inmobiliario/mercado/2025/01/25/67920abce5fdeab9528b4574.html

Regards.
Anyone who bought a home to rent it out can shove the overvaluation they paid wherever it fits. Buying at inflated prices is for idiots.
ADDRESS BY THE PRESIDENT OF THE GOVERNMENT, PEDRO SÁNCHEZ
https://www.lamoncloa.gob.es/presidente/intervenciones/Paginas/2025/20250113-acto-vivienda.aspx

...And because two thirds of the residential rental homes in our country today belong to small owners, to working middle-class people, who have worked hard to be able to pay for them and to earn some extra money from them.
The housing market in Spain: recent developments, risks and accessibility problems
Banco de España, Annual Report 2023
https://www.bde.es/f/webbe/SES/Secciones/Publicaciones/PublicacionesAnuales/InformesAnuales/23/Fich/InfAnual_2023_Cap4.pdf

Distribution of ownership in the rental market. The residential rental market in Spain is characterised by the prevalence of individual landlords and small owners, against the limited weight of legal entities and large holders. Specifically, principal homes rented at market prices that are owned by private legal entities are estimated at around 8% of the total, versus 92% owned by individuals. The recent surge of investment in rental housing by individuals helps explain this market structure. In particular, households increased their holdings of rental housing by an annual average of more than 100,000 additional units between 2012 and 2022. In addition, it is estimated that rented homes in the common fiscal territory whose owners are individuals holding ownership or usufruct of more than 10 homes would represent at most 7% of the market rental stock. At the same time, the weight of social rental is very small, with an estimated stock of around 300,000 such units (1.5% of principal homes).
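As a back-of-envelope check on those figures, here is a minimal sketch in Python. The inputs are the percentages and the 300,000-unit figure quoted above; the derived totals are my own rough inference, not numbers the report states directly.

Code:
# Back-of-envelope check of the Banco de España figures quoted above.
# Inputs are the quoted shares; derived totals are inferred, not stated.

social_rental_units = 300_000          # social rental stock (quoted)
social_rental_share = 0.015            # 1.5% of principal homes (quoted)

# Implied total stock of principal homes:
principal_homes = social_rental_units / social_rental_share
print(f"Implied principal homes: {principal_homes:,.0f}")   # ~20,000,000

corporate_share = 0.08     # legal entities' share of market rentals (quoted)
individual_share = 0.92    # individuals' share (quoted)
big_individual_max = 0.07  # individuals with >10 homes, at most (quoted)

# Whatever the market rental stock is, the quoted split implies that
# "large holders" (corporates plus >10-home individuals) cap out around:
print(f"Max 'large holder' share: {corporate_share + big_individual_max:.0%}")  # 15%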
The rate of households overstretching to pay rent is 50% higher in Spain than in Europe
https://www.20minutos.es/lainformacion/economia-y-finanzas/tasa-hogares-sobreesfuerzo-alquiler-alta-espana-europa-5647051/
24.10.2024

Families that have to devote more than 40% of their income to the monthly rent account for 30.6% of all those paying market-price rents, versus an average of 20% in the European Union.

Between the outbreak of the financial crisis in 2007 and the end of last year, 3.3 million more people joined the rental market in Spain, according to the National Statistics Institute (INE). In total there are now more than 9 million people, and a very high percentage live in market-price rentals. This group, which over the last decade has watched prices soar far beyond wage growth, faces greater economic difficulties related to housing costs than in other European countries. The overburden rate, the percentage of households that must devote more than 40% of their income to housing and basic utilities (water, energy and community fees), reaches 30.6% of the total among market-price rentals, whereas in the European Union that percentage barely exceeds 20%. The picture is notably different for below-market rents, where the overburden rate is a few tenths of a point higher at the European level than at the national one; in both cases it slightly exceeds 10%, according to data from Eurostat, the European statistics office. The Banco de España (BdE) has been one of the bodies warning most insistently in recent months about the heavy burden associated with rent payments and its consequences. Since 2022, in the wake of the rapid interest-rate hikes that the European Central Bank approved to control inflation, and the consequent take-off of Euribor and of mortgage costs (whether or not referenced to that index), many households were priced out of the ownership market and forced to rent, which, together with migration flows, has driven demand up against a shrinking supply. In its report 'The residential rental market in Spain: recent developments, determinants and effort indicators', published last week, the institution noted that those hit hardest by this overburden are the young, foreigners and precarious households in large or tourist cities. It also considered that "public intervention" in this market would be justified, provided it served to reduce the "heavy burden associated with renting" and thereby avoid other adverse economic and social effects.
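For clarity on the metric the article leans on, here is a minimal sketch of the Eurostat-style housing overburden rate: the share of households spending more than 40% of disposable income on housing (rent plus basic utilities). The sample households below are hypothetical, purely for illustration.

Code:
# Eurostat-style housing cost overburden rate, as described above.
# Sample data is made up; only the 40% threshold comes from the article.

THRESHOLD = 0.40

def overburden_rate(households):
    """households: list of (monthly disposable income, monthly housing cost)."""
    burdened = sum(1 for income, cost in households if cost / income > THRESHOLD)
    return burdened / len(households)

sample = [
    (1_400, 700),   # 50% of income on rent + utilities -> overburdened
    (2_200, 800),   # ~36% -> not overburdened
    (1_800, 760),   # ~42% -> overburdened
    (3_000, 900),   # 30% -> not overburdened
]
print(f"Overburden rate: {overburden_rate(sample):.1%}")  # 50.0%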
Here's what the sellside is saying about DeepSeek
AI's cold war is now a price war

For anyone wanting to train an LLM on analyst responses to DeepSeek, the Temu of ChatGPTs, this post is a one-stop shop. We've grabbed all relevant sellside emails in our inbox and copy-pasted them with minimal intervention.

Backed by the High-Flyer VC fund, DeepSeek is a two-year-old, Hangzhou-based spinout of a Zhejiang University startup for trading equities by machine learning. Its stated goal is to make an artificial general intelligence for the fun of it, not for the money. There's a good interview on ChinaTalk with founder Liang Wenfeng, and mainFT has this excellent overview from our colleagues Eleanor Olcott and Zijing Wu.

Mizuho's Jordan Rochester takes up the story . . .

Quote:
On Jan 20, [DeepSeek] released an open-source model (DeepSeek-R1) that beats the industry's leading models on some math and reasoning benchmarks, including capability, cost, openness etc. The DeepSeek app has topped the free app download rankings in Apple's app stores in China and the United States, surpassing ChatGPT in the U.S. download list.

What really stood out? DeepSeek said it took 2 months and less than $6m to develop the model - building on already existing technology and leveraging existing models. In comparison, OpenAI is spending more than $5 billion a year. Apparently DeepSeek bought 10,000 NVIDIA chips, whereas hyperscalers have bought many multiples of this figure. It fundamentally breaks the AI capex narrative if true.

Sounds bad, but why? Here's Jefferies' Graham Hunt et al:

Quote:
With DeepSeek delivering performance comparable to GPT-4o for a fraction of the computing power, there are potential negative implications for the builders, as pressure on AI players to justify ever-increasing capex plans could ultimately lead to a lower trajectory for data center revenue and profit growth.

The DeepSeek R1 model is free to play with here, and does all the usual stuff like summarising research papers in iambic pentameter and getting logic problems wrong. The R1-Zero model, DeepSeek says, was trained entirely without supervised fine-tuning.

Here's Damindu Jayaweera and team at Peel Hunt with more detail.

Quote:
Firstly, it was trained in under 3 million GPU hours, which equates to just over $5m training cost. For context, analysts estimate Meta's last major AI model cost $60-70m to train. Secondly, we have seen people running the full DeepSeek model on commodity Mac hardware in a usable manner, confirming its inferencing efficiency (using as opposed to training). We believe it will not be long before we see Raspberry Pi units running cut-down versions of DeepSeek. This efficiency translates into hosted versions of this model costing just 5% of the equivalent OpenAI price. Lastly, it is being released under the MIT License, a permissive software license that allows near-unlimited freedoms, including modifying it for proprietary commercial use.

DeepSeek's not an unanticipated threat to the OpenAI Industrial Complex. Even The Economist had spotted it months ago, and industry mags like SemiAnalysis have been talking for ages about the likelihood of China commoditising AI.

That might be what's happening here, or might not.
Here's Joshua Meyers, a specialist sales person at JPMorgan:

Quote:
It's unclear to what extent DeepSeek is leveraging High-Flyer's ~50k Hopper GPUs (similar in size to the cluster on which OpenAI is believed to be training GPT-5), but what seems likely is that they're dramatically reducing costs (inference costs for their V2 model, for example, are claimed to be 1/7 that of GPT-4 Turbo). Their subversive (though not new) claim – that started to hit the US AI names this week – is that "more investments do not equal more innovation." Liang: "Right now I don't see any new approaches, but big firms do not have a clear upper hand. Big firms have existing customers, but their cash-flow businesses are also their burden, and this makes them vulnerable to disruption at any time." And when asked about the fact that GPT-5 has still not been released: "OpenAI is not a god, they won't necessarily always be at the forefront."

Best for now that no-one tells Altman. Back to Mizuho:

Quote:
Why does this come at a painful moment? This is happening right after a Texas Hold'em 'All In' push of the chips: the Stargate announcement (~$500B by 2028E), Meta officially taking capex up to the range of $60-$65B to scale up Llama, and of course MSFT's $80B announcement . . . The markets were literally trying to model just Stargate's stated demand for ~2mn units from NVDA when their total production is only 6mn . . . (Nvidia's European trading is down 9% this morning, Softbank was down 7%). Markets are now wondering whether this is an AI-bubble-popping moment or not (i.e. a dot-com bubble for Cisco). Nvidia is the largest individual company weight in the S&P 500 at 7%.

And Jefferies again.

Quote:
1) We see at least two potential industry strategies. The emergence of more efficient training models out of China, which has been driven to innovate due to chip supply constraints, is likely to further intensify the race for AI dominance between the US and China. The key question for the data center builders is whether it continues to be a "Build at all Costs" strategy with accelerated model improvements, or whether focus now shifts towards higher capital efficiency, putting pressure on power demand and capex budgets from the major AI players. Near term the market will assume the latter.

2) Derating risk near term, earnings less impacted. Although data-center-exposed names are vulnerable to derating on sentiment, there is no immediate impact on earnings for our coverage. Any changes to capex plans apply with a lag effect given duration (>12M) and exposure in orderbooks (~10% for HOT). We see limited risk of alterations or cancellations to existing orders and expect at this stage a shift in expectations to higher ROI on existing investments driven by more efficient models. Overall, we remain bullish on the sector, where scale leaders benefit from a widening moat and higher pricing power.

Though it's the Chinese, so people are suspicious. Here's Citi's Atif Malik:

Quote:
While DeepSeek's achievement could be groundbreaking, we question the notion that its feats were done without the use of advanced GPUs to fine-tune it and/or build the underlying LLMs the final model is based on through the distillation technique. While the dominance of the US companies on the most advanced AI models could be potentially challenged, that said, we estimate that in an inevitably more restrictive environment, US access to more advanced chips is an advantage.
Thus, we don't expect leading AI companies would move away from more advanced GPUs, which provide more attractive $/TFLOPs at scale. We see the recent AI capex announcements like Stargate as a nod to the need for advanced chips.

People, such as Bernstein's Stacy A Rasgon and team, also question the estimates for cost and efficiency. The Bernstein team says today's panic is about a "fundamental misunderstanding over the $5mn number" and the way in which DeepSeek has deployed smaller models distilled from the full-fat one, R1.

"It seems categorically false that 'China duplicated OpenAI for $5M' and we don't think it really bears further discussion," Bernstein says:

Quote:
Did DeepSeek really "build OpenAI for $5M?" Of course not... There are actually two model families in discussion. The first family is DeepSeek-V3, a Mixture-of-Experts (MoE) large language model which, through a number of optimizations and clever techniques, can provide similar or better performance vs other large foundational models but requires a small fraction of the compute resources to train. DeepSeek actually used a cluster of 2,048 NVIDIA H800 GPUs training for ~2 months (a total of ~2.7M GPU hours for pre-training and ~2.8M GPU hours including post-training). The oft-quoted "$5M" number is calculated by assuming a $2/GPU-hour rental price for this infrastructure, which is fine, but not really what they did, and does not include all the other costs associated with prior research and experiments on architectures, algorithms, or data. The second family is DeepSeek R1, which uses Reinforcement Learning (RL) and other innovations applied to the V3 base model to greatly improve performance in reasoning, competing favorably with OpenAI's o1 reasoning model and others (it is this model that seems to be causing most of the angst). DeepSeek's R1 paper did not quantify the additional resources that were required to develop the R1 model (presumably they were substantial as well).
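Pausing the Bernstein excerpt for a second: the "$5M" arithmetic it describes is easy to reproduce. A minimal sketch, using only the GPU-hour totals and rental price quoted above; nothing here is DeepSeek's actual accounting.

Code:
# Reproducing the oft-quoted "$5M" training cost for DeepSeek-V3, as
# described in the Bernstein excerpt. Inputs are the quoted figures.

gpu_hours_pretrain = 2.7e6     # ~2.7M GPU hours, pre-training only
gpu_hours_total = 2.8e6        # ~2.8M GPU hours incl. post-training
rental_price_per_hour = 2.0    # assumed $2/GPU-hour rental price

print(f"Pre-training only:   ${gpu_hours_pretrain * rental_price_per_hour / 1e6:.1f}M")
print(f"Incl. post-training: ${gpu_hours_total * rental_price_per_hour / 1e6:.1f}M")
# -> $5.4M and $5.6M: the headline number, excluding prior research,
#    experiments, data, and the (unquantified) cost of developing R1.

# Sanity check against the cluster description: 2,048 H800s for ~2 months
cluster_hours = 2_048 * 24 * 61
print(f"Cluster capacity over ~2 months: {cluster_hours / 1e6:.1f}M GPU hours")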
Quote (continued):
[ . . . ] Should the relative efficiency of V3 be surprising? As an MoE model we don't really think so... The point of the mixture-of-experts (MoE) architecture is to significantly reduce the cost to train and run, given that only a portion of the parameter set is active at any one time (for example, when training V3 only 37B out of 671B parameters get updated for any one token, vs dense models where all parameters get updated). A survey of other MoE comparisons suggests typical efficiencies on the order of 3-7x vs similarly-sized dense models of similar performance; V3 looks even better than this (>10x), likely given some of the other innovations the company has brought to bear, but the idea that this is something completely revolutionary seems a bit overblown, and not really worthy of the hysteria that has taken over the Twitterverse over the last several days.

[Chart: © Bernstein]
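That active-parameter point is the crux of the MoE efficiency argument, so here is a toy sketch of expert routing - hypothetical layer sizes and a plain top-k softmax router, not DeepSeek's actual architecture - showing why compute scales with the active fraction (37B of 671B parameters, about 5.5%, in V3's case) rather than the total parameter count.

Code:
import numpy as np

# Toy mixture-of-experts layer: a router picks k of n experts per token,
# so only a fraction of the parameters does any work. Sizes are made up
# for illustration; this is not DeepSeek's architecture.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """x: (d_model,) one token. Runs only the top-k selected experts."""
    scores = x @ router_w                      # router logits, one per expert
    top = np.argsort(scores)[-top_k:]          # indices of the k best experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax weights
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d_model)
out = moe_forward(token)
print(f"Output shape: {out.shape}")

# Active fraction mirrors the quote's 37B-of-671B point:
print(f"Active experts per token: {top_k}/{n_experts} "
      f"({top_k / n_experts:.0%} of expert parameters)")
print(f"DeepSeek-V3 quoted ratio: {37 / 671:.1%} of parameters per token")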
Nevertheless, talk of a price war is enough to knock a hole in the Mag7's already sketchy ROI.

"It is absolutely true that DeepSeek's pricing blows away anything from the competition, with the company pricing their models anywhere from 20-40x cheaper than equivalent models from OpenAI," Bernstein says.

Quote:
Of course, we do not know DeepSeek's economics around these (and the models themselves are open and available to anyone that wants to work with them, for free), but the whole thing brings up some very interesting questions about the role and viability of proprietary vs open-source efforts that are probably worth doing more work on . . .

Is any of this a good reason for a wider market selloff? On sentiment, maybe.

Per SocGen, Nvidia plus Microsoft, Alphabet, Amazon and Meta, its top-four customers, "have contributed approximately 700 points to the S&P 500 over the last 2 years. In other words, the S&P 500 excluding the Mag-5s would be 12 per cent lower today. Nvidia alone has contributed 4 per cent to the performance of the S&P 500. This is what we find to be the 'American exceptionalism' premium on the S&P 500."

[Chart: © SocGen]

Deutsche Bank's Jim Reid narrows it down to Nvidia alone, and its stunningly quick transformation from a maker of video-game graphics cards to the turboprop of economic prosperity:

Quote:
It's gone from LTM earnings of around $4bn two years ago to around $63bn in the last quarterly release. For context, this is around half the total earnings made by listed stocks in each of the UK, Germany and France over the last 12 months. The forecasts are for Nvidia to continue to see significant earnings growth.

So this is a company that has gone from relative earnings obscurity to one of the most profitable in the world inside two years, and the largest company in the world as of Friday night. The problem is that the AI industry is embryonic. And it's almost impossible to know how it will develop or what competition current winners might face, even if you fully believe in its potential to drive future productivity. The stratospheric rise of DeepSeek reminds us of this.

Hang on though. Cheap Chinese AI means more productivity benefits, lower build costs and an acceleration towards the Andreessen Theory of Cornucopia, so maybe . . . good news in the long run? JPMorgan's Meyers again:

Quote:
This strikes me as not about the end of scaling, or about there not being a need for more compute, or that the one who puts in the most capital won't still win (remember, the other big thing that happened yesterday was that Mark Zuckerberg boosted AI capex materially). Rather, it seems to be about export bans forcing competitors across the Pacific to drive efficiency: "DeepSeek V2 was able to achieve incredible training efficiency with better model performance than other open models at 1/5th the compute of Meta's Llama 3 70B. For those keeping track, DeepSeek V2 training required 1/20th the flops of GPT-4 while not being so far off in performance." If DeepSeek can reduce the cost of inference, then others will have to as well, and demand will hopefully more than make up for that over time.

That's also the view of semis analyst Tetsuya Wadaki at Morgan Stanley, the most AI-enthusiastic of the big banks:

Quote:
We have not confirmed the veracity of these reports, but if they are accurate, and advanced LLMs are indeed able to be developed for a fraction of previous investment, we could see generative AI eventually run on smaller and smaller computers (downsizing from supercomputers to workstations, office computers, and finally personal computers), and the [semiconductor production equipment] industry could benefit from the accompanying increase in demand for related products (chips and SPE) as demand for generative AI spreads.

And Peel Hunt again:

Quote:
We believe the impact of those advantages will be twofold. In the medium to longer term, we expect LLM infrastructure to go the way of telco infrastructure and become a 'commodity technology'. The financial impact on those deploying AI capex today depends on regulatory interference - which had a major impact on telcos. If we think of AI as another 'tech infrastructure layer', like the internet, mobile, and the cloud, in theory the beneficiaries should be companies that leverage that infrastructure. While we think of Amazon, Google, and Microsoft as cloud infrastructure, this emerged out of the need to support their existing business models: e-commerce, advertising and information-worker software. The LLM infrastructure is different in that, like the railroads and telco infrastructure, it is being built ahead of true product/market fit.

And Bernstein:

Quote:
If we acknowledge that DeepSeek may have reduced the costs of achieving equivalent model performance by, say, 10x, we also note that current model cost trajectories are increasing by about that much every year anyway (the infamous "scaling laws..."), which can't continue forever. In that context, we NEED innovations like this (MoE, distillation, mixed precision etc) if AI is to continue progressing. And for those looking for AI adoption, as semi analysts we are firm believers in the Jevons paradox (i.e. that efficiency gains generate a net increase in demand), and believe any new compute capacity unlocked is far more likely to get absorbed due to usage and demand increase vs impacting the long-term spending outlook at this point, as we do not believe compute needs are anywhere close to reaching their limit in AI. It also seems like a stretch to think the innovations being deployed by DeepSeek are completely unknown by the vast number of top-tier AI researchers at the world's other numerous AI labs (frankly, we don't know what the large closed labs have been using to develop and deploy their own models, but we just can't believe that they have not considered or even perhaps used similar strategies themselves).

To that end, investments are still accelerating. Right on top of all the DeepSeek newsflow last week we got META substantially increasing their capex for the year. We got the Stargate announcement. And China announced a trillion-yuan (~$140B) AI spending plan.
We are still going to need, and get, a lot of chips...

As for markets strategy, here's a taster. US stocks have not begun trading at the time of writing, but futures on the big indices and ETFs are indicating a grisly opening, as Bespoke Investment Group notes:

Quote:
If the reports of DeepSeek's success at such low costs are true, and this is a big if as there is still a lot we don't know in terms of how it was developed, it would pose problems for some of the biggest AI winners over the last two years. As we type this, the S&P 500 (proxied by SPY) is trading down about 2.25%, which would be the largest downside gap since early August and the 60th largest downside gap in the ETF's history dating back to 1993.

For the Nasdaq 100 (QQQ), the declines are even steeper. With the ETF poised to gap down 3.8% at the open, it would be QQQ's largest downside gap since early August and the 20th largest downside gap since its inception in 1999. As shown in the chart below, before last August's downside gap, the last time QQQ gapped down as much as it is on pace to today was back in September 2020.

Nomura's Charlie McElligott is also worried that this could escalate into a "monster de-risking" today. We've kept his italics and bolding below to preserve his unique voice:

Quote:
I'm not gonna try to play Semi- / AI- expert on the long-term viability and potential AI paradigm shift here… but there are heavy "modern market structure" and mechanical flow implications here for the Stock Market… and the "US Exceptionalism" trade positioning, as "innovation" is a core component of that view…

But the larger issue is that Megacap Tech IS the US Equities Market, and anybody with a mandate to own Equities is by default stuffed on these names in order to survive recent years, where the Mag8 are 35% of SPX and 49% of NDX index weights, respectively.

Additionally, we've seen substantial "Spot Up, Vol Up" upside chasing into Calls (e.g. 95%ile+ Call Skews across Semi names) and general demand for Calls in MegaCap Tech / AI names again in recent weeks… which can then now "collapse under the weight of their own Delta" on the Spot pullbacks.

And when you add in the massive allocation that these "Tech Animal Spirits" names and "concentric themes" hold within the Leveraged ETF product universe at record AUM, there is a potential monster "de-risking" flow today, as 1) Options see Calls go out of the money and Dealers adjust hedges / Puts are bought with chunky NEGATIVE $Delta flow... and as 2) Leveraged ETFs will sell huge $notional to rebalance the products vs these single-name moves (we estimate using pre-mkt prices at -$22B)… which will inherently then "feedback" with Discretionary risk-management and potential front-running of those flows.

The corollary to the budding equity market puke is that Treasuries are rallying hard, with the 10-year US government bond yield now down to 4.53 per cent, the lowest in over a month.

Ian Lyngen, a rates analyst at BMO, points out that this matters for Treasury-market-specific reasons as well:

Quote:
The Treasury market rally itself is notable for several reasons. First, the outright magnitude of the drop in 10-year yields, at >12 bp, indicates that investors are anxious about the potential ramifications of a wholesale reevaluation of the tech sector – and potentially the equity market more broadly.

Moreover, a challenge of 4.50% in 10s has ramifications from a technical perspective, and we'll note that 4.488% represents two standard deviations below the 20-day moving average. Said differently, entering a sub-4.50% trading range for 10s will represent a material challenge to the prior bond-bearish narrative that has been in place since the November election. The overnight move has created an opening gap in 10s that comes in at 4.599% to 4.621% – the existence of which could persist for longer than might typically be the case in the event that the current stock market selloff has further to run.

Monday's departure point will make this week particularly relevant from a technical perspective insofar as the weekly close could readily recast near-term expectations. Today's Treasury supply (2s and 5s) will undoubtedly benefit from the reversal in risk assets, and the absence of meaningful economic data ahead of the Fed suggests that any move will have plenty of room to run. We're also anticipating a dovish pause, a dynamic that should reinforce any further tone-shift favoring lower yields from here.
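To make Lyngen's technical reference concrete: "two standard deviations below the 20-day moving average" is a Bollinger-band-style level. A minimal sketch of the calculation; the yield series is a made-up placeholder, only the method is what the quote describes.

Code:
import numpy as np

# Lyngen's "two standard deviations below the 20-day moving-average".
# Yields below are hypothetical; only the calculation follows the quote.

rng = np.random.default_rng(1)
# Hypothetical 20 daily closes for the 10-year yield, drifting near 4.6%
yields = 4.60 + np.cumsum(rng.normal(0, 0.02, size=20))

ma20 = yields.mean()           # 20-day moving average
sigma = yields.std(ddof=1)     # sample standard deviation of the window
lower_band = ma20 - 2 * sigma  # two standard deviations below the average

print(f"20-day MA:          {ma20:.3f}%")
print(f"2-sigma lower band: {lower_band:.3f}%")
# A close below the lower band is the "technical challenge" Lyngen flags,
# which for the actual 10-year series sat at 4.488%.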
For the rates/FX angle, back to Mizuho:

Quote:
We long expected Trump's inauguration to be the peak of tariff fears and a "buy the rumour, sell the fact" moment for trades, but this DeepSeek story provides a more fundamental equity flow story that in the short term will dominate any macro argument.

I wouldn't pay US 2s until we're below 4.1% and closer to 4% (currently 4.17%); markets are likely to continue to buy duration as they rotate out of stocks, with 4.4% for US 10s in mind. If we get down to 4.2% it's due to a) DeepSeek's revelations being proven completely true and b) US data surprises continuing to turn lower.

An equity market selloff like this increases the probability of 1) a more dovish FOMC meeting this week - March is pricing 10bps for a cut; we expect 25bps - and 2) Trump perhaps holding off from more aggressive tariff actions at the end of this week on Canada/Mexico and waiting for the more likely timing of April's deadlines.

When equity markets sell off like this, the first thing to happen is a position squeeze, with margin calls forcing profit-taking on winning trades elsewhere, making consensus calls unravel very quickly. Long USD positioning has climbed to $33.8bn in the futures market, and whilst it has been higher in previous USD cycles (up to $48bn in 2015/16), long USD is the obvious position-unwind candidate, with short EUR, GBP, CHF and JPY the beneficiaries.

This could open us up to 1.07 in EURUSD. From this level of EUR short positioning it would take another 2% rally before the market's net short position becomes flat.

We had expected the end of US exceptionalism to be a Q2 story. But if we combine a) US equity market outflows with b) seasonal quirks in European PMIs likely to see better growth data in Q1 and c) not enough priced for a Fed cut in March - perhaps that time is now, and we all need to revise up Q2 EUR/USD forecasts.
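A quick aside on "March is pricing 10bps for a cut; we expect 25bps": the market-implied probability of a cut is simply the priced bps over the assumed move size. A sketch of that standard arithmetic, using the figures from the quote:

Code:
# Implied probability of a Fed cut from market pricing, per the Mizuho
# quote above: 10bp priced against an expected 25bp move.

priced_bp = 10      # bps of easing priced for the March meeting (quoted)
cut_size_bp = 25    # standard cut increment, Mizuho's expected move

implied_prob = priced_bp / cut_size_bp
print(f"Market-implied odds of a 25bp March cut: {implied_prob:.0%}")  # 40%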
Article about peak workers. Benzino, you're going to love it.
https://www.larazon.es/economia/envejecimiento-generacion-babyboomer-hundira-mercado-laboral-10-anos_202501276796d75c1236b6000167f2a1.html?outputType=amp#amp_tf=De%20%251%24s&aoh=17379920672207&csi=0&referrer=https%3A%2F%2Fwww.google.com
The organisation considers it essential that both the public administrations and the organisations representing self-employed workers "undertake, without delay, a national generational-replacement plan".
The sudden arrival of Chinese AI DeepSeek sets off an earthquake in global stock markets
Fear that the US could lose its dominance of the chip market triggers sharp falls of more than 3% on the Nasdaq, with Nvidia and ASML plunging more than 10% and 7%, respectively
Quote from: tomasjos on January 27, 2025, 16:57:32
Article about peak workers. Benzino, you're going to love it.
https://www.larazon.es/economia/envejecimiento-generacion-babyboomer-hundira-mercado-laboral-10-anos_202501276796d75c1236b6000167f2a1.html?outputType=amp#amp_tf=De%20%251%24s&aoh=17379920672207&csi=0&referrer=https%3A%2F%2Fwww.google.com

...and puts pensions at risk. At least there's no reference to bringing in millions of immigrants.

Quote:
The organisation considers it essential that both the public administrations and the organisations representing self-employed workers "undertake, without delay, a national generational-replacement plan".

So now they're going to do what they've spent four decades dodging... sure, sure, of course, of course.
[News from just now: "Nvidia loses 465 billion: the biggest drop in history on the arrival of China's DeepSeek". It was obvious, ladies and gentlemen. Who comes up with naming a company after a deadly sin? (Nvidia: "envidia", Spanish for envy.) That's how stupid the dark enlightened are. To cap it all, bitcoin is now trading below 100 thousand dollars and, on top of that, Velika Novosilka has fallen. We may not even make it to February 28.]
I always thought the rollback of labour rights came with a mission: to bring about the great wage squeeze within a logic of managed decline. Once that was achieved, they should have gone back to rigid hiring, binding workers so that the commitment is mutual, undoing the labour reforms. But with the candy already in hand, they decided they were just fine as they were.