vmg12 10 hours ago

Here is a charitable perspective on what's happening:

- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.

- Nvidia instead invests in other companies that use its GPUs, structuring the deals so the money must be spent on Nvidia products.

- This accelerates the growth of these companies, drives further lock-in to Nvidia's platform, and gives Nvidia an equity stake in them.

- Since growth for these companies is accelerated, future revenue is pulled forward for Nvidia, and because these investments must be spent on Nvidia GPUs, lock-in to the platform deepens further.

- Nvidia also benefits from growth due to the equity they own.

This all depends on token economics being, or becoming, profitable. Everything seems to indicate that once the models are trained they are extremely profitable, and that training is the big money drain. If these models become massively profitable (or at least break even), then I don't see how this doesn't benefit Nvidia massively.

  • moogly 6 hours ago

    > Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.

    Here's an idea: they could make actual GPUs used for games affordable again, and not have Jensen Huang lie on stage about their performance to justify their astronomical prices. Sure, companies might want to buy them for ML/AI and crash the market again but I'm sure a company of their caliber could solve that if they _really_ wanted to.

    • Lord-Jobo 6 hours ago

      I also just don’t understand, as someone with no business experience, how they aren’t just pouring all of that money into enhancing their production capacity. That’s very clearly their bottleneck here.

      Yes, I’m certain they are spending an astronomical amount on that already, but why not more? Surely paying more money for construction of more facilities still nets gain even if you run into diminishing returns?

      Instead they set up this whacko tax laundering scheme? Just seems like more corporate pocket filling to me, an idiot with no business knowledge.

      • lukev 6 hours ago

        The bottleneck is TSMC, who also make chips for almost every other hardware vendor.

        TSMC is indeed increasing their production capability as fast as possible, but it's not easy... chip foundries are extremely expensive, complex, and take serious expertise to operate.

      • brookst 6 hours ago

        It’s called seeding the market. If they can accelerate the growth of potential customers, it will be more profitable than just increasing production to serve existing customers.

        Think of exponential growth — would you rather increase the base or the exponent?
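
        The base-vs-exponent point can be made concrete with toy numbers (all hypothetical, not Nvidia figures): a one-off bump to today's sales loses to a higher compounding growth rate over a decade.

```python
# Toy model (hypothetical numbers): revenue after `years` of
# demand compounding at a fixed annual rate.
def revenue(base: float, rate: float, years: int) -> float:
    return base * rate ** years

# Option A: sell 20% more hardware today; demand grows 10%/yr.
option_a = revenue(base=120, rate=1.10, years=10)

# Option B: same sales today, but seeded customers grow 15%/yr.
option_b = revenue(base=100, rate=1.15, years=10)

# After a decade, the higher growth rate wins: ~405 vs ~311.
```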

      • sleight42 6 hours ago

        Hedging their bets against a potential sudden downturn in consumption of their product, e.g., an AI bubble exploding? If they invest heavily in production capacity only to find that there is not commensurate consumption, then they'll have lost badly.

      • mvdtnz 3 hours ago

        > as someone with no business experience, how they aren’t just pouring all of that money into enhancing their production capacity. That’s very clearly their bottleneck here.

        They know this madness can't go on forever. The last thing they need is to be left with billions of dollars of unused capacity when the bottom falls out of this very stupid bubble.

    • HDThoreaun 6 hours ago

      Why would they want to do that? The only sector that matters to Nvidia is datacenter; it's where 90%+ of their profits are. Making their consumer sector even less profitable just seems like a waste of time.

      • moogly 3 hours ago

        How about positive mindshare? Regular people not growing up absolutely hating nvidia's guts and only begrudgingly buying their products. Also ensuring that a pretty big industry won't die from becoming too expensive.

        Plus, diversification is good for when the bubble inevitably bursts.

        But that's long-term thinking and we can't have that. People give Huang credit for having had a long-term vision on AI, but it feels like he definitely has blinders on right now.

        • senordevnyc 2 hours ago

          The consumer gaming card market is minuscule in comparison to their primary market now, to the point where worrying about diversifying there probably doesn’t make sense. Nor does it really matter whether consumer gamers hate them. That is likely to have zero effect on their core customer now.

          • Eisenstein 2 hours ago

            Underestimating compounding and secondary effects, especially while rationalizing the abandonment of your core market and capability, is one of the most famous ways that big companies provide evidence of their downward spiral. I can feel the MBA energy from here.

            • senordevnyc 2 hours ago

              Can you name any companies that suffered by switching focus away from one market where they dominate in order to also dominate a market that is 10x the size of the first market already, and growing faster?

              • Eisenstein 14 minutes ago

                Every specific situation is different, but the pattern I mentioned is easy to find. Here are three examples: RCA, GE, HP.

  • mikewarot 42 minutes ago

    >This is all dependent on token economics being or becoming profitable

    What is the actual value of a token? What would it be worth if it were generated by a human expert in the given field? That should set an upper bound for now.

  • tqian an hour ago

    If models are profitable once trained, isn't it weird that ChatGPT and Claude have $200 tiers that still have usage limits?

  • pols45 10 hours ago

    Yup. Not just Nvidia. Just look at the quarterly results reported by Amazon, Google, Meta, Microsoft, and Apple. Each one is reporting revenues never before seen in history. If you make $100 billion a quarter, you have to spend it on something.

    These guys are running hyper-optimized cash extraction mega-machines. There is no comparison to previous bubbles, because no such companies ever existed in the past.

    • solarwindy 10 hours ago

      100 billion a quarter is Alphabet, right? Given how much click fraud there is, and that every org and business under the sun is held to ransom to feature on the SERP for their own name even — it’s tempting to say Google’s become a private tax on everything.

      • daedrdev 8 hours ago

        No, Apple also has 100 billion dollars in revenue despite floundering AI and running a very hardware dependent business.

    • dingaling 4 hours ago

      What's shocking is the gulf between those companies and corporate 'normality'.

      Eastern Airways, a UK airline, has just gone bust due to accumulated debts of £26 million. That's not even a rounding error for Google, yet was enough to put a 47-year-old company into bankruptcy and its staff out of work.

      I think the only historical parallel to this disparity was the era of the East India Company.

    • trollbridge 10 hours ago

      Odd how they are simultaneously having large layoffs even as they report record revenues.

      The question is where the profits are.

    • skywhopper 10 hours ago

      Companies this profitable are the best possible evidence of the need for drastic antitrust intervention. The lack of competition and regulation is leading to a massive drain on every other sector.

      • marbro 7 hours ago

        This bubble is caused by excess competition. There are 4 large companies who believe that a large new market is being created so each is investing large amounts without any evidence that there will be a single winner that dominates the future market. None of these companies has anything remotely resembling a monopoly except for Amazon in online retail.

        • griffzhowl 5 hours ago

          Google: search, chrome, youtube

          Microsoft: desktop software

          Meta: social media

          Maybe on some technical definitions of "monopoly" these aren't monopolies, but nothing remotely resembling a monopoly? come on maan

    • tigershark 7 hours ago

      How much more was the USD worth at the beginning of the year?

  • PeterStuer 6 hours ago

    It isn't just Nvidia though.

  • Eisenstein 10 hours ago

    Your conclusion about training being the cost factor that will eventually align with profitability in the inference phases relies on training new models not being an endless arms race.

    • vmg12 10 hours ago

      If inference is profitable and training new models really is an endless arms race, that's actually the best outcome for Nvidia specifically.

  • MangoToupe 6 hours ago

    I'm just confused why people think token-based computing is going to be in such demand in the future. It's such a tiny slice of problems worth solving.

    • thorncorona 6 hours ago

      It's like how every big co these days is ML. It will transition to LLMs as well.

      Just give it a few years.

      • brookst 6 hours ago

        Yep. Same vibes as “ha ha who needs internet connected appliances” (pretty much all appliances are internet connected now). And the apocryphal “there is a worldwide market for maybe 5 computers”.

        • baobabKoodaa 2 hours ago

          No-one "needs" or even wants appliances to be connected to the internet. You claim that "pretty much all" appliances are internet connected, while almost none of the appliances in my house are.

  • belter 10 hours ago

    > Everything seems to indicate that once the models are trained, they are extremely profitable

    Some data would reinforce your case. Do you have it?

    Here is my data point: "You Have No Idea How Screwed OpenAI Actually Is" - https://wlockett.medium.com/you-have-no-idea-how-screwed-ope...

    • trollbridge 10 hours ago

      Right. As far as I can tell, OpenAI, Grok, etc sell me tokens at a loss. But I am having a hard time figuring out how to turn tokens into money (i.e. increased productivity). I can justify $40-$200 per developer per month on tokens but not more than that.

      • koolba 10 hours ago

        There are about 5M software devs in the US, so even at $1000/year/person spend, that's only $5B of revenue to go around. There are plenty of other use cases, but focusing on pure tech usage, it's hard to see how the net present value of that equates to multiple trillions of dollars across the ecosystem.
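
        Spelling out that back-of-the-envelope arithmetic (both inputs are the commenter's estimates, not sourced figures):

```python
# 5M US software devs x $1000/year each, per the comment above.
us_software_devs = 5_000_000
spend_per_dev_per_year = 1_000  # dollars

annual_dev_revenue = us_software_devs * spend_per_dev_per_year
# 5_000_000_000, i.e. $5B/year of revenue to go around.
```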

        • treis 9 hours ago

          It's the first new way of interacting with computers since the iPhone. It's going to be massively valuable and OpenAI is essentially guaranteed to be one of the players.

          • red-iron-pine 4 hours ago

            I'm waiting for my Google Glass smart glasses to be useful for anything other than annihilating the privacy of everyone around me.

            Blackberry was a big deal for a while, too

          • _aavaa_ 8 hours ago

            Why is their product not Palm? Or Windows Mobile?

            • treis 7 hours ago

              It's not windows mobile because OpenAI was first and is the clear leader in the market. Windows mobile was late to the party and missed their window.

              Palm is closer but it's a different world. It's established that Internet advertising companies are worth trillions. It's only in retrospect that what Palm could have been is obvious.

              Barring something very unexpected OpenAI is coming out on top. They're prepaying for a good 5-10 years of compute. That means their inference and training for that time are "free" because they've been paid for. They're going to be able to bury their competition in money or buy them out.

              • _aavaa_ 7 hours ago

                Windows mobile by the time it looked like the iPhone was late to the party. But windows had been releasing a mobile os for a long time before that. Microsoft was first, they just didn’t make as good of a product as Apple despite their money.

                OpenAI is also first, but it is absolutely not a given that they are the Apple in this situation. Microsoft too had money to bury the competition, they even staged a fake funeral when they shipped windows phone 7.

                > Barring something very unexpected

                Like the release of an iPhone?

                • treis 5 hours ago

                  Yep. It would have to be something that dramatic to render all the technology and infrastructure OpenAI has obsolete. But if it's anything like massive data training on a huge number of GPUs then OpenAI is one of the winners.

        • HDThoreaun 6 hours ago

          > Theres plenty of other uses cases

          This is where the money is. Anthropic just released Claude for Excel. If it replaces half of the spreadsheet pushers in the country, they're looking at massive revenue. They just started with coding because there's so much training data and the employees know a lot about coding.

      • schwarzrules 10 hours ago

        I'm not trying to be annoying, but surely if you'd justify spending $200/developer/month, you could afford $250/month...

        The reason I wonder is that the same dynamic seems to drive all these deals and valuations. Surely if OpenAI could spend $30 billion on data centers, they could spend $40 billion, right? I'm not exactly sure where the price escalations actually top out.

        • h2zizzle 9 hours ago

          No? That's a 25% expense increase. You just ate the margins on my product/service, and then some.

      • simianwords 10 hours ago

        Why would they sell you at a loss when they have been cutting prices in half every year or so for the last 3 years? People wanted to buy the product at price X in 2023, and the same product now costs 10 times less. Do you think they were always selling at a loss?

    • simianwords 10 hours ago

      Inference cost has been going down for a while now. At what point do you think it will be profitable? When cost goes down by 2x? 5x?

    • logicprog 5 hours ago

      I can't read your hyperbolically titled paywalled medium post, so idk if it has data I'm not aware of or is just rehashing the same stats about OpenAI & co currently losing money (mostly due to training and free users) but here's a non paywalled blog post that I personally found convincing: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

      • ragingregard 3 hours ago

        The above article is not convincing at all.

        Nothing on infra costs, hardware throughput + capacity (accounting for hidden tokens) & depreciation, just a blind faith that pricing by providers "covers all costs and more". Naive estimate of 1000 tokens per search using some simplistic queries, exactly the kind of usage you don't need or want an LLM for. LLMs excel in complex queries with complex and long output. Doesn't account at all for chain-of-thought (hidden tokens) that count as output tokens by the providers but are not present in the output (surprise).

        Completely skips the fact the vast majority of paid LLM users use fixed subscription pricing precisely because the API pay-per-use version would be multiples more expensive and therefore not economical.

        Moving on.

    • bwfan123 10 hours ago

      This is behind a paywall. Is there a free link you can share ?

jacquesm 11 hours ago

These kinds of deals were very much a la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and they'd spend it again on advertising. The end result is immense revenues without profits.

  • forgetfulness 10 hours ago

    Circular investments were also a compounding factor in the Japanese asset price bubble.

    The practice was known as “zaitech”

    > zaitech - financial engineering

    > In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.

    > At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant") to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.

    > The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.

    https://www.capitalmind.in/insights/lost-decades-japan-1980s...

    • WOTERMEON 6 hours ago

      And I guess none of the people doing this ever paid for it, while the community paid the price for the scam?

  • zemvpferreira 10 hours ago

    There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.

    OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

    • Arkhaine_kupo 10 hours ago

      > they’re using their equity to buy compute that is critical to improving their core technology

      But we know that growth in the models is not exponential; it's much closer to logarithmic. So they spend the same equity each round for smaller and smaller gains.

      The ad spend was a merry-go-round; this is a flywheel where the turning grinds its gears down until they're a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands it's like running a lemonade stand where you reinvest profits into lemons that give out less and less juice.
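
      A toy sketch of that diminishing-returns claim (a hypothetical logarithmic curve, not any real model's scaling data): each doubling of spend buys the same absolute gain, at twice the price of the last step.

```python
import math

# Hypothetical curve: capability grows with the log of spend.
def capability(spend: float) -> float:
    return math.log2(spend)

# Doubling spend five times yields the same +1 gain each step.
gains = [capability(s) for s in (1, 2, 4, 8, 16)]
# gains == [0.0, 1.0, 2.0, 3.0, 4.0]
```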

      • J_McQuade 10 hours ago

        There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.

        • powerhouse007 10 hours ago

          As much as I dislike metaphors, this sounded reasonable to me. Just don't go poking holes in the metaphor instead of the real argument.

          • gilleain 9 hours ago

            Indeed, poking holes in the metaphor is like putting a pin in a balloon, rather than knocking it out of the park by addressing the real argument.

      • DenisM 9 hours ago

        OpenAI invests heavily into integration with other products. If model development stalls they just need to be not worse than other stalled models while taking advantage of brand recognition and momentum to stay ahead in other areas.

        In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns: it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.

        Investors know that, too. A lot of startup business is a popularity contest: number one is more attractive for the sheer fact of being number one. If you're a very rational investor and don't believe in the product, you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.

        • otherjason 9 hours ago

          But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?

          • accrual 9 hours ago

            I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.

            The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.

          • camdenreslink 8 hours ago

            If model development stalls, then the open weight free models will eventually totally catch up. The model itself will become a complete commodity.

            • DenisM 6 hours ago

              It very well might. The ones with most smooth integrations and applications will win.

              This can go either way. For databases open source integration tools prevailed, the commercial activity left hosting those tools.

              But enterprise software integration might end up mostly proprietary.

          • vineyardmike 7 hours ago

            Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.

            • alangibson 7 hours ago

              > the income from selling access to the model has overcome the costs of training it.

              Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.

              • JohnnyMarcone an hour ago

                I've also seen OpenAI and Anthropic say it's pretty close, at least. I'll try to follow up with a source.

        • chii 9 hours ago

          The bigger threat is if their models "stall", while a new up-start discovers an even better model/training method.

          What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.

          • DenisM 7 hours ago

            They can also buy out the startup or match the development by hiring more people. Their comp packages are very competitive.

      • brokencode 8 hours ago

        Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.

        Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.

        The models are already useful for many applications, and they are being integrated into more business and consumer products every day.

        Adoption is what will turn the flywheel into a rocket.

        • mentalgear 7 hours ago

          Well, the thing is that this kind of hardware decreases in value quickly. It's not like the billions spent in past bubbles, like the 2000s, when internet infrastructure (copper, fibre) was built, or the 1950s, when transport infrastructure (roads) was built.

          • brokencode 5 hours ago

            Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.

            Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.

            All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.

            They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.

    • _heimdall 10 hours ago

      I think that, at best, that description boils down to Nvidia, Oracle, etc inventing fake wealth to build something and OpenAI building their own fake wealth by getting to use that new compute effectively for free.

      There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.

      • slashdev 10 hours ago

        The same way the stock market invents a trillion dollars of fake wealth on a strong up day?

        That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.

        It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.

        • _heimdall 8 hours ago

          The stock market isn't inventing money. Those investing in the stock market might be, those buying on leverage for example.

          Capital markets weren't intended for round-trip schemes. If a company hands $100B on paper to another company, which gives it back to the first company, that money never existed; that's capital markets being defrauded rather than working as intended.

        • rapind 9 hours ago

          I think it's worse. The US market feels like a casino to me right now and grift is at an all time high. We're not getting good economic data, it's super unpredictable, and private equity is a disaster waiting to happen IMO. For sure there are smart people able to make money on the gamble, but it's not my jam.

          I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.

          • slashdev 9 hours ago

            More money is lost by bears fighting a bull market, than in actual bear market crashes.

            I’ve made that mistake already.

            I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.

        • teiferer 9 hours ago

          > It seems very bubbly to me, but not dotcom level bubbly.

          Not? Money is being thrown at people without anyone really looking at the details, everyone just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.

          • slashdev 9 hours ago

            Nvidia has a trailing PE of 50. Cisco was at 200 at the height of the dotcom bubble.

            Nowhere near that level. There’s real demand and real revenue this time.

            It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
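
            Flipping those multiples into earnings yields makes the gap concrete (using only the two P/E figures cited in this thread):

```python
# Earnings yield = 1 / (P/E): annual earnings per dollar of stock.
nvidia_pe = 50   # trailing P/E cited above
cisco_pe = 200   # Cisco at the dotcom peak, as cited above

nvidia_yield = 1 / nvidia_pe  # 2% of the share price in earnings
cisco_yield = 1 / cisco_pe    # 0.5%: four times more speculative
```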

            • _heimdall 8 hours ago

              We shouldn't judge whether an indicator is stable or okay only by checking whether it's at its highest historical value.

              PE ratios of 50 make no sense, there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations and this isn't one of them.

              Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?

              • slashdev 8 hours ago

                I’m not asking if it makes sense, I’m simply pointing out that by that measure this is much less extreme than 2000. As I stated, I think we’re in a bubble, so valuations won’t make much sense.

                If you have a better measure, share it. I trust data more than your or my feelings on the matter.

                • teiferer 7 hours ago

                  Unless you have evidence that this measure of yours is a reliable predictor of how big a bubble is, it's on par with my gut feeling.

        • staticautomatic 10 hours ago

          I sell you a cat for $1B and you sell me a dog for $1B and now we’re both billionaires! Whether the capital markets “want” that or not it’s still silly.

          • slashdev 9 hours ago

            If we’re both willing to pay that in a free market economy, then we both leave the deal happy.

            Things are worth what people are willing to pay for them. And that can change over time.

            Sentiment matters more than fundamental value in the short term.

            Long term, on a timescale of a decade or more, it’s different.

            • _heimdall an hour ago

              Both parties would need the $1B prior to the transaction for it to even potentially be meaningful, and still they just traded a cat for a dog and only paid each other on paper.

              That ultimately wouldn't be a big deal if the paper valuation from the trade didn't matter. As it stands, though, both parties could log it as both revenue and expenses, and being public companies their valuation, and debt they can borrow against it, is based in part on revenue numbers. If the number was meaningless who cares, but the numbers aren't meaningless and at such a scale they can impact the entire economy.

            • overfeed 8 hours ago

              > If we’re both willing to pay that in a free market economy

              The thing is: you've paid nothing - all you did was trade pets and played an accounting trick to make them seem more valuable than they are.

            • fireflash38 8 hours ago

              Is that not fraud?

              • _heimdall an hour ago

                Yes, it is fraud. Round-tripping is fraud, whether the government is willing to prosecute it or not.

    • 0xbadcafebee 9 hours ago

      Eventually when ChatGPT replaces Google Search, they will run ads, and so have that whole revenue stream. Still isn't enough money to buy the trillions worth of infrastructure they want, but it might be enough to keep the lights on.

      • schmidtleonard 9 hours ago

        That's an insightful point! Making insightful points like that one is taxing on the brain; you should consider an electrolyte drink like Brawndo™ (it's got what plants crave) to keep yourself sharp!

        Ugh I hate it so much, but you're right, it's coming.

        • upboundspiral 7 hours ago

          One thing I've been contemplating lately is that from a business perspective, when your competitors expand their revenue avenues (generally through ads), you have three options: copy them to catch up, do nothing and perish, or lobby the government for increased consumer protections.

          I've started to wonder why we see so few companies do the last one. It's always "evil company lobbying to harm its customers and the nation." Companies are made up of people, and for myself, if I were at a company I would be pushing to lobby on behalf of consumers, to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.

          Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.

    • some_guy_nobel 8 hours ago

      > OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

      I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.

      • zemvpferreira 7 hours ago

        Happy to have provided. I’m not an AI bull and not in any way invested in the U.S. economy besides a little money in funds, but I do try to think about the war of today vs the war of yesterday. Hopefully that’s always en vogue.

    • bayarearefugee 10 hours ago

      > critical to improving their core technology

      It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.

    • api 10 hours ago

      The assumption is that they have a large moat.

      If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.

      This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.

      OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.

      The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.

      • simgt 10 hours ago

        > This will be true if (as I believe) AI will plateau as we run out of training data.

        Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.

        • delis-thumbs-7e 10 hours ago

          As much as ChatGPT tells me I'm basically a genius for asking it for a good vegan cake recipe, I don't think that provides it any data it doesn't already have that makes it any better. Also, at this point the massive increases in data and computing power seem to bring ever-decreasing improvements (and sometimes outright decline), so it seems we are simply hitting a limit that this kind of architecture can achieve no matter what you throw at it.

          • DenisM 9 hours ago

            ChatGPT chat logs contain massive amount of data teased out of people’s brains. But much of it is lore, biases, misconceptions, memes. There are nuggets of gold in there but it’s not at all clear if there’s a good way to extract them, and until then chat logs will make things worse, not better.

            I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.

            Even if that is solved, models are terrible at long tail.

            • alonmower 2 hours ago

              The necessity of higher quality data from vetted experts is why Mercor just raised at 10B

            • api 8 hours ago

              When I say models will plateau I don't mean there will be no progress. I mean progress will slow down since we'll be scraping the bottom of the barrel for training data. We might never quite run out but once we've sampled every novel, web site, scientific paper, chat log, broadcast transcript, and so on, we've exhausted the rich sources for easy gains.

              • DenisM 8 hours ago

                Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have run out of human knowledge.

                Or not - there is still knowledge in people's heads that has not bled into AI chat.

                One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.

      • delis-thumbs-7e 10 hours ago

        Apple's new M5 can run models over 10B parameters, and if they give their new Studio enough juice next year, it could run maybe a 30B local model. How long until you can run a full GPT-5 on your laptop or home server with a few grand worth of hardware? And what happens to all these GPU farms, since as I understand it they are fairly useless for anything else?

        • treis 9 hours ago

          Very few people own top of the line Macs and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.

          Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.

        • api 8 hours ago

          Quantized, a top-end Mac can run models up to about 200B (with 128GiB of unified RAM). They'll run a little slow but they're usable.

          This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
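          As a rough sanity check (a hedged sketch, not official sizing guidance; the 4-bit quantization and 20% overhead figures are assumptions, and real footprints vary by scheme and context length), the unified-RAM footprint can be estimated from parameter count times bits per weight:

          ```python
          # Rough estimate of RAM needed to hold quantized model weights.
          # Assumes ~4 bits/weight (Q4-style quantization) plus ~20% overhead
          # for KV cache and activations -- both figures are assumptions.

          def weight_ram_gib(params_billion: float,
                             bits_per_weight: float = 4.0,
                             overhead: float = 0.2) -> float:
              """Approximate unified-RAM footprint in GiB."""
              weight_bytes = params_billion * 1e9 * bits_per_weight / 8
              return weight_bytes * (1 + overhead) / 2**30

          for p in (30, 200, 400):
              print(f"{p}B params -> ~{weight_ram_gib(p):.0f} GiB")
          ```

          Under these assumptions a 200B model lands around 110 GiB, which is why it just fits in 128 GiB of unified RAM, while 400B would not.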

    • runarberg 9 hours ago

      Wasn’t there also a bunch of telecom infrastructure created in the dot-com bubble, tangible products created, etc? Things like servers, telephone wires, underwater internet cables, tech-storefronts, internet satellites, etc.

      • spogbiper 9 hours ago

        so much fiber was run that in the US over 90% of it wasn't even used

    • bgwalter 9 hours ago

      Dotcom scams included "vendor financing", where telecom equipment providers invested in their customers who built infrastructure:

      https://time.com/archive/6931645/how-the-once-luminous-lucen...

      The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.

    • moralestapia 10 hours ago

      >they’re using their equity to buy compute that is critical to improving their core technology

      That's only like 1/8th of the flywheel, though.

    • ignoramous 10 hours ago

      > There’s one key difference in my opinion

      The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.

      • afavour 10 hours ago

        > Nvidia's stock rally leaves it with a LOT of room to fund big bets right now

        And then what happens if the stock collapses?

        • mulmen 9 hours ago

          Hence the emphasis on right now.

    • SecretDreams 9 hours ago

      > I have some faith it could go another way.

      I wonder how they felt during the .com era.

    • brazukadev 10 hours ago

      Yes, this time is different, trust big bro sama.

  • boringg 8 hours ago

    The original "Tech" boom was an infrastructure boom by the telecoms funded by leveraged debt. It was an overbuild mismatch with the market timing. If you brought forward the timeline to when that infrastructure was used (late 2000s) you probably would never have had the crash.

    This boom is a data center boom with AI as the software layer/driver. This one potentially has a lot longer to run, even though everyone is freaking out now. If you believe AI is rebuilding compute, then this changes our compute paradigm in the future, so long as we don't get an over-leveraged build-out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.

    The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.

    To be clear circular deal flow is not a good look.

    I can see both sides, bull and bear, at this moment.

    • cman1444 4 hours ago

      One interesting aspect of this is that, with the exception of OpenAI, all of the companies leading this boom generate massive amounts of income from other arms of their businesses. I think this is one reason for the potentially longer run, since they can subsidize AI CapEx with these cash flows for quite a while.

  • TZubiri 10 hours ago

    I'd hazard a guess that there's nothing tech-specific here and that fraudulent schemes are well defined enough for the SEC and commercial courts to take action if something is not kosher

    • datadrivenangel 10 hours ago

      It's usually not actually fraud. It's like Amazon reinvesting back into growth, except the unit economics don't work if everyone cashes out at the same time, and if anyone starts cashing out, the growth stops and everyone rushes to cash out before it's too late.

  • CPLX 11 hours ago

    Exactly, everything old is new again. This was one of the drivers of the original dot-com bubble.

nova22033 10 hours ago

Related

https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter

  • boringg 8 hours ago

    Real question -- how else is OpenAI supposed to fund itself? It has capital requirements that even the most moneyed companies can't meet. So it has to come up with ways to get access to money while de-risking the terms. Not saying the circularity works, but I don't know how else you raise at their scale.

    This money is well beyond VC capability.

    Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.

    • ASinclair 7 hours ago

      They could try selling their services above cost.

      • cush 7 hours ago

        It's highway robbery how cheap tokens are right now. Enjoy the free lunch while it lasts

        • boringg 7 hours ago

          Exactly - its like every VC company in the history of VC subsidized costs for growth. Once those tentacles have latched beware the exit costs though!

          • omnicognate 7 hours ago

            The tentacles seem a bit limp and disorientated on this one. There are lots of them but they just seem to flop wetly against the windows. I hope they're not going to start decomposing and stink the place up.

    • jgalt212 7 hours ago

      If you can only continue to fund a venture using scam-like structures, then maybe it's time to re-evaluate what the goals and value prop of the unfundable venture is.

      • boringg 7 hours ago

        I don't think you understand how ventures are funded.

        • jgalt212 4 hours ago

          Maybe, or maybe you think all venture funding schemes are scams. I am not totally there just yet.

  • FloorEgg 8 hours ago

    Edit: the following is incorrect. I didn't know that the change to IRC § 174 was cancelled this summer.

    ------

    What's crazy is that under the changes to IRC § 174 that took effect in 2022, most software R&D spending is considered a capital investment and can't be immediately expensed. It has to be amortized over 5 years.

    I don't know how that 11.5B number was derived, but I would wager that the net loss on income statement is a lot lower than the net negative cash flow on cash flow statement.

    If that 11.5B is net profit/loss, then whatever portion of the expenses is software R&D could be 5x larger if it weren't for the amortization rule.
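    To illustrate the mechanism (a hypothetical sketch of the since-repealed rule, ignoring §174's half-year convention for simplicity; the dollar figure is made up, not OpenAI's actual R&D spend):

    ```python
    # Sketch: immediate expensing vs. 5-year straight-line amortization
    # of software R&D under the (since-repealed) IRC §174 change.
    # Half-year convention ignored for simplicity.

    def first_year_deduction(rd_spend: float, years: int = 5) -> float:
        """Year-one expense under straight-line amortization."""
        return rd_spend / years

    rd = 10_000_000_000  # hypothetical $10B of software R&D in one year

    print(f"cash out the door this year:  ${rd:,.0f}")
    print(f"expense on income statement:  ${first_year_deduction(rd):,.0f}")
    # Only 1/5 of the cash outlay hits the income statement in year one,
    # so the reported net loss understates the net negative cash flow.
    ```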

    • gausswho 8 hours ago

      Wasn't that change cancelled this summer?

  • guywithahat 10 hours ago

    It's incredible how Tesla used to lose a few hundred million a year and analysts would freak out, claiming they'd never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.

    I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets

    • dmoy 9 hours ago

      > Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft

      Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.

      I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.

      • guywithahat 6 hours ago

        That's true, I shouldn't have written it off and was too eager to make the analogy.

        There was a point where because of Tesla's enormous profits, it was seen as ok for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right though they've been criticized for it and have paid the (stock) price for it.

    • Schiendelman 9 hours ago

      Rivian lost something like $5B in 2024, but they're on track to only lose $2.25B in 2025. That trend line is clear. In 2026 they release a much lower cost model, and a lot of that loss has been development of that model. They probably won't achieve profitability in 2026, but if they get their loss down to $1B in 2026, in 2027 we'll likely see them go net positive.

    • bunderbunder 9 hours ago

      It reminds me a lot of the late 1990s.

      We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.

      For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.

      And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.

      • teiferer 9 hours ago

        > we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal.

        Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.

        The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody, while at the opposite extreme others think it's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But hardly anybody disputes that it's a bubble.

    • OtherShrezzing 9 hours ago

      >and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.

      For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.

      Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.

      • TYPE_FASTER 8 hours ago

        > Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.

        Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.

      • whimsicalism 9 hours ago

        They’re dependent on usage of their cloud. I don’t agree that they are as dependent on OAI as you suggest. Ultimately, we’ve unlocked a new paradigm and people need GPUs to do things - regardless of whether that GPU is running OAI branded software or not.

      • adastra22 9 hours ago

        Why? Microsoft has permanent, royalty free access to the frontier models. If OpenAI went under, MSFT would continue hosting GPT-5 on Azure, GitHub Copilot, etc. and not be affected in the slightest.

    • bigwheels 8 hours ago

      > this feels a little like the WeWork CEO flying couches to offices in private jets

      Fascinating! I unearthed the TL;DR for anyone else interested:

      * WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.

      * The G650ER was customized with two bedrooms and a conference table.

      * Neumann used the jet extensively for global travel, meetings, and family trips.

      * The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.

      Sources:

      https://www.vanityfair.com/hollywood/2022/03/adam-neumann-re...

      https://nypost.com/2021/07/17/the-shocking-ways-weworks-ex-c...

      • guywithahat 5 hours ago

        The couches fascinate me the most because they're almost justifiable. Offices need furniture and grand openings should be nice; however, the cost could never be recovered, and the company was way too big to be doing things that don't scale.

        In a similar vein, LLMs/AI are clearly impressive technologies that can be run profitably. Spending billions on a model, however, may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.

    • lokar 9 hours ago

      Very few industries are “deeply profitable” absent the illegal abuse of monopoly power

      • potato3732842 9 hours ago

        Don't forget the perfectly legal use of legislation and bureaucratic precedent that gives them "soft/lossy monopoly" power or all but forces people to do business with them.

        • lokar 9 hours ago

          OpenAI is pretty clearly pushing for complex government regulation as a way to protect their lead and prevent new entrants in the market.

      • Iulioh 9 hours ago

        And as we saw, once a model is trained you need very little compute to run it, and there is very little difference between being the 1st model and the 10th model.

        A monopoly in this field is impossible; your product won't ever be so good that the competition doesn't make sense.

        Add to this that AGI is impossible with LLMs...

        • lokar 8 hours ago

          I’m not so sure. Look for more gov regulations that make it hard for startups. Look for stricter enforcement of copyright (or even updates to laws) once the big players have secured licensing deals, to cut off the supply of cheap training data.

    • raincole 9 hours ago

      And did people listen to those "analyses" and dump Tesla, or its stock kept skyrocketing?

    • butlike 8 hours ago

      That was back in the mid-2010s, right? Companies had yet to reach $1T valuations. $5B against $1T is a drop in the bucket.

    • alfalfasprout 8 hours ago

      Investors are trying to bet on OpenAI being the first to replace all human skilled labor. Of course, this is foolish for a few reasons:

      1. Performance of AI tools is improving, but only marginally so in practice.

      2. If human labor were replaced, it would be the start of global societal collapse, so any winnings would be moot.

    • boringg 8 hours ago

      You can't honestly be comparing a shitty real estate play like WeWork to the real functional benefits people get out of ChatGPT.

      ChatGPT was mind blowing when you first used it. WeWork is a real estate play fronted by a self aggrandizing self dealing CEO.

    • randomNumber7 9 hours ago

      The winner takes it all, so it is reasonable to bet big to be the one.

      • anonymousiam 9 hours ago

        The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.

        • schnitzelstoat 9 hours ago

          It could end up like Search did, at first you had Lycos, AskJeeves, Altavista etc. and then Google became absolutely dominant.

          They want to be the Google in this scenario.

          • skeeter2020 9 hours ago

            Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.

          • camdenreslink 8 hours ago

            Google was by far the best product. Maybe an LLM provider will emerge in that way, but it seems they are all very similar in capability right now.

            • wyre 8 hours ago

              I don't believe Google won the search engine wars because they had the best product. While that may be true, they won because of the tools they provided to their users: email, cloud storage, docs/sheets/drive, Chrome, etc.

              • ncls 8 hours ago

                They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).

        • j16sdiz 9 hours ago

          You need the infrastructure, not just the model.

          The model can be free, but the infrastructure (data center) ain't.

        • Workaccount2 9 hours ago

          The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.

          On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.

          • adastra22 9 hours ago

            Maybe on paper, but only on paper. There are so many half-baked assumptions in that self-improvement logic.

          • kurisufag 9 hours ago

            The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.

            The AI, theoretically having the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; if it's not, anyone with a bootstrap level of compute will, on a long enough time frame, also be able to do anything.

            It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.

            • forgetfulness 9 hours ago

              The machine god would still need resources provided by humans on their terms to run; the AI wouldn’t sweat having to run, for instance, 5 years straight of its immortality just to figure out a 10-year plan to eventually run at 5% less power than now, but humans may not be willing to foot the bill for this.

              There’s no guarantee that the singularity makes economic sense for humans.

              • kurisufag 8 hours ago

                Presuming the kind of runaway superintelligence people usually discuss, the sort with agency, this just turns into a boxing problem.

                Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?

          • weregiraffe 8 hours ago

            Self-improving LLM is as probable as a perpetual motion machine.

            Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.

            Conceptually, if a stupid AI can build a smart AI, it would mean the stupid AI was actually smart all along; otherwise it wouldn't have been able to.

            • Marha01 7 hours ago

              Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.

              The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (increase its complexity) on its own, provided you constantly supply it with energy. An evolutionary algorithm (or "life") is an example of such a system. It is conceivable that there is a point where an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, etc. Every time you run inference to edit the training data and then train, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (increase in internal model complexity and intelligence) can come from.

        • Joel_Mckay 9 hours ago

          Silicon Valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader-to-exploitation transition phase.

          Currently, the trend is not whether one technology will outpace the other in the "AI" hype-cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but it does create perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off anticipating the rise of real "AI".

          https://www.youtube.com/watch?v=_zfN9wnPvU0

          One day real "AI" may exist, but a LLM or current reasoning model is unlikely going to make that happen. It is absolutely hilarious there is a cult-like devotion to the AstroTurf marketing.

          The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3

      • simonsarris 9 hours ago

        I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.

      • loudmax 9 hours ago

        As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.

        This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.

        Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.

      • aaronblohowiak 9 hours ago

        Do you have any reasoning to support the notion that this market is winner takes all?

        • chii 9 hours ago

          With enough money to lobby, they can make it a winner takes all market (ala, a regulated monopoly).

          • whimsicalism 9 hours ago

            Want to bet? I see this claim all over the internet and do not believe it for a moment.

          • deadbabe 8 hours ago

            But then you get stuff like Deepseek R1.

      • runarberg 9 hours ago

        Does the winner take it all?

        I agree this is a reasonable bet, though, but for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and into billionaires' pockets via e.g. hedge funds, bailouts, dividend payouts, underpay, wage theft, etc. And the more they blow up this bubble, the more money they can exploit out of workers. As such it is not really a bet, but rather the cost of doing business. Profits are guaranteed as long as workers are willing to work for them.

gregoriol 10 hours ago

Will Sam Altman's fall be as legendary as Sam Bankman-Fried's?

  • h2zizzle 9 hours ago

    I'm assuming Altman wasn't screwing his CFO and letting her post to 4chan about it, so probably not that bad.

    • Lionga 7 hours ago

      Altman was screwing / raping his sister so not quite sure who is worse.

  • rhetocj23 7 hours ago

    It'll be worse. He is doing this for ego, not money, from what I see.

  • layer8 10 hours ago

    SBF’s fall is almost forgotten already.

  • Hilift 10 hours ago

    Most of the funds lost to SBF were recovered. And CZ has a pardon. Crypto has evaporated about $2 trillion in assets since then.

    • kyruzic 10 hours ago

      The funds in USD were recovered because bitcoin's value is 5x higher than it was when he got arrested.

      • 7thpower 9 hours ago

        And a set of fundamentally sound investments, including Anthropic iirc.

        • overfeed 6 hours ago

          This is painting a target around the arrow. AFAIK, they had so much money to throw around for a spray and pray, similar to VC firms

    • hiddencost 9 hours ago

      I don't understand why you think it's OK to flagrantly violate financial laws for consumer protection, just because the bet got lucky?

DenisM 6 hours ago

A couple of thoughts on the big picture:

* Rise of AI is one of the biggest “transfers” of IP-generated wealth.

* It is also a dramatic increase in the “software is eating the world” trend, or at least an anticipation of one. It has turned from everyone dragging their feet on software adoption over the course of 30 years into a massive stampede.

uberdru 11 hours ago

I was at a bitcoin conference in 2018. One guy in the booth told me that the company had set up a $100M fund to fund startups that agreed to build apps on their blockchain. I wonder where they are now?

  • ZiiS 11 hours ago

    As long as they kept another $100M in coins, they're probably fairly happy.

Flockster 11 hours ago

Okay, that article is a little bit shallow. It just summarises the headlines from the last few weeks of circular deals. But is there a more in-depth article that sheds a little more light on what this actually means, from a financial perspective?

  • delis-thumbs-7e 10 hours ago

    Ed Zitron has been shouting into the void about this for quite some time: https://www.wheresyoured.at/the-case-against-generative-ai/

    He also has a podcast called Better Offline, which is slightly too ad-heavy for my taste. Nevertheless, with my meagre understanding of large corporate finance, I was not able to find any errors in his core argument, regardless of his somewhat sensationalist style of writing.

    • OfficialTurkey 10 hours ago

      My complaint about Ed Zitron is that he's _always_ shouting into the void about something. A lot of the issues he covers are legitimate and deserve the scorn he gives them but at some point it became hard for me to sort the signal from the noise.

  • Spooky23 11 hours ago

    It’s probably hard to do that in a news context because the real rationales are pretty tight.

    Depending on your POV, OpenAI and the surrounding AI hype machine are, at the extremes, either the dawn of a new era or a metastasized financial cancer that's going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.

    In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.

    An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.

    • gitremote 10 hours ago

      > Reality lies in the middle

      The argument to moderation/middle ground fallacy is a fallacy.

      https://en.wikipedia.org/wiki/Argument_to_moderation

      • parineum 10 hours ago

        Not really. The idea that reality lies _in_ the middle is fairly coherent. It's not, on its face, absolutely true, but there are an infinite number of options between two outcomes, so the odds are overwhelmingly in favor of the truth lying somewhere in between. Is either side totally right about every single point of contention between them? Probably not, so the answer is likely in the middle. The fallacy is a lot easier to see when you're arguing about one precise point. In that case, one side is probably right and the other wrong. But in cases where the sides are arguing about a complex event with a multitude of data points, both extremes are likely not completely correct and the answer does, indeed, lie in between the extremes.

        The fallacy is claiming that the truth lies _at_ the middle, not in the middle.

        • philistine 9 hours ago

          You're thinking in one dimension. Truth. Add another dimension, time, and now we're talking about reality.

          Ultimately, if both sides have a true argument, the real issue is which will happen first. Will AI change the world before the whole circular investment vehicle implodes? Or after, like what happened with the dotcom boom?

        • gitremote 9 hours ago

          Flat-earthers: The earth is flat.

          Round-earthers: The earth is round.

          "Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.

          • missinglugnut 7 hours ago

            If we're gonna be pedantic about fallacies, you're using argument by analogy, and it's not in any way comparable to the claims GP made about OpenAI.

          • parineum 9 hours ago

            "Round" does not mean spherical and both of these claims are falsifiable and mutually exclusive.

            The AI situation doesn't have two mutually exclusive claims; it has two claims on opposite sides of economic and cultural impact that are differences of magnitude and direction.

            AI can both be a bubble and revolutionary, just like the internet.

        • suddenlybananas 9 hours ago

          >infinite number of options between two outcomes so the odds are overwhelmingly in the favor that the truth lies somewhere in between

          This is totally fallacious.

          • parineum 9 hours ago

            It isn't.

            "AI is a bubble" and "AI is going to replace all human jobs" are, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.

            No jobs replaced<-------------------------------------->All jobs replaced

            Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy

            • mamonster 9 hours ago

              It is completely fallacious.

              For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.

              Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, therefore 0.5X + 0.5Y is maybe true). Why?

            • suddenlybananas 9 hours ago

              I agree there is a large continuum of possibilities, but that does not mean that something in the middle is more likely, that is the fallacious step in the reasoning.

    • afavour 10 hours ago

      > Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era

      Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and the dawn of a new era.

      • Spooky23 6 hours ago

        Exactly. I think the difference is that we've developed a cadre of people who think 24x7 about capturing value in a way that makes dotcom-era moguls look naive.

        AI is a powerful and compelling technology, full stop. The sausage making process where the entire financial economy is pivoting around it is a different matter, and can only end in disaster.

seydor 10 hours ago

They are funneling rich family-office money into the bank accounts of their personnel. Not bad, not bad.

  • nocoolnametom 8 hours ago

    The fact that it is private equity that is going to evaporate when the bubble bursts is the only silver lining I can see. However, my natural cynicism makes me bet they'll spend whatever they've got left over on their pet politicians to use government (i.e., public funding) to bail themselves back out.

andsoitis 11 hours ago

I can understand how someone's approach can be "hack all the things", however, at some point you run into the fundamental boundaries of the box you are in and you can't hack your way around those.

  • jacquesm 11 hours ago

    That doesn't really matter: as long as there are idiots who will buy your inflated stock you've externalized the problem for yourself whilst staying within the box.

rchaud 9 hours ago

OpenAI is raising funding based on its own forecasts for AI demand growth, and sending most of it to Oracle, MSFT, Nvidia as well as paying insiders enormous salaries.

There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.

When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.

What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?

rw3 11 hours ago

Hank Green did a vlog on this a few weeks ago and it's a great explainer.

  • gtirloni 11 hours ago

    This one? https://www.youtube.com/watch?v=Vz0oQ0v0W10

    This comment is pretty depressing, but it seems to be the path we're headed down:

    > It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!

    Combine this with selecting only what you want to believe in and you can say that video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.

    • philistine 9 hours ago

      > We already have some people in pretty powerful positions doing this to manipulate their bases.

      You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and the clip was used in an ad by the government of Ontario to sow division within Republicans. It worked, and the President was nakedly mad at being told off by daddy Reagan.

    • throwaway106382 10 hours ago

      We are heading toward an apocalyptic level of psychosis where human beings won't even believe the things they see with their own eyes are real anymore, because of being flooded with AI slop 24/7/365.

      • jimbokun 10 hours ago

        We desperately need a technological solution to be able to somehow "sign" images and videos as being real and not generated or manipulated by AI.

        I have no idea how such a thing would work.
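        One commonly proposed shape for this is hardware-attested capture: the camera hashes the raw bytes and signs the hash with a key held in a secure element, so anyone can later verify the file hasn't been altered since capture (roughly what C2PA-style provenance schemes aim at). A toy illustration in Python, using an HMAC as a symmetric stand-in for the asymmetric signature a real scheme would use; all names here are made up:

```python
import hashlib
import hmac
import os

def sign_image(image_bytes: bytes, key: bytes) -> str:
    """Hash the raw capture, then 'sign' the digest (HMAC stand-in)."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_image(image_bytes, key), signature)

key = os.urandom(32)                 # would live in the camera's secure element
photo = b"raw sensor data for one frame"
sig = sign_image(photo, key)

assert verify_image(photo, key, sig)                # untouched bytes verify
assert not verify_image(photo + b"edit", key, sig)  # any change breaks it
```

        This only proves the bytes are unchanged; the genuinely hard parts are keeping the key from being extracted from the device and stopping someone from simply photographing a screen showing a fake.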

        • SoftTalker 9 hours ago

          It won't work, because most people do not understand what a digital signature is and they will just say that has been faked as well.

          • jimbokun 6 hours ago

            Journalists will know how to check it in high profile cases.

            And annoyed and suspicious techies can use it to check other people's content and report them as fake.

            Yeah, there are a lot of dumb people who want to be deceived. But would be good for the rest of us to have some tools.

            • throwaway106382 3 hours ago

              This will just create a black market for AI generated video content that doesn't have a signature. Which I'm sure that China, Russia, hell even the US governments would not have a problem with because that would be extremely useful for them.

    • dizzydes 9 hours ago

      I feel bad for the guy but I think this confusion will be extraordinary and get people off the internet.

    • jimbokun 10 hours ago

      There was a discussion on here recently about a new camera that could prove images taken with it weren't AI fakes, and most of the comments were skeptical anyone would care about such things.

      This is an example of how people viscerally hate anyone passing off AI generated images and video as real.

  • JoBrad 3 hours ago

    Hank had another video on the interconnectedness of the industry right now, that is very much in the spirit of the original article.

    https://youtu.be/Q0TpWitfxPk

AndrewDucker 11 hours ago

The most interesting thing here is that it's now reached the NY Times.

  • tim333 5 hours ago

    The numbers are quite historic in size.

  • trollbridge 10 hours ago

    “I’m not hearing any music.”

SubiculumCode 10 hours ago

Given that AI is a national security matter now, I'd expect the U.S. to step in and rescue certain companies in the event of a crash. However, I'd give higher chances to NVIDIA than OpenAI. Weights are easily transferable and the expertise is in the engineers, but the ability to continue making advanced chips is not as easily transferred.

  • philipwhiuk 10 hours ago

    If they're too-important-to-fail they're too important not to be broken up or nationalised.

    • jimbokun 10 hours ago

      While that is a sensible opinion the 2008 crash showed that it is not the opinion of decision makers in the US.

    • whimsicalism 9 hours ago

      I’m curious if those of you calling for nationalization have worked for the government or a state-owned enterprise like Amtrak. People should witness the effects of long-term public sector ownership on productivity and effectiveness in a workplace.

      • overfeed 7 hours ago

        The USPS does more for its workers and customers than FedEx. There are addresses FedEx won't service due to "inefficiencies"; it hands those packages over to the USPS for delivery.

      • saulpw 8 hours ago

        Yeah, like IBM and Intel and GE and GM are shining examples of how effectively the private sector runs companies. Maybe large enterprises are by their nature inefficient. Maybe productivity isn't the best metric for a utility. We could, for instance, prioritize resiliency, longevity, accessibility, and environmental concerns.

        • whimsicalism 8 hours ago

          Even those problematic companies exemplify the difference: when enterprises are mismanaged and fail, capital is reallocated away from them.

          • saulpw 6 hours ago

            The US government just allocated $10b towards Intel, and bailed out GM in the past. So what you said is clearly not the case. Now we have publicly-funded private management that is failing. At least if they were publicly owned and managed outright, they wouldn't be gutted by executives prioritizing quarterly profits.

            • whimsicalism 6 hours ago

              Executives should prioritize producing things people are willing to pay money for cheaply. If there is a bias towards short-termism, that is a governance problem that should be addressed.

              I agree that the US taking stakes or picking winners is bad, I don't think it follows that nationalization is the solution.

    • will4274 7 hours ago

      Fwiw, this is a facile argument. You make no attempt to demonstrate that after major reorganization (breakup / nationalization) the firm will continue to have the desirable attributes (innovation, efficiency, ability to build) that made it too important to fail.

  • embedding-shape 10 hours ago

    Why is ML knowledge "in the engineers" while chip manufacturing apparently sits in the company/hardware/something else than the engineers/humans?

    • NBJack 10 hours ago

      Read up a bit on the effort needed to get a fab going, and the yield rates. While engineers are crucial in the setup, the fab itself is not as 'fungible' as the employees involved.

      I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.

      • embedding-shape 9 hours ago

        > I can spin up a strong ML team through hiring in probably 6-12 months with the right funding

        Not sure what to call this except "HN hubris" or something.

        There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.

        I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously don't; we have a few big labs iteratively advancing SOTA, with some upstarts appearing occasionally (DeepSeek, Kimi et al), but it isn't as easy as you're trying to make it out to be.

        • whimsicalism 9 hours ago

          There’s a lot in LLM training that is pretty commodity at this point. The difficulty is in data - and a large part of why it has gotten more challenging is simply that some of the best sources of data have locked down against scraping post-2022 and it is less permissible to use copyrighted data than the “move fast and break things” pre-2023 era.

          As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between the Chinese and US labs is that they have fewer data restrictions.

          I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.

        • marcyb5st 8 hours ago

          Fully agree. I also think we are deep into the diminishing returns territory.

          If I have to guess, OAI and others pay top dollar for talent that has a higher probability of discovering the next "attention" mechanism, and investors are betting this is coming soon (hence the huge capitalizations and the willingness to live with $11B losses/quarter). If they lose patience in throwing money at the problem, I see only a few players remaining in the race, because they have other revenue streams.

        • noosphr 8 hours ago

          >Otherwise we'd see SOTA models from new groups every month

          We do.

          It's just that startups don't go after the frontier models but niche spaces which are underserved and can be explored with a few million in hardware.

          Just like how OpenAI made GPT-2 before they made GPT-3.

          • embedding-shape 8 hours ago

            > We do.

            > It's just that startups don't go after the frontier models but niche spaces

            But both of "New SOTA models every month" and "Startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today at least) or we don't, maybe because the labs are focusing on non-SOTA instead.

            • noosphr 7 hours ago

              State of the art doesn't mean frontier.

              • embedding-shape 7 hours ago

                I've always taken that term literally, basically "top of the top". If you're not getting the best responses from that LLM, then it's not "top of the top" anymore, regardless of size.

                Then something could be "SOTA in its class" I suppose, but personally that's less interesting and also not what the parent commenter claimed, which was basically "anyone with money can get SOTA models up and running".

                Edit: Wikipedia seems to agree with me too:

                > The state of the art (SOTA or SotA, sometimes cutting edge, leading edge, or bleeding edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time

                I haven't heard of anyone using SOTA to not mean "at the front of the pack", but maybe people outside of ML use the word differently.

                • noosphr 5 hours ago

                  A SOTA decoder model is a bigger deal than yet another trillion-parameter encoder-only model trained on benchmarks.

                  I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.

                  • embedding-shape 3 hours ago

                    > I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.

                    Yeah, and I don't understand why people have to argue against some point others haven't made, kind of makes it less fun to participate in any discussions.

                    Whatever gets the best responses (no matter parameter size, specific architecture, addition of other things) is what I'd consider SOTA, then I guess you can go by your own definition.

      • trollbridge 10 hours ago

        Right. I could spin up a strong ML team, an AI startup, build a foundational model, etc., given a reasonable amount of seed capital.

        Build a chip fab? I've got no idea where to start, or where to even find people to hire, and I know the equipment we'd need to acquire would also be quite difficult to get at any price.

      • wongarsu 10 hours ago

        But the fabs don't belong to NVIDIA, they belong to TSMC. I have no doubt that Taiwan and maybe even the US government would step in to save TSMC if for some reason it got existential problems, but that doesn't provide an argument for saving NVIDIA

      • OfficialTurkey 10 hours ago

        > I can spin up a strong ML team through hiring in probably 6-12 months with the right funding.

        Mark Zuckerberg would like a word with you

      • singron 10 hours ago

        Nvidia isn't a fab.

    • tonyarkles 10 hours ago

      First-order: because of the capex and lead times. If you grab a bunch of world-class ML folks and put them in a room together, they're going to be able to start producing world-class work together. If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.

      • embedding-shape 10 hours ago

        > If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.

        But why such an unfair comparison?

        Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?

        • jimbokun 10 hours ago

          Much easier and cheaper to source computers than a fab.

          • embedding-shape 9 hours ago

            Right, but to source a fab you need experience as well; it's not something you can just hire a random person to do.

            • tonyarkles 8 hours ago

              To simplify it down even more:

              - For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.

              - For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.

              • embedding-shape 8 hours ago

                > - For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.

                Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".

                • tonyarkles 7 hours ago

                  Sure. No guarantees that you could throw money at putting an ML team together and have a new SOTA model in a few months. You might, you might not.

                  You know what I can guarantee? No matter how much money you throw at it, you will not have a new SOTA fab in a few months.

    • jeffwask 10 hours ago

      The start-up costs of creating a new chip manufacturer are significantly higher (you can't just SaaS your way into factories), and the chips themselves are more subject to IP and patents owned by that company.

    • bob1029 10 hours ago

      One person can implement a transformer model from scratch in a weekend. Hardware is not the valuable part of machine learning. Data and how it is used are.

      The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
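      The "from scratch in a weekend" claim is about how small the core math is: scaled dot-product attention, the heart of a transformer, fits in a few lines. A pure-Python toy sketch (no trained weights, illustration only):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention, softmax(Q K^T / sqrt(d)) V, on plain lists."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output row is a weighted mix of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two queries attending over three key/value pairs (d = 2).
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = attention(Q, K, V)
```

      Which supports the comment's point: the algorithm is public and tiny, so the durable value lies in the data and how it is used, not in the code itself.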

      • chermi 10 hours ago

        Umm, part of it does. It's necessary but not sufficient, at least to achieve it on the timescales we've seen. Scale is part of the "magic".

      • tehjoker 10 hours ago

        That's true, but without the kind of horsepower provided by modern hardware, AI would be nearly impossible (even though I'm skeptical that it's all needed, especially given DeepSeek's amazing results).

        There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.

        • chermi 10 hours ago

          Deepseek required existing models that required the horsepower.

          • tehjoker 3 hours ago

            That was claimed but never proven. I personally don't believe the American companies making this claim. I suspect they made it up to protect their valuations when they were hideously embarrassed and lost a trillion dollars in equity.

    • thesz 10 hours ago

      Chip manufacturing is extremely time consuming, especially when we are talking about masks for lithography.

      The rights on masks for chips and their parts (IPs) belong to companies.

      And one definitely does not want these masks to be sold during bankruptcy process to (arbitrary) higher bidders.

    • jimbokun 10 hours ago

      Chip designs have strong IP protections.

      AI models do not. Sure you can't just copy the exact floating point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.

      • embedding-shape 9 hours ago

        > But with enough capital you can train a model just as good, as the training and inference techniques are well known

        You're not alone in believing that money alone can train a good model, and I've already explained elsewhere why things aren't as easy as you believe. But besides that, where are y'all getting this from? Is there some popular social media influencer who keeps parroting this? Clearly you're not involved in those processes/workflows yourself, or you wouldn't claim it's just a money problem, so where are you all getting this from?

  • lz400 10 hours ago

    Even if/when the bubble pops, I don't think NVIDIA is anywhere close to needing a rescue or being in trouble. They might end up being worth $2 trillion instead of $5, but they're still selling GPUs nobody else knows how to make that power one of the most important technologies in the world. Also, all their other divisions.

    The .com bubble didn't stop the internet or e-commerce; they still won, revolutionized everything, etc. Just because there's a bubble doesn't mean AI won't be successful. It will be, almost for sure. We've all used it; it's truly useful and transformative. Let's not miss the forest for the trees.

skeeter2020 9 hours ago

>> figure out how to innovate on the financial model

Does it feel rather Orwellian that the original geeks now seem to be the same people who - forget about claiming technological innovation as their own - completely discount it, and apparently the important thing is now the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but the funding announcements from the investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...

  • whimsicalism 9 hours ago

    this is all a function of the media reporting; the change in ‘nerd culture’ has been vastly overreported.

    these companies are staffed by spectrum-y nerds that we are being desperately propagandized into thinking are actually frat ‘bros’.

    • programjames 7 hours ago

      No, they aren't. Locally, the person with the most esoteric knowledge is probably a weird nerd; it's mostly an accident that they chose to invest time in things typically associated with smarts. But globally, the best wizards got there by making it their profession. So maybe at your middling university, the people who could land a job at a frontier lab were nerdy wannabe frats, but at decent universities like MIT or Tsinghua, they're usually just better in every aspect of their lives. E.g. MIT has "math olympiad fraternities" that all the cool kids join.

      • whimsicalism 6 hours ago

        I went to a top-5-ranked school globally (~these lists fluctuate) and have been in elite circles since then. I can promise you that even there the autistic nerd fully outcompetes the renaissance man.

crazygringo 10 hours ago

This is such a strange article -- there's nothing particularly unusual going on here.

The first example basically stands in for all of them -- Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) via Microsoft than via other random investors and then buying the cloud compute retail from Microsoft.

This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.

Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.

  • delis-thumbs-7e 10 hours ago

    I remember the same argument being used before the 2008 crash.

    The point is that all of these companies need to start making real profits, and pretty damn big ones, otherwise all of this will collapse. The problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that they will.

    And who's gonna foot the bill when it falls? Let me guess… Where have I seen this before…?

    • matwood 8 hours ago

      > Point is that all of this companies need to start making real profits and pretty damn big ones

      MS, Meta, Google, Apple, Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case where they bounced from blowing excess cash on the metaverse and now to AI.

    • crazygringo 10 hours ago

      That's fine, but that's a separate conversation. Maybe this is a bubble, maybe it isn't.

      My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.

      So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".

      • delis-thumbs-7e 9 hours ago

        I don't know the financial world well enough to say whether that's here or there, but can you give me examples from other companies or sectors where a company X funds a company Y with tens to hundreds of billions of dollars that company Y then uses to buy a service from company X?

        Furthermore, yes, it might be business as usual, but so is fraud and god knows what else in this particular political era. In order to strengthen your argument you have to show that the phenomenon is not only common, but good for the overall economy.

  • methodical 9 hours ago

    Circularly passing around tens to hundreds of billions of dollars for things which don't exist and may never exist, to fund a technology that hasn't A. lived up to the hype they've marketed or B. proven any strategy to break even, is fundamentally not that much different from the way Enron strategically boosted its revenue numbers by passing money between shell corporations that its CFO created.

    The main difference of course being that these are actual companies, as opposed to entities designed purely to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine compared with the fraudulent case of Enron, the net effect is still the same: these companies are posting crazy quarter-over-quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders cash out to the tune of hundreds of millions of dollars.

    I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).

    Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.

  • philipwhiuk 10 hours ago

    > Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure.

    This isn't deceiving any investors.

    It's Microsoft increasing its revenue by selling its stock.

    • crazygringo 10 hours ago

      Microsoft isn't selling any stock. It's using its cash.

      And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.

  • jpollock 10 hours ago

    The last time this hit the news, it was the dotcom bubble, and Nortel was in a similar position with startups, taking equity for equipment.

    • crazygringo 10 hours ago

      No, that's not the last time this hit the news. This happens literally all the time. Again, this is just business as usual. It's not specific to AI, it's not specific to tech, and it's nothing to do with bubbles.

  • gdulli 10 hours ago

    Sometimes additional context can take the same action that looks harmless in a vacuum and turn it into a bad idea or even a crime!

    • crazygringo 10 hours ago

      Then it would be great to have that context that shows criminality. Because that's an extraordinary claim you're suggesting, which is going to require actual evidence.

      As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.

      So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?

  • Eisenstein 10 hours ago

    The bubble part is that nvidia is getting revenue from people investing money in their hardware in order to sell something that has not yet been shown to be profitable. If it turns out no one can make enough money selling AI generated data to justify the costs spent on the compute needed to generate it at the current rate, then what nvidia are selling becomes much less valuable, and the whole thing collapses. We haven't figured out yet whether or not that will be the case.

    • crazygringo 10 hours ago

      But that has nothing to do with the arrangement of deals here.

      If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.

      The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.

      • bwfan123 10 hours ago

        These deals happen all the time. The case for a bubble is the following.

        When Microsoft offers cloud credits in exchange for OpenAI equity, what it has effectively done is purchase its own Azure revenues, i.e., a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening across all the clouds right now, whose revenues are inflated by uneconomic AI purchases, and for the GPU chip vendors as well, who are offering cash or warrants to fund their own chip sales.
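(The "purchase your own revenues" mechanic above can be written out as toy bookkeeping. Everything here is hypothetical and simplified; the `vendor` dict, the helper names, and the $13 figure are illustrative, not actual deal terms.)

```python
# Toy sketch of "buying your own revenue": a vendor grants service credits
# to a customer in exchange for equity, and the customer spends the credits
# right back with the vendor. All names and figures are hypothetical.

vendor = {"cash": 0.0, "revenue": 0.0, "equity_stakes": 0.0}

def invest_for_equity(vendor, amount):
    # Vendor hands out credits (an obligation to deliver its own service)
    # and books an equity stake in the customer.
    vendor["cash"] -= amount
    vendor["equity_stakes"] += amount
    return amount  # credits the customer now holds

def customer_spends_credits(vendor, credits):
    # Customer redeems the credits; vendor recognizes revenue,
    # and the cash comes straight back to the vendor.
    vendor["cash"] += credits
    vendor["revenue"] += credits

credits = invest_for_equity(vendor, 13.0)  # e.g. a hypothetical $13B deal
customer_spends_credits(vendor, credits)

# Net cash is zero, but reported revenue grew by the full amount.
print(vendor)  # {'cash': 0.0, 'revenue': 13.0, 'equity_stakes': 13.0}
```

The point of the toy model is that the vendor ends flat on cash while its income statement shows growth; whether that growth is "real" depends entirely on what the equity stake turns out to be worth.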

        • crazygringo 10 hours ago

          But nobody is falling for the "illusion of revenue growth". This is out in the open. This isn't a scam. Investors know this and are pricing accordingly. They see the revenue growth but also see the decrease in cash.

          What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.

          There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.

          • bwfan123 9 hours ago

            There will be increased transparency since Microsoft will now have to report on the performance of its OpenAI equity [1]. The concern is that while ChatGPT is a great app, the economic benefits of the current investments are being questioned. There is growing skepticism of AI as the public starts to get jaded. This happens in all fads. That explains why the media is buzzing with articles like these, which are becoming increasingly critical while earlier they were all aboard the AI train.

            [1] https://news.ycombinator.com/item?id=45719669

mv4 8 hours ago

How's this legal? Smaller businesses get in trouble for creative deals leading to inflated earnings.

ingigauti 4 hours ago

Reminds me of Iceland pre-2008: lots of circular & complex deals. But now it's different

ungreased0675 4 hours ago

Complex and circular deals sounds suspiciously close to fraud.

throwaway106382 10 hours ago

Isn't paying a company to dig a hole, which then pays you the same amount to fill said hole, illegal?

  • baq 10 hours ago

    Even worse in VAT countries, where such carousels make you eligible for a tax refund on technically zero added value

  • baggachipz 10 hours ago

    In a fair and just system with appropriate oversight, yes. So in this instance, no.

  • danans 10 hours ago

    Only if you defraud investors in hole-digging corp and hole-filling corp by claiming that by doing this you will be able to extract Unobtanium, which will make both companies 1000x profitable.

    • throwaway106382 9 hours ago

      This is just starting to sound more and more like "we're almost at AGI I promise bro just need one more round of investment bro please just one trillion more dollars please bro".

  • chermi 10 hours ago

    Yes, but what does that have to do with this situation? The hole served no purpose. The companies are using the GPUs.

    • array_key_first 7 hours ago

      They might be using the GPUs, but is that use providing real value? You can run a while loop and max out any processor.

      And, well, nobody knows if it is providing real value. We know it's doing something and has some value WE attached to it. We don't know what the real value is, we're just speculating.

    • throwaway106382 5 hours ago

      I suppose we could use a few pennies to hire some security guards to protect the filled holes.

      Now we’re creating jobs!

    • brazukadev 9 hours ago

      99% of code I generated using genAI served no purpose at the end of the day

      • chermi 9 hours ago

        Ok. Maybe use it better? Or don't use it at all. Doesn't mean it's not being used to some end, unlike a hole.

        Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.

        Most of the reports about how useless LLMs are came from older models being used by people who don't know how to use LLMs. I'm not someone who thinks they're perfect or even great yet, but they're not dirt.

  • TZubiri 10 hours ago

    Seems like a net loss due to transactional costs.

    • klustregrif 10 hours ago

      The increase in value of the companies outweighs the transactional costs and then you borrow against the value of the company and make new circular deals. It works really well for a very long time and then at some point it doesn’t. The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.

      • automatic6131 10 hours ago

        > The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.

        This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".

        Or we'll get more.

    • throwaway106382 10 hours ago

      They are banking on:

      * stock prices increasing more than the non-existent money being burnt

      * they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all

  • zetanor 10 hours ago

    Not if it increases the GDP.

    • throwaway106382 10 hours ago

      Well, I've got great news then: 92% of GDP growth in the first half of 2025 was hole-filling companies paying hole-digging companies to dig holes and paying them in kind to fill them up again

      what could possibly go wrong

adaisadais 11 hours ago

I've been listening to "The Smartest Guys in the Room" (the definitive book on Enron and their scandal), and one of the ways Enron continued to grow and grow was by setting up a really complicated system of moving debt off its balance sheet and onto special-purpose entities.

While it was sorta legal (at the time), it was not ethical, and it led to the massive collapse of one of the largest companies in America at the time.

Makes you wonder if AI is in such a bubble. (It is).

  • sergiotapia 11 hours ago

    When the AI bubble pops, what will happen to the software engineering jobs?

    • afavour 10 hours ago

      There will be a bunch of layoffs and slowly they'll rehire back to pre-hysteria levels. I think the world is still going to need software engineers no matter what but companies will slow down on new features etc in an economic crunch.

      • forgetfulness 9 hours ago

        The ripple effect will be felt hard, as American engineers are squeezed between offshoring and more engineers with Big Tech resumes being released into the market, and returnees push down wages in their home countries in turn

    • miltonlost 10 hours ago

      They'll have to come in and redo, as actual engineered software, all the work that was punted to LLMs. The number of features I've worked on that could have been done with normal computing practices, but instead shoehorned in bad AI for decision/routing logic, is too high.

    • kakacik 10 hours ago

      If it pops, some ai engineers will need to start doing some normal work again, and rest of us... we just continue doing what we were doing for past decades.

      Or maybe not, nobody knows the future any more than the next guy in line.

    • brazukadev 9 hours ago

      free AI credits will be a thing of the past, "productivity" (real or not) will dive and real software engineering will become a moat again.

jwpapi 9 hours ago

When I was 16 I started working at a startup buying and reselling used electronics.

There were like 5 competitors all trying to become the winner-takes-all. Afaik after 10 years some closed or restructured, but most of them burnt a lot of money. One, let's call him the indie dev, made a lot of money building a simple comparison platform and taking 10-20% on all deals.

This is n=1, but I think it still made me really averse to raising money.

boringg 8 hours ago

As an aside, does anyone get the feeling that the NYT is also training its fire on all California tech companies these days? I know the NYT has never really liked California (from restaurants to culture to business), but curious if other people see that as well?

throwaway106382 10 hours ago

Speedrunning to "too big to fail". Turn on the infinite money printers and feed them directly into Sam Altman's bank account or the Chinese/Russians/Iranians/Boogeymen will destroy us all.

megaloblasto 9 hours ago

Everyone loves to compare AI with the dot com bubble. My question is, were there any policies put in place after the dot com bubble to mitigate a similar crash? Or did we learn nothing?

random9749832 9 hours ago

A lose-lose situation for most people. Either the stock market crashes, or AI progress meets expectations in the coming years and people start losing jobs.

  • barbazoo 8 hours ago

    So real estate it is after all?!

sigbottle 8 hours ago

Weird angle, but isn't "believing there will be a crash" sort of framing it as if this were still normal market dynamics?

OpenAI and AI in general has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facets of society, especially government, that, realistically, there just can't be a crash, no?

Or is this too doomsday / conspiratorial?

I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?

  • camdenreslink 7 hours ago

    If OpenAI were to shut down today, would anything in society really change? It seems all valuations are based on future integration into society and our daily lives. I don't think it has really happened yet.

  • 1899-12-30 7 hours ago

    A crash in the stock market doesn't necessarily mean a crash in the real market. Whether the AI bubble burst is dot-com style vs. a GFC-style debacle depends on how much critical financial infrastructure is at risk during the debt deleveraging. If you look at GDP growth during those two periods, the dot-com era was a mild stagnation compared to the GFC's actual GDP decline.

blibble 9 hours ago

seems like a mix of Enron, subprime mortgages, and the dot-com boom all in one

9cb14c1ec0 10 hours ago

Complex and circular deals lead to the downfall of Enron. Just saying...

JohnMakin 9 hours ago

"Circular deals" feels like an awfully cute way to say "fraud"

guluarte 8 hours ago

This seems like a fake circular economy: MS invests in OpenAI, which spends the money on Azure; Amazon invests in Anthropic, which pays AWS for hardware and infra; Nvidia invests in OpenAI, which uses the money to buy Nvidia hardware; etc.

jmyeet 10 hours ago

Many here now didn't live through the dot-com bubble as an adult so can't really appreciate what it was like. The hype was something hard to describe. Financial analysts and journalists struggled to come up with ways to describe the health of these "companies". My favorite was what revenue multiple companies would trade at.

But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.

OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.

Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.

OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.

If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.

I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around where NVidia comes out on top with a bunch of equity in these companies should they become trillion dollar companies.

Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.

righthand 11 hours ago

I honestly don’t get it. People love being swindled? Or people have enough cash to throw into the swindling machine even for no gain? Must be nice.

  • ak_111 11 hours ago

    You can make a lot of money in swindles and bubbles if you time your exit well. There are a fair few opportunistic investors who did well in the NFT craze, speculating while knowing full well that NFTs were a craze that would go to zero.

    • jacquesm 11 hours ago

      The Greater Fool theory of investing strikes again.

    • itsnowandnever 11 hours ago

      everything will eventually go to zero. we look at some of these things and laugh because we're pretty sure they're going to go to zero within weeks or months vs years. but by the end of all of our lifetimes, most of the companies on the stock market will be replaced. the few that won't are probably investment banks like goldman sachs

  • itsnowandnever 11 hours ago

    these deals are made as part of a market so it's more like musical chairs where every time you change a chair you get a ton of money but you don't want to be the one that's stuck without a chair at the end

    • ceejayoz 11 hours ago

      They've all realized the guy without the chair can be the taxpayer.

  • KaiserPro 10 hours ago

    Modern finance is all about debt.

    Central banks don't print money[1] but investment banks do. Think about it like this: Someone deposits $100. The bank pays interest; to make the money to pay that interest, ~$90 of the deposit is loaned out to someone else.

    Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.

    But that paying back is also where the profit is, because you sell off the loan book and can get all your money back, including future interest. So you have lent out $90 and sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30.

    That $30 comes pretty much from nowhere. (there are caveats....)

    Now we have my bank account, after say a year, with $104 in it; the bank has $26 of pure profit; AND someone has a bond "worth" $90 which pays $8 a year. But guess what, that bond is also a store of value. So even though it's debt, it acts as money/value/whatever.

    Now, the numbers are made up, so are the percentages. but the broad thrust is there.

    [1] they do
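(The deposit-and-relend arithmetic above can be sketched in a few lines of code. This is the textbook money-multiplier toy model, using the same made-up numbers as the comment, not a model of real banking.)

```python
# Toy money-multiplier: a deposit is relent repeatedly, with the bank
# keeping a fixed reserve each round. Numbers are illustrative only.

def money_in_circulation(deposit, reserve_ratio, rounds):
    """Total of all deposit claims after `rounds` of deposit-and-relend."""
    total = 0.0
    d = deposit
    for _ in range(rounds):
        total += d                # this round's deposit still "exists" on paper
        d *= (1 - reserve_ratio)  # the bank relends the rest
    return total

# One relend: $100 deposited, ~$90 loaned out -> the "$190 in the economy".
print(money_in_circulation(100, 0.10, 2))  # 190.0

# Repeated indefinitely, this converges to deposit / reserve_ratio.
print(money_in_circulation(100, 0.10, 1000))  # ~1000.0
```

The "cash vacuum" in the comment is this process run in reverse: when loans are called in, every intermediate claim unwinds at once.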

ForHackernews 11 hours ago

"You give me a million GPUs for free, I'll announce that you have sacrificed a million GPUs to the machine gods, and your stock price will spike 200 times the value of those GPUs."

OldGreenYodaGPT 10 hours ago

[flagged]

  • schnitzelstoat 9 hours ago

    Yeah, Reddit has a strong anti-AI sentiment.

    I'm not anti-AI, I'm just sceptical that it's as powerful as the AI companies are making out. I don't think we are anywhere near AGI, like centuries away.

    I also don't think AI is going to be able to do all human jobs, in the physical world we have seen relatively little progress in robotics compared to the leaps made with transformers. And in the information world, while the LLMs can assist in many tasks and make workers more efficient I don't think they can entirely replace programmers (who are the expensive workers).

    So yeah, I just don't think we are going to see the kind of world-changing benefits that OpenAI etc. are promising and which their valuations appear to be based upon.

    • rhetocj23 7 hours ago

      AGI isn't coming any time soon. The constraint on progress toward it is R&D, and R&D requires more high-quality labour than exists today.

  • array_key_first 7 hours ago

    We're all engineers here; you can't handwave away all arguments as "luddites".

    That means you'll have to, you know, actually use your brain and try to construct an argument on how any of this is good for people.

    • mrguyorama 5 hours ago

      >We're all engineers here

      Don't do this. HN has no interview, screening, or application process.

      There's plenty of people here on HN who are not even programmers. For a long time there was a significant number of people here who were literally GME bagholders in a cult, FFS.

      There is no reason to believe HN is any different than any other comments selection beyond some very minimal self selection bias, and that doesn't self select for competence. The self selection is "doesn't mind minimal UI"

      • array_key_first 4 hours ago

        Okay, sure, but we are probably all interested in tech. I know I am! Being interested in tech, though, doesn't mean that I blindly trust any and all new tech and marketing.

        I mean, what OpenAI is saying about their value is marketing. It's sales. It's not technical. Doubting them doesn't make anyone a luddite.

  • brazukadev 10 hours ago

    have you not been here for long enough? HN crowd is treating genAI the same way it treats blockchain, nothing new.

    • barbazoo 8 hours ago

      My team has shipped features, now heavily used, that are built with GenAI under the hood. I have a hard time not seeing the value in that technology.

      Personally I haven’t seen blockchain make any impact whatsoever but maybe it’s just a little more niche or just a different one.

    • llbbdd 8 hours ago

      Worse, honestly. There was a strong case to be made that crypto lacked a real problem to solve. Meanwhile people use GenAI for real work every day, and a disturbing cut of HN has its ears plugged, insisting it's a bubble and that everyone is lying about it working.