Hacker News | new | past | comments | ask | show | jobs | submit | aurareturn's comments

Every Anthropic release uses Claude models.

Not every Google product release used Google search. Some of them were completely outside of Google's domain.


I tried Figma again after a few years expecting that they'd surely have a tool that lets me describe a design and then it generates a Figma design file.

Nope. Figma Make first renders an HTML/React app with your design. Then you can convert it to a Figma design file if you have a pro plan. Extremely underwhelming.

There's hardly any difference between using Figma and just designing it with Codex and Claude Code. And now, Claude Design seems to get it right.


Maybe Figma is better for large teams. Even here, teams are getting smaller and smaller.

But for me, I will never use it again.


Is asking questions to AI bad?

It sits on the "Verso · Of What Was Taken" side, so the framing is tilted. But I didn't want to argue it explicitly on the page. Some might see a loss (a human conversation that would have or could have happened otherwise). Some see a gain (knowledge access at scale). The "not people" phrasing is meant to invite the question, not answer it.

Using US GDP isn't going to be meaningful since these AI companies probably get at least 50% of their revenue from outside of the US.

Meanwhile, those US mega projects were strictly for domestic use.

It's the same as the Buffett Indicator, which has flagged the US stock market as overvalued for the last 20 years. But when the Buffett Indicator was relevant, most US companies earned the majority of their revenue inside the US. Today, a company like Google makes half of its revenue from outside.


Funny because many people here were so confident that OpenAI is going to collapse because of how much compute they pre-ordered.

But now it seems like it's a major strategic advantage. They're 2x'ing usage limits on Codex plans to steal CC customers and it seems to be working. I'm seeing a lot of goodwill for Codex and a ton of bad PR for CC.

It seems like 90% of Claude's recent problems are strictly lack of compute related.


> people here were so confident that OpenAI is going to collapse because of how much compute they pre-ordered

That's not why. It was and is because they've been incredibly unfocused and have burnt through cash on ill-advised, expensive things like Sora. By comparison Anthropic have been very focused.


I don't think that was the main reason for people thinking OpenAI is going to collapse here.

By far, the biggest argument was that OpenAI bet too much on compute.

Being unfocused is generally an easy fix. Just cut things that don't matter as much, which they seem to be doing.


Nobody was talking about them betting too much on compute. People were saying that their shady compute deals with NVIDIA and Oracle were creating a giant bubble in their attempt to get a Too Big To Fail judgement (in their words, a taxpayer-backed "backstop").

It really wasn't. Most of the argument was around product portfolio and agentic coding performance.

That’s just short term talk. The main thesis behind their collapse is that they won’t be able to pay their compute bills because they won’t have enough demand to.

That doesn't really track because their compute isn't like a debt obligation.

The compute topic was more around how OpenAI, Nvidia, Oracle, and others were all announcing commitments to invest in each other in a circular way, which could just net out to zero value.


To me it seems like they burn so much money they can do lots of things in parallel. My guess would be that e.g. Codex and Sora are developed quite independently. After all, there's quite a hard limit on how many bodies are beneficial to a software project.

They all compete internally over constrained compute resources - for R&D and production.

Personally, I think it's down to Altman having the cognitive capacity of a sleeping snail, and the world insight of a hormonal 14-year-old who's only ever read one series of manga.

Despite having literal experts at his fingertips, he still isn't able to grasp that he's talking unfiltered bollocks most of the time. Not to mention his Jason-level "oath breaking"/dishonesty.


Honestly it seems like each major player here fumbles the ball in turn, quite fun to observe. But hey, it's a difficult game.

> By comparison Anthropic have been very focused.

Ah yes, very focused on crapping out every possible thing they can copy and half bake?


> I'm seeing a lot of goodwill for Codex and a ton of bad PR for CC.

AI is one of those topics where you cannot find genuine opinions online. Just like politics. If you visit, say, r/codex, you'll see all the people complaining about how their limits are consumed by "just N prompts" (N being a ridiculously small integer).

It's all astroturfed from all sides.


I agree. And I am seeing it in a lot of venues, especially political discourse. Commenting is increasingly AI driven. I fear the whole thing is going to collapse and nobody will be able to rely on online commentary to make decisions, at least not without a lot of independent research. Maybe that's for the best, but it's definitely going to change the Internet.

Seems very short term. Like how cheap Uber was initially. Like Claude was before!

Eventually OpenAI will need to stop burning money.


OpenAI will need to stop burning money eventually, but so does everyone else in the space. The longer they can do this the more squeeze it puts on their competitors.

I would call out though that I think there is one way in which this differs from the Uber situation. Theoretically at some point we should hit a place where compute costs start to come down either because we've built enough resources or because most tasks don't need the newest models and a lot of the work people are doing can be automatically sent to cheaper models that are good enough. Unless Uber's self driving program magically pops back up, Uber doesn't really have that since their biggest expense is driver wages.

I think it's a long shot, but not impossible, that if OpenAI can subsidize costs long enough, prices won't need to go too much higher to be sustainable.


My standing assumption is the darling company/model will change every quarter for the foreseeable future, and everyone will be equally convinced that the hotness of the week will win the entire future.

As buyers, we all benefit from a very competitive market.


This is the primary reason I won’t sign up for an annual plan.

The market here is extraordinarily vibes-based, and burning billions of dollars for an ephemeral PR boost, which might only last another couple weeks until people find a reason to hate Codex, does not reflect well on OAI's long-term viability.

In hindsight, it is painfully clear that Anthropic's conservative investment strategy has left them struggling to keep up with demand and caused their profit margin to shrink significantly as the last buyer of compute.

They've also introduced a lot of caching and token-burn related bugs, which makes things worse. Any bug that multiplies the token burn also multiplies their infrastructure problems.

Most of the compute OpenAI "preordered" is vapour. And it has nothing to do with why people thought the company -- which is still in extremely rocky rapids -- was headed to bankruptcy.

Anthropic has been very disciplined and focused (overwhelmingly on coding, fwiw), while OpenAI has been bleeding money trying to be the everything AI company with no real specialty as everyone else beat them in random domains. If I had to qualify OpenAI's primary focus, it has been glazing users and making a generation of malignant narcissists.

But yes, Anthropic has been growing by leaps and bounds and has capacity issues. That's a very healthy position to be in, despite the fact that it yields the inevitable foot-stomping "I'm moving to competitor!" posts constantly.


How is droves of your customers leaving, whether they're foot stomping or not, healthy?

Droves? I mean, if we take the "I'm leaving!" posts seriously, a company whose people are so emotionally invested that they feel the need to announce their departure is in a pretty good place. Some tiny sampling of unhappy customers is indicative of nothing.

Honestly at this point I am pretty firmly of the belief that OAI is paying astroturfers to post the "Boy does anyone else think Claude is dumb now and Codex is better?" (always some unreproducible "feel" kind of thing that are to be adopted at face value despite overwhelming evidence that we shouldn't). OAI is kind of in the desperation stage -- see the bizarre acquisitions they've been making, including paying $100M for some fringe podcast almost no one had heard of -- and it would not be remotely unexpected.


We have no idea of the ratio of foot stompers to quiet quitters, but I'm sure most people don't announce it. I cancelled my subscription and hadn't told anybody. And I quit based on personal experience over the last few weeks, not on social media PR.

I have both Claude and OpenAI, side by side. I would say Sonnet 4.6 still beats GPT-5.4 for coding (at least in my use case). But after about 45 minutes I'm out of my window, so I use OpenAI for the next 4 hours and I can't even reach my limit.

Is that 2x still going on? I thought that ended in early April.

Different plan. The old 2x has been discontinued, and the bonus is now (temporarily) available for the new $100 plan users in an effort, presumably, to entice them away from Anthropic.

For the $200 users, it never ended.

It’s for Pro users only, I think the 2x is up to May 31.

They did it again to "celebrate" the release of the $100 plan.

On plus?

Funny because the general consensus is that everyone is burning money so fast that they would not be able to get it back from their AI business in the near future. OpenAI is simply the one with the most aggressive expenditure. Google has its own cash cows. Anthropic has been conservative all around.

That’s more a leadership decision because Anthropic are nerfing the model to cut costs, if they stop doing that then they’ll stay ahead.

Proof they are nerfing the model? It is stable in benchmarks: https://marginlab.ai/trackers/claude-code-historical-perform...

All this just reads like another case of mass psychosis to me



What's the proof they don't nerf it only after verifying that those benchmarks stay the same? I.e. overall performance degrades, but they hold those isolated benchmarks steady?

You are dramatically overestimating how much time people have to waste at these smaller hypergrowth companies

Their top tier plan got a 3x limit boost. This has been the first week ever where I haven't run out of tokens.


> It seems like 90% of Claude's recent problems are strictly lack of compute related.

Downtime is annoying, but the problem is that over the past 2-3 weeks Claude has been outrageously stupid when it does work. I have always been skeptical of everything produced - but now I have no faith whatsoever in anything that it produces. I'm not even sure if I will experiment with 4.7, unless there are glowing reviews.

Codex has had none of these problems. I still don't trust anything it produces, but it's not like everything it produces is completely and utterly useless.


So many people confuse sycophantic behavior with producing results.

All of the smart people I know went to work at OpenAI and none at Anthropic. In addition to financial capital, OpenAI has a massive advantage in human capital over Anthropic.

As long as OpenAI can sustain compute and pay SWEs $1 million/year, they will end up with the better product.


Attracting talent with huge sums of money just gets you people who optimize for money, and it's rarely a good long-term decision. I think it's what led to Google's downturn.

Google is doing great still. One of the few FAANG I am bullish on over the long timescale.

> I think it's what led to Google's downturn.

What downturn is that exactly?


> OpenAI has a massive advantage in human capital over Anthropic.

but if your leader is a dipshit, then it's a waste.

Look, you can't just throw money at the problem; you need people who are able to make the right decisions at the right time. And that requires leadership. Part of the reason why Facebook fucked up VR/AR is that they have a leader who only cares about features/metrics, not user experience.

Part of the reason why Twitter always lost money is that they had loads of teams all running in different directions, because Dorsey is utterly incapable of making a firm decision.

It's not money and talent, it's execution.


Are those "smart people you know" machine learning researchers?

No, infrastructure engineers. The ones who scale the system up so you don't have to rate limit.

They don't have enough compute for all their customers.

OpenAI bet on more compute early on which prompted people to say they're going to go bankrupt and collapse. But now it seems like it's a major strategic advantage. They're 2x'ing usage limits on Codex plans to steal CC customers and it seems to be working.

It seems like 90% of Claude's recent problems are strictly lack of compute related.


Is that why Anthropic recently gave out free credits for use in off-hours? Possibly an attempt to more evenly distribute their compute load throughout the day?

That was the carrot, but it was followed immediately by the stick (5 hour session limits were halved during peak hours)

> Is that why Anthropic recently gave out free credits for use in off-hours?

That was the carrot for the stick. The limits and the issues were never officially recognized or communicated. Neither have been the "off-hours credits". You would only know about them if you logged in to your dashboard. When is the last time you logged in there?


I suspect they get cheap off-peak electricity, and compute is cheaper at those times.

That's not really how datacenter power works. It's usually a bulk buy billed at the 95th percentile of usage.
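For what it's worth, here is a minimal sketch of how 95th-percentile ("burstable") billing typically works; the function name and sample figures are hypothetical, and this says nothing about Anthropic's actual contracts:

```python
def billable_95th(samples):
    """95th-percentile billing: sort the usage samples, discard the
    top 5%, and bill at the highest remaining sample. Short peaks
    (under ~5% of the billing period) therefore cost nothing extra."""
    ordered = sorted(samples)
    idx = int(len(ordered) * 0.95) - 1  # last sample kept after dropping top 5%
    return ordered[max(idx, 0)]

# 100 five-minute samples: steady 40 kW with five 90 kW peaks.
# The peaks fall inside the discarded top 5%, so billing is on 40 kW.
print(billable_95th([40] * 95 + [90] * 5))  # 40
```

The upshot is that short bursts are free but sustained load is not, which is why "cheap off-peak compute" wouldn't follow directly from the power contract.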

I think it's a lot simpler than that. At peak, gpus are all running hot. During low volume, they aren't.

Hard for me to reconcile the idea that they don't have enough compute with the idea that they are also losing money to subsidies.

- Not enough compute for the requests they have

- Selling those requests at less money than it cost to run the compute for those requests (because if you raise price clients go to openai)

The statements are not contradicting each other? They keep subsidizing to try to grow the customer base, but they can't serve the customer base they have; they're betting the customer base grows faster than it shrinks from people bothered by rate limits (it probably will, the average user won't hit rate limits often enough to churn).

Probably expecting a breakthrough in efficiency for compute, or getting enough cash flow (IPO?) to get more compute before it all comes crashing down


They clearly aren't losing money; I don't understand why people think this is true.

People think it's true because it is true, and OpenAI has told us themselves.

They (very optimistically) say they'll be profitable in 2030.


They're saying Anthropic doesn't have enough compute, not OpenAI. They said OpenAI specifically invested early in compute at a loss.

They are losing money because model training costs billions.

Model inference compute over a model's lifetime is now ~10x its training compute for major providers. Expected to climb as demand for AI inference rises.
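Toy arithmetic for that ratio (all figures hypothetical, just to show what a ~10x inference-to-training ratio implies for a model's total compute bill):

```python
# All figures hypothetical, in arbitrary units (e.g. $B).
training = 1.0
inference = 10 * training            # ~10x over the model's lifetime
total = training + inference
print(f"training share of lifetime compute: {training / total:.0%}")  # 9%
```

In other words, at a 10x ratio the training run is under a tenth of the lifetime compute bill, so "training costs billions" alone doesn't settle whether they're losing money.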

For sure and growth also costs money for buying DCs etc.

They are constantly training and getting rid of older models; they are losing money.

Which part of "over model lifetime" did you not understand?

That's not a sufficient condition for profitability if both inference and scaling costs continue to increase over time.

It worked. Although I have a Claude Code subscription, I got the ChatGPT Pro plan, and 5.4 xHigh at 1.5x speed was better than 4.6 with adaptive thinking disabled. I was working all day, about 8 hours, and did not run into any limits. 5.4 surprised me many times by doing things I usually would not do myself, because I am lazy, so yeah, I am sticking with 5.4 for now until all the Claude drama is over.

Betting on continued exponential growth is basically a game of chicken. Growth has to slow down and level off at some point as adoption and usage saturates.

It's a bit like playing roulette by always betting on black and doubling your bet every time you lose. When you eventually, inevitably, do lose, your loss is going to be huge because you've been doubling your bet at each stage.

With LLM model generations and investment, it goes something like this. Let's say profits have been doubling year over year for each new model/investment cycle, and you want to bet on this doubling continuing forever.

Year 1 you get $10B in profit, and spend $20B on extra capacity for next year

Year 2 you get $20B in profit, and spend $40B on extra capacity

Year 3 you get $30B in profit, and spend $??? on extra capacity

You're already in trouble. Profit growth from Year 2 to 3 was "only" 50% vs the doubling you were gambling on, so you've now lost $10B ($40B spent only earned you $30B of profit), and what are you going to do? Double down like the roulette player?

The longer the pattern of profit doubling goes before it slows down, the worse it will end for you, since your bets are doubling each year. Saying "woo hoo, look at me! risk pays!" is a bit like saying the same while playing Russian (not casino) roulette for money.
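The arithmetic above can be sketched as a toy simulation (the figures are the illustrative numbers from this comment, not real financials):

```python
def doubling_bet_balance(profits):
    """Each year, spend 2x this year's profit on next year's capacity,
    betting that profit doubles. Returns the cumulative gap between
    what the spend returned and what it cost."""
    balance = 0
    for this_year, next_year in zip(profits, profits[1:]):
        spend = 2 * this_year           # capacity bought for next year
        balance += next_year - spend    # what that capacity actually earned
    return balance

# $10B -> $20B doubles as hoped; $20B -> $30B grows only 50%.
print(doubling_bet_balance([10, 20, 30]))  # -10, i.e. a $10B loss
```

Extend the list with a few more doubling years before the slowdown and the final-year loss grows with the bet size, which is the martingale point.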

I worked for Acorn Computers UK in the early 80's and saw something similar firsthand. The brand new personal computer market was exploding, a once in a lifetime phenomenon, that no-one knew how to forecast. To make matters worse the market was highly seasonal with most sales at xmas, so the company had to guess what continued year-on-year exponential growth might look like (brand new market - no-one had a clue), and plan/spend ahead and stock warehouses full of computers ready for xmas. Sadly Acorn took the Sam Altman highly optimistic/irresponsible approach, got the forecast wrong, and was left with a huge warehouse full of rapidly depreciating computers. The company never fully recovered, although ARM rose out of the ashes.


Honestly, I personally would rather have a time-out than the quality of my responses noticeably downgrading. What made me especially distrustful were the responses from employees claiming that no degradation had occurred.

An honest response of "Our compute is busy, use X model?" would be far better than silent downgrading.


Are they convinced that claiming they have technical issues while continuing to adjust their internal levers to choose which customers to serve is holistically the best path?

Its a hard game to play anyway.

Anthropic's revenue is increasing very fast.

OpenAI, though, made crazy claims; after all, it's responsible for the memory prices.

In parallel, Anthropic announced a partnership with Google and Broadcom for gigawatts of TPU chips, while also announcing their own $50 billion investment in compute.

OpenAI always believed in compute, though, and I'm pretty sure plenty of people want to see what models with 10x or 100x or 1000x compute can do.


I bet that's the real reason why they're not releasing Mythos ;)

Prepare for the prices to go up!

You state your hypothesis quite confidently. Can you tell me how taking down authentication many times is related to GPU capacity?

All of these things can be true at the same time:

* AI makes it that you don't need as many devs as in the past

* Snap is a terrible business

* Snap overhired during Covid

* High interest rates are forcing companies to downsize


Interestingly, the US passed a law in 2023 to prevent the President from pulling out of NATO without two-thirds Senate approval or an act of Congress. The law was probably designed for someone exactly like Trump, or Trump himself.

If he's going to do it, he'll probably do it before the midterms, since Republicans control both chambers. I doubt he'll get enough votes, though, given how unpopular the war is. But who knows.

If he doesn't get the votes, I'm guessing Trump will still officially notify NATO that the US is leaving, then fight it out with the US court system over legitimacy. It might rock the markets, and he might lose, but I guess some insiders will make a lot of money betting on it.


Yes. The vast majority of software engineers don't stay up to date with the latest tools or frameworks either. They often just keep using whatever they learned and occasionally look for a new tool if they get stuck.

I don't see how doctors would have the energy to keep up with the latest research. This is why I think something like ChatGPT plus a doctor might be the best combo.

