Hacker News | new | past | comments | ask | show | jobs | submit | ipaddr's comments | login

No one sends you the cases they didn't win, and they probably had a lot of court cases.

That's not the point. They were gleeful about their behaviour. It's even more despicable than the faux-kind "oh, we are so sorry for your trouble and you are a valued customer, but the computer says no."

Why wouldn't bots pay?

Some would! But it changes the value proposition significantly. That's why it needs to be a reasonably high amount.

Exactly. Plenty of them can probably afford it more easily than average people.

OpenAI is handling 15% of US traffic.

> OpenAI is handling 15% of US traffic.

The parent post was arguing that they can do this now because they are lighting stacks of cash on fire, and once they stop doing that, their LLM lead will be gone in a hurry. Unlike other, more established players, they appear to have no moat.


15% of US internet traffic just with text (and a few images)? I doubt it.

Voiceless groups do not appear in the training data? How could they? They are voiceless. You think voiceless people are represented in today's training data? They can't be; they are voiceless.

Nothing tragic about using data from a time period.

Common words used in the 1900s are labeled racist now. I doubt anyone wondered whether those words were filtered out and replaced with modern safe words.


There are tons of those offers. Careful that the $9k revenue doesn't come from $9,000 of ads.

I'm probably being dim here, but can you elaborate a bit more? Where's the rest of the non-ad revenue coming from?

He means: careful that the $9k of revenue doesn't come from ads that the scamming owner placed so that the site could show higher traffic and therefore higher ad revenue. In other words, paying $2 for ads to send people to your website in order to make $1 on the ads that the ad platform now shows to your "audience."
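The arbitrage described above only looks like revenue from the outside. A minimal sketch, using illustrative numbers consistent with the $9k figure in this thread (the $2-in-for-$1-out ratio is the comment's own assumption):

```python
# Ad-arbitrage scam sketch: the owner buys traffic at a loss so the
# site's reported ad revenue looks attractive to a prospective buyer.
# All figures are illustrative assumptions, not real data.

reported_revenue = 9_000           # ad revenue the site shows ($/yr)
cost_per_revenue_dollar = 2.0      # owner pays $2 in ads per $1 earned

ad_spend = reported_revenue * cost_per_revenue_dollar
true_profit = reported_revenue - ad_spend

print(f"ad spend: ${ad_spend:,.0f}")        # $18,000 bought traffic
print(f"true profit: ${true_profit:,.0f}")  # -$9,000: the site loses money
```

The point of the sketch: a buyer who only audits the revenue line sees $9,000/yr; a buyer who audits the owner's ad spend sees a money pit.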

Tried the same prompt and ended up nowhere close on the free plan.

Is there a known lag before the Pro plan's abilities migrate to the free plans?

GPT 5.5 Pro is not available to any plan outside of the ChatGPT Pro ($100 or $200) tier or the API, as far as consumer access goes.

Yes, but don't we expect GPT 5.5 Pro will eventually reach the free tier? Maybe I'm missing something because I only use the free tier. But the free tier has gotten way better over the last few years. I'm pretty sure, based on descriptions on this site from paid subscribers, that the free tier now is better than the paid tier of, say, 2 years ago. That's the lag I'm wondering about.

Free ChatGPT is like a fast car with a barely responsive steering wheel. Guardrails on that thing are insane, even for math. It won't let you think. It will try to fix mistakes you haven't even made yet, based on intent ascribed to you for no reason. It veers off in some crazy directions thinking that's what you meant, and trying to address even a little bit of that creates almost a combinatorial explosion of even more wrong things. That's why I stick to Claude. The latter is chill and only addresses what you typed; it isn't verbose and actually asks what you're getting at with your post. That said, ChatGPT is more technical and can easily solve math problems that stump Claude.

So this doesn't happen in the paid plans of ChatGPT? But why?

Paid plans give you access to much larger, more intelligent models which have thinking enabled (inference time compute). In the example here you can see GPT Pro taking 20-80 minutes to respond with the proof.

All this is far more expensive to serve so it’s locked away behind paid plans.


> thinking enabled (inference time compute)

What do you mean by compute?


I would google this or ask ChatGPT to learn more; the free version should be totally sufficient.

I do not think this is true. You will continue to get smaller, cheaper-to-host models in the free tier that are distilled from current and former frontier models. They will continue to improve, but I’d be very surprised if, e.g., 5.4-mini (I think this is the free tier model) beat o3 on many benchmarks, or real world use cases.

I won’t even leave chatGPT on “Auto” under any circumstances - it’s vastly worse on hallucinations, sycophancy, everything, basically.

Anyway, your needs may be met perfectly fine on the free tier product, but you’re using a very different product than the Pro tier gets.


You should pay for it if you find value in it.

They pay for it with their personal data.

Tangential but I learned today that GPT-5.5 in ChatGPT (Plus) has a smaller context window than the one in the API. (Or at least it thinks it does.)

I'd guess / hope the Pro one has the full context window.


Notably, 5.5 on the API prices large-context requests higher than ChatGPT does, and 5.5 Pro on the API does not differentiate based on context size (it's eye-bleedingly expensive already :)

Do not use the free plan. It is not good.

Does the free plan even have access to thinking models?

Technically yes: gpt-5.4-mini is available on the free plan.

Was this a surprise?

His neighbour isn't spending $60,000 on all of those together.

Count the Fords on the street.

Now count the Amazon deliveries in a year on that same street. And next year, and the year after, and... however long one keeps a Ford these days.

It's quite a scary thought exercise.


The average person spends $2,800 a year with Prime or $1,100 without. 75% of Amazon shoppers have Prime, so the average is about $2,400 a year (0.75 × $2,800 + 0.25 × $1,100 ≈ $2,375). Amazon collects 35% on each sale where they package and ship for you.

So Amazon makes roughly $800 per person in revenue.

Ford makes $303 per person in revenue.

AWS makes about the same.

AI spend across all platforms is about $450 per person.

Their costs to produce aren't equal.
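The per-person Amazon figure above can be checked with a quick back-of-the-envelope sketch. Every input here is the comment's own assumption (spend levels, Prime share, Amazon's 35% take), not verified data:

```python
# Back-of-the-envelope check of the Amazon revenue-per-shopper estimate.
# All inputs are the comment's stated assumptions.

prime_spend = 2_800       # avg annual spend, Prime member ($)
non_prime_spend = 1_100   # avg annual spend, non-Prime shopper ($)
prime_share = 0.75        # fraction of Amazon shoppers with Prime
amazon_cut = 0.35         # Amazon's take on sales it packages and ships

# Weighted average spend across Prime and non-Prime shoppers.
avg_spend = prime_share * prime_spend + (1 - prime_share) * non_prime_spend

# Amazon's slice of that spend, per shopper per year.
amazon_per_person = amazon_cut * avg_spend

print(f"average spend:      ${avg_spend:,.0f}")        # $2,375
print(f"Amazon per shopper: ${amazon_per_person:,.0f}")  # $831
```

That yields roughly $830, consistent with the "$800 per person" claim after rounding; it also assumes the 35% take applies to all spend, which overstates things for sales Amazon doesn't fulfill.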


You get to talk to an AI agent

The comment is about smaller models

Right, but what are you going to do with small models? If your time is worth anything at all, you'd pay for the $100 Claude Code/Codex pro subscription rather than fumbling around with models quantized enough to fit on your Mac.

If you're building agentic harnesses for business processes, local models are a great way to do that while keeping your data, and any personal data, private.

If you're vibe coding, a Codex/Claude subscription makes more sense as a more polished experience.

I don't vibe code, but I use self hosted models with codex for code review and snippet generation.


If small models keep improving for specific purposes and larger models have diminishing returns, then what?

E.g. I can see a world where you have a local model that is specialised just for producing code.


$100 isn't going to buy you much access to Claude Code once they start charging a profitable fee for using it.

It's all part of getting along. If feedback that ideas are bad is not welcome, why would you want to give it?

Being nice to everyone, including your boss, eliminates a lot of problems.

Taking credit for someone else's work gives you additional power over that person, and gives them additional responsibility when things go wrong.

Your boss will take credit for the department's work, his boss will do the same, her boss will do the same, and the VP will do the same. Their job is to get everyone under them to meet some goal set from above. Everyone is taking credit for everyone else's work all the way up the chain. We do this as parents: "my child made the honor roll," you might tell a friend, knowing it reflects on you.


Yes but you're recasting my complaints as good things.

Many elements of an abusive relationship can be good, but it's the degrees that make them bad. Would you tell someone they weren't in an abusive relationship because the things they told you about could be good? Or would you accept their feelings on the matter?

