Hacker News | chewz's comments

Not all, but a few. Read the directive: IP67 or 1000 charges, and the model is excluded.

Makes sense.

It is over for the little guy - home enthusiasts and vibe coders. Too many of them saturating resources for Max users.

If you cannot afford a few-hundred-dollar subscription, go out and breathe fresh air. But if you can, watch where the ball is rolling: few-thousand-dollar subscriptions and even fewer programmers.


To hear HN tell it, Claude pays for itself 3× over.

Something tells me that, cognitively, it's making us misjudge how productive it's making us.

It's clearly massively increasing output, but did the market already soak up all that productivity and now it's not compensated?

If your salary is $50k and Claude makes you 2× as productive, why aren't you earning $100k?

Why is it that anyone can't afford $200/mo if it's truly increasing worker productivity?

There seems to be a paradox here.
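The arithmetic behind this paradox is easy to sketch. A minimal back-of-envelope calculation, using only the hypothetical figures from the comments above ($50k salary, $200/mo subscription, a claimed 2× boost), none of which are real data:

```python
# Back-of-envelope sketch of the productivity paradox above.
# All figures are the commenters' hypotheticals, not real data.
salary = 50_000          # assumed annual salary ($)
subscription = 200 * 12  # $200/mo subscription, annualized

# Fraction of extra output needed just to cover the subscription
break_even_gain = subscription / salary
print(f"break-even productivity gain: {break_even_gain:.1%}")  # 4.8%

# A genuine 2x boost would be worth far more than the fee
claimed_value = salary * (2 - 1)
print(f"value of a genuine 2x boost: ${claimed_value:,}")  # $50,000
```

The gap between the 4.8% break-even and the claimed 100% gain is the paradox: if the boost were real and captured by the worker, the fee would be trivial.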

Personally I switched to Z.ai and GLM quite some time ago. I've not noticed any decrease in quality or quantity of my work.


Agree about the psychological impact outpacing the likely actual impact, but that's a relatively temporary phenomenon as we all adapt to the new way things work.

Productivity-wise, employment is far more than code-production productivity in a vacuum, and productivity gains are rarely captured by employees (see the famous chart on worker productivity where that correlation changed around 1970). I wouldn't expect to see much in the next 1-2 years besides noticing effective teams increasing their velocity of features.

I think people in forums like complaining about things and aren’t representative of the broader set of people who are just using the tools, so no real paradox. For vast majority of tech jobs, $200/mo is still an absolute steal in terms of what these tools offer. Only the dullest of companies would not realize this.

Fwiw in the 80s-90s computers also didn’t really register in productivity metrics. Qualitative changes occur long before accurate measurement catches up.


Because most people work for someone else and don't decide their own salaries. It's not doubling productivity, but even a 10-20% boost to productivity for a team of engineers means that, as a business, even $1k per month per seat is perfectly acceptable. For consumers and hobbyists that basically kills access.
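A rough sketch of that employer-side math, with illustrative numbers (the loaded cost and seat price are assumptions, not vendor pricing):

```python
# Illustrative employer-side break-even; all figures are assumptions.
loaded_cost = 200_000    # assumed fully loaded annual cost of one engineer ($)
boost = 0.10             # low end of the 10-20% productivity gain above
seat_price = 1_000 * 12  # hypothetical $1k/month/seat, annualized

extra_output_value = loaded_cost * boost
print(extra_output_value)               # 20000.0
print(extra_output_value > seat_price)  # True: the seat pays for itself
```

On these assumptions the seat clears its cost with room to spare, which is why the price can float to enterprise levels while pricing out hobbyists.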

Yeah, the more people who use it, the less competitive edge you have. The benefits get devalued, and you're back to square one.

> Personally I switched to Z.ai and GLM quite some time ago. I've not noticed any decrease in quality or quantity of my work.

> Something tells me that, cognitively, it's making us misjudge how productive it's making us.

This could be happening to you, too.


VS Code agent mode and GitHub Copilot can use Claude models, and they have feature parity with the .md customization for agents, prompts, skills, etc.

Not too expensive


They slapped a 7.5x “promotional” multiplier on Opus 4.7 and they are removing Opus 4.6 in short order.

I heard they disabled signups for non-business accounts too.

Best forget about using Claude Opus models in Copilot.


> Best forget about using Claude Opus models in Copilot.

I noticed this morning that Opus isn't even one of the models in the `/model` command in Copilot. Highest I can get (on the paid, but least expensive) tier is Sonnet 4.6. I'm pretty sure Opus was allowed recently, but not now.


Yeah, not thrilled about that.

Looks like you gotta build your own agent harness and self-host, or use AWS Bedrock for "sovereignty".


Indeed; especially since I paid for a sub with some expectations, and those are being changed out from under me.

I heard they offer a full refund for this month if you are understandably unhappy with these changes.

Truly makes no sense. I pay for the $200/month plan and end up using about $3k/month worth of API costs. I imagine that the only reason they haven’t cut me off is because my habits serve as good training data for them.

Guess they’ve decided to move in the direction of allocating compute primarily to power users and enterprise.

But power users are not a sticky customer base. I just bought the ChatGPT Pro plan and would immediately switch over if the model performance is better and/or I get more compute.


Or the API is overpriced. The concept of charging per token does not map well to the actual costs an AI company has.

Odd, everyone was insisting this would "democratize" programming though.

Guess it democratizes it if you have money, huh?


Good old "one dollar, one vote" democracy!

Well, God is British. It is customary to blame Him

Do you review all machine code your compiler produces?

...how exactly do you think that's even remotely the same thing?

Compiler output is deterministic based on input code - which is typically reviewed before compiling by someone(s) who will be held accountable for it.


7.5× is the promotional rate; it will go up to 25×. And in May you will be switched to per-token billing.

Opus 4.5 and 4.6 will be removed very soon.

So what is your contingency plan?


Are you saying GitHub Copilot is switching to a per-token billing model? If so, do you have a link to that?

Can you link to a source for anything you're claiming?

https://github.blog/changelog/2026-04-16-claude-opus-4-7-is-...

> Over the coming weeks, Opus 4.7 will replace Opus 4.5 and Opus 4.6 in the model picker for Copilot Pro+.

> This model is launching with a 7.5× premium request multiplier as part of promotional pricing until April 30th

TBF, it's a rumour that they are switching to per-token pricing in May, but it's from an insider (apparently), and seeing how good a deal the current per-request pricing is, everyone expects them to bump prices or switch to per-token pricing sometime soon.
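For concreteness, here is how a premium-request multiplier translates into actual model calls per month. The 1,500-request quota is an assumption for illustration (check your plan's docs); the 7.5× figure is from the changelog quoted above, and 25× is the number claimed upthread:

```python
# How a premium-request multiplier eats into a monthly quota.
monthly_quota = 1_500   # assumed premium requests included per month
promo_multiplier = 7.5  # promotional Opus 4.7 multiplier (per the changelog)
later_multiplier = 25   # post-promo multiplier claimed upthread

print(monthly_quota / promo_multiplier)  # 200.0 Opus requests/month
print(monthly_quota / later_multiplier)  # 60.0 Opus requests/month
```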


The per-request pricing is ridiculous (in a good way, for the user). You can get so much done on a single prompt if you build the right workflow. I'm sure they'll change it soon

Yeah it seems insane that it's priced this way to me too. Using sonnet/opus through a ~$40 a month copilot plan gives me at least an order of magnitude more usage than a ~$40 a month claude code plan (the usage limits on the latter are so low that it's effectively not a viable choice, at least for my use cases).

The models are limited to 160k token context length but in practice that's not a big deal.

Unless MS has a very favourable contract with Anthropic, or they're running the models on their own hardware, there's no way they're making money on this.


Yeah, you can even write your own harness that spawns subagents for free, and get essentially free opus calls too. Insane value, I'm not at all surprised they're making changes. Oh well. It was a pain in the ass to use Copilot since it had a slightly different protocol and oauth so it wasn't supported in a lot of tools, now I'm going to go with Ollama cloud probably, which is supported by pretty much everything.

Making models smarter is saturated at this point. But making models faster and cheaper is a vector for growth.

OpenAI will not win against Anthropic that way. But they will make themselves indispensable to billions of new, invisible micro-solutions.


Until May

What’s happening in May?

GitHub Copilot switches all users from per-prompt to per-token billing.

We have been processing the same data for the last 2 years.

Inference prices dropped by something like 90 percent in that time (a combination of cheaper models, implicit caching, service levels, different providers, and other optimizations).

Quality went up. Quantity of results went up. Speed went up.

The service level that we provide to our clients went up massively and justified better deals. Headcount went down.

What's not to like?


The decline of independent thought, for one. As people become reliant on LLMs to do their thinking for them and solve every problem they stumble upon, they become shells of their former selves.

Sadly, this is already happening.


We'll need to do faux mental work, the way we already do faux physical work.

There is no decline. Human analysts were always too expensive to process that additional information. We are simply processing a lot more low-signal data.

Actually some of our analysts are empowered by the tools at their disposal. Their jobs are safe and necessary. Others were let go.

Clients are happy to get a fuller picture of their universe, which drives more informed decisions. Everybody wins.


You are free to believe what you want, but what you describe does not match what I’ve seen from society as a whole. I’m just going to leave this here: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...

Are you being satirical?

The headcount that went down probably isn’t too thrilled about it.

Yes, probably. But the others gained skills and tools that made their jobs secure.

Right, but the question wasn't whether some people were better off. It was "what's not to like?"

Quantity has a quality of its own. Or, in the case of code assistants: speed.

Use Opus fast, Codex-Spark, or Cerebras. No more staring at spinners. Instant results. One project at a time.


Keeping a lid on boiling peasant rage has always been the chief qualification of the aristocracy.

Plus, I don't believe in violence from tech bros and white-collar workers - they have been raised too docile...

