Hacker News | rovr138's comments

Python 2.7 as a requirement

https://github.com/kamilchm/developer-experience/

Sidebar > Changelog > Hit the version 0.0.1 > fix the url to .com > remove tree and version


lol, even the link is not correct.

.con, instead of .com


The original message is,

> So much of the Internet is pay-walled now.

It’s lamenting that more of it is behind paywalls, not that paywalls exist at all.


And sowbug wrote that paywalls and the open web are not compatible, which I disagreed with.

No offense, but you wait. Like everyone's been doing for years on the internet and still does.

- When AWS/GCP goes down, how do most handle HA?

- When a database server goes down, how do most handle HA?

- When Cloudflare goes down, how do most handle HA?

The down time here is the server crashed, routing failed or some other issue with the host. You wait.

You might run Pingdom or something similar to alert you.
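The "wait, but get alerted" approach above doesn't need a paid service; a minimal sketch of a Pingdom-style check in Python (the function name and URL are just examples, not from any particular tool):

```python
import urllib.request
import urllib.error

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # urlopen raises HTTPError (a URLError subclass) on 4xx/5xx,
            # so reaching this line means the request basically succeeded.
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout, TLS error, etc.
        return False
```

Cron it every minute against your site and send yourself an email or SMS when it flips to False; that covers the "the server crashed, you wait" failure mode the comment describes.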


> When AWS/GCP goes down, how do most handle HA?

This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as postgres to a small provider/self-host.

Do you actually have any concrete scenario that supports your belief?


> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP

This is...not true of many hyperscaler outages? Frequently, outages will leave individual VMs running but affect only higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.

And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.
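For what it's worth, the "SQLite on an EC2 instance" setup described above has no network dependency on the database side at all, which is why managed-service outages don't touch it. A minimal sketch in Python (table and file names are made up for illustration):

```python
import sqlite3

# SQLite runs in-process: there is no database server to go down,
# only a file on the instance's own local disk.
conn = sqlite3.connect("app.db")  # example path; any local file works
conn.execute(
    "CREATE TABLE IF NOT EXISTS hits ("
    " path TEXT,"
    " ts   TEXT DEFAULT CURRENT_TIMESTAMP"
    ")"
)
conn.execute("INSERT INTO hits (path) VALUES (?)", ("/",))
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
```

As long as the VM itself stays up, reads and writes keep working; an outage in RDS, DynamoDB, or any other higher-order service is simply invisible to this code.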


> This is...not true of many hyperscaler outages? Frequently, outages will leave individual VMs running but affect only higher-order services typically used in more complex architectures. Folks running an SQLite on a EC2 often will not be affected.

You're trying too hard to move goalposts. Look at your comment: you're trying to argue that SQLite is immune to outages in AWS even when AWS is out, and your whole logic lies in asserting the hypothetical outage will be surgically designed to somehow not affect your deployment because it may or may not consume a service that was affected.

In the meantime, the last major AWS outage was Iran blowing up a datacenter. They should have just used SQLite to avoid that, right?


All I'm saying is that people mention HA when there isn't a need for it, or when most people are fine with some downtime. For example,

> When AWS/GCP goes down, how do most handle HA?

When they go down, what do most do? Honestly, people still go about their day and are okay. Look how many systems do go down. What ends up happening? An article goes out that X cloud took out large parts of the internet... and that's it.

Even when there are ways of avoiding it, things just go down and we accept it. I never said this doesn't or can't go down; it's just that it's okay and totally fine if it does.


> All I'm saying is that people mention HA when there isn't a need for it, or when most people are fine with some downtime.

I don't think it's smart to cherry-pick the design constraints you feel don't apply to you and then argue that others should ignore them too.

Just because you're OK with letting your pet project crash and stay down for long periods, why do you assume it's OK for everyone else to do the same?

Think about it for a second: what would be the impact of a storefront crashing during a Black Friday-type event? Do you think people don't get fired for dropping the ball in those circumstances? Heck, there are papers documenting how a few extra milliseconds of latency on a store page correlate with measurable drops in revenue, and here you are claiming that having a business crash is no biggie.


> They have to be honest about what they can offer for $200

Their expectation must have been a human using the service at human capacity.

This is different from an automated agent orchestrating a ton of different agents at the same time doing a lot of things.

There is a difference.


If people are finding new ways to use AI, they should change how they bill. Banning third-party harnesses is bad for a lot of reasons: it looks like they're trying to force people to use their software. Strategically it might make sense (it gives them a tiny moat if their models ever slip), but it discourages the breakneck pace of innovation, and the long-term effect is that their customers (largely highly skilled with computers and building software) will look to decouple themselves. Claude is good, but it's not so far better than everything else that they can pull shit like this and people will just deal with it.

They already have the regular subscription plans (Pro, Max) and a separate billing process for direct API usage. They could absolutely introduce another type of plan optimized for this kind of usage, or just accept that it's a dumb pipe that's being paid for; these random arbitrary limitations just make things more confusing and are a bad plan for the future.


They already have the way you're supposed to bill for usage like this: API usage. The purpose of the subscription plan is strictly for the cases where you use few enough tokens on average that it's not a money pit for them.

They have subscription plans for their software, and a separate billing process for the API. There's nothing to change. 'Accepting that it's a dumb pipe' would just mean removing the Pro & Max plans as options.

Clawdbot was clearly against the Consumer Terms of Use the whole time; they've just started actively detecting and blocking it.

> Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, [it is forbidden] to access the Services through automated or non-human means, whether through a bot, script, or otherwise.


Start paying by the token if you want to use these tools. Simple as.

Even better: switch to Codex and get better rate limits. I'm not a captive audience, as much as Anthropic would like to believe otherwise.

They don’t need to change how they bill. Your subscription is for Claude app/code. Otherwise you pay per token. It’s always been this way.

Claude Code is a subscription tier explicitly designed for agentic, automated, heavy usage. So the 'subscriptions are for human use, API is for automation' line is already blurry by their own offerings.

If the actual concern is use pattern, enforce that directly. What we have instead is metered usage + behavioral restrictions + product fragmentation across three separate offerings.

That's not a clean billing philosophy; it's layers of control stacked on top of each other with no coherent logic tying them together.

If subscriptions are for humans and API is for automation, fine. But then don't meter the human product arbitrarily and don't sell a subscription tier for automation while also restricting automation. Pick a lane.


> Claude Code is a subscription tier explicitly designed for agentic, automated, heavy usage

Except it's not. It's a desktop, web, mobile, and CLI subscription product built on top of a usage-based API with a generous token allowance bundled with it. That generous allowance comes with the restriction that those tokens can only be spent through Claude product surfaces. Why would Anthropic offer their API at a loss and subsidize the profits and growth of other businesses?


The whole industry is about robots telling robots what to do, why wouldn't they have expected automation?

You are correct, but you don't need openclaw to batch your work. People will figure out ways to use their tokens at that fixed price.

Sure there is a difference. It's like when most mobile companies wouldn't allow tethering because then people would actually use the service.

You can try to stop that, but people will price in those inconveniences. They will simply learn that the fee pays for much less than the token limit and that the company is enforcing some unwritten limits by adding extra limitations to usage.

We will see it play out.


Well, an em dash is used to signal a pause or an alternative in the text.

...so like a fork in how things are done, a new way of doing things.

But you need to take off the dev/AI hat in order to go back to writing rules and real-world usage.


While the base rate fallacy applies here, it might also be that people with somewhat bigger repos are paying closer attention.

I disabled all the attribution. I find it noisy, and I'm not blaming Claude; I'm blaming someone if something is broken.


Interesting. This looks nice. Made me think of webmin, which I used... years ago.

Went to look and webmin's changed. Pretty crazy.


I used to set up webmin for the Linux-challenged admins so they could do basic tasks. It was nice because you could lock them to specific functions in certain modules and make it difficult for them to break things.


yeah! I had some things through there early on when I was building sites. I had some custom scripts that could also be triggered by the users.


The better question is: what's been shipped in the past 60 days with those 600,000 lines?

Lots of people are trying things for the sake of it without really achieving anything. Maybe they have 'a setup', but the setup ends up being unproven.


> five digits

before or after the 90%?


Considering he mentions ten sessions at once and I'm pretty confident he wouldn't tolerate waiting for the quota to reset itself... maybe like high four digits with the discount applied, definitely five without it.

I could be underestimating both by a digit.


Yes

