This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as Postgres to a small provider or self-host.
Do you actually have any concrete scenario that supports your belief?
> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP
This is...not true of many hyperscaler outages. Frequently, an outage will leave individual VMs running and affect only the higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.
And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.
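To make the point concrete: in that single-VM setup the entire data layer is just a file on the instance's disk, with no managed database or other control-plane service in the request path. A minimal sketch (Python stdlib only; the file and table names are illustrative, not from any real deployment):

```python
import sqlite3

# The "database server" is just a local file on the VM's disk. There is no
# managed-database, load-balancer, or control-plane dependency here, so an
# outage in higher-order cloud services doesn't touch this code path.
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")  # better read/write concurrency for a web app
conn.execute("CREATE TABLE IF NOT EXISTS visits (id INTEGER PRIMARY KEY, ts TEXT)")
conn.execute("INSERT INTO visits (ts) VALUES (datetime('now'))")
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
print(count)
```

As long as the VM itself keeps running, reads and writes keep working; the trade-off is that you've tied availability to that one instance instead of to the cloud's higher-order services.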
> This is...not true of many hyperscaler outages. Frequently, an outage will leave individual VMs running and affect only the higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.
You're trying too hard to move goalposts. Look at your comment: you're arguing that SQLite is immune to outages in AWS even when AWS is out, and your whole argument rests on asserting that the hypothetical outage will be surgically confined so that it somehow doesn't affect your deployment, because your deployment may or may not consume a service that was affected.
In the meantime, the last major AWS outage was Iran blowing up a datacenter. Should they have just used SQLite to avoid that?
All I'm saying is that people mention HA when there isn't a need for it, or when most people are fine with some downtime.
For example,
> When AWS/GCP goes down, how do most handle HA?
When they go down, what do most people do? Honestly, they still go about their day and are okay. Look at how many systems do go down. What ends up happening? An article goes out saying that X cloud took out large parts of the internet... and that's it.
Even when there are ways of preventing it, systems just go down and we accept it. I never said these systems don't or can't go down; it's just that it's okay and totally fine if they do.
> All I'm saying is that people mention HA, when there isn't a need for it or when most people are fine with some downtime.
I don't think it's smart to just cherry-pick the design constraints you feel don't apply to you, and then argue that others should also ignore them.
Just because you're okay with letting your pet project crash and stay down for long periods, why do you assume it's okay for everyone else to do the same?
Think about it for a second: what would be the impact of a storefront crashing during a Black Friday type event? Do you think people don't get fired for dropping the ball in those circumstances? Heck, there are papers documenting how a few extra milliseconds of latency on a store page correlate with measurable drops in revenue, and here you are claiming that having a business crash is no biggie.
If people are finding new ways to use AI, they should change how they bill. Banning third-party harnesses is bad for a lot of reasons - it looks like they're trying to force people to use their software. Strategically it might make sense - it gives them a tiny moat if their models ever slip - but it discourages the breakneck pace of innovation, and the long-term effect is that their customers (largely people highly skilled with computers and building software) will look to decouple themselves. Claude is good, but it's not so far ahead of everything else that they can pull shit like this and people will just deal with it.
They already have the regular subscription plans (Pro, Max) and a separate billing process for direct API usage. They could absolutely introduce another type of plan optimized toward this kind of usage or just accept that it's a dumb pipe that is being paid for and having these random arbitrary limitations is just making things more confusing and a bad plan for the future.
They already have the way you're supposed to bill for usage like this: the API. The purpose of the subscription plan is strictly for cases where you use few enough tokens on average that it's not a money pit for them.
They have subscription plans for their software and a separate billing process for the API. There's nothing to change. 'Accepting that it's a dumb pipe' would just mean removing the Pro & Max plans as options.
Clawdbot was clearly against the Consumer Terms of Use the whole time, they’ve just started actively detecting and blocking it.
> Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, [it is forbidden] to access the Services through automated or non-human means, whether through a bot, script, or otherwise.
Claude Code is a subscription tier explicitly designed for agentic, automated, heavy usage. So the 'subscriptions are for human use, API is for automation' line is already blurry by their own offerings.
If the actual concern is use pattern, enforce that directly. What we have instead is metered usage + behavioral restrictions + product fragmentation across three separate offerings.
That's not a clean billing philosophy, it's layers of control stacked on top of each other with no coherent logic tying them together.
If subscriptions are for humans and API is for automation, fine. But then don't meter the human product arbitrarily and don't sell a subscription tier for automation while also restricting automation. Pick a lane.
> Claude Code is a subscription tier explicitly designed for agentic, automated, heavy usage
Except it's not. It's a desktop, web, mobile, and CLI subscription product built on top of a usage-based API with a generous token allowance bundled with it. That generous allowance comes with the restriction that those tokens can only be spent through Claude product surfaces. Why would Anthropic offer their API at a loss and subsidize the profits and growth of other businesses?
You are correct, but you don't need openclaw to batch your work. People will figure out ways to use their tokens at that fixed price.
Sure there is a difference. It's like when most mobile companies wouldn't allow tethering because then people would actually use the service.
You can try to stop that, but people will price in those inconveniences. They will simply learn that the fee pays for much less than the token limit and that the company is enforcing some unwritten limits by adding extra limitations to usage.
I used to set up Webmin for the Linux-challenged admins so they could do basic tasks. It was nice because you could lock them to specific functions in certain modules and make it difficult for them to break things.
The best question is: what's been shipped in the past 60 days with those 600,000 lines?
Lots of people are trying things for the sake of it, without really achieving anything. Maybe they have 'a setup', but the setup ends up being unproven.
Considering he mentions ten sessions at once, and I'm pretty confident he wouldn't tolerate waiting for the quota to reset... maybe high four digits with the discount applied, definitely five without it.