> Sure, that profit does not cover the model training costs, but that’s a separate issue
It is? If another company comes out with a better model tomorrow and offers it at the same price Anthropic charges for Opus, they’re going to lose customers fast. They have to keep training to keep selling inference.
Most businesses factor in the cost of making their product into the product’s P&L.
also, like super mario kart, SOTA models from the rear will be continually released, because they're sunk costs and open weights advertise themselves. Also, it's clear FOMO is a DDoS attack on any perceived leader, because there's no way they don't oversell.
Lastly, they'll realize, like every good capitalist, that there's more profit in exclusiveness and cutting out customers.
> if there's one clear example of "Product Model Fit", it's OpenClaw
You think so? OpenClaw certainly owned the hype cycle for a while. There was a thread on HN last week where someone asked who was actually using it, and the comments were overwhelmingly "tried it, it was janky and I didn't have a good use case for it, so I turned it off", with a handful of people who seemed to have committed to it and had compelling use cases. Obviously anecdotal, but that has been the trend I've seen in conversations around it lately.
Also, the fact that it became the most starred repo on GitHub in a matter of a few months raises a few questions for me about what is actually driving that hype cycle. It seems hard to believe that's strictly organic.
The early versions of this design arrived in 2008, though it had a sweet sweet flash header complete with audio until 2021.
An even more irrelevant side note: it appears that archive.org has a javascript based flash emulator built in to run old flash websites, which is pretty amazing.
Agreed -- except that all of their docs and marketing pitch it for use cases like "per-user, per-tenant or per-entity databases" -- which would be SO great.
But in practice, it's basically impossible to use it that way in conjunction with Workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.
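A sketch of the friction, assuming the standard wrangler flow (the database name is hypothetical):

```sh
# create a new per-tenant database...
npx wrangler d1 create tenant-42-db
# ...add a [[d1_databases]] binding for it to wrangler.toml by hand...
# ...and redeploy the worker before any code can reach the new DB:
npx wrangler deploy
```

That redeploy step is what kills "create a database per signup" as a pattern.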
If you want to dynamically create sqlite databases, then moving to durable objects which are each backed by an sqlite database seems to be the way to go currently.
And now you've put everything on the equivalent of a single NodeJS process running on a tiny VM. Next step: spread out over multiple durable objects but that means implementing a sharding logic. Complexity escalates very fast once you leave toy project territory.
Constantly rewriting git history with squashes, rebases, manual changes, and force pushes has always seemed like leaving a loaded gun pointed at your foot to me.
Especially since you get all of the same advantages with plain old stream-of-consciousness commits and merges using:
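For instance (this is my illustration of one such workflow, not necessarily the parent's exact list; a first-parent log over plain merge commits is one common way to get the tidy-history benefits):

```sh
# commit freely on the branch, then record an explicit merge commit
git merge --no-ff my-feature
# read main as if it were a clean, linear history
git log --first-parent --oneline main
```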
I find rebases are only a footgun because the standard git cli is so bad at representing them - things like --force being easier to write than --force-with-lease, there being no way to easily absorb quick fixes into existing commits, interdiffs not really being possible without guesswork, rebases halting the entire workflow if they don't succeed, etc.
I've switched over pretty much entirely to Jujutsu (or JJ), which is an alternative VCS that can use Git as its backend, so it's still compatible with GitHub and other git repos. My colleagues can all use git, and I can use JJ without them noticing or needing to care.
JJ has merges, and I still use them when I merge a set of changes into the main branch once I've finished working on it, but it also makes rebases really simple and eliminates most of the footguns. While I'm working on my branch, I can iteratively make a change and then squash it into the commit I'm working on. If I refactor something, I can split the refactor out into a separate commit so it's easier to review and test.
When I get review feedback, I can squash the fix directly into the relevant commit rather than creating a new commit for it, which means git blame tends to be much more accurate and helpful: the commit I see in the blame readout is the one that actually made the change I'm interested in, not a commit fixing some minor review detail, or one containing a typo that was fixed later in a way blame can no longer connect.
And while I'm working on a branch, I still have access to the full history of each commit and how it's changed over time, so I can easily make a change and then undo it, or see how a particular commit has evolved and maybe restore a previous state. It's just that the end result that gets merged doesn't contain all those details once they're no longer relevant.
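A minimal sketch of that loop (these are real jj subcommands; the change ID and messages are hypothetical):

```sh
# describe the current working-copy commit and start a new one on top
jj describe -m "refactor: extract config loader"
jj new
# ...hack on a quick fix, then fold it into the refactor commit by its change ID
jj squash --into xyzabc   # 'xyzabc' is a hypothetical change ID
# carve unrelated hunks out of the current commit interactively
jj split
# inspect how a commit has evolved over time, or roll back the last operation
jj evolog
jj undo
```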
+1 on this, I also switched to jj when working with any git repo.
What's funny is how much better I understand git now. Despite using jj full time, I've been explaining concepts like rebasing, squashing, and stacked PRs to colleagues who exclusively use git tooling.
The magic of the git cli is that it gives you control. Meaning whatever you want to do can be done. But it only gives you the raw tools. You'll need to craft your own workflow on top of that. Everyone's workflow is different.
> So while I'm working on my branch, I can iteratively make a[...]which means git blame tends to be much more accurate and helpful
Everything here I can do easily with Magit with a few keystrokes. And Magit sits directly on top of git, just with interactivity. Which means if I wanted to, I could write a few scripts with fzf (to help with selection) and they would be quite short.
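For example, a hypothetical fzf helper for absorbing a staged fix into an earlier commit:

```sh
# pick a recent commit with fzf and target it with a fixup commit...
git log --oneline -20 | fzf | cut -d' ' -f1 | xargs -I{} git commit --fixup={}
# ...then fold all fixup commits into place
git rebase -i --autosquash origin/main
```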
> And while I'm working on a branch, I still have access to the full history of each commit...
Not sure why I would want the history for a specific commit. But there's the reflog in git, which is the ultimate undo tool. My transient workspace is only a few branches (a single one in most cases), and those are the few commits I worry about. Rebase and revert have always been all I needed to alter them.
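The usual recovery dance looks something like this:

```sh
# list recent HEAD movements (commits, rebases, resets, ...)
git reflog
# jump back to the state before the last operation
git reset --hard HEAD@{1}
```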
I think there's a sense in which magit and jj are equivalent tools, although I don't have enough experience with magit to be sure. They both sit on top of git and expose the underlying model of git far more cleanly and efficiently than the standard git cli. The difference is that magit uses interactivity to make git operations clearer, whereas jj tries to expose a cleaner model directly.
That said, there are additional features in jj that I believe aren't possible in magit (such as evolog/interdiffing, or checked-in conflicts), whereas magit-like UIs exist for jj.
You want the history of a specific commit because if you, say, fixup that commit, you want to know exactly how the commit has changed over time. This is especially useful for code review. Let's say you've got a PR containing a refactor commit and a fix commit. You get a review that says you should consider changing the refactor slightly, so you make that change and squash it into the existing refactor commit. You then push the result - how can the reviewer see only the changes you made to the refactor commit? That is an interdiff.
In this case, because you've not added any new commits, it's trivial to figure out which commit in the old branch maps to which commit in the new, fixed branch. But this isn't possible in general (consider adding new commits, or reordering something, or updating the commit message somewhere). In jj, each commit also has a change ID, and if multiple commits share the same change ID, then they must be different versions of the same changeset.
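For reference, git's closest built-in is range-diff, while jj tracks this natively via change IDs (the branch name and change ID here are hypothetical):

```sh
# compare the two versions of the branch commit-by-commit (an interdiff)
git range-diff main fix-branch@{1} fix-branch
# jj: show every version a single change has gone through, with patches
jj evolog -p -r xyzabc   # 'xyzabc' is a hypothetical change ID
```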
You want the history of the repository, which includes the history of each commit, because it's a lot easier to type `jj undo` to revert an operation you just did than it is to find the old state of the repository in the reflog and revert to it, including updating all the branch references to point at their original locations. The op log in jj truly is the ultimate undo tool - it contains every state the repository has ever been in, including changes to tags and branches that aren't recorded in the reflog, and is much easier to navigate. It is strictly more powerful than the reflog, while being simpler to understand.
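Concretely (the operation ID is hypothetical):

```sh
# every repo-level operation (commits, rebases, branch updates, fetches) is recorded
jj op log
# roll back just the most recent operation
jj undo
# or restore the entire repo to any earlier recorded state
jj op restore a1b2c3   # 'a1b2c3' is a hypothetical operation ID from the op log
```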
> But there's the reflog in git which is the ultimate undo tool.
That one sentence outs you as someone who isn't familiar with JJ.
Here is something to ponder. Despite claims to the contrary, there are many git commands that can destroy work, like `git reset --hard`. The reflog won't save you. However there is literally no JJ command that can't be undone. So no JJ command will destroy your work irretrievably.
I’ve just tested that exact command, and the reflog is storing the changes. It's different from the log command, which displays the commit tree for the specified branch. The reflog stores information about operations that update branches and other references (rebase, reset, amend, commit, …). So I can revert the reset, or a pull.
`git reset --hard` destroys uncommitted changes. There is no git command to recover those files. JJ has a similar command of course, but it saves the files to a hidden commit before changing them.
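Something like this shows the difference (a sketch; notes.txt is hypothetical, and don't run this on work you care about):

```sh
# git: uncommitted work is simply destroyed; the reflog only records ref updates
echo "important uncommitted work" >> notes.txt
git reset --hard HEAD          # the notes.txt edit is gone for good
# jj: the working copy is auto-snapshotted before every command, so the
# equivalent discard (jj restore) can always be rolled back:
jj restore                     # discard working-copy changes...
jj undo                        # ...and get them right back
```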
“Use Postgres for everything” is a great philosophy at low/medium scale to keep things simple, but there comes a scaling point where I want my SQL database doing as little as possible.
It’s basically always the bottleneck/problem source in a lot of systems.
Of course. The flip side is that many, many more people are in the "low/medium scale" zone than would self report. Everyone thinks they're a scale outlier because people tend to think in relative terms based on their experience. Just because something is larger scale than one is used to, doesn't mean it's high scale.
Yes. For example you'll typically have a "budget" of 1-10k writes/sec. And a single heavy join can essentially take you offline. Even relatively modest enterprises typically need to shift some query patterns to OLAP/nosql/redis/etc. before very long.
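As an illustration (hypothetical schema; the point is to see the sequential scans and hash joins before they hit the primary):

```sh
# check what a reporting-style join actually costs before shipping it
psql appdb -c "EXPLAIN (ANALYZE, BUFFERS)
  SELECT u.id, count(*)
  FROM users u JOIN events e ON e.user_id = u.id
  GROUP BY u.id;"
```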
can share our work setup we've been tinkering with at a mid-size org: iceberg datalake + snowflake for our warehouse. iceberg tables live in s3, and that's now shareable to postgres via the pg_lake extension, which automagically context-switches using duckdb under the hood to do olap queries across the vast iceberg data. we keep the postgres db as an application db, so apps can retrieve the broader data they want to surface in the app from the iceberg tables, but still have spicy native postgres tables to do their high-volume writes.
very cool shit, it's certainly blurred the whole olap vs oltp thing a smidge, but not quite. more or less makes olap and oltp available through the same db connection. writing back to iceberg is possible, we have a couple apps doing it, though one should probably batch/queue writes back, as iceberg definitely doesn't have a fast-writes story. it's just nice that the data warehouse analytics nerds have access to the apps' data and can do their thing in the environment they work with back on the snowflake side.
this is definitely an "i only get to play with these techs cause the company pays for it" thing. no one wants to front the cost of iceberg-datalake-sized mountains of data on some s3 storage somewhere, and it doesn't solve for any sort of native-postgres'ing. it just solves for companies doing ridic stuff under enormous sla contracts, paying for all manner of cloud services that joe developer the home guy isn't going to be tinkering with anytime soon. but it's definitely an interesting time to work near data. so much "sql" has been commercialized over the years, and it's really great to see postgres being the people's champ, helping us break away from the dumb attempts to lock us in under sql servers and informix dbs etc. we still haven't reached one database for everything yet, but postgres is by and large the one carrying the torch, in my head canon. if any of them will get there someday, it's postgres.
Have an old MacBook Pro sitting in my office as a self-hosted Mac GitHub runner and it works great.
My biggest complaint used to be that it would occasionally restart after a system update and I’d have to unlock FileVault in person, but macOS 26 now allows unlocks over ssh.
Yeah, I just created an Anthropic API key to experiment with pi, and managed to spend $1 in about 30 minutes doing some basic work with Sonnet.
Extrapolating that out, the subscription pricing is HEAVILY subsidized. For similar work in Claude Code, I use a Pro plan for $20/month, and rarely bang up against the limits.
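Back-of-the-envelope (my assumptions: that burn rate sustained over full-time usage):

```sh
# $1 per 30 min => $2/hour; assume 8h/day, 22 working days/month
echo "2 * 8 * 22" | bc   # => 352, i.e. ~$350/month at API rates vs the $20 Pro plan
```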
And it scales up - the $200 plan gets you something like 20x what the Pro plan gets you. I've never come close to hitting that limit.
It's obviously capital-subsidized and so I have zero expectation of that lasting, but it's pretty anti-competitive to Cursor and others that rely on API keys.
Ignoring the training costs, the marginal cost for inference is pretty low for providers. They are estimated to break even or better with their $20/month subscriptions.
That being said, they can't stop launching new models, so training is not a one-time task. Therefore one might argue that it is part of the marginal cost.
Yeah, TypeScript feels like it has arrived at the point where someone needs to write “TypeScript: The Good Parts” and explain all of the parts of the language you probably shouldn’t be using.