It's now my go-to for when I need to wrangle basically any text file manually - it has handled everything I can throw at it (some of which has crashed other editors - looking at you, Cursor/VSCode).
A tangential but interesting takeaway for me from this is that Harris Tweed was at some point in danger of dying out and that it was saved (?!) by the now-King Charles.
English royalty does this from time to time, through different means.
The Duke of Windsor (formerly Edward, Prince of Wales) is credited with popularizing the Fair Isle sweater - essentially "saving" it by bringing it to global fashion prominence in the 1920s.
I'm not sure I understand why Poolside are training their own models at all - what's the perceived or real advantage of splitting up model training efforts into smaller companies and dividing up resources like this? Is it just to have a US-domiciled LLM lab?
I've been trying to figure out what the long term play is here - is it an angling for a frontier lab acquisition? Or does open-sourcing put Warp in the same sort of category as OpenCode - where charging for LLM tokens becomes the main commercial driver?
I'm very interested in where ghostty ends up - I wonder if they'll follow Zig to Codeberg?
It does seem like it might, in general, be a very opportune time for GitLab (or another host) to publicly step up!
There seems to be a lot of chatter on X recently about wanting an entirely new GitHub usurper that doesn't look like GitHub at all, but in the short- to medium-term I expect this not to gain a huge amount of traction because of the sheer cultural embeddedness of git + GitHub in modern day software development.
GitLab? We use GitLab for work. It's way worse in comparison.
Last week I encountered a bug where my merge request simply didn't show that I deleted a file. Apparently it's because my MR included the creation of a folder with the same name as the basename of the deleted file. Unacceptable for a code hosting platform.
Other than that, I miss GH Actions, a clear UI (GitLab has way too many sub-menus), and a responsive UI (GitLab feels very sluggish). And while we don't have GitLab Duo activated, it still pops up regularly even though I can't do anything with it besides close it.
...and I don't even want to start on their issue board.
It strongly reminds me of Jira in terms of quality, which is no compliment.
Would love to see it become more common for projects with sufficient inertia to host their own forge like GNOME or Inkscape do. Could be a service that foundations like CNCF or LF offer to their projects.
I haven’t really found that free services scale the same way. It’s hard for the “open source community” to contribute and improve the quality of bottlenecks that are only encountered by one operator.
When you take OSS projects that scale well - say Linux, Postgres, Kafka, Redis, etc. - they either scale up (Linux), which is arguably easier, or were able to scale out because there are thousands, if not millions, of massive deployments pushing them to their limits.
Unless there is some sort of secure way to “open source” operational data for Codeberg, or for the many others running huge deployments of Forgejo, I don’t see it being very effective.
I do see Google having another go at code hosting though.
I suppose I primarily mean marketing - perhaps the most immediate concrete example I can think of is some sort of co-promotion alongside some mainstream vibe-coding tool that positions them as the git host of choice.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly, it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart based on their job titles and salaries got the causation completely backwards!
Hey, I notice this kind of thing all the time. People use "data" to tell the story they want to -- similar to how humans seem to make a decision subconsciously and then weave a rational justification to back it up afterwards.
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
Review the methodology, if you can, and form your own conclusions. Don't bother trying to change people's minds. It rarely works, and often causes conflict, even in the case of people who say they're data-driven.
In 2026, the number of mobile applications in the App Store and Google Play increased by 60% year over year, largely because entry into the market has become much easier thanks to AI.
Having my credit card already is an overwhelming advantage for the Apple App store and for Steam. I won’t say it is impossible to overcome, but I think I could count on my fingers the number of instances where I, like, typed my card into a website to buy anything, in the last decade.
Yes, but they are mostly paying little or nothing. How much did you spend on phone apps this year? And ads pay a pittance, unless you have massive scale.
If agents are async, is streaming still important? I think the useful set of interactions with an async agent is pretty limited - you'd want to stop it, interrupt it with a user message, and maybe pause, resume, or steer it?
All of those can be done without needing streams or a session abstraction I think, unless I'm misunderstanding.
I think this post ignores, deliberately or not, the large group of async coding agents that have been GA since around early 2025 - probably the most well-known of which is Devin (which has been around since 2024, but not available to the public).
As an aside, I've built and deployed a production system in which disconnecting from & reconnecting to an in-progress LLM stream works and resumes from wherever the stream currently is, through a combination of redis/valkey & websockets - it's not all that hard, it turns out!
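For what it's worth, the core of that resumable-stream pattern doesn't need much machinery: the server appends each token to an append-only log keyed by session ID, and a reconnecting client just replays everything after the last offset it saw. Here's a minimal in-memory sketch of that idea (all names are made up; a real deployment would back the log with a Redis Stream via XADD/XREAD and push updates over a websocket):

```python
class TokenLog:
    """Append-only per-session token log; stands in for a Redis Stream."""

    def __init__(self):
        self._sessions = {}  # session_id -> list of tokens

    def append(self, session_id, token):
        self._sessions.setdefault(session_id, []).append(token)

    def read_from(self, session_id, offset):
        """Return (tokens_after_offset, new_offset) for a session."""
        tokens = self._sessions.get(session_id, [])
        return tokens[offset:], len(tokens)


# The LLM worker streams tokens into the log as they arrive...
log = TokenLog()
for tok in ["The", " answer", " is", " 42"]:
    log.append("sess-1", tok)

# ...while a client that saw the first two tokens before disconnecting
# reconnects and resumes from its last-known offset:
missed, offset = log.read_from("sess-1", 2)
print("".join(missed))  # the tokens it missed while offline
```

The nice property is that the producer never has to know whether anyone is listening; consumers are free to drop and rejoin at any point, which is exactly what Redis Streams' ID-based reads give you for free.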