nickdothutton's comments

No need to study for Cloudflare certifications[1], just have your agent do it all.

[1] Joke, there are no certifications.


Remember, kids. Don't believe in anything. Don't join anything. Don't give even a small part of yourself up to anything. Don't be part of anything bigger than yourself.

Don't be part of anything bigger than yourself that treats you as expendable human oil.

Stop and reflect for a moment. Then continue as usual (quite likely).

I had to check your other comments, and now I get it: you still regard flags as having some sacred meaning from the great national past, but for me they were always about gathering as many expendable humans as possible underneath them.

Sure, they might have generated enough sacred reverence, those bloodbaths of the past.


> you still regard flags as having some sacred meaning

I would like to disagree on this point.


Sorry if I got you wrong!

You forgot to add:

... that blinds you to any alternative; that instils distrust of different perspectives; that elevates the humanity of fellow believers above others.


Much sounder advice than you think…

My terminal has not been ensh*ttified. I use the Internet for work and for knowledge more than for entertainment, which is one of the reasons I like TUIs.

Every now and then I go looking for a VT320 from my university days. I still miss the smell of hot dust on CRT electron guns.

I used old terminals like this to interface directly with the COM ports of older electronic instruments, well into the 2000s.

By that point the most common failure due to age was cobwebs that had formed internally between the high-voltage CRT circuitry and the PCB containing the low-voltage logic.

For anybody reusing or restoring vintage CRT units, I would blow them out with compressed air to get rid of stuff like this.

Otherwise, in a flash, with a final scream and a slightly different smell than normal, it's an instant cadaver :(


As an aside, I always wondered why GitHub had a web interface. Admittedly I’m a pre-web SCCS/RCS “old timer”, but I wouldn't have put a web interface on it at all.

Managing just about any complex service is far easier in a GUI.

It was targeted at the masses from the beginning.

It's used by non-technical people too, for documentation, dashboards, and bug tracking.

Viewing all this data is far easier in a GUI than a TUI.


Casio/G-SHOCK is one of the few brands that I think could plausibly stretch itself into more tech areas than it currently does: wearables, re-entering the ruggedised Android phone market, etc.

They are certainly well positioned for wearables, but a phone play would be much too risky. In practice they would end up re-casing an existing phone, which would never feel G-SHOCK.

You should google their G-SHOCK ring. It’s a mini Casio watch, same style. Hilarious.

How can I be sure this article isn’t sponsored by Big Toast?

I would never.

That's what I'd expect Big Toast to say!

Switched to local models after quality dropped off a cliff and token consumption seemed to double. I'm having some success with Qwen+Crush and have been more productive.

Would love some more info on how you got any local model working with Crush. Love charmbracelet but the docs are all over the place on linking into arbitrary APIs.

Assuming you have a locally running llama-server or llama-swap, just drop this into your crush.json with your setup details, local addresses, etc.:

Edit: I forgot HN doesn't do code fences. See https://pastebin.com/2rQg0r2L
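In case the pastebin rots, here's roughly the shape of it (key names are from memory, so check them against the Crush config schema; the provider name, model IDs, and address below are placeholders for whatever your llama-server/llama-swap actually exposes):

    {
      "providers": {
        "local": {
          "name": "Local llama-swap",
          "type": "openai",
          "base_url": "http://localhost:8080/v1",
          "api_key": "none",
          "models": [
            {
              "id": "qwen2.5-coder-32b",
              "name": "Qwen 2.5 Coder 32B",
              "context_window": 32768,
              "default_max_tokens": 4096
            },
            {
              "id": "qwen2.5-coder-7b",
              "name": "Qwen 2.5 Coder 7B",
              "context_window": 32768,
              "default_max_tokens": 4096
            }
          ]
        }
      },
      "models": {
        "large": { "provider": "local", "model": "qwen2.5-coder-32b" },
        "small": { "provider": "local", "model": "qwen2.5-coder-7b" }
      }
    }

The "type": "openai" bit is there because llama-server/llama-swap expose an OpenAI-compatible endpoint, which is what Crush talks to.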

Obviously the context window settings will depend on what you've got configured on the llama-server/llama-swap side. Having multiple models on the same server, like I have in the config snippet above, is mostly only relevant if you're using llama-swap.

TL;DR is you need to set up a provider for your local LLM server, define at least one model on that provider, then point the large and small models that Crush uses to respond to prompts at that provider/model combo. Pretty straightforward, but I agree their docs could be better for local LLM setups in particular.

For me, I've got llama-swap running and published on my tailnet as a Tailscale service (https://tailscale.com/docs/features/tailscale-services), so I can use my local LLMs anywhere I'd use a cloud-hosted one; I just set the provider base_url in crush.json to my Tailscale service URL and it works great.


I presume they don't yet have a cohesive monetization strategy, which is why results vary so wildly week to week. It appears that Anthropic are skipping from one "experiment" to another, and as users we only get to see the visible part (the results). They can't design a UI that indicates whether the software is thinking vs frozen? Does anyone actually believe that?

Compute is limited worldwide. No amount of money can make these compute platforms appear overnight. They are buying time because the only other option is to stop accepting customers.

They would honestly have been better off refusing customers if compute is so limited. Degrading quality drives customers away in the short term and ruins their reputation in the long term.

But in either case, if compute is so limited, they’ll have to compete with local coding agents. Qwen3.6-27B is good enough to beat having to wait until 5PM for your Claude Code limit to reset.


The recent DeepSeek release probably has them more worried. But running these large models locally requires a lot of infra expertise, so the market impact will be minimal. Not to mention the companies that can pull this off have enough cash to just pay Anthropic to begin with.

A favourite of mine. Do please check out the interviews with him on YouTube. Some authors try to show you the far future; he tried to show us the next 15 minutes.


