Hacker News | faangguyindia's comments

Both Radicle and Tangled miss the point: these are all for public collaborative work, but what about private repos? Many users work on side projects; they use GitHub private repos for this. Once you learn GitHub, you then also start your public projects on GitHub.

The point I am trying to make is that until you offer users the ability to make private repos for side projects, it's unlikely to take off.

What people want is the ability to make a private repo, go away for a few months and come back to find their repos right there waiting for them.



That's a bit selfishly-expressed.

Private repos provide nothing to the site by definition. The value model here is, you must pay for private repos, either by paying a subscription or hosting your own node and bearing the related costs.


On Mac, you can select text you've written, then right-click > Writing Tools, which uses AI to rewrite and proofread.

These days I just use a few languages:

1. Go. When I saw that code I wrote almost a decade ago still compiles and runs, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.

2. Haskell, which I use for DSLs and state machines.

3. Bash for all deployment scripts and everything.

4. TypeScript, well for the frontend.

Lately, I’ve been using Go and SQLite for nearly everything.

I don't think I have any motivation to look at any other language.

I gave up on Java, Python, Ruby, Rust, C++, and C# long ago.

Same goes for the cloud: I just don't use managed cloud services anymore, only VMs or dedicated servers. I've found that when you want to run a service for a decade or more, you have to run it yourself if you want it not to cost a lot in the long run.

I manage a few MongoDB and PostgreSQL clusters. Most of the apps, like an email-list marketer (sending thousands of emails each day), are simple Go + SQLite apps using less than 512MB RAM.

Same for SaaS billing: the solution is entirely written in Go and uses Postgres. (I didn't feel safe using SQLite for a multi-tenant setup here.)

Our chat/ticketing system is SQLite + Go. Deployment is easy: just upload the cross-compiled Go binary + a systemd service file; Alloy picks up the logs and ships them to Grafana, which has all the alerts.
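For anyone unfamiliar with that deployment style, the unit file tends to look something like this (a sketch only; the service name, paths, and user are hypothetical placeholders):

```ini
# /etc/systemd/system/chatapp.service -- illustrative unit file
[Unit]
Description=chat/ticketing app (Go binary + SQLite on local disk)
After=network.target

[Service]
User=chatapp
WorkingDirectory=/opt/chatapp
ExecStart=/opt/chatapp/chatapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After the first upload, `systemctl daemon-reload && systemctl enable --now chatapp`; after uploading a new binary, just `systemctl restart chatapp`.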

I don't need to worry about "speed" for anything I do in Go, unlike Ruby/Python.

When something has to be correct, I model it in Haskell, as its rich type system helps you write correct code. Though the setup is not as painless as Go's, the performance is decent.

I write good documentation and deployment instructions right in the monorepo. For a small team this is more than enough, imho.

No Docker, no Kubernetes; just simple scripts + Grafana + Prometheus + Loki, with Alloy/node_exporter for collection. Life couldn't be any simpler than this.


I am in a similar place.

Especially regarding Bash.

I used to be at a few companies where most developers just couldn't or wouldn't write in more than one language. It was always a pain to maintain the different runtimes, languages, packages and internal dependencies of things that could have been a 20-line Bash script, and these had to be maintained and updated from time to time.

I understand people have their own limitations and reasons, but having to constantly deal with “wrong tool for the job” for the thousandth time gets frustrating.

Especially in cases where four different languages were used across the company because different people had different preferences. Worst case was Python/Ruby/C#/Javascript.

I get that Bash is not perfect, but I enjoy its simplicity and directness, and the multitude of problems caused by not using it has shown me it's a better tradeoff.


Funny, I have also converged on shell scripts for simple scripting or configuration, but I use /bin/sh for portability. Many of the machines I use do not even have bash installed.

When I talk about my "bash" scripts, I mean sh. I assume it's the same for GP. (Tbh, I tend to use AWK over Bash in 80% of cases, but I call it from Bash anyway, and still call it Bash scripting :/)

> tbh, i tend to use AWK over bash in 80% of cases, but i call it from bash anyway, and still call it bash scripting :/

This makes me sad and sounds very naive. AWK is a fantastic language on its own and should be called out when used as such.


I'll make you sadder: even when I call my own AWK modules that I put hours into, I still call that a bash script.

Yep I'm the same. I just call it bash. I think old habits die hard.

I'm in the same boat. I started using Go only a year ago, but I don't really want to use anything else now for apps or data processing. I wrote an app that loaded a lot of data into DuckDB for reporting. I've been doing so much Java and JavaScript that Go felt much simpler to deal with overall.

Shell for the scripts. I haven't worked through many DSLs, as I'm really not a fan of them. Maybe I'll give Haskell a shot again to see if it sticks.


The funny thing is how ubiquitous TypeScript/JavaScript is. There is no escape. I also only use four languages: C#, F# (for DSLs), PowerShell (for deployment) and... TypeScript.

Despite our different tastes in languages and our completely different ecosystems, TypeScript is still the lingua franca, lol.


Whether there is any escape from JS/TS depends on what you are building and who is around you. If you are building SPAs all day, then sure, you will probably have to deal with the JS/TS ecosystem. If you are just building websites, then basically any traditional web framework will do. Then it just depends on whether the majority of the people you work with don't know web basics, or insist on JS web frameworks even when there is no need, leaving you no choice if you want to work as a team.

In theory, most websites could be done with statically rendered HTML and CSS plus maybe a little optional JS, with noscript fallback flows. MPAs are fine for most things, and noscript fallbacks can be done fairly systematically; in many cases it isn't that difficult. It's just that these days not many people bother or care.


IME Ruby is really good for working alone on tiny projects without an IDE (trying to get more than syntax highlighting causes problems). Sometimes I write single-file scripts or even just use interactive Ruby.

Ruby remains a joy for small things. I also tend to use it in place of Bash when I can.

That sounds really nice. I have a couple of Haskell servers running on VMs, but the build requirements really slow down the process. I have to use Docker to help cache dependencies and avoid recompiling things that haven't changed, but it is still slow and produces large binaries.

The idea of having a language with most of the batteries for a web server built in is nice. I've never considered Golang, but it is compelling. I'll have to check it out. Though Rust keeps catching my eye.


> I have to use Docker to help cache dependencies and avoid recompiling things that haven't changed, but it is still slow and produces large binaries.

This is actually the biggest pain point I am running into as well, which significantly slows down the speed of deployment.


Well, Java would compile and work for three decades straight. If anything, Go did have an actual breaking language change (for-loop variable capture).

Note that Java makes breaking changes all the time, which is why it publishes a compatibility guide with each major release. These are usually judged to be minor breakages, but if you have a codebase on the order of millions of lines, there's a very good chance that at least one thing will break and require a little bit of work to upgrade. And Java's not unique here, every stable language makes changes all the time that have the potential to break some user in some edge case.

Not that I need to tell you of all people, but I do find that Rust's editions system is one of the better ways to minimise this issue.

Indeed, editions are brilliant for making relatively large changes in a way that fully preserves backwards compatibility for codebases in the wild, but the existence of editions doesn't mean that Rust is exempt from sometimes desiring to make minor breaking changes in new versions for all editions. For that, it has the mechanism of future incompatibility lints, to give people ample advance warning: https://doc.rust-lang.org/rustc/lints/index.html#future-inco...

Sure, though most of the time it's library-only, not a language change (the only exception I have in mind being new keywords, but those are pretty rare in Java).

All in all, Java is pretty unique in the level of backwards compatibility it provides; I don't think any other language is comparable to this level. Especially since it covers both source and binary compatibility.


> Sure, though most of the time it's library-only, not language change

While this distinction is often useful, here we have to think about it from the perspective of users: you press the button to upgrade your toolchain, and code that formerly worked stops working. If a language supported upgrading your compiler/interpreter separately from your standard library then that would be different, but generally a standard library version is considered tightly coupled to a language version.


This has never been my experience. Have you ever tried to run a minecraft server or something similar? Minor version differences in the JVM result in unpredictable crashes.

I assume some of the extensions/plugins use internals for maximal performance or whatever?

But the platform itself is extremely backwards compatible; you can find some old jar file created for a university course that still runs without an issue. Of course, if you have a bunch of libraries that do stuff like touch internal details (the sun.misc.Unsafe package can access internals and raw memory), you lose some of that compatibility. Recently even this has been locked down more and more, so maybe that's one reason for your experience (e.g. previously one could set a private field to public and access it; now you can only do that if you explicitly pass a flag).

And given that Minecraft was (is?) a proprietary, reverse-engineered code base where plugins were hacked in, I guess this brittleness makes sense.


> 1. Go. When I saw that code I wrote almost a decade ago still compiles and runs, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.

And fewer dependencies, and fewer vulnerabilities (if any at all, depending on your few dependencies).

Go is "only" a pain when you want to use your own copy of packages (because `replace` directives are always ignored everywhere except in the "root" module), and whenever you want to work with private Git repositories outside of the forges that have hardcoded config in the Go tool (like GitHub), because Go assumes there's an HTTPS server, and the only way to force it to use SSH only is with ugly workarounds AFAIK.
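For what it's worth, the usual workaround for SSH-only private modules is a pair of one-time settings (`example.com` here is a hypothetical host; this still doesn't cover vanity-import discovery over HTTPS, which is presumably the "ugly" part):

```shell
# Tell the go tool these module paths are private: skip the public
# module proxy and checksum database for them.
export GOPRIVATE='example.com/*'

# Rewrite HTTPS fetches to SSH at the git layer, so `go get`
# never hits an HTTPS remote for this host.
git config --global url."ssh://git@example.com/".insteadOf "https://example.com/"
```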

But despite this I still prefer it for personal projects because I can come back after not touching it for years, and the most I need to do is maybe update `golang.org/x/net` or something like that.


Would love to use Go for SaaS, but things like OmniAuth (RoR) make me stay with Ruby. I had actually never used Ruby before, but I think it's a swell language to do SaaS in.

I'm with you on Go and SQLite; I dropped Postgres for many of my projects. I might add: HTMX instead of a TS frontend. Very few apps need a TS/React/... frontend; it doubles development effort with minimal gain (except for games etc.).

I dabbled with Rust some years ago. I think it is an excellent choice for sudo-rs and such, but for GUI and web apps I (perhaps I'm just not smart enough) end up with Arc<Mutex> soup.

https://www.radicalsimpli.city


The sentiment on that page speaks to my soul. But I wonder how relevant it still is, just a few years later, in the age of AI tools?

Certainly replacing a microservices morass with a single bare metal server running a single static go binary makes good fiscally responsible sense for a startup/MVP. But how does a CTO make the case that "we don't need React" when the developer can just get Claude to smack the React app around with a trout until it does what you want?

Basecamp may have done it but I get the feeling that's a major outlier.


I went the same way, but only using Lisp dialects like Elisp and Clojure, plus Nix. Although I would ditch Nix too if another Lisp could supplant it.

Obvious follow-up that's begging to be asked -- if you like nix and want a lisp, have you tried guix/guile?

I'm curious about your Clojure setup. Same as Go, I think Clojure has very strong backwards compatibility.

If trying to avoid the cloud, like OP, which hosting option is suitable for Clojure, and what do you use? I believe Clojure (JVM) has higher RAM requirements?

And Go has pocketbase.io, which looks quite interesting. Do you know whether something similar exists for Clojure, or maybe it's straightforward enough to compose your own using various Clojure libs?


Elisp and Common Lisp for me, although I still use bash in the terminal.

Yeah, after writing some semi-production Haskell apps (I ported an old service at a previous company to Haskell and tried to productionize it enough for our staging environments), that's the conclusion I came to about using Haskell.

Curious if you've tried to use agents to read / write Haskell and how the experience has been?


Why did you give up on Java and Rust?

Java is a resource hog when you use the patterns and libraries popular in Java land. When you are working in the Java ecosystem, you just assume that this much resource is needed by the app! But when you code the same thing in Go using the same methods, you'll find resource usage is really very low.

We have a 1:1 copy of an app; on the JVM with Spring Boot it uses 2GB RAM, while the Go version runs in 512MB RAM and is blazingly fast.

Of course, it's possible to tune a Java app, but why bother, when we get the same low resource usage and better performance in Go from the get-go while still writing naive and dumb code?

Deployment is super simple in Go: upload a single cross-compiled binary and it's done. Very simple and easy.

Rust needs a lot more effort to write correct code than Go in my experience. We get the same performance out of Go, with much less effort. At some point, it's just cheaper to start one extra instance than perform some low-level optimisation; modern hardware is fast enough that Rust-level optimisation is rarely needed for what we do.


You are comparing a (the most?) featureful web framework to a vanilla HTTP server... of course one will be significantly more resource-heavy.

I can't really agree on Rust. It does take a bit more time to write the same code in Rust vs Go. But in my experience the code is much more likely to be incorrect in Go than in Rust, which over longer periods means Rust is easier to maintain.

This article convinced me to switch from Go to Rust: https://discord.com/blog/why-discord-is-switching-from-go-to...

The issues with Go in that article only surfaced at Discord scale.

On the other hand, most pieces of software in this world are kind of mediocre code written by unmotivated employees within tight timelines.

In such context, I think Go might be a better or at least, more realistic, compromise in most cases.


If you have unmotivated employees, then using Go will only exacerbate the shortcomings it has. Cutting corners is much easier in Go than it is in Rust. But in general it's true: if you want a piece of code released a bit faster but will spend more developer hours maintaining it later, then Go is the better fit. And there are definitely use cases for that.

You can write exploratory code in Rust fairly quickly, it's just obvious when you've done so due to the heavy boilerplate involved. Keep in mind that the earliest versions of Rust were actually very Golang-like, the language iteratively evolved towards what it is today.

> using Spring Boot

well there's your answer, isn't it?


This is both a fair response and isn’t! The OP was talking about typical Java stuff you’ll encounter, which is overwhelmingly spring boot. But I also agree that you can do much better than that for resource usage if you’re willing to avoid the common defaults the community has embraced.

I'm not sure the effort part makes sense now that we have LLMs? LLMs basically liberate language choice, which has made Rust incredibly attractive to me since I basically get good performance out of the box, while any possibly annoying pedantic obsession with correctness can be easily handed over to the LLM.

If I use a JVM language, running my test suite takes 10 to 30 seconds. With Rust it spends 3 seconds compiling and half a second to run 250 tests.

The irritating parts of Rust are more related to bloated libraries like serde that insist on generating code, which massively slows down compilation for not much benefit.


> If I use a JVM language, running my test suite takes 10

Sounds like a bad build tool.


Good for you? I’m glad you have languages that fit your needs.

In the realtime/high-assurance systems world, where garbage collection can be a huge source of non-determinism and overhead, we don't have great options.

Zig is really the only language (idk about Odin?) trying to take the same approach C did in giving you absolute control over a minimally abstracted CPU model. Those of us who need/want maximum control and performance should be allowed to have nice things too.


I also LOVE Go, but recently rewrote a small tool in Lisette [1]. It was the most fun I've had in a long time while programming.

I can highly recommend it, especially since you have Haskell experience (you get all the usual suspects, like ADTs, exhaustive pattern matching, etc.) in Lisette. It has a fast compiler too, and produces human-readable Go code. It also comes with great tooling out of the box (formatter/LSP etc.).

1. http://lisette.run


Which Sqlite library are you using? With or without cgo?

It's because AI can debug a program, so people start thinking it can do fitness and health stuff too. But the thing is, there is no "instant-reacting compiler" for health or fitness. Things change over a long time; by then the AI will have run out of context or lost the data from its cache, or the user may have gotten bored and deleted their account.

I actually do it differently.

> (1) Let the LLM randomly perturbate the system.

Instead of this, I ask the LLM what's least likely to improve performance and then measure it.

Sometimes big gains come from places you thought were least likely.
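The "measure it" step can stay in the standard library: `testing.Benchmark` works outside of `_test.go` files too. A small sketch, where the two string-concatenation candidates are hypothetical stand-ins for whatever change the LLM proposes:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// concatNaive and concatBuilder are two candidate implementations;
// the point is to measure which wins rather than guess.
func concatNaive(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

func concatBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := make([]string, 1000)
	for i := range parts {
		parts[i] = "x"
	}
	// testing.Benchmark runs each function enough times to get a
	// stable ns/op figure, no test binary required.
	for name, fn := range map[string]func([]string) string{
		"naive":   concatNaive,
		"builder": concatBuilder,
	} {
		res := testing.Benchmark(func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				fn(parts)
			}
		})
		fmt.Printf("%s: %s\n", name, res)
	}
}
```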


For sure! The hypothesis generation has got to be improved. Your take on the "least likely" is interesting. Early on, the repo was having problems with "hypothesis convergence"; your idea may be a nice way to introduce the much-needed variability.

I am opposed to using anything that is not a single binary with a SQLite db for self-hosted things that don't need to scale to millions of users.

Can't wait for GIMP automation, so I can finally start using it!

Any ideas why Anthropic is interested in funding Blender?

Presumably because they think agents will become the dominant primary users of tools like Blender, and want a seat at the architectural table to help accelerate that & create useful synergies with Anthropic products and models?

The press release calls out the Blender Python API, specifically, which makes sense for agentic use.


> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API

Pretty much spells it out. They have an interest in extending/supporting the ability for Claude/CC to use and interact with Blender. There may be gaps in endpoints that Anthropic needs to enable certain patterns of automated usage.


As literally stated in the second paragraph of the blog post:

> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API, which enables developers and artists alike to extend and improve the software for custom workflows.

https://news.ycombinator.com/item?id=47936552


When in doubt, dogfooding: "make us popular with the Internet crowd, take a look at what popular companies have done. Here's a budget you can use"

Chances are they were expecting the agent to spoon-feed hundreds of influencers.


I think maybe they want to expand 3D creation and modeling for future video gen with avatar-type situations, or maybe get into 3D game development.

Anthropic's CPO was on Figma's board and stayed there until a day before the Anthropic-Canva Figma-killer came up /s

good PR, probably

3D printer censorship

I just have a Spot instance we use for our builds. It's turned on via serverless, runs its job with a timeout, and exits.

Lately I don't use any managed services, and life couldn't be any simpler.


My team has been using https://runs-on.com/ for AWS instance runners. We've had a few glitches, but it has largely been great.

I used it, but it prevented my Mac from sleeping. After some investigation, I found it was LocalSend.

Does it run in the background?
