
I also built a canvas-based, multiplayer product during the pandemic (ohyay).

The product was social-event focused (classes, festivals, etc.) so we focused on multiplayer audio-video experiences rather than general purpose browsing.

One of my favorite memories was when someone used our collaborative YouTube playback to set up a karaoke room. WebRTC added a little latency, but it was close enough to work.


so many people did! the whole watching YouTube/streaming things with friends vibe was so fun haha

and now it feels like a bunch of people are building canvas-based products again, but for testing different image-gen outputs on a canvas, except now you can vibe code them too!


Correlation doesn’t imply causation. The US had one of the most relaxed opiate policies imaginable until about 10 years ago: you could walk into many doctors' offices and walk out with an opiate script. It didn’t end well.

Of course, just relaxing everything is not the point - you have to address the structural and economic reasons why people become addicts in the first place. Few if any want to be addicts.

But the US also has some of the harshest penalties for illegal opiate usage. We didn't get 25% of the world's prison population by not aggressively arresting and jailing drug users.

It’s not a fear. It’s reality. It’s literally happening on HN right now.

Take this game, for example: https://news.ycombinator.com/item?id=47698455

Within an hour, someone had cloned the game with additional mechanics that multiple people mentioned they liked more: https://news.ycombinator.com/item?id=47729573


That's not an AI company "slurping up data", that's someone using AI tools to accelerate their own personal clone of a project.

I think you're missing the point. The game (no pun intended?) has changed. Working with the garage door up has become a liability.

Doesn't feel particularly different to me, I've been publishing my side projects as open source code on GitHub for over a decade.

The effort required to adapt them has dropped, but I've always exposed them to being adapted.


> Doesn't feel particularly different to me

> The effort required to adapt them has dropped

AI is an entirely different situation because the effort required to copy has dropped by multiple orders of magnitude. You used to be able to build in the open without worrying about copycats because the vast majority of people didn’t want to spend the effort. Now (with AI), even someone with the slightest, most fleeting whim can copy your work.

It’s great that you’re open to being adapted. There’s nothing wrong with that. But if you’re not open to having your ideas outright taken, then it’s not safe to build in the open any longer.


If I cared about people copying my projects and ideas I wouldn't put them on GitHub with a liberal open source license.

It's long been known (especially in gamedev circles) that ideas by themselves are not worth much. I don't like AI slop, but what's the harm in taking someone's demo and making it better? Then someone else can do the same, and tweak some other mechanic.

No, we got something better out of it.

Why is that a bad thing? Person 1 built a thing, and then someone came along and made it better? It's a game, so better is subjective, but should ideas only ever come from Person 1, while everyone else just gazes upon them in slack-jawed awe, unable to contribute?

I read simonw's comment not as dismissing the reality, but rather highlighting the harm of discouraging sharing.

The slurping can be real, and the reluctance to share that it induces can also be a harm.


The private equity company that scooped up her music rights, most likely.

As far as I know, the rights are still owned by her family. The albums were issued by an independent label operated by one of the persons who tracked down her music. The recent reissue of "How Sad, How Lovely" is on Jack White's Third Man Records (also independent).

I could be wrong, but I don't think the initial Go implementation was a C transpiler. It was written in C, but it did its own compilation.


I wrote Dihedral, a compile-time dependency injection framework for Go [0]. It was inspired by the Java framework Dagger. It worked pretty well, but was a little clunky with Go's syntax. Ultimately, I decided it wasn't worth it given the simplicity of manually constructing Go objects in a service setting.

0: https://github.com/dimes/dihedral


I rewrote the backend on a team I used to work on. The service had a ton of unit tests. Given that this was a full rewrite, those unit tests were useless. I spent the first few days writing a comprehensive suite of integration tests I could run against the existing service. These tests directly mimicked client calls, so the same tests should be just as valid for the rewritten service. Using these tests, I was able to catch 90%+ of potential issues before cutting over to the new service.

Personally, I find unit tests to be mostly useless. Every time I touch code with a unit test, I also need to change the unit test. Rather than testing, it feels like writing the same code twice.
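A minimal sketch of that approach, for the curious. The endpoint, payload, and function names here are hypothetical (the comment doesn't describe the actual service); the point is that the suite only speaks the client protocol, so the same checks run unchanged against the old backend and the rewrite. The stub server just makes the sketch self-contained.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubService(BaseHTTPRequestHandler):
    """Stands in for the existing backend; the rewrite would serve the same API."""
    def do_GET(self):
        if self.path == "/users/42":
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"id": 42, "name": "alice"}).encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def check_get_user(base_url):
    """One client-level check: valid against any implementation of the API."""
    with urlopen(f"{base_url}/users/42") as resp:
        assert resp.status == 200
        assert json.load(resp)["name"] == "alice"

def run_suite_against(base_url):
    check_get_user(base_url)  # more client-level checks would go here
    return "ok"

# Point the same suite at the old service first, then later at the rewrite.
server = HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = run_suite_against(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
```

Because nothing in the suite knows which implementation is answering, a green run against the old service becomes the acceptance bar for the new one.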


> Personally, I find unit tests to be mostly useless. Every time I touch code with a unit test, I also need to change the unit test. Rather than testing, it feels like writing the same code twice.

I think they're mostly useless when refactoring, but they're useful when writing new code and making relatively small to medium-sized changes. For new code, it's helpful (to me at least) to express my intentions in a more concrete form, and it gives me more confidence that I didn't miss something. For relatively small changes, they help catch fine-grained regressions. Even if I meant to make a change, a failing test forces me to think about handling a particular case correctly that I might have forgotten.

The kind of unit tests I do hate are the ones that are so mock-heavy that they're pretty much only testing the structure of your codebase (did you call all the methods in the right order, and nothing more?). I was once on a team where that was pretty much all they wrote, and they were very resistant to any level of integration testing because (I think) they read in an opinionated book somewhere that low-level tests were good enough (they weren't).


When refactoring, unit tests confirm that you did it right (or wrong).


Except you often need to rewrite them, so now you've got two places (per 'unit') where you could have introduced a bug. Integration tests and E2E tests are far more valuable because they're attacking it at the business logic side, which is far less volatile, and particularly in a refactor, a useful invariant.


I often feel that people take "unit" tests too literally, and E2E as well. You can write perfectly valid, fast, and useful partly-integrated tests with common unit testing frameworks.

The other thing is that, like you and some siblings have pointed out, many if not most people write unit tests all wrong and in the end just test the mocks. Those are really bad, and you can just throw them away. The same goes for all those tests that just check that the right internal calls are being made: they test nothing.

You need to attack the "business end" of your unit (or small groups of units). Inputs in and assert the outputs. Asserting that a certain collaborator was called can still make sense but if that's literally the only thing you do it's not very valuable at all.

You can generally see whether a unit test was a good unit test based on the fact that you were able to refactor the implementation of the method _without_ having to change the unit test. Yes, those definitely do exist, even in larger systems.


> You can generally see whether a unit test was a good unit test based on the fact that you were able to refactor the implementation of the method _without_ having to change the unit test. Yes, those definitely do exist, even in larger systems.

"Test the interface, not the implementation."


Even better, test the specification


The speci-what?

We might be in different types of software development. There's seldom an actual "specification" to a level of detail that you could test to in the sense you're probably thinking of (but I'm having to guess here for lack of detail and context from your end).

In the field I work in for example, a detailed specification in the way I'm guessing you mean would be prohibitively expensive and just not cost effective at all vs. the benefit you can get from throwing something together from imperfect information and improving upon it iteratively.

There was a (WP?) article on HN recently about "Releasing software the right way" (or a similar title), which basically said to use whatever approach actually makes sense in your circumstances. The example, IIRC, was hardware development (detailed specs) vs. a SaaS company.


In terms of something like embedded systems, aerotech, etc., the specification extends all the way to the unit. In terms of a SaaS, the specifications extend all the way to business logic, such as (cartoon examples):

- don't charge the customer twice

- or when I click submit on the front-end the following possibilities happen according to the back-end response

- or when the back-end receives x, the inventory should be updated according to this business logic, as well as y

Say, if you're doing this capital-A Agile style, all of these should be present in the acceptance criteria for any user story. As someone who's worked in rapid application development for mobile, I can say reaching this level of specification increases speed and reduces redundant communication. It often doesn't take half an hour for someone to write, and then it's iterated on by the team before implementation, and during.


> I often feel that people take "unit" test too literal

Perhaps. And this is where I think a useful distinction can be made between the "unit" (usually a function, or sometimes a single file, e.g., a C-style compilation unit) and a "system" (a collection of functions that perform complementing tasks, sometimes also a unit).

> You need to attack the "business end" of your unit (or small groups of units)

It's worth noting here that sometimes the business logic extends all the way to the "unit". This is usually in very technical domains. Say, writing a maths helper library for consumption by other programmers (either internally or externally) would often have clearly specified outcomes at the unit level.


> Except you often need to rewrite them, so now you've got two places (per 'unit') where you could have introduced a bug.

That's not a bad thing, though. DRY might be fine for your main implementation, but redundancy is a time-tested way of catching errors (a.k.a. "double checking").


In theory, if adequate attention is spent on both maintaining the implementation as well as the tests, this is perfectly valid. In practice, this trade-off between expedience and verification goes towards the former when it comes to rapid development.


I'm a game dev and over time I've settled on using two groups of tests for my projects. Both at opposite ends of the spectrum.

.

1. Unit tests. But I only write them for stuff that needs them, e.g. some complex math functions that translate between coordinate systems; the point of the unit tests is to confirm that the functions are doing exactly what I think they are doing. With mathsy stuff it can be very easy to look at the output of some function and think it looks fine, when in reality it's actually slightly off and not exactly what it should be. The unit tests are to confirm that it's really doing what I think it's doing.

.

2. Acceptance tests by a human. There's a spreadsheet of everything you can do in the game and what should happen, e.g. press this button -> door should open. As we add features, we add more stuff to this list. At regular intervals and before any release, several humans try every test on various hardware. This is to catch big / complex bugs and regressions. It's super tedious, but it has to be done IMO. Automating this would be an insane amount of work and also pointless, as we are also testing the hardware; you get weird problems with certain GPUs, gamepads, weird smartphones, etc.

.

I find those two types of tests to be essential, the bare minimum. But anything in between, like some kind of automated integration testing, is just a shit-ton of work and will only be useful for a relatively brief period of development; changes will quickly render those sorts of tests useless.
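The coordinate-system tests in point 1 could look something like this sketch (the functions and tolerances are illustrative, not from the commenter's actual project). The value is exactly what's described above: "looks about right" isn't good enough for mathsy code, so the tests pin down round-trips and known values to within an explicit epsilon.

```python
import math

def polar_to_cartesian(r, theta):
    """Convert polar coordinates (radius, angle in radians) to (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    """Inverse conversion; atan2 handles all four quadrants correctly."""
    return (math.hypot(x, y), math.atan2(y, x))

def test_round_trip():
    # Outputs that eyeball as correct can still be subtly off;
    # assert the round-trip within a tight tolerance instead.
    for r, theta in [(1.0, 0.0), (2.5, math.pi / 3), (10.0, -2.0)]:
        x, y = polar_to_cartesian(r, theta)
        r2, theta2 = cartesian_to_polar(x, y)
        assert math.isclose(r, r2, rel_tol=1e-12)
        assert math.isclose(theta, theta2, rel_tol=1e-12, abs_tol=1e-12)

def test_known_values():
    x, y = polar_to_cartesian(1.0, math.pi / 2)
    assert math.isclose(y, 1.0) and abs(x) < 1e-12  # cos(pi/2) is ~0, not 0

test_round_trip()
test_known_values()
```

Note the `abs(x) < 1e-12` check: floating-point `cos(pi/2)` is a tiny nonzero number, which is precisely the kind of "slightly off" result that visual inspection misses.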


Yes, totally agree. Any code that has complicated logic with few / no dependencies benefits from unit testing.


I've arrived at this exact same conclusion for frontend work as well. I always go for integration tests first, and only rely on unit tests if hitting some edge case is hard via integration test.


And to clarify, if any individual function reaches some arbitrary level of irreducible complexity, then I'll absolutely unit test that. It's kind of a "you know it when you see it" kind of thing.


I find that, in this life, you usually get what you pay for, and, compared to other options, unit tests' primary virtue is that they're inexpensive.


Unit tests help verify individual components of a system - which makes them top-of-mind for library code.

I think the issue with them lies in the fact that most developers aren't shipping libraries, they're shipping integrated systems, so there's no component worth testing. (You can always invent one, but that's just overcomplicating the code.)

At the same time, it's also genuinely hard to write good, principled tests of integrated systems, harder than it is to code up a thing that kinda-works and then manually debugging it enough to ship. You have to have the system set up to be tested, and feature complexity actively resists this - you fight a losing battle against "YOLO code" that gets the effect at the expense of going around the test paradigm.


How does this scale though? If you've got integration tests that include state, now you've got to either run your tests serially or set up and tear down multiple copies of the state to prevent tests from clobbering each other. As your project expands, the tests will take longer and longer to run. Worse, they'll start to become unreliable due to the number of operations being performed. So you'll end up with a test suite that takes potentially multiple hours to run, and may periodically fail just because. The feedback loop becomes so slow that it's not helpful during actual coding. At best, it's a semi-useful release gate. Is there another way?


> If you've got integration tests that include state, now you've got to either run your tests serially or set up and tear down multiple copies of the state to prevent tests from clobbering each other.

That is a very normal setup.

> Worse, they'll start to become unreliable due to the number of operations being performed. So you'll end up with a test suite that takes potentially multiple hours to run, and may periodically fail just because.

This is called flakiness and is generally a symptom not to be ignored, as it is almost always indicative of bigger issues. It's rare that flakiness is limited to test environments. Instead, it's much more likely that whatever your smoke tests are experiencing is something end-users are also intermittently hitting.

> The feedback loop becomes so slow that it's not helpful during actual coding.

Devs can write their own unit tests when working on their assigned tasks. Smoke tests are designed to run when you're trying to integrate those changes into the existing codebase. At that point, you have the calculus all wrong. Smoke tests slow down devs enough that they don't merge broken code into production. That is a useful release gate unto itself.

If unit tests pass but smoke tests fail, then often (the vast majority of the time in my experience) the issue is that either the dev didn't understand the task or, more often, didn't understand the system they were integrating into.


If you have some code that, if its callers changed, they would stop using or would use in a different place, it's a unit, and it's a good idea to unit test it.

If you have some code that, if its callers changed, you would want to change too, then it's in the same unit as the calling code, and it's bad to split it off.


You probably have "resist fingerprinting" turned on.


Most likely. I don't remember all the settings that I have turned on at some point. :)


Will one be able to embed a Hetchr ATOM into Hetchr in the future?


Mind elaborating? All Hetchr ATOMs are embedded into your workspace.


Modern bridges made from concrete are not designed to last more than a century. I believe the expected useful life of something like a cable-stayed bridge is around 100 years. Compare that to a suspension bridge, which can be used almost indefinitely.

