Your codebase doesn't care how it got written (robbyonrails.com)
24 points by robbyrussell 16 hours ago | 22 comments



Not sure what to think about the first part, but regarding writing style, I think it's still reasonable to judge developers/engineers by it:

- Writing style does reveal how people understand problems and their approach to solving them. People who prioritize direct solutions over complex abstractions are still valuable for catching over-engineered code.

- People with "good taste" in code can catch when AI-generated code takes shortcuts to accomplish a task. This happens every day, and we can't ignore it.

The state of AI code may be much better in six months, or a year, or even longer (we don't really know), but we're not there yet, and we can't wait until then to hire new people without considering those points.


How about we just let nature take its course and rely on developers' laziness, one of the virtues of a good programmer?

I go to ChatGPT for basically any annoying code snippet, and even whole functions now. I'm done guessing at map/reduce syntax, or trying to remember whether slice mutates the target array.
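(For what it's worth, the slice question the comment alludes to is easy to check from the console; the snippet below is just an illustrative sketch, with made-up variable names, of the JavaScript behavior in question.)

```javascript
// slice() copies; splice() is the one that mutates in place.
const nums = [1, 2, 3, 4, 5];

// slice(start, end) returns a shallow copy; nums is untouched.
const middle = nums.slice(1, 4); // [2, 3, 4]

// map/reduce: square each value, then sum the squares.
const sumOfSquares = nums
  .map(n => n * n)
  .reduce((acc, n) => acc + n, 0); // 55

// splice(start, deleteCount) removes elements from the array itself.
const removed = nums.splice(0, 2); // removed = [1, 2]; nums is now [3, 4, 5]
```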

I'm messing with codex more and more. But I still don't trust it to design features for me. Maybe in 6 months, I will. Is it really that important to force developers NOW to get to a place they'll get to in a few months anyway, assuming the hype is real?


I'm hearing the same conversation play out at small software companies right now. Engineers asking their managers: am I being forced to adopt this, or do I need to go somewhere else?

I wrote about why I think the job description has already changed, and what I'd rather see teams do about it than have that exhausting conversation on repeat.


I've never seen people so hellbent on shooting themselves in the foot.

AI, or more accurately LLMs, are currently trained on shitty open source code.

The best-practice code out there is locked away in private companies' cabinets.

If you insist on 100% AI-written code, then how are you going to train the new generation to write software well?

You will reach a singular point where the new generation knows nothing and the LLMs themselves can't be trained further (we are almost there, btw).

LLMs as better autocomplete are a perfect use case, or as a rubber duck that talks back when debugging. Anything else is frivolous.


Do you have any evidence that open source code is lower quality?

The leaks of proprietary code, the many examples of known security issues, the quality issues evident in most software, and the opinions of people who work on proprietary software all suggest the opposite.


I remember reading a study on source code quality quite some years ago; maybe it was featured on Slashdot. The result was that both the lowest-quality and the highest-quality source code were proprietary, while open source code was "pretty good but not the best". And the high-quality proprietary code was stuff like operating system kernels and network equipment firmware, not business applications.

That makes sense.

Open source OSes, for example, seem to be pretty high quality, at least with regard to general purpose OSes. In general open source application code seems pretty good too.

On the other hand, there are highly regulated or safety-critical fields, or ones where uptime is mission-critical, where people are very motivated to produce high-quality code, and a lot of that code is proprietary.

I hope people are not vibe coding that type of code!


How about we just let people code how they want if the codebase doesn’t care how it gets written? If it doesn’t matter why must we use one particular tool versus another?

Because most code is paid for, and the people paying want as much code as possible, as cheaply as they can get it.

If your code is expensive, the fact is that now someone can write it cheaper.


What are we going to do if codebases 'do' develop feelings about how they are written?

Once again, this whole article is predicated on us being at the finish line. You know who will care about how something got written? Very suddenly, it will be the org that has an issue that the AI fails to fix or you don't understand well enough to fix within a span of time they deem reasonable. That has been the battle since the beginning of software and the only thing you have to combat it is your understanding.

I am still baffled by engineers' and developers' use of AI. I see failure after failure on anything other than some novelty one-shotted tool, and even then folks are bending over backwards to find the value, because that solution is always a shallow representation of what it needs to be.

My projects are humming away, handmade. They have bugs, but when they show up, I more often than not know exactly what to do. My counterparts have AI refactoring wild amounts of code for issues that could be solved with a few lines of adjustment.

TL;DR: I feel like I live in a different reality than those who write these blog posts.


I detest the way ChatGPT writes. You can tell immediately when someone had a rough draft or just an idea thrown into the ChatGPT filter. At least tell it to cut to the chase next time; nobody has time for fluff in this attention economy.

Yeah, I think if you're going to use LLMs, the least you can do is distill what you want to say to the minimum necessary text to convey your point accurately.

I've noticed that blog posts have recently gotten much longer and that the information content is increasingly spaced out, without really telling you anything.

A blog post built on a premise that could fit inside a paragraph, and be elaborated to death within 2000 words, now gets multiple chapters of useless fluff, where the main point is lost because those words were supposed to convey the life circumstances or mindset that led the author to the premise.


Yep, the LLMisms are there. "It's not X, it's Y", "Here's...":

> The codebase doesn’t care how it got written. It cares whether it works, whether it can be maintained, and whether it helps the business do what it needs to do.

> That’s not a bait and switch. That’s what happens when your organization gets access to new tools and the economics shift underneath everyone’s feet.

> What we have now is a training problem. A reclassification problem. And I’m not sure what the best HR-friendly way to frame this is… but here’s a serious question:


Amish craftsmen confronted with a nail gun and an impact driver. Yeah you can still build a house swinging your arm if you want, but others won't.

Yes, notoriously tech-averse software developers who hate tools that make their jobs easier.

Yes, on HN they do, notoriously.

Until just a couple years ago I would regularly read comments complaining when a website doesn't work because the hacker browses with javascript disabled, for example.

This isn't the early-adopter crowd: it's the refuses-to-even-be-a-late-adopter crowd.


This cohort has always existed though. Users of Notepad and Vim still exist.

Grouping Notepad and vim together under "tools that make your job harder" is pretty wild.

Has anyone ever known a serious, professional programmer who used Notepad to code?


Next you'll be telling me there's punch card programmers still. For love of the punch card craft.

Wherein a podcaster and self-promoter sells out open source as if he spoke for everyone.

Please like and subscribe!




