> Eventually maybe the best experiences will be had with digital companions, etc.
Obviously I can't speak for all of Gen Z (and I realize we're no longer "the younger generation"), but my friends and I don't want any part of this, and we feel optimistic rather than bitter that things won't go the way you're describing. I seldom meet anyone in my age group who isn't talking about moving away from social media, cancelling software subscriptions, all of the things that millennials and Gen X seem to be so excited to continue building and promoting.
Even at my workplace the "older" people are the ones that are excited about stuff like AI jazz remixes of rap songs and AI generated short films, while literally everyone else under 30 finds it pretty cringe and makes fun of them in DMs.
So all that to say, I disagree with your outlook, but I guess time will tell.
Talking about something and doing it are different things. What are the social and market structures around your friends that let them avoid having a smartphone, cancel subscriptions, and uninstall everything? Do you see this getting better with media consolidation around Substack (Andreessen), Twitter (Musk), and YouTube channels owned by the hyperscalers/billionaires, and questionable mergers like Paramount and Warner Bros?
When the social culture is based around platforms and content that has subscriptions, and when media and what you see is consolidated, you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
I dislike slop as much as anyone else. I think it puts a higher burden on the receiver of information to filter the signal in a pile of trash. I just don't really see an actual way out if you look at it from a societal level with the existing structures and incentives.
> you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
That's exactly it. The goal is to lose a big part of the social context. It's driven by rage bait, AI bots, state actors, and a thousand other influences that are predominantly negative. Of course amazing things happen online. However, the good is not worth the bad. I'm raising my kids and they will never have a smartphone. Will they miss out on some things? Of course! They also won't have their attention span destroyed, or their ability to be bored and creative in the real world destroyed; they won't have body issues, they won't be caught up in the alt-right pipeline, and they won't have their brains fried by content like Mr. Beast's, which is designed to be as hyper and addictive as possible. Missing out on the current social context is the entire goal. People were happier before it.
This structure expects all of their friends to live in similar systems. Otherwise their friends will talk about games, memes, series at school while your kids are isolated away as they are not a part of the culture and not in the loop.
I think this is only possible if you find a community with similar values, like religious, or hippie, where the focus is put on other things. Otherwise you might deprive your kids of what you want to give them because they will not feel socially connected.
Maybe not inherently bad, but clearly not inherently necessary or useful if they're already getting so many inquiries from farmers. Could just be that the tech doesn't offer enough meaningful value when the core mechanical functionality can be achieved at a lower price.
Maybe I’m naive since most of that was before I was born, but a lot of the past topics you mentioned seem more interesting because the people talking about them (I assume) had interesting knowledge and opinions to share about them. AI is an extremely boring topic because the people most excited about it are “idea people” rather than people with interesting knowledge and expertise. And idea people are pretty draining to listen to for years on end.
Even the top post on HN about ChatGPT’s image generation is full of a bunch of comments just saying “wow this is epic”, “I can make so many mangas with this”, etc. Or a post about a new model where people are saying bland stuff like “this doesn’t write Typescript as well as Nut43-2.1-Max”. Compare those to a post about language design, for instance, and you’d see a lot more interesting discussion and opinions.
Just my opinion though. It seems like the more interesting topics in AI are related to its divisiveness, and even that is getting super old after years of it going on.
I agree with that sentiment. Most discourse around AI is shallow. There are people who do have a rather profound world view on this, and sometimes it surfaces on Hacker News. Karpathy, for instance, is very pragmatic but also philosophical at times.
What we see today is the stuff of sci-fi, the amazing and deep stuff Asimov and many others wrote about.
I'm probably in a weird subgroup that isn't representative of the general public, but I've found myself preferring "rough" art/logos/images/etc, basically because it signals a human put time into it. Or maybe not preferring, but at least noticing it more than the generally highly refined/polished AI artwork that I've been seeing.
There’s no reason to think people broadly want “better” writing, images, whatever. Look at the indie game scene, it’s been booming for years despite simpler graphics, lower fidelity assets, etc. Same for retro music, slam poetry, local coffee shops, ugly farmers market produce, etc.
There is a mass, bland appeal to “better” things but it’s not ubiquitously desired and there will always be people looking outside of that purely because “better” is entirely subjective and means nothing at all.
Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.
I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.
Also, ignoring training when talking about the environmental costs is bad faith. Without training this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds it took for inference plus a share of the weeks or months of high-intensity compute it took to train the model.
Really weird that you're basically advocating that people abandon their principles if they don't align with "broader incentives". Also lol at you pulling the "some people have kids to feed" bullshit in a thread where we're all making way more money than most people.
I think some of you do not have a grasp on systems thinking at all, and it's embarrassing for people who supposedly frequent communities like these. I'm not advocating anything. I'm making a descriptive statement. I do worry that a basic failure to distinguish descriptive from normative claims is contributing to the confusion here.
That's rich out of somebody who obviously has no concept of signaling theory. Not to mention, of course, that "systems thinking" makes no comment on human ethics or morality. Unlike some, the people working in the field seem generally to know their limits.
But you're right that clarity is important. In that spirit, it was your cowardly effort to excuse your behavior, and your obviously motivated effort to ameliorate its moral odium which you feel, that I criticized. This was and is in the course of helping you fully grasp that whatever is driving you, here, feels unconscionable to you because it is unconscionable and you know it, just as you understand in your heart that there is no excuse. Else you would not strive so here, in the hope someone else may supply what you failed to achieve alone.
I don't know just what it is that you're feeling so exercised with guilt over. Nor do I care. You know. For the rest of us, I confide, it will eventually become part of the public record, and I'm happy to wait that day without further unprompted comment here.
I know lots of families who feed their kids just fine on something less than a quarter million US a year. Just about all the families I know with kids, these days.
If we want to get into anecdotes... most of the people I know with kids are seriously struggling. And that aligns more closely with economic data than what you said. Most people do not have a robust emergency fund at all.
I understand why you would rather "get into anecdotes" than answer my point. I don't understand why you keep posting, save perhaps that "the guilty flee where no man pursueth." The account you're using is without history or reputation. All you have to do to make this end is stop.
I did address your point by directly refuting it, and you responded with a total non sequitur. Are you okay buddy? I'm making a relatively basic argument about the ability of households to make ends meet and you're quoting bible passages, looking into my account history, and making random accusations. The guilty flee where none pursueth? You're literally attempting to prosecute me, lmao. But please, "pursueth" away. You are the one who looks a little weird in this scenario.
I made my account today because I wanted to comment on this article and I didn't have an account previously. Is that a crime? Are you going to report me to the thought police? Lmao some of the people on here are a little intense. Maybe take some deep breaths and realize I'm not trying to harm you. I wish you the best. I just disagree with the way you think on this particular issue.
It's remarkable to me that you should be so concerned with my perspective, if you believe me insane as you now claim. (What will it be next? That I have too much time on my hands? That's often next from here.)
Evidently you are concerned with my perspective, considering the effort you keep expending to continue to gain its benefit. I've explained why I think that is, and I'm not likely to change my mind at this point. You should really think about why it means so much to you to keep trying to negotiate otherwise.
As others have said, it's definitely volume, but also the lack of respecting robots.txt. Most AI crawlers that I've seen bombarding our sites just relentlessly scrape anything and everything, without even checking to see if anything has changed since the last time they crawled the site.
Yep, AI scrapers have been breaking our open-source project's Gerrit instance hosted at the Linux Network Foundation.
Why this is the case, while web crawlers have been scraping the web for the last 30 years, is a mystery to me. This should be a solved problem. But it looks like this field is full of badly behaved companies with complete disregard for the common good.
> Why this is the case while web crawlers have been scraping the web for the last 30 years is a mystery to me.
A mix of ignorance, greed, and a bit of the tragedy of the commons. If you don't respect anyone around you, you're not going to care about any rules or etiquette that don't directly punish you. Society has definitely broken down over the decades.
I'm happy to finally see this take. I've been feeling pretty left out with everyone singing the praises of AI-assisted editors while I struggle to understand the hype. I've tried a few and it's never felt like an improvement to my workflow. At least for my team, the actual writing of code has never been the problem or bottleneck. Getting code reviewed by someone else in a timely manner has been a problem though, so we're considering AI code reviews to at least take some burden out of the process.
AI code reviews are the worst place to introduce AI, in my experience. They can find a few things quickly, but they can also send people down unnecessary paths or be easily persuaded by comments or even the slightest pushback from someone. They're fast to cave in and agree with any input.
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI related, having manual human review as the final step is key.
LLMs are fundamentally text generators, not verifiers.
They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.
In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)
> LLMs are fundamentally text generators, not verifiers.
In reality they work quite well for text and numeric (via tools) analysis, too. I've found them to be powerful tools for "linting" a codebase against adequately documented standards and architectural guidance, especially when given the use of type checkers, static analysis tools, etc.
The value of an analysis is the decision that will be taken after getting the result. So will you actually fix the codebase, or is it just a nice report to frame and put on the wall?
Code quality improvement is the reason to do it, so *yes*. Of course, anyone using AI for analysis is probably leveraging AI for the "fix" part too (or at least I am).
I find the summary that Copilot generates is more useful than the review comments most of the time. That said, I have seen it make some good catches. It's a matter of expectations: the AI is not going to have hurt feelings if you reject all its suggestions, so I feel even more free to reject its feedback with the briefest of dismissals.
Link to the ticket. Hopefully your team cares enough to write good tickets.
So if the problem is defined well in the ticket, do the code changes actually address it?
For example, for a bug fix, it can check the tests and see if the PR is testing the conditions that caused the bug. It can check the code changes to see if they fit the requirements.
I think the goal with AI for creative stuff should be to make things more efficient, not necessarily to replace people. Whoever does the code review can get up to speed fast. I've been on teams where people would review sections of the code they weren't too familiar with.
In this case if it saves them 30 minutes then great!
I agree and disagree. I think it's important to make it very visually clear that it is not really a PR review, but rather an advanced style check. I think they can be very useful for assessing more rote/repetitive standards that are a bit beyond what standard linters/analysis can provide. Things like institutional standards, lessons learned, etc. But if it uses the normal PR pipeline rather than the checker pipeline, it gives the false impression that it is a PR review, which it is not.
IMO, the AI bits are the least interesting parts of Zed. I hardly use them. For me, Zed is a blazing fast, lightweight editor with a large community supporting plugins and themes and all that. It's not exactly Sublime Text, but to me it's the nearest spiritual successor while being fully GPL'ed Free Software.
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.
The CLA assigns ownership of your contributions to the Zed team[^0]. When you own software, you can release it under whatever license you want. If I hold a GPL license to a copy, I have that license to that copy forever, and it permits me to do all the GPL things with it, but new copies and new versions you distribute are whatever you want them to be. For example Redis relicensed, prompting the community to fork the last open-source version as Valkey.
The way it otherwise works without a CLA is that you own the code you contributed to your repo, and I own the code I contributed to your repo, and since your code is open-source licensed to me, that gives me the ability to modify it and send you my changes, and since my code is open-source licensed to you, that gives you the ability to incorporate it into your repo. The list of copyright owners of an open source repo without a CLA is the list of committers. You couldn't relicense that because it includes my code and I didn't give you permission to. But a CLA makes my contribution your code, not my code.
[^0]: In this case, not literally. You instead grant them a proprietary free license, satisfying the 'because I didn't give you permission' part more directly.
Because when you sign away copyright, the software can be relicensed and taken closed source for all future improvements. Sure, people can still use the last open version, maybe fork it to try to keep going, but that simply doesn’t work out most times. I refuse to contribute to any project that requires me to give them copyright instead of contributing under copyleft; it’s just free contractors until the VCs come along and want to get their returns.
> I refuse to contribute to any project that requires me to give them copyright instead of contributing under copyleft
Please note that even GNU themselves require you to do this, see e.g. GNU Emacs which requires copyright assignment to the FSF when you submit patches. So there are legitimate reasons to do this other than being able to close the source later.
FSF and GNU are stewards of copyleft, and FSF is structured under 501(c)(3). Assigning copyright to FSF whose significant purpose is to defend and encourage copyleft…is contributing under copyleft in my mind. They would face massive backlash (and GNU would likely face lawsuits from FSF) were they to attempt such a thing. Could they? Possibly. Would they? Exceptionally unlikely.
So yes, I trust a non-profit, and a collective with nearly 50 years of history supporting copyleft, implicitly more than I will ever trust a company or project offering a software while requiring THEY be assigned the copyright rather than a license. Even your statement holds a difference; they require assignment to FSF, not the project or its maintainers.
That’s just listening to history, not really a gotcha to me.
It has been decades since I've seen an FSF CLA packet, but if I recall correctly, the FSF also made legally-binding promises back to the original copyright holder, promising to distribute the code under some kind of "free" (libre, not gratuit) license in the future. This would have allowed them to switch from GPL 2 to GPL 3, or even to an MIT license. But it wouldn't have allowed them to make the software proprietary.
But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.
In my opinion, it's not. They could start licensing all new code under a non-FOSS license tomorrow and we'd still have the GPL'ed Zed as it is today. The same is true for any project, CLA or not.
I found the OP comment amusing because Emacs with a Jetbrains IDE when I need it is exactly my setup. The only thing I've found AI to be consistently good for is spitting out boring boilerplate so I can do the fun parts myself.
I always hear this "writing code isn't the bottleneck" line when talking about AI, as if there were a chosen few engineers who only work on completely new and abstract domains that require a PhD and 20 years of experience that an LLM cannot fathom.
Yes, you're right, AI cannot be a senior engineer with you. It can take a lot of the grunt work away though, which is still part of the job for many devs at all skill levels. Or it's useful for technologies you're not as well versed in. Or simply an inertia breaker if you're not feeling very motivated for getting to work.
Find what it's good for in your workflows and try it for that.
I feel like everyone praising AI is a webdev with extremely predictable problems that are almost entirely boilerplate.
I've tried throwing LLMs at every part of the work I do and it's been entirely useless at everything beyond explaining new libraries or being a search engine. Any time it tries to write any code at all it's been entirely useless.
But then I see so many praising all it can do and how much work they get done with their agents and I'm just left confused.
Yeah, the more boilerplate your code needs, the better AI works, and the more it saves you time by wasting less on boilerplate.
AI tooling my experience:
- React/similar webdev where I "need" 1000 lines of boilerplate to do what jquery did in half a line 10 years ago: Perfect
- AbstractEnterpriseJavaFactorySingletonFactoryClassBuilder: Very helpful
- Powershell monstrosities where I "need" 1000 lines of Verb-Nouning to do what bash does in three lines: If you feed it a template that makes it stop hallucinating nonexistent Verb-Nouners, perfect
- Abstract algorithmic problems in any language: Eh, okay
- All the `foo,err=…;if err…` boilerplate in Golang: Decent
- Actually writing well-optimized business logic in any of those contexts: Forget about it
Since I spend 95% of my time writing tight business logic, it's mostly useless.
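To make the Go item in that list concrete, here is a minimal sketch of the `value, err` pattern being complained about. The `parsePort` function and its error messages are hypothetical, chosen only to show how every call site repeats the same check:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strconv"
)

// parsePort illustrates the repetitive Go pattern: nearly every call
// returns (value, err) and must be followed by an explicit check.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port: %w", err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	p, err := parsePort("8080")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(p)
}
```

It's exactly this kind of mechanical, locally predictable structure that autocomplete-style AI handles well, and it's also why it helps less once the interesting logic sits between the checks rather than in them.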
Highlighting code and having cursor show the recommended changes and make them for me with one click is just a time saver over me copying and pasting back and forth to an external chat window. I don’t find the autocomplete particularly useful, but the inbuilt chat is a useful feature honestly.
I'm the opposite. I held out this view for a long, long time. About two months ago, I gave Zed's agentic sidebar a try.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases, and it even finds bugs in your implementation. It's goddamn near magic. That's not to say they're perfect; sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
> Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases,
That's a red flag for me. Having a lot of tests usually means that your domain is fully known, so you can specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows, or you'll find yourself changing a line and having 100+ tests break.
If you can write tests fast enough, you can specify those business rules on the fly. The ideal case is that tests always reflect current business rules. Usually that may be infeasible because of the speed at which those rules change, but I’ve had a similar experience of AI just getting tests right, and even better, getting tests verifiably right because the tests are so easy to read through myself. That makes it way easier to change tests rapidly.
This also is ignoring that ideally business logic is implemented as a combination of smaller, stabler components that can be independently unit tested.
Unit tests' value shows mostly when integration and more general tests are failing: you can filter some sections out of the culprit list (you don't want to spend days specifying the headlights if the electrical design is wrong or the car can't start).
Having a lot of tests is great until you need to refactor them. I would rather have a few e2e tests for smoke testing and valuable workflows, integration tests for business rules, and unit tests when it actually matters, as long as I can change implementation details without touching the tests that much.
Code is a liability. Unless you don't have to deal with it directly (assembly and compilers), reducing the amount of code is a good strategy.
This is a red flag for me. Any given user-facing software project with changing requirements is still built on top of relatively stable, consistent lower layers. You might change the business rules on top of those layers, but you need generally reasonable and stable internal APIs.
Not having this is very indicative of a spaghetti soup architecture. Hard pass.
You can over-specify. When the rules are stringent, it's best to have extensive test suites (like Formula 1). But when it's just a general app, you need to be pragmatic. It's like having an overly sensitive sensor in some systems.
AI is solid for kicking off learning a language or framework you've never touched before.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
But so is a good book. And it costs way less. Even though searching may be quicker, having a good digest of a feature is worth the half hour I can spend browsing a chapter. It's directly picking an expert's brain. Then you take notes, compare what you found online with the updated documentation, and soon you develop a real understanding of the language/tool abstraction.
I’m using Go to build a high performance data migration pipeline for a big migration we’re about to do. I haven’t touched Go in about 10 years, so AI was helpful getting started.
But now that I’ve been using it for a while it’s absolutely terrible with anything that deals with concurrency. It’s so bad that I’ve stopped using it for any code generation and going to completely disable autocomplete.
A good example would be Prometheus, particularly PromQL, for which the docs are ridiculously bare, but there is a ton of material and Stack Overflow answers scattered all over the internet.
Zed was just a fast and simple replacement for Atom (R.I.P.) or VS Code. Then they put AI on top when that showed up. I don't care for it, and I appreciate a project like this that returns the program to its core.
I actually think pirating encourages a healthier approach to watching TV/movies. I've fully made the switch to pirating instead of subscribing to any streaming services, and it's led to me thinking more critically about what I want to spend time downloading and watching rather than just flipping mindlessly through endless amounts of readily available garbage on a streaming service.
I do still have Kanopy though, which is great for me but obviously depends on your library.
For me, I only seek out media I plan to actually watch, rather than flipping through what is available and choosing from there. Currently I'm watching Stargate SG-1/Atlantis.
Also, a lot of movies/series are only available dubbed here. (I really effing hate "Sie" in dubbed media. So much so that it's one of the major reasons I go for subbed, in English at most.)
When I first used Netflix at my friend's house, I immediately used the search bar and looked for Jurassic Park... what kind of movie service doesn't have JP, I thought. That must have been around 10 years ago, and I never used it once afterwards.
They've had Jurassic Park repeatedly over the years since then, and I've watched it a couple of those times.
But when Netflix was new to streaming they had so much more content; it was great. Then all the rights-holders decided they didn't want just a cut of Netflix's rates, they'd rather have all of it. Since then, the services have seemingly reluctantly agreed to license some of their stuff, some of the time, to other services, often with temporary exclusivity. If Netflix wanted it all back, they'd need a friendly blue genie and a monkey to defeat a multitudinous Jafars.
Then don’t be a hoarder and only get what you want to watch
I have my watchlist hooked up to *arr so it pulls that stuff automatically. Once I watched it and it’s not something I want to show to others, I delete it.
> led to me thinking more critically about what I want to spend time downloading and watching rather than just flipping mindlessly through endless amounts of readily available garbage
For me it's a bit different.
I have the *arr stack fully automated (with 22 TB of storage for now, maaaaybe it's overkill), for friends and family too.
And the experience is nice because it makes content "crowd sourced". If something is on the server it means someone else purposefully added it, so you can still browse, but it's curated based on your friend/family circle.
But also the automation part can be a bit "mindlessly click download on everything even stuff I probably won't watch", but disk space constraints force you to delete it if nobody's watching.
Radarr and Sonarr are my two favourite pieces of software ever. Together with Plex I get an experience FAR superior to any streaming service. For the record I would be happy to pay for such a service, but they're so greedy they'll never offer such a unified service. Instead they keep making the direct to Netflix content worse. Removing content without any notice. Making the app UX worse, including removing useful reviews from the platform, and making content auto play when browsing. The best example of this clusterfuck is the Pokemon where to watch guide: https://www.pokemon.com/us/animation/where-to-watch-pokemon-...
I prefer physical media. However, it can sometimes be a chore to start the movie! Each disc is different. Some discs use non-standard methods to access the home menu. Some require that you at least skip past all the previews at the beginning. The worst discs require several minutes of fiddling in addition to finding and inserting the disc before you can watch it. Compare this with double-clicking an mkv and having it just...start.