ymolodtsov's comments | Hacker News

I'm still using my Time Capsule. I don't really trust the hard drive inside it, but I mainly use it to connect to an SSD I attached to it. Unfortunately, the Nest Wifi I use as a router doesn't have any USB ports, unlike some cheaper routers. I know it will be gone after Tahoe, and I'm still not sure what I'm going to do about that; I don't really want a full-on NAS.

It's basically just a Time Machine backup plus a few older files that I don't want to keep on my main Mac.

Any NAS seems like it would take up more space than I'd like. One alternative would be getting some kind of Beelink mini PC and setting up a proper home server: moving some of my side projects there, running Plex from it. The problem is that at current RAM prices it's a surprisingly expensive solution.


In my experience their phones last far longer than Androids. Only in the last few years have Samsung and Pixel switched to at least seven years of updates (now the question is whether the hardware will suffice).

I was still using my 2018 iPad just last year, until it broke.


My primary device is still my 2018 iPad Pro. I have lots to choose from, but it’s perfect.

In the last few years I've tried multiple photo hosting options. 500px, Flickr, Unsplash.

In the end, I just built my own photo blog on Hugo with SveltiaCMS (thanks Claude). I don't care much about the social part per se, just want a place to host my photo journeys.


For a simple static "here are my photos", I've found https://github.com/bep/gallerydeluxe today, and I really like the focus on the photos rather than UI and things flying in and out of the viewport all the time.

Haha, did something similar. Ended up vibe-coding something with Hugo and using Backblaze + a Cloudflare proxy to host the images. So far everything has been free and I have a snappy 'portfolio' :-)

I've never seen a summarizing mistake from any modern LLM. What are you even talking about?

LLMs hallucinate when they don't have enough context. Not when they're just cutting down the message in their context.


> So, mostly to fraudulent AI spam? Such an utter fantasy.

The only way payments go to AI music is if people actually listen to it.

Of course there's some fraud going on; any financial system has it. It's still minimal, and that's precisely why you know of such cases.


Their revenue and growth justified it. Plus, for xAI that could be the only way to get the SOTA coding model they want so badly.

I thought cursor became mostly obsolete with Claude Code and Codex TUIs?

> I thought cursor became mostly obsolete with Claude Code and Codex TUIs?

I wouldn't think so. At work I have both Cursor and Claude Code, and while I use both, Cursor is by far the more pleasant to use. If I had to give one up, I'd let Claude go.


Are TUIs not yesterday’s hot thing?

The way I work now in the Codex desktop app is that I spin up 3-5 conversations which work in their dedicated git worktree.

So while the agent works and runs the test suite I can come back to other conversations to address blockers or do verification.

The important part is that I can see which conversation has an update and that I get desktop notifications.

Maybe I could set this up with tabs in the Terminal, but it does not sound like the best UX.
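The worktree-per-conversation setup above can be sketched like this (repo location, branch names, and directory layout are illustrative, not from the comment):

```python
import subprocess
import tempfile
from pathlib import Path

def run(args, cwd):
    """Run a git command, raising on failure."""
    subprocess.run(args, cwd=cwd, check=True)

# Throwaway repo standing in for a real project.
base = Path(tempfile.mkdtemp())
repo = base / "main"
repo.mkdir()
run(["git", "init", "-q"], repo)
run(["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com",
     "commit", "-q", "--allow-empty", "-m", "init"], repo)

# One worktree per parallel agent conversation, each on its own branch,
# so one conversation's test runs can't clobber another's checkout.
for name in ["agent-a", "agent-b", "agent-c"]:
    run(["git", "worktree", "add", "-q", "-b", name, str(base / name)], repo)

out = subprocess.run(["git", "worktree", "list"], cwd=repo,
                     capture_output=True, text=True, check=True).stdout
worktrees = out.strip().splitlines()  # main checkout + 3 agent worktrees
print(len(worktrees))
```

Each agent then runs inside its own directory; merging back is an ordinary branch merge once a conversation's work checks out.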


That's probably more a personal preference than an objective measurement. A lot of people already spend most of their dev time in the terminal, so for someone like myself who uses Neovim, Claude Code or Codex CLI are much easier than using the GUIs.

The solution is to use both; they each have their use cases. Cursor has autocomplete and lets you quickly highlight a few lines and throw them into context, plus it's got a very good file index/API (which burns far fewer tokens than Claude's grep'ing) and whatever else they're doing underneath to optimize it for coding.

Claude is still the gold standard if you're not in an IDE, though.


Grep'ing doesn't use tokens, it uses grep.

Reading files is always the biggest token burner when coding. If the agent can't find things quickly, or has to use less and head to trim a file before finding what it needs, you're just wasting context window.

Cursor both lets you highlight specific lines multiple times per chat and is much quicker at finding stuff.


Claude has to use more tokens to read the grep output.

That matches my anecdotal experience with a couple dozen devs. Many went hard on the Cursor train and have mostly gotten off now that CC and Codex TUIs are available.

Distillation helps for sure.

The EU with its mama energy again.

I use a 15 Pro. I don't like the new aluminium iPhones much, so I just went to an Apple service center and had the battery replaced. It cost just 90 euros and I now have a brand-new phone, basically.

I very much prefer my phone to be thinner, water resistant, and have a larger battery compared to being able to do it myself.


The law is not about you, but about everyone: 1) Apple doesn't have service centers everywhere; some countries, cities, and small towns don't have them. 2) Apple doesn't provide service for older devices. 3) Making it easier doesn't mean you'll be able to hot-swap batteries as we did in the 90s, but it means you could do it at home with a reasonable set of tools instead of sending the device to some shop that would need to unglue, unsolder, ...

Out of the 27 EU countries, only 8 have Apple Stores.

Why pay 90 when it could be 20?

Comparing digital ads to the Stasi is just peak Western snowflake behavior, I'm sorry.

There are many imaginary arguments about harm in the article, with precisely zero actual examples or cases. Data brokers buy data, somebody gets stalked: can you share even one concrete case? No, it's all just hand-waving, because none of these people can offer any. The fact that the author still thinks Cambridge Analytica meant something says a lot as well. That was a scandal out of nothing.

I, for one, am extremely grateful that, as I was growing up with not much money, I was still able to access more or less the same Internet as people in the US. I don't care about a black-box algorithm looking at my habits to figure out that I love backpacks and microbrand watches, especially if it enables free platforms for me.

The Stasi didn't watch you to sell you crap online, you know. They had much worse motives.

If anything, we're now going backward because of the enormous marginal costs of inference. With AI, people aren't on the same page (even a $20 subscription is a lot of money in many countries).


Do you think if the Stasi, or someone like them, ever come into power, they'll just ignore all this extra surveillance we've built for free?

Not really a hypothetical - look at China and Russia, or Saudi Arabia [1]. I'll let others make the case to add the US and European countries to that list - not because I don't believe they belong on the list, but just because I'm too lazy to meet the higher bar of evidence that goes with going against groupthink.

[1] https://en.wikipedia.org/wiki/Saudi_infiltration_of_Twitter#...


I know exactly what is happening in Russia, and none of it has anything to do with targeted ads.

Putin's cronies literally own the country's only remaining unblocked social network. They tend to prosecute people for public content, even comments or likes, but there's not a single case where ad targeting data was used.

And they have the capacity to locate a person through multiple other ways, primarily cell networks.


Most of the videos uploaded to YouTube are worthless.

AI simplifies creation; that doesn't mean the result is good or will be listened to. And if it is, then what's the problem?

You can talk about ethics, IP, etc. but we're not even there yet.


I dunno; there have always been shit videos on YouTube, obviously, but there used to be a sort of natural filter: videos with nice transitions, decent narration, and dialog that was more or less grammatically correct, which meant I would mostly watch videos I enjoyed.

Now that AI has cargo-culted these traits I'm getting a lot of recommendations of videos that will initially seem "ok", and then I realize after about a minute that the narration will have some weirdness, and the script will have a lot of the typical ChatGPT "tells", and of course the video comes off as pretty low effort after that.

My YouTube recommendations have become increasingly useless, which honestly might be a good thing because it's made it so that I have less desire to use YouTube.


The weirdness is creeping into regular YouTube content too. For example, I like to watch Ryan Hall's stream during extreme weather (tornado season in the US). In his forecast videos he has to start with something weird to prove to the audience they're not watching a fake AI-generated channel, like eating a banana or an apple while talking and waving the fruit around. It was very strange until I realized what he was doing. He also started wearing a suit, which is very out of character for him; that must also confuse AIs trained on his previous videos.

I think there are a couple of things going on here.

The first is AI-generated content. This can start with nothing more than an idea. Some of it is uniquely presented stuff that's actually kind of interesting: I got sucked into a nice Ken Burns-style narrated documentary about the rise and fall of Baldwin Piano a few weeks ago. It was a little wordy, but it worked. It took a while before a very glaring error in diction made me rewind for a double-take, note that no human would ever make that mistake while narrating, and then burn the channel from my feed.

The second problem is very different: Cloning individual people and channels. When a person (or nearly as likely, a bot) elects to use a bot to clone someone else's style, persona, and everything else then that's... that's very unsettling.

---

The first problem? It's whatever. I don't like it, but there may come a time when I accept it. At this point it's mostly harmless and really guilty of nothing more than wasting some of my time now and then.

The second problem? It can be reprehensible.

And it's particularly bad with a channel like Ryan Hall. I don't have any idea of how he is as a person (never meet your heroes), but I like to presume that he's generally a swell guy. And moreover: He's important.

When the weather turns iffy, I put his stream on and it's mostly just background noise. I usually give it very little attention.

But when he mentions the name of the small city I live in then that means that shit is just about to get very real here -- very soon. That's astoundingly useful to me, and the safety of the people I care about.

I also find a lot of value in obvious parody. It can be fun, and it can make people think. The music of Weird Al or There I Ruined It, the crazy stories in The Onion, the memes: that's all good. But this Ryan Hall business? It's bad.

So, there's definitely a line.

And I don't know where the line should be drawn. But using bots to deceive and thereby dilute the value of the content of Ryan Hall's channel is definitely on the wrong side of that line.


I discovered a new band some weeks ago (Hexxenmind) through Spotify. Really liked them, then checked concert dates only to find out it’s AI generated.

Honestly couldn’t tell in the moment but now that I know it’s generated it somehow feels “cheaper” and I dislike listening to them.


I've been caught out a few times after learning a new artist I liked was AI.

The time spent listening to AI music could instead have been spent listening to something created by a human.

That is what pisses me off the most!


> it somehow feels “cheaper” and I dislike listening to them

As you should... Soon they'll start selling fake concert tickets, like a pig butchering scam but with music


AI creation kills cultural sharing.

People who create AI music are largely not sharing it with others for any reason other than to create a revenue stream. They are also not consuming new AI music to be able to develop influences and synthesize new ideas. The system builds brick walls where there was once osmosis.

How can art evolve under these conditions?


I'm not sure either Linkin Park or my friend's garage band are in any way affected by AI generated music.

How do you imagine that happening?


I agree that people who refuse to listen to AI music or create AI music will still continue on the same path as before in terms of the evolution of music.

But AI music can make it even harder for people to eke out a living as musicians, since they are competing with something that costs far less to make. And some people who would otherwise learn how to create music themselves will instead choose not to learn this skill because it is easier to use AI.

The net effect is a diminished culture.


>They are also not consuming new AI music to be able to develop influences and synthesize new ideas.

Even if not, they most definitely are listening to other music that influences them. If you have proof that such a producer listens to zero music, feel free to share it.


They're describing the "music" that's churned out almost entirely hands off to siphon royalties. Even the creator isn't listening to 100% of what they're uploading, it's spam that can be produced in massive quantities and can overwhelm a platform if left unchecked (as the article describes, AI music is 1-3% of actual listens by users but 44% of uploads).

Actual artists who need years to create a few hours of handcrafted content don't have a chance in an environment where hundreds of hours of slop can be generated in less than a day for a few hundred bucks. Platforms like Deezer recognize they need to address that imbalance somehow or they'll eventually lose their high quality contributors in a vicious cycle if it becomes impossible to compete.


Why are some crafts more sacred than others?

Because some crafts are more sacred than others. Making a painting is more sacred than smearing my shit in the Barnes & Noble bathroom stall, although arguably less fun.

Who decides that? We do, collectively. Why do we have that power? Because we define art. Why do only humans have this power? Because art is an innately human thing, so we get to decide.


I find one of the most unsettling things about 2025-2026 is how little people seem to agree with the point you're making. It's like this hyper-reductive thinking, where it's all just "if the outcome is the same, so what!?", has metastasized, and everyone is false-equivalencing their way into hell. It makes me want to scream. Beauty is not arbitrary; not everything can be homogenized into content paté, and that does not make the world better. I'm sure really smart people will argue I'm wrong for some reason, but it just makes me feel so sad.

I don't agree, because I think the zeitgeist should be just as concerned with my lowly output (as an IT worker drone) being automated away. I'm sorry if you view that as a false equivalence.

I don't know what you mean by this. The same effect can be felt in other forms of art.

A lot of gen AI is essentially a pollution machine creating digital single-use plastics. Whoever can identify it and sift it for value will be the heroes of the post-AI era.

> AI simplifies the creation, doesn't mean it's good and will be listened to. And if it will, then what's the problem?

From this attitude you might as well get your entertainment from spam or ads.


I've always found it hard to listen to music while working, so I use Endel, which has AI-generated soundscapes.

And I listen to a lot of different music when I'm not working.


That is theoretically how one would think it would play out, but that's not what happens in reality. Instead it becomes like blog spam, where it becomes impossible to actually find what you're looking for because you're wading through so much crap you don't want.

Also, a lot of us value the fact that music is made by a person. Digital tools have been around for a long time and people have bickered about that, but ultimately they still require a person with some knowledge to sit down and actually produce the music, to do the thing. Writing prompts until you get something interesting can be fine, but what people are doing is carpet bombing us with whatever nonsense comes out because they have a financial incentive to do so.

I have plenty of experiments back when I did more digital music where I would mess with frequency modulators and such until I just found something interesting. I don’t see the harm in activities like that. But that’s not really what’s happening here. It’s deliberately lazy, corner cutting work to spam music platforms for profit. Yes there is a gray area between these two scenarios but that gray area isn’t the problem.


Honestly, I think the thing most humans appreciate is effort. Using AI tools is not inherently "bad", but these very literally mass-produced AI songs are almost by definition low-effort and, as a result, pretty bland and unlikeable.

Digital music has always been fine to me, as long as the song being produced feels like it took a human some amount of effort.


This is a much more concise and effective way of communicating my thoughts ha

Yeah I agree with that nuance, as I personally enjoy making AI covers of songs I like in genres that I can't produce myself (old vintage blues covers of 80s new wave songs if you must know). It's a fair amount of work prompting and curating (and editing in some cases). I think they are cool and have shared a few, but they do tend to get lumped in with "ai slop" and some people take offense.

The difference between this and what we are seeing in this article is you aren't sitting down, grinding out dozens/hundreds of these, then spamming them with little to no regard for anyone else for profit. You do it at small scale, for yourself/friends, and clearly care about the results. You are trying to make something intentionally.

Yes, exactly a good way to put it.

I think a lot of people assume that problems like this are fixed-size; that by making getting a song easier, that's the end of the line.

In my mind the better mindset is that the problems are not fixed-size; these tools can allow for bigger and cooler projects, and/or projects that wouldn't be possible (or at least would be infeasible) without some kind of technological assistance.

AI tools can be used to create slop that is either "bad" or extremely bland at an effectively-infinite speed. It could also be used to make some really cool and interesting stuff if a person is really willing to spend time and effort to make it cool. Usually this requires more than just "prompting" though.


You are describing AI slop.

Not in my book. I know of a lot of low/no-effort attempts to spam "content creation" channels, with no curation, etc., that I'd call slop before this. I'm trying to use AI to generate something that did not and would not exist otherwise. It's admittedly probably better because it's using human-written lyrics for the covers (and melodies?), but to be honest, 80s new wave lyrics can be pretty hokey. "Any AI = slop" is probably more a belief system than an objective measure.

I don't think what you're doing, or at least what you described, is inherently slop. If you're actively putting in effort to make something you think is cool and to make something you're actively proud of (or at least something that you genuinely want people to enjoy), I don't think that's "slop", or at least I don't think it's bad.

It's certainly different than those low-effort channels that mass upload hundreds of videos a day because they're able to automate the entire video-making process; those are completely soulless, again almost by definition. Those exist to just try and effectively skim revenue from adsense (or subscriber revenue in the case of Deezer), and making something that people will actually "enjoy" isn't the purpose.

Of course, this isn't a new problem; I remember a few years ago (before generative AI became viable for this stuff), there were "tutorials" on the best way to upload hours and hours of noise or silent music to Spotify to extract revenue, and of course let's not forget the infamous "Elsagate" stuff that plagued YouTube. AI has maybe accelerated the problem but it certainly wasn't the first thing to create "slop".

I'm hardly the first person to make this point, but AI is a tool. Tools can be good or bad; if AI is a tool that you can use to actively help you be more creative then I don't think that's bad. If you're just generating something to pad a resume or extract ad revenue, that's slop.


But with music it's easy to find what you're looking for. Look for the band name.

If you want to discover new bands, there are multiple ways to do so: curators, KEXP, concerts and festivals, etc.


Say what you will about the Anna's Archive Spotify scrape: it made me realize how much music exists and how much music was never listened to.

If every track was 3 minutes long, it would be about 1450 years worth of music. You can never experience it all.
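The back-of-the-envelope math checks out; a sketch (the track count is inferred from the 1450-years claim, not taken from the scrape itself):

```python
# Rough check: how many 3-minute tracks add up to ~1450 years of audio?
tracks = 254_000_000                 # assumed catalog size, inferred from the claim
minutes = tracks * 3                 # 3 minutes per track
years = minutes / (60 * 24 * 365.25) # minutes -> years
print(f"{years:.0f} years")          # roughly 1450
```

So a quarter-billion tracks at 3 minutes each is indeed on the order of 1450 years of continuous listening.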

You could if you parallelized the operation. Probably tantamount to torture though.
