Hacker News | halostatue's comments

They also designed them with cheap shells that felt loose before a year was out, and offered exactly zero water and dust protection, so if your device got wet, it was considered out of warranty.

https://m.gsmarena.com/results.php3?chkRemovableBattery=sele...

Incorrect. Here are 115 phones with removable batteries and rated for > 0 water protection.


That's at most 1/10th the cost of the average Samsung phone.

That's cheap. If you think that a safe first-party replacement battery will sell for less than the 79€ that the whole replacement effort takes, then you're fooling yourself.

I strongly suspect there's also no good language around blocking third-party batteries (and phone manufacturers would have good reason to block them, because really bad third-party batteries might cause overheating or worse).


Here's a replacement battery for last year's S25 Ultra: https://www.mobilesentrix.ca/replacement-battery-compatible-.... Retails for 14 CAD or approx 9 EUR (11 EUR with a 20% VAT). So yes, 79 EUR would be extremely expensive.

The people for whom €79 is not cheap are not getting flagship Samsungs, but some low tier $100-300 Android.

Doesn't seem to work with Cursor Agent (which may store its data in ~/.cursor).

You're right, that's cursor-agent (the CLI), not the Cursor IDE. CodeBurn only parses the IDE's state.vscdb right now. cursor-agent keeps transcripts under ~/.cursor/projects/*/agent-transcripts/, which we don't read yet.

Filed an issue to add it: https://github.com/AgentSeal/codeburn/issues/55

Cursor support only landed yesterday, so the CLI is next. Thanks for catching it.
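For what it's worth, enumerating those CLI transcripts should be straightforward; a minimal sketch, assuming only the ~/.cursor/projects/*/agent-transcripts/ location mentioned above (the function name and file layout inside that directory are my own guesses, not anything CodeBurn actually does):

```python
from pathlib import Path

def find_agent_transcripts(home: Path) -> list[Path]:
    """Return all files under each project's agent-transcripts directory.

    The glob pattern follows the path mentioned above; whether the
    transcripts are JSON lines or something else is not assumed here.
    """
    pattern = ".cursor/projects/*/agent-transcripts/*"
    return sorted(p for p in home.glob(pattern) if p.is_file())

if __name__ == "__main__":
    for transcript in find_agent_transcripts(Path.home()):
        print(transcript)
```

Parsing the transcript contents is the harder part, since (as noted below) some of the data may simply be missing.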


Cursor Agent itself suggests that this probably won't be easy as some of the data is missing.

There's support and "support".

MacPorts has some level of support for PowerPC, but anything that isn't in the most recent ~3-4 releases is likely to be cut off from any number of packages at useful versions. (There's substantial work done to support Rust on much older versions of macOS, but Rust itself has also dropped support for macOS versions below a certain point.)

I believe there's a recommended stream for when you need older version support, but it's definitely a secondary target from what I've been reading on the MLs.


Ghostty accepts LLM contributions, but has strict rules around it: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...

I accept LLM contributions to most of my projects, but have (only slightly less) strict rules around it. (My biggest rule is that you must acknowledge the DCO with an appropriate sign-off. If you don't, or if I believe you don't actually have the right to sign off the DCO, I will reject your change.) I will also never accept LLM-generated security reports on any of my projects.

I contribute to chezmoi, which has a strict no-LLM contribution (of any kind) policy. There've been a couple of recent user bans because they used LLMs‡ and their contributions (in tickets, no less) included code instructions that could not possibly have worked.

Those of us who have those rules do so out of knowledge and self-respect, not out of gatekeeping or ignorance. We want people to contribute. We don't want garbage.

I think that there needs to be something in the repo itself (`.llm-permissions`?) which all agents look at and follow. Something like:

    # .llm-permissions
    Pull-Requests: No
    Issues: No
    Security: Yes
    Translation Assistance: Yes
    Code Completion: Yes
On those repos where I know there's no LLM permissions, I add `.no-llm` because I've instructed Kiro to look for that file before doing anything that could change the code. It works about 95% of the time.
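To make the idea concrete, here's a sketch of how an agent might read such a file. Everything here is hypothetical: the `.llm-permissions` format is only my proposal above, and the key names, helper name, and deny-by-default behavior are my own illustrative choices:

```python
from pathlib import Path

# Deny everything by default: a repo without the file (or with keys
# missing) is treated as not granting any LLM permissions.
DEFAULTS = {
    "pull-requests": False,
    "issues": False,
    "security": False,
    "translation assistance": False,
    "code completion": False,
}

def load_llm_permissions(repo_root: Path) -> dict[str, bool]:
    """Parse the hypothetical .llm-permissions file in repo_root."""
    perms = dict(DEFAULTS)
    path = repo_root / ".llm-permissions"
    if not path.exists():
        return perms
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        key, _, value = line.partition(":")
        perms[key.strip().lower()] = value.strip().lower() == "yes"
    return perms
```

An agent would then check, say, `load_llm_permissions(root)["pull-requests"]` before opening a PR. Defaulting to "no" matches the spirit of `.no-llm`: absence of an explicit grant means don't touch it.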

The one thing that I will never add or accept on my repos is AI code review. This is my code. I have to stand behind it and understand it.

‡ I disagree with those bans for practical reasons because the zero-tolerance stance wasn't visible everywhere to new contributors. I would personally have given these contributors one warning (closed and locked the issue and invited them to open a new issue without the LLM slop; second failure results in permanent ban). But I also understand where the developer of chezmoi is coming from.


You're welcome to add me as a co-maintainer on this if you submit it to macports/macports-ports:

     {macports.halostatue.ca:austin @halostatue}
I maintain https://github.com/macports/macports-ports/blob/master/sysut... amongst other things regularly.


This will not be a fun retrospective (the status updates are brutal and clear) -- and I've only been using Coveralls for open source projects for years.

There is a chance, depending on how stubborn the cloud infrastructure provider is (and both Google and AWS can be very stubborn because they don't like admitting that they just might possibly be wrong; I have no experience with Azure but imagine it's the same), that Coveralls will be unable to recover from this, because rebuilding the infrastructure from zero on a different provider is going to be hard even if there is 100% IaC and even if there's a solid database backup outside of that provider that's accessible.

I hope they can, because they're a reasonable service and provide a lot of value to open source projects for coverage measurement.

That said, I have seen a number of reliable paths to getting extremely slow responses and eventually 500s from the coveralls servers while trying to look at the coverage details. It has felt like there's been a slow decline in Coveralls server quality because of that. (I've never really reported any of these because (a) I hoped that they were seeing these in their logs, metrics, or notifications and (b) I'm not a paying customer and it's easy for me to `open cover/excoveralls.html` or the equivalent.)

(I have tried the major alternative, codecov.io, in the past, but it's been a long time and I find it disappointing that they appear not to keep their example repos / documentation up to date.)


Disclaimer: I haven't tried this yet.

I would want the equivalent of the trixie-slim Docker image (Debian 13, no documentation). It's ~46 MB as a Docker image instead of Alpine's ~4 MB, but gives a reasonably familiar interface.

(This is largely based on some odd experiences with Elixir on Alpine, which is where I am doing most of my work these days.)


> There is no real universal intuition you can build up for programming. There is no point at which you've mastered some degree of fundamentals that you would ever be able to cross language family boundaries trivially.

I don't really agree with you on this, even though I agree with everything else here. Then again, I am an outlier: I've used ~40 programming languages in my career. There are a couple of language families (array languages like APL, exotics like BF) that I cannot read because I've had no real opportunity to learn them, and there's a significant difference between being able to read a language and being able to use one (I can read, but not really use, Haskell -- although I have shipped a couple of patches to small libraries).

I despair at the number of developers in the profession who understand only one or two programming languages…and badly at that.

(It's worth noting that I wholly disagree with the original post. 24 years ago I chose Ruby over Python because of syntax. Ruby appealed to me, Python didn't — purely on syntax. I never pretended that Python was less capable, only that its syntactic choices drove me away from choosing it as a primary language. I'm comfortable programming in Python now, but still prefer using most other languages to Python … although these days that has more to do with package management.)


TL;DW.

I don't watch video complaints. I don't watch most YT videos except at 2x, because by the time the person who made the video gets around to saying what they're trying to say, I could have finished a text article version of the same thing.

Most people speak far too slowly for me to stay interested in what they're saying, especially when they could have written an article that is more information-dense and typically shorter in any case.

Videos have value for enhancing reports, but are mostly useless as reports themselves.

So yeah, it's too damned much to ask to watch a video.


You know the saying "a picture is worth a thousand words"?

So yeah, a video that precisely reproduces a UI/UX bug is worth more than anything you can write about it.

Showing exactly what the problem was is much better than describing the problem; describing it is a lossy conversion that adds noise.

Saying this as someone who doesn't watch videos normally.


It isn't necessarily. A _bad_ video can often be worse than a bad description, because I can read a bad description and reformulate and clarify. This is compounded when the video skips the prerequisite steps that a description often needs to add.

So: TL;DW.


Sorry. That comment is too long. Didn't read. Hope you didn't waste much time on it.

Jokes aside, that video is 2:23 long and it gets to the point within the first 33 seconds, at which point they have demonstrated the issue.

You're being beyond incredibly silly right now.


Not at all.

Video-first is generally as ridiculous as SEO-driven recipes where I can't start cooking what I want to cook because I have to go through someone's nonna's best friend's sister's cooking life story.

It's great that this video gets to the point in the first 33 seconds, but make me want to watch your video.

This post made me not care.

I get video bug reports all the time at work -- but they're accompanied by a description of the problem that makes it worth my time to watch the video. (Sometimes, with a well-written description, I don't need the video but watch it to make sure my understanding matches.)

