Hacker News | new | past | comments | ask | show | jobs | submit | debazel's comments

A phone is worthless without software.

> but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

Yes, let's make all images proprietary and locked behind Big Tech signatures. No more open-source image editors or open hardware.


C2PA is actually an open protocol, à la SMTP. The whole spec is at https://spec.c2pa.org/, available for anyone to implement.

The standard itself being open is irrelevant. I'm not sure why this is always brought up for attestation standards. It is fundamentally impossible to trust a signature produced by open-source software or open hardware, since anyone can modify either to sign whatever they want, so a signature from open-source software is essentially the same as no signature.

The need for a trusted entity is even mentioned in your specification under the "attestation" section: https://spec.c2pa.org/specifications/specifications/1.4/atte...

So now, if we were to start marking all images that do not have a signature as "dangerous", you would have effectively created an enforcement mechanism in which the whole pipeline, from taking a photo to editing to publishing, can only be done with proprietary software and hardware.
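The argument above can be made concrete with a toy sketch (this is not the real C2PA protocol, and the key/manifest names are invented for illustration): the cryptography always verifies for whoever holds the signing key, so all of the actual trust lives in the verifier's curated list of blessed keys, which open tooling can never get onto.

```python
# Toy model: why "open-source signing" proves nothing by itself.
# Anyone can generate a key and produce a cryptographically valid
# signature; the only thing separating a blessed camera from a DIY
# tool is the verifier's centrally curated trust list.
import hashlib
import hmac
import os

TRUST_LIST = set()  # fingerprints of keys the verifier chooses to trust


def make_key() -> bytes:
    """Generate a random signing key (stand-in for a device key)."""
    return os.urandom(32)


def fingerprint(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()


def sign(key: bytes, manifest: bytes) -> str:
    """Sign a provenance manifest (HMAC as a toy stand-in for real PKI)."""
    return hmac.new(key, manifest, hashlib.sha256).hexdigest()


def verify(key: bytes, manifest: bytes, sig: str) -> tuple[bool, bool]:
    """Return (signature_valid, signer_trusted) separately."""
    crypto_ok = hmac.compare_digest(sign(key, manifest), sig)
    trusted = fingerprint(key) in TRUST_LIST
    return crypto_ok, trusted


camera_key = make_key()              # key blessed by a central authority
TRUST_LIST.add(fingerprint(camera_key))
diy_key = make_key()                 # key from a tool anyone can run

manifest = b'{"claim": "captured by hardware camera"}'
print(verify(camera_key, manifest, sign(camera_key, manifest)))  # (True, True)
print(verify(diy_key, manifest, sign(diy_key, manifest)))        # (True, False)
```

Both signatures are equally valid as math; only the trust-list membership differs, and that membership is exactly the gate that open software and hardware cannot pass under a "flag unsigned images as dangerous" regime.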


We already have a centrally curated trust model in https. Browsers only treat connections as "secure" if they chain up to a root CA in their trust store. You can operate outside that system, but users will see warnings and friction. Some level of trust concentration isn’t new.
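That curated-trust-store model is visible in any TLS client, not just browsers. As a minimal illustration (using Python's standard `ssl` module, which defaults to the same scheme):

```python
# Minimal illustration of the curated trust model behind HTTPS.
# create_default_context() loads the platform's pre-installed root CAs;
# only server certificates chaining to one of them will verify.
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server certs must validate
print(ctx.check_hostname)                    # and must match the hostname

# Operating outside the system is possible (e.g. loading your own CA
# with ctx.load_verify_locations(...)), but every client that hasn't
# opted in will show warnings and friction, as with self-signed certs.
```

Both `print` lines emit `True`: verification against the pre-trusted roots is the default, and opting out is what triggers the warnings.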

I'm curious whether you think this is worse than, or not as bad as, a best-case broad implementation of C2PA... especially if a Let's Encrypt-like entity assisted with signatures.


Why would the image itself have to be proprietary just to have some new piece of metadata attached to it?

Until you explore "too deep" and get your whole account banned for suspicious activity, permanently damaging your career.

And with Anthropic introducing KYC requirements, this is essentially a lifetime ban.

Fun times.


Serious fear I have.

I brought it up two years ago, and got downvoted when I brought it up again a couple of months ago.

There is a story on the front page right now about someone losing their child's family videos to a YouTube ban. We hear about this stuff all the time. I suspect we are gonna be in somewhat of an arms race with AI products as the bubble grows over the next 18-24 months. This makes me worried about how disadvantaged people are going to be if they lose access to the better platform (whichever that ends up being).

Do you think AI is going to be so important that we would benefit from legal protections for access?

Or do you think the models and technology will become so small we will be able to personalize / decentralize the tech and it still be useful / competitive?

https://news.ycombinator.com/item?id=40784126


Happening already. My new Claude Max account got instabanned after just a few messages asking it to debug some stuff for me, which apparently looked like a TOS violation. Nothing remotely controversial. The main model didn't even complain; some dumber background censorship model flagged it.

Good. More open-source tools should be unappealing to the "corporate world". They can fund and pay for their own tooling.


This is not an easy fix. Chargebacks will lead to lifetime bans, which means you're then forced to buy an iPhone in order to pass store attestation for essential applications like banking apps, government ID, age verification, etc.


Assuming the chargeback is made in good faith, why do the laws allowing for chargebacks in the first place permit this?


That's irrelevant; a blocked account, justified or not, should not prevent you from canceling your subscription. In fact, any subscriptions should be automatically canceled upon account suspension.


My experience with actually trying this is that current LLMs benefit greatly from having a framework to build on.

More code in the context window doesn't just increase the cost, it also degrades the overall performance of the LLM. It will start making more mistakes, cause more bugs, add more unnecessary abstractions, and write less efficient code overall.

You'll end up having to spend a significant amount of time guiding the AI to write a good framework to build on top of, and at that point you would have been better off picking an existing framework that was included in the training set.

Maybe future LLMs will do better here, but I wouldn't recommend doing this for anything larger than a landing page with current models.


Yes. Intent and patterns will be much clearer for future sessions. If you have a WORN situation (write once, read never, modify never, including by the AI), perhaps you can skip layering and just big-ball-of-mud your system. I doubt many people want that.


You wouldn't even have to be a high profile target like a sanctioned judge. Simply getting your account banned by some automated process that marked you as "suspicious" will basically render you excluded from society.

It is absolutely insane to put this amount of power in the hands of two foreign companies that can destroy your life with zero reason, oversight, or due process.


This is not a hypothetical problem and you don't need to be deliberately targeted. It actually happens to normal people. And if it does you have absolutely zero recourse.

Source: I have a banned Google account (it's over 20 years old at this point). I know the password, but Google won't let me log into it. Every few years I try, unsuccessfully, to recover it.

If you have a Google account and having it banned would be a problem for you here's my advice: migrate. Right now. You never know when one of their bots will deem you a persona non grata.


Can't you just create a new account?


You can, but you lose access to anything that was associated with your old account.

Another fun thing Google did is to automatically (without my consent) add a required second-factor authentication to my current Google account. I have this old, e-waste tier phone that I use mostly only as a glorified alarm clock, and at one point I used it to log into my current Google account.

Imagine my surprise when I tried to log in to my Google account from somewhere else, and it asked me for an authentication code from this phone. Again, I have never explicitly set it up as such - Google did this automatically! So if I were to lose this phone I'd be screwed yet again, with yet another inaccessible Google account that I will have no way of recovering.

At this point I don't depend on any Big Tech services; my Google account has nothing of value associated with it (only my YouTube subscription list, which is easy enough to back up and restore), and I pay for my own email on my own domain, etc. So if I get screwed over yet again by a big, soulless corporation that just sees me as a number on their bottom line, well, I just won't care.


You'd better hope that whatever is-this-the-same-user heuristics they run on their side never flag you, for the duration of your entire life.


In his case, I'm pretty sure 20-year-old data is pretty useless nowadays in terms of fingerprinting and usage heuristics.


> It takes five minutes to explain how an idea could open up a new market segment. It takes two seconds to say "that sounds risky." But in a meeting, the two feel equivalent.

In what world do these sound equivalent? Simply saying that something "sounds risky" is not serious criticism and wouldn't hold any weight at any place I've ever worked. You would have to actually explain why it sounds risky and point to something tangible.


> What type of "experience" are you expecting to have anyway?

Being told upfront what is required to complete the process so you don't have to start over again multiple times?

