Hacker News | replwoacause's comments

pedantry

I found this exchange both entertaining and informative. Appreciate you sharing an insider's perspective (while also acknowledging I have no possible way to verify if any of this is even true).

Heh... thanks. I don't expect anyone to just believe this information verbatim; as you said, I'm just some rando on HN. But I did offer to discuss it privately with @simonw.

You ok bro?

Much better than you, hon.

Uninstall Reddit too for that matter.

More like 2 hours considering these usage limits

I've been on 5x for a couple of months and the closest I've got to my weekly limits is 75%. I've hit 5-hr limits twice (expected). I'm a solo dev that uses CC anywhere from 8-12+ hr each day, 7 days a week. I've never experienced any of the issues others complain about other than the feeling that my sessions feel a little more rushed. I'd say that overall I have very dialed-in context management which includes: breaking work across sessions in atomic units, svelte claude.md/rules (sub 150 lines), periodic memory audit/cleanup, good pre-compact discipline, and a few great commands that I use to transfer knowledge effectively between sessions, without leaving a trailing pile of detritus. Some may say that this is exhaustive, but I don't find it much different than maintaining Agile discipline.

This being said, I know I'm an outlier.


Perhaps on the 10x plan.

It went through my $20 plan's session limit in 15 minutes, implementing two smallish features in an iOS app.

That was with the effort on auto.

It looks like full time work would require the 20x plan.


I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.

At $20/month your cost works out to about $0.67 a day. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?


Yea, actually, people should be complaining.

If you got in a taxi, and they charged you relative to taking a horse carriage, people should be upset.


That last sentence didn't make sense so I'm not sure what your point is. But I'll run with the analogy.

You got into a taxi and they were charging you horse carriage prices initially. They're still not charging you for a full taxi ride but people are complaining because their (mistaken) assumption was that taxis can be provided as cheaply as horse carriages.

People are angry because their expectations were not managed properly which I understand.

But many of us realized that $20 or even $200 was far too low for such advanced capabilities and are not that surprised that all of the companies are raising prices and decreasing usage limits.

OpenAI is not far behind, they're simply taking their time because they're okay with burning through capital more quickly than Anthropic is, and because OpenAI's clearly stated ambition is to win market share, not to be a responsibly, sustainably run company.


Shortly after I ran out of credits in 15 min, they tweeted that they increased usage limits to compensate for the higher token usage, so perhaps it is not as bad now.

This afternoon I was able to use Codex for about two hours on the $20 plan. Maybe limits will be tighter in the future, but with new data centers, new GPU generations, and research advances it might rather get cheaper.

Anyway, as you said, this is all pretty cheap. I'll go with the $100 Codex plan, since I now figured out how to nicely work on multiple changes in parallel via the Codex app with worktrees. I imagine the same is possible in Claude Code.


It seems to me a bit naive to think OpenAI would not increase prices/decrease usage limits at some point. $20 might cover a very small fraction of the actual cost that is incurred over a month of sustained usage.

No, I am happy with the results.

For a first test, it did seem like it burned through the usage even faster than usual.

GitHub Copilot billing Opus 4.6 at a 7.5x factor instead of 3x seems to suggest it indeed consumes more tokens.

Now I’m just waiting for OpenAI to show their hand before deciding whether to upgrade from the $20 plan to the $100 plan.


> It looks like full time work would require the 20x plan.

Full time work where you have the LLM do all the code has always required the larger plans.

The $20/month plans are for occasional use as an assistant. If you want to do all of your work through the LLM you have to pay for the higher tiers.

The Codex $20/month plan has higher limits, but in my experience the lower quality output leaves me rewriting more of it anyway so it's not a net win.


Looks really nice! Super clean site too. How did you make the animation at the top?


Thanks! Glad you like! The logo animation is an SVG with CSS keyframe animations.


Straight out of the Trump playbook


He passed on a multi-billion dollar government contract on principle, so there's that.


One of the coolest replies I've ever seen on HN for sure. Thanks for taking the time to write this out!


I disagree, I think we should push back hard on behavior like this. What business is it of LinkedIn's what browser extensions I have installed? I think the framing for this is appropriate.


Why is it possible for a web site to determine what browser extensions I have installed? If there are legitimate uses, why isn't this gated behind a permission prompt, like location and camera access?


This, to me, seems like the more salient point. A headline like “Major browsers allow websites to see your installed extensions” seems more appropriate here.

We’ve known for a long time that advertisers/“security” vendors use as many detectable characteristics as possible to construct unique fingerprints. This seems like a major enabler of even more invasive fingerprinting, and that seems like the bigger issue here.
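To make the fingerprinting angle concrete, here's a toy sketch (TypeScript, purely illustrative signal list and hashing) of how a list of detected extensions could be folded into a fingerprint alongside the usual suspects:

    // Toy sketch: hash detected extensions together with other common
    // fingerprinting signals into one identifier. Illustrative only.
    async function fingerprint(detectedExtensions: string[]): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}`,
        String(new Date().getTimezoneOffset()),
        ...detectedExtensions.sort(), // each detected extension adds identifying bits
      ].join("|");

      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(signals)
      );
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    fingerprint(["some-adblocker", "some-grammar-checker"]).then((id) =>
      console.log("fingerprint:", id)
    );

Every extra signal like this makes the resulting identifier that much more unique across users.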


Well, it would be a more appropriate headline if this were about broken browser behavior.

But this is about a major corporation sneakily abusing this behavior to illegally extract specific sensitive data, which it then misuses.


It's possible to write a headline that directs blame at both parties: "Major Browsers Fail to Block Websites that Invade Your Privacy"

The fact that the website is doing this is a bigger problem than the browser not preventing it. If someone breaks into a house, it's the burglar who is prosecuted, not the company that made the door.

If you scanned LinkedIn's private network, you'd be criminally charged. Why are they allowed to scan yours with impunity? And why is this being normalized?

The best solution is a layered defense: laws that prohibit this behavior by the website and browsers that protect you against bad actors who ignore the law.


> If you scanned LinkedIn's private network, you'd be criminally charged. Why are they allowed to scan yours with impunity? And why is this being normalized?

First, I think it’s a major issue that Chrome is allowing websites to check for installed extensions.

With that said, scanning LinkedIn’s private network is not analogous to what is going on here. As problematic as it is, they’re getting information isolated to the browser itself and are not crossing the boundary to the rest of the OS much less the rest of the internal network.

Problematic for privacy? Yes. Should be locked down? Yes. But also surprisingly similar to other APIs that provide information like screen resolution, installed fonts, etc. Calling those APIs is not illegal. I’m curious to know what the technical legal ramifications are of calling these extension APIs.


What law is it breaking?

If a company leaks my sensitive data, I get some nice junk mail offering me some period of credit monitoring or whatever. So what are browsers doing to prevent this?

The issue should never be 'we want entities to have this data, but only used in some constrained and arbitrary manner whose definition we can't even agree on'; it should instead be 'this data shouldn't be made available to X'.


This is a Chrome thing. It’s a safe bet that if you use Google products you don’t care about privacy anyway. “Google product collects info about you: news at 11.”


Google cares deeply about privacy. Google defines privacy as them not giving your private data that they have collected to anyone else unless you ask them to.


Google cares deeply about privacy. Google defines privacy as them not giving your private data that they have collected to anyone who hasn't paid them for it or can compel them to give it up.


There's a fourth amendment case on the Supreme Court docket (Chatrie v. U.S.) about Google searching a massive amount of user data, at police request, to find people who were in a location at a specific time. The case is about whether the police's warrant could justify such a wide scope of search (in effect, whether general warrants are allowed).

Point being: Google will 100% give your info to the police, regardless of whether the police have the legal right to it or not, and regardless of whether you actually committed a crime or not.

Bonus points: the federal court that ruled on the case said that it likely violated the fourth amendment, but they allowed the police to admit the evidence anyway because of the "good faith" clause, which is a new one for me. Time to add it to the list of horribly abusable exceptions (qualified immunity, civil asset forfeiture, and eminent domain coming to mind).


They knowingly participated in PRISM, too.


Why would the police go to all the hassle of compelling Google to give it up when they can simply buy it on the open market?


The breaking point with me that caused me to de-google myself was finding out that Google was buying Mastercard records in order to cross-reference them with Android phone data. That shit is not okay.



So no compelling here. The police asked for it and Google gave it, either for free or in exchange for money. They didn't say "no" to the police; they didn't wait for a court order.

The bad guy here is Google. And so are the people who champion data collection by private companies because free market == good.


In that case, the main bad guy was the police, who didn't bother to do even the most basic investigating after "check Google's GPS records to see who was at the house", including "check Google's GPS records to see how long they were there", which would have shown them this was a drive-by. But yeah, Google is absolutely a villain.


Ah yes, I should have said I was describing the official line, not the behaviour. In all fairness the “can compel them to give it up” doesn’t seem to be optional but otherwise, yeah. Agreed.


> This is a Chrome thing.

This is blatant misinformation. Firefox (and all of its derivatives) also does this.

https://bugzilla.mozilla.org/show_bug.cgi?id=1372288


This only works if the web page knows the random per-install id associated with an extension.

That can only happen if the extension itself leaks it to the web page and if that happens, scanning isn't necessary since it already leaked what it is to the webpage. It also doesn't tell you what extension it is, unless again, the extension leaks it to the webpage.

The attack on Chrome is far more useful for attackers as web pages can scan using the chrome store's extension ID instead.


And this bug was reported eight years ago, with no serious attempt to fix it since.


It does two things:

1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.

2. Scan the DOM, look for nodes containing "chrome-extension://" within them (for instance because they link to an internal resource)

It's pretty obvious why the second one works, and that "feels alright" - if an extension modifies the DOM, then it's going to leave traces behind that the page might be able to pick up on.

The first one is super problematic to me though, as it means that even extensions that don't interact with the page at all can be detected. It's unclear to me whether an extension can protect itself against it.
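To make that concrete, here's a rough sketch of what such a probe could look like from a page's script (TypeScript; the extension ID and resource path are made-up placeholders, and a real scanner would loop over a list of known store IDs and known web-accessible files):

    // Sketch of technique 1: probe a known store-wide extension ID by fetching
    // one of its web-accessible resources. The ID and file below are placeholders.
    const SUSPECT_ID = "abcdefghijklmnopabcdefghijklmnop";
    const SUSPECT_FILE = "icon.png";

    async function probeExtension(id: string, file: string): Promise<boolean> {
      try {
        // On Chrome the ID is fixed per extension; on Firefox it's a random
        // per-install UUID, which is why this trick doesn't transfer directly.
        await fetch(`chrome-extension://${id}/${file}`);
        return true;  // resource resolved, so the extension is installed
      } catch {
        return false; // blocked or not installed
      }
    }

    // Sketch of technique 2: look for DOM traces an extension left behind,
    // e.g. injected nodes that reference chrome-extension:// resources.
    function findExtensionTraces(): Element[] {
      return Array.from(document.querySelectorAll("*")).filter((el) =>
        el.outerHTML.includes("chrome-extension://")
      );
    }

    probeExtension(SUSPECT_ID, SUSPECT_FILE).then((installed) =>
      console.log(installed ? "extension detected" : "not detected")
    );

A real page could also sidestep fetch() entirely and probe via an Image's load/error events, which is the leak mentioned elsewhere in the thread.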


> 1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.

Big +1 to that.

The charitable interpretation is that this behavior is simply an oversight by Google, a pretty massive one at that, which they have been slow to correct.

The less-charitable interpretation is that it has served Google's interests to maintain this (mis)feature of its browser. Likely, Google or its partners use techniques similar to what LinkedIn/Microsoft use.

This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.

The more-fully-open-source Mozilla Firefox browser seems to have had no difficulty in recognizing the issues with static extension IDs and randomizing them since forever (https://harshityadav.in/posts/Linkedins-Fingerprinting), just as Firefox continues to support ManifestV2 and more effective ad-blocking, with no issues.


> This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.

uBlock Origin Lite (compatible w/ ManifestV3) works quite well for me, I do not see any ads wherever I browse.


The mv3 problem was never about "does it work now". It was about "can it keep up". Ad blocking is a cat and mouse game, and the mouse is kneecapped now. You're being slow boiled.


Well said. I'm glad that ad blockers have managed to develop effective approaches under Mv3, but it took a tremendous amount of engineering effort that was only necessary because Google was trying to impose these very large costs on them.


> chrome-extension://<extension_id>/<file>

These are web accessible resources, e.g. images and stylesheets you can reference in generated HTML. Since content scripts operate directly on the same DOM, it’s unclear how you can tell whether an <img> or <link> came from the modification of a content script or from a first-party script. You might argue it’s possible to block these in fetch(), but then you also need to consider leaks in, say, Image’s load event.

This behavior has been improved in MV3, with an option to make the extension ID dynamic to defeat detection:

> Note: In Chrome in Manifest V2, an extension's ID is fixed. When a resource is listed in web_accessible_resources, it is accessible as chrome-extension://<your-extension-id>/<path/to/resource>. In Manifest V3, Chrome can use a dynamic URL by setting use_dynamic_url to true.

This should really be the default though.

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
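If I'm reading the docs right, the relevant MV3 manifest entry would look roughly like this (shown as a TypeScript object so I can annotate it; the real thing is plain manifest.json, and the resource names are placeholders):

    // Hypothetical MV3 manifest fragment; resource names are placeholders.
    const manifestFragment = {
      manifest_version: 3,
      web_accessible_resources: [
        {
          resources: ["images/logo.png", "styles/widget.css"],
          matches: ["<all_urls>"],
          // With use_dynamic_url the resources are served from a dynamic ID
          // rather than the stable store ID, which in principle defeats the
          // fetch-probe described above.
          use_dynamic_url: true,
        },
      ],
    };
    console.log(JSON.stringify(manifestFragment, null, 2));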


For widget-style services: if you need the functionality of an extension in order to operate, you can check whether it's already installed so you don't ask the user to install it again.

This is better than forcing the extension to announce its presence on every website.
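There's also a consent-based way to do that check, if I understand the extension messaging APIs correctly: the extension opts in to messages from specific origins via externally_connectable, and only those pages can ping it. A rough sketch (the extension ID is a placeholder):

    // Hypothetical sketch of a widget site checking for its own companion
    // extension, which has opted in via "externally_connectable" in its
    // manifest. The extension ID below is a placeholder.
    declare const chrome: any; // provided by Chrome; typed loosely for this sketch

    const COMPANION_ID = "abcdefghijklmnopabcdefghijklmnop";

    function isCompanionInstalled(): Promise<boolean> {
      return new Promise((resolve) => {
        if (typeof chrome === "undefined" || !chrome.runtime?.sendMessage) {
          return resolve(false);
        }
        chrome.runtime.sendMessage(COMPANION_ID, { type: "ping" }, (reply: any) => {
          // lastError is set when no extension with that ID accepted the message.
          resolve(!chrome.runtime.lastError && reply?.type === "pong");
        });
      });
    }

    isCompanionInstalled().then((present) =>
      console.log(present ? "companion already installed" : "offer the install prompt")
    );

The companion extension would need a matching runtime.onMessageExternal listener that replies with "pong"; pages not listed in externally_connectable simply get nothing back.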


Agreed, but also, permission prompts are way overused and often meaningless to anyone at all, even fellow software engineers. “This program [program.exe] wants to do stuff, yes/no?” How should I know what’s safe to say yes to?

I think Android’s ‘permissions’ early on (maybe it’s improved?) and Microsoft’s blanket ‘this program wants to do things’ authorisation pop up have set a standard here that we shouldn’t still be following.


Generally the whole thing needs to be flipped upside down. Extensions are the easy one: there's no reason a random website should be able to list your installed extensions, zero.

For other capabilities, like the Bluetooth API, rather than letting the page query the browser, assume that the browser can do it and then have the browser inform the user that the site is attempting to use an unsupported API.


Because Google.


Who makes browsers? Ad companies.

Of course Google is going to back door their browser.


> Who makes browsers? Ad companies.

> Of course Google is going to back door their browser.

Aside from the fact that other browsers exist, this makes no sense because Google would stand to gain more by being the only entity that can surveil the user this way, vs. allowing others to collect data on the user without having to go through Google's services (and pay them).


To broaden my point, I think we’d find that many websites we use are doing this.

My point isn’t that this is acceptable or that we shouldn’t push back against it. We should.

My point is that this doesn’t sound particularly surprising or unique to LinkedIn, and that the framing of the article seems a bit misleading as a result.


I'd love it if LinkedIn got successfully sued for millions and it resulted in similar lawsuits against every other website that did this sort of thing.


> To broaden my point, I think we’d find that many websites we use are doing this.

Your point of "I think we’d find that many websites we use are doing this" doesn't make LinkedIn's behavior ok!

By your logic, if our privacy rights are invaded, which is illegal in most jurisdictions, does it become ok because many companies do illegal things?


Absolutely not. At no point am I saying this is ok.

I’m saying that the framing of the article makes this sound like LinkedIn is the Big Bad when the reality is far worse - they’re just one in a sea of entities doing this kind of thing.

If anything, the article undersells the scale of the issue.


You really need to work on your reading comprehension, dude.


> What business is it of LinkedIn's what browser extensions I have installed?

The list of extensions they scan for has been extracted from the code. It was all extensions related to spamming and scraping LinkedIn last time this was posted: Extensions to scrape your LinkedIn session and extract contact info for lead lists, extensions to generate AI message spam.

That seems like fair game for their business.


And instead LinkedIn is scraping all users computers?


This doesn’t fit the description of scraping by any normal definition. It’s a classic feature probe structure, where the features happen to be scraping extensions.

I think it’s kind of funny that HN has gone so reactionary at tech companies that the comments here have become twisted against the anti-spam measures instituted on a website that will never trigger on any of their PCs, because HN users aren’t installing LinkedIn scrape and spam extensions.


HackerNews users used to be the type that would do the scraping, so they could Hack the data into whatever format or integration they desired.

It's unfortunate to see folks here who don't support that – interoperability is at the heart of the Hacker Ethic. LinkedIn (along with any other big tech companies locking down and crippling their APIs) is wrong to even try to block it.

Is it an issue of the resources scrapers consume? No: Even ordinary users trying to get API access on a registered persistent account linked to their name are stymied in accessing their own data. LinkedIn simply doesn't want you to access your own data via API, or in any manner that isn't blessed by them. That ain't right.


LinkedIn has an API you can use at your convenience: https://learn.microsoft.com/en-us/linkedin/

Accessing other users' LinkedIn data via the API requires their OAuth consent, as it should be. But you are welcome to access your own data via the API.


Can I, an ordinary user, get access to that API and use it to fetch my messages?

Last time I checked, I could not.


> The list of extensions they scan for has been extracted from the code. It was all extensions related to spamming and scraping LinkedIn

Not according to the website which says:

The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify). Under GDPR Article 9, processing data that reveals religious beliefs, political opinions, or health conditions requires explicit consent. LinkedIn obtains none.

It also scans for every major competitor to Microsoft’s own products — Salesforce, HubSpot, Pipedrive — building company-level intelligence on which businesses use which software. Because LinkedIn knows your name, employer, and role, each scan aggregates into a corporate technology profile assembled without anyone’s knowledge.


Sounds a little like "OpenAI must protect itself against copyright infringement by any means necessary, including copyright infringement of everyone else"


If I had to guess, LinkedIn would be primarily searching for extensions that violate their terms of service (e.g. something that could be used to scrape data). They put a lot of effort into circumventing automated data collection. I could be wrong.


> I think we should push back hard on behavior like this.

Indeed, so I gather all of you have canceled your LI account over this?

I never made one in the first place because it was pretty clear to me that this company - even before the acquisition - had nothing good in mind.


So why not say that LinkedIn is murdering people? I mean, if all you care about is raising awareness with maximal clickbait...


Most sane people don't use linkedin. Only corporate cocksleeves use it and they won't push back against abuse and debasement because they get off to that shit.

