I was expecting an ad for their product somewhere towards the end, but it wasn't there!
I do wonder though: why would this company report this vulnerability to Mozilla if their product does fingerprinting?
Isn't it better for the business (albeit unethical) to keep the vulnerability private, to differentiate from the competitors? For example, I don't see many threat actors burning their zero days through responsible disclosure!
I don't understand what you mean. What separates this from other fingerprinting techniques your company monetizes?
No software wants to be fingerprinted. If it did, it would offer an API with a stable identifier. All fingerprinting is exploiting unintended behavior of the target software or hardware.
It makes sense to me; they're likely not trying to actually fingerprint Tor users. Those users will likely ignore ads, have JS disabled, etc. The real audience is people on the web using normal tooling.
They can just flag all Tor users as high risk. They don't strictly need to fingerprint them when it's generally fine for websites to just block signups for Tor users or require further identification via phone number or something.
You want fingerprinting to identify low risk users to skip the inconvenient security checks.
Most users seem to not care about ad tech/tracking as much as technical users. Even further, most seem to want to enable more tracking to [protect the children or whatever the reason is] pretty regularly (at least in opinion polls about various legislation). Tor users are not at all like that, and could be harmed in a very different way... so I think it's fair to frame them differently, even if I'd personally say people should want to treat both as similar offenses, because neither should be seen as okay in my eyes.
> Most users seem to not care about ad tech/tracking
I don't think this is true.
Most people don't understand that they're being tracked. The ones that do generally don't understand to what extent.
You tend to get one of two responses: surprise or apathy. When people say "what are you going to do?" they don't mean "I don't care", they mean "I feel powerless to do anything about it, so I'll convince myself to not care or think about it". Honestly, the interpretation is fairly similar for when people say "but my data isn't useful" or "so what, they sell me ads (I use an ad blocker)". Those responses are mental defenses to reduce cognitive overload.
If you don't buy my belief then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast." The number of people that are going to be okay with that will plummet. As soon as you change it from "Meta" to "some guy named Mark". You'll still get nervous jokes of "you're wasting money, I'm boring", but do you think they wouldn't get upset if you actually hired a PI to do that?
The problem is people don't actually understand what's being recorded and what can be done with that information. If they did, they'd be outraged, because we're well beyond what 1984 proposed. In 1984 the government wasn't always watching; the premise was more about a country-wide Panopticon, where the government could be watching at any time. We're well past that. Not only can the government and corporations do that, but they can look up historical records, and some data is always being recorded.
So the reason I don't buy the argument is that 1984 is so well known. If people didn't care, no one would know about that book. The problem is people still think we're headed towards 1984 and don't realize we're 20 years into that world.
> If you don't buy my belief then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person.
This is exactly what I was saying - if you look at the polls, people actually tend to support things like the UK's Online Safety Act. Explaining it more does not usually result in a change of that. The difference with a PI is you're asking about them individually instead of everyone - of course they trust themselves, they just want everyone surveilled for that same feeling of confidence.
> If you don't buy my belief then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast."
Yes and no, because people still will think that when it's done at scale it's different from some stalker following YOU explicitly, and not just following everybody. Also, the mental model is "they just want to sell me something, but I can just ignore and don't buy if I'm not really interested". And especially going down this second rabbit-hole opens a whole world about consumerism that not many people are comfortable with.
At the same time there are people that are totally against consumerism that should be more informed and care more about tracking and privacy; with those people it's probably easier to have that conversation.
Some good counterpoints. But you're suggesting more people would be okay with the 'PI following them' hypothetical than GP suggests—simply with the knowledge that others are subject to the same degree of surveillance?
I'm not so sure that counterpoint in particular holds. I think to say the "number of people that are going to be okay with that will [still] plummet" is an understatement. I'd go so far as to say no one, at least no rational person, would be okay with a "record [of] who you talk to, when, how long, where you go, what you do, what you say, when you sleep", etc., just because of the scale.
Let me focus it from a slightly different side: my belief - from observing the world around me - is that a physical privacy violation is perceived differently from a software one because of the side effects: you gaze out of your window and see the same car with some guy in it parked there, you see the same car following you when you are going to the mall, etc. There is a similar side effect with online tracking, which is the typical "ad in my Instagram feed for something I searched for last week in Google", and there are people that are "scared" by this. But since it's just about buying things, well hey, I might actually tap on that Instagram ad!
I see some success by telling people "what if it was our government doing the same thing to us, even by extorting private companies? What if that same government, or the next one, just hates you for whatever reason?"
I take your point about the 'abstract' nature of online privacy. But another angle might be suggesting to those that are ambivalent on the issue that the pervasive (and for all intents and purposes, permanent) recordkeeping nature of 'software surveillance' should be much scarier than some guy sitting outside. I mean, at the very least, even with some guy sitting outside, you'd still have privacy inside.
But again, I hear you. Most people unfortunately have come to view the issue as being just about targeted advertising (which some go so far as to espouse as a good thing).
> As soon as you change it from "Meta" to "some guy named Mark".
There is a huge difference between those.
If someone hires a PI to follow me, they are spending like $10000/week on that. Which means that their expected value is more than that, or that PI will never pay for itself. Where will this value come from? Likely from me, after all it's me they are tracking. So I am really worried, as I am about to lose a huge amount of money (or something else valuable).
On the other hand, if a store installs a whole bunch of cameras so I am tracked anytime I am in there, then it probably costs them only a few cents to track me. So I really don't worry much about losing anything valuable.
> Which means that their expected value is more than that
But this definitely doesn't follow. Your assumption about "value" is misplaced here. You're strictly thinking monetary value. But if we want to think about monetary value, well Google currently has a market cap of 4.1T, Meta is 1.7T, and even companies like OpenAI are aiming for a 1T IPO. Companies which depend on exactly that data. If you ask me, that data is pretty fucking valuable. Trillions of dollars worth, to be precise...
> ... if a store installs a whole bunch of cameras ... then it probably costs them only a few cents to track me.
Which is a great counterpoint to the argument you were making.
The camera doesn't just track you; it tracks everybody else in the store too. The cost savings come through scale. So consider the situation where "Mark" is hired to follow not only you but a lot of other people; more specifically, people who interact with one another. That data can be collected in parallel, dramatically cheapening the cost per person being tailed.
--------
But your point is off-base regardless. The point of my comment was about the data being collected. A physical person being the data collector doesn't scale very well and if we're being honest "Mark" doesn't collect nearly as much as the digital tracking systems.
The point is awareness of being tracked: the average person isn't aware that they're being tracked, nor of what is being tracked.
Let's put it this way. If I hire some guy named "Mark" to follow you and you never find out he was following you, then you'll never be upset. But suppose I later tell you. Do you then become upset?
Most people will say "yes". So the issue wasn't "how much money" it cost. Nor was it actually "I was aware I was being followed". The issue is that you were /being followed/.
Not knowing you were being followed doesn't suddenly make it okay. But realistically that's the situation we're in. People do not know they are being followed. People that do know they're being followed don't know how much is being recorded. People that do know feel powerless to take steps against it. People that feel powerless just try to move on with their lives and not think about it because it is better to think about things you can change instead of getting depressed.
The issue is someone paying attention to me specifically. Imagine instead of hiring a guy to follow you, someone hires a guy to go through your trash. Or someone hires a guy to talk to all your friends. Or even someone hires a guy to go to the grocery store next to your house and buy every box of your favorite ice cream.
Most people would find it creepy. So the issue is not "you are being followed", the issue is "a stranger is paying attention to me".
This is a lot of text to say that people don't recognize digital tracking as a threat, even when it is explained to them. Which is basically exactly what parent post you replied to said.
My read of the comment is that it's almost never actually fully explained to them, and that they would almost certainly care if they actually understood what was happening. That's my experience. Once you explain that it's more information than a private investigator tailing you all day and stealing your phone could gather, people usually wise up to the fact that they actually don't like it.
In my experience those users express a mix of surprise and irritation when they get ads about something they did minutes or hours before, but they accept that's the way things are.
I joke that I'm a no-app person, because I install very few apps and I use anti-tracking tech on my phone that's even hard to explain or recommend to non-technical friends. I use Firefox with uMatrix and uBlock Origin, and Blockada. uMatrix is effective but breaks so many sites unless one invests time in playing with the matrix. Blockada breaks many important apps (banking) unless one understands whitelisting.
> Most users seem to not care about ad tech/tracking as much as technical users.
Part of the problem is the misconception that the data being collected is only used to determine which ads to show them. Companies love to frame it that way because ultimately people don't actually care that much about which ads they get shown. The more people get educated on the real-world/offline uses of the data they're handing over, the more they'll start to care about the tracking being done.
This is definitely a point that should be emphasized more in this discussion. Even still, where it ultimately falls flat (currently) is the lack of hard proof to show people that it's truly happening.
Also, the degree to which some are more comfortable with the personal privacy/'feeling of personal safety' tradeoff notwithstanding, the examples that do get media traction are predictably extremes that the average person doesn't feel applies to them.
Instead of trying to convince by assertion, maybe you could try offering an actual objection to the argument raised up-thread?
On what basis do you claim that software developers, who did not establish a means for third parties to get a stable identifier, nevertheless intended that fingerprinting techniques should work?
TBF the idea that any and all fingerprinting falls under the umbrella of exploiting a vulnerability was also presented as an assertion. At least personally I think it's a rather absurd notion.
Certainly you can exploit what I would consider a vulnerability to obtain information useful for fingerprinting. But you can also assemble readily available information, and I don't think that doing so is an exploit, though in most cases it probably qualifies as an unfortunate oversight on the part of the software developer.
You haven’t made an actual argument. You’ve made a repeated assertion that you feel so religiously about that you simultaneously can’t justify it and get very abrasive when someone asks you to back it up.
There's a difference between 1) wanting functionality that isn't provided and working around that
and
2) restoring such functionality in the face of countermeasures
The absence of functionality isn't a clear signal of intent, while countermeasures against said functionality are.
And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.
The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
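For what it's worth, honoring that signal server-side was about a one-line check. A minimal sketch in TypeScript, where the analytics call is a made-up stand-in for whatever tracking a site does:

    import { createServer, IncomingMessage } from "node:http";

    // Hypothetical analytics sink, standing in for a site's tracking code.
    function recordVisit(req: IncomingMessage): void {
      console.log("tracked:", req.socket.remoteAddress, req.url);
    }

    createServer((req, res) => {
      // The user's opt-out signal was literally the header "DNT: 1".
      if (req.headers["dnt"] !== "1") {
        recordVisit(req);
      }
      res.end("ok");
    }).listen(8080);

The point being: the cost of compliance was negligible. It was ignored because nothing compelled anyone to check it.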
Even if the intent is clear I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.
Sure, my wording isn't perfect. I don't have a watertight definition ready to go. To my mind the spirit of the thing is that (for example) if a site has an http endpoint that accepts arbitrary sql queries and blindly runs them then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an sql query in a way that exploits string escapes in order to work around the restrictions that the backend has in place then that's technically an exploit (although it's an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
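To make that concrete, a minimal sketch of the distinction (the endpoint, table, and query are invented for illustration):

    // Not an exploit by this definition: the endpoint knowingly runs
    // whatever SQL you send, so a custom query is just using the service.
    await fetch("https://victim.example/run-sql", {
      method: "POST",
      body: "SELECT email FROM users",
    });

    // An exploit: the backend builds SQL by string concatenation, and a
    // crafted value breaks out of the intended quoting to rewrite the query.
    const name = "x' OR '1'='1";
    // Server side, `SELECT * FROM users WHERE name = '${name}'` becomes:
    //   SELECT * FROM users WHERE name = 'x' OR '1'='1'
    // returning every row instead of the one the developer intended.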
The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.
No, it is not. I'm talking in the context of OP, which refers to a fingerprinting "vulnerability", specifically using the word "vulnerability" to describe it.
There's a line between side channels that ride on intended behavior and a flat-out bug like the above, though it can often be muddied by perspective.
An example that comes to mind that I've seen is an anonymous app that allows for blocking users; you can programmatically block users, query all posts, and diff the sets to identify stable identities. However, the ability to block users is desired by the app developers; they just may not have intended this behavior, but there's no immediate solution to this. This is different than 'user_id' simply being returned in the API for no reason, which is a vulnerability. Then there's maybe a case of the user_id being returned in the API for some reason that MIGHT be important too, but that could be implemented another way more sensibly; this leans more towards vulnerability.
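To illustrate that block-and-diff attack, a sketch where every endpoint is invented for illustration:

    // Hypothetical client for the anonymous app described above.
    async function fetchPostIds(): Promise<Set<string>> {
      const res = await fetch("https://anon.example/api/posts");
      const posts: { id: string }[] = await res.json();
      return new Set(posts.map((p) => p.id));
    }

    async function postsByUser(userId: string): Promise<string[]> {
      const before = await fetchPostIds();
      await fetch(`https://anon.example/api/block/${userId}`, { method: "POST" });
      const after = await fetchPostIds();
      // Every post that vanished after the block belongs to one identity,
      // even though the app never exposes an author field.
      return [...before].filter((id) => !after.has(id));
    }

Blocking is a feature the developers want, so unlike a stray 'user_id' in a response, there's no one-line fix.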
Ultimately most fingerprinting technologies use features that are intended behavior; Canvas/font rendering is useful for some web features (and the web target means you have to support a LOT of use cases), IP address/cookies/useragent obviously are useful, etc (though there's some case to be made about Google's pushing for these features as an advertising company!).
> Ultimately most fingerprinting technologies use features that are intended behavior
Strong disagree.
> IP address/cookies/useragent obviously are useful
Cookies are an intended tracking behavior. IP Address, as a routing address, is debatable.
> Canvas/font rendering is useful for some web features
These two are actually wonderful examples of taking web features and using them as a _side channel_ in an unintended way to derive information that can be used to track people. A better argument would be things like Language and Timezone which you could argue "The browser clearly makes these available and intends to provide this information without restriction." Using side channels to determine what fonts a user has installed... well there's an API for doing just that[0] and we (Firefox) haven't implemented it for a reason.
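For illustration, the measurement side channel needs nothing but rendering metrics; a rough sketch:

    // Classic side-channel font detection: render text in a candidate font
    // with a monospace fallback; if the width differs from pure monospace,
    // the candidate font is installed.
    function isFontInstalled(font: string): boolean {
      const ctx = document.createElement("canvas").getContext("2d")!;
      const sample = "mmmmmmmmmmlli";
      ctx.font = "72px monospace";
      const baseline = ctx.measureText(sample).width;
      ctx.font = `72px "${font}", monospace`;
      return ctx.measureText(sample).width !== baseline;
    }

    // Probed across a few hundred common fonts, the install pattern alone
    // yields a healthy chunk of fingerprinting entropy.
    console.log(isFontInstalled("Calibri"));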
n.b. I am Firefox's tech lead on anti-fingerprinting so I'm kind of biased =)
The thing is, technology either enables something or it doesn't. The exploration space might be huge, but once an exploit is found, the exploitation code / strategy / plan can trivially proceed and be shared worldwide. So you have to deal with this when you design and patch systems.

Example: preserving paths in URLs. Safari ITP aggressively removes “utm_” and other well-known query-string parameters, even in links clicked from email. Well, it is trivial to embed the same data in a path instead, so that first-party websites can track attribution, e.g. for campaign performance or email verification links. In theory, Apple and Mozilla could play a cat-and-mouse game with links across all their users and actually remove high-entropy path segments, or confuse websites so much that they give up on all attribution. Browser makers, email client makers, or messenger makers could argue that users don’t want their link clicks silently attributed without their permission. They could then say that if users really wanted, they could manually enter a code (assisted by the OS or browser) into a website, or simply grant interactive permission to be tracked after clicking a link; otherwise the website receives dummy results and breaks. Where is the line, after all?
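A sketch of the workaround described, with made-up URLs:

    // What ITP strips: https://shop.example/?utm_source=mail&utm_campaign=spring
    // The trivial workaround: carry the same data as path segments.
    const link = "https://shop.example/c/mail/spring/landing";

    // The first-party server maps the segments back to attribution fields.
    const [, , utmSource, utmCampaign] = new URL(link).pathname.split("/");
    console.log(utmSource, utmCampaign); // "mail" "spring"

    // A site that wants to dodge pattern-matching just swaps the readable
    // segments for an opaque token, which is exactly the "high-entropy
    // path segment" problem mentioned above.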
A vulnerability is distinct from unintended behavior.
Unintended identification is less than ideal but frankly is just the nature of doing business and any number of niceties are lost by aggressively avoiding fingerprinting.
In software intentionally optimized to avoid any fingerprinting, however, it is a vulnerability.
The distinction being that fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy but in something like Tor Browser that fingerprinting can be life or death for a whistleblower, etc. It's the distinction between an annoyance and an execution.
> fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy
In what way is collecting a record of a person's browsing history a "minor loss" of privacy? For many people, tracking everywhere they go online would easily expose the most sensitive personal information they have.
Logically, they are doing correlation via publicly available information - maybe better than others can - and a stable identifier would hurt their business, since the competition could use it as well.
I think HN needs a refresher on responsible disclosure, and on the fact that even vulnerability-scanner vendors engage in this practice, for the obvious reason that it benefits both parties: one party gains exposure, and the other gets their bug squashed without the bug wreaking havoc while they try to squash it.
The real reason is that fingerprint.com's selling point is tracking over longer periods (months, their website claims), and this doesn't help them with that.
I’m going to go out on a limb and guess that you define “vulnerability” as something like “thing that will be fixed soon”. After all, Joe Random not liking a behavior doesn’t make it a vuln, there needs to be a litmus test. Am I close?
When I go to https://noscriptfingerprint.com/ all I see is a blank page. My browser is pretty locked down in other ways which probably helps, but I'm still taking that as a good sign.
Should not, true, but in the case of many websites the reality is that allowing JS means you lose your privacy. Just like one can no longer allow WebGL and canvas by default.
Thanks to all the web devs who helped create this web dystopia.
Yes, my point is that this does not mean it is an "opt in checkbox". I appreciate that it allows people to be nasty, it just isn't a "please be nasty" toggle.
The person I responded to wrote the "should have" construction without giving any proof of why it is so. Maybe in a world of pink ponies everyone should get free bread for breakfast, but some things might be unintuitive in ours.
You can't go out in public naked and just ask everyone to look away. If you want someone you don't trust to run unvetted general-purpose code on your machine, you have to accept that you are trading away some privacy. You can sandbox them (wear clothes) but that doesn't give you strict privacy.
I do wear clothes (all JS code runs in a sandbox).
This is a bit like saying "you should lock the door to your house" and therefore refusing to prosecute someone who steals from a house with a broken window frame. I did lock my door, and it's still a crime regardless!
100% we should ensure that browsers restrict fingerprinting as much as possible. I certainly configure my Firefox with many inconveniences to reduce my fingerprint. I am just saying this is an engineering compromise and the tradeoff will be different for different people. Wishing we could have our cake and eat it doesn't help; you do have to choose between privacy and functionality.
It means they are suspect. I think it's right to be wary of motives when they are involved in the very thing they aim to bring awareness to. Questions arise in my mind as to why they would do something like this in the first place.
It's been my experience that the general public doesn't follow patterns, and instead focuses on which switch is toggled at any given moment for a company's ethical practices. This is the main reason we are constantly gamed by orgs that have a big-picture view of crowd psychology.
I don't trust them more because of this and maybe they've disclosed it for the wrong reasons, like not allowing a competitor to use it when they don't, but at the end of the day they did disclose a serious issue, and that's good for users.
I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
> I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
I hear you. I guess I just want to promote more vigilance. Looking at patterns and motives helps us stay balanced about these things IMHO.
What are you even saying? It's like getting upset at somebody who criticizes a criminal because they once helped some grandma across the street. I'm not upset at the criminal because they helped a grandma across the street obviously that's not the fucking point.
I'm not upset, I just don't think we should criticize someone for doing something good. Maybe they're a terrible org, maybe they deserve criticism most of the time, but not in this instance.
It's not like you can't point out that they did a good deed, but that they're still in the shitty business of fingerprinting users.
Also, if people only get the stick no matter what they do, then eventually some will embrace the dark side and at least make money out of it. And that's not good for you.
The inverse is also true: letting them whitewash their image by pretending they care about your privacy and seek to protect you will be good for their public relations, but only if we let them. I refuse to be this gullible and run to their defense for no apparent reason.
They can pretend all they want. I know what their business is; my opinion on their practices hasn't changed.
And yet, they did a good thing. I will criticize everything else, but not what they did right. It doesn't mean I'll go out of my way to praise them either... if it wasn't your comment, I wouldn't have said anything at all.
And like a broken clock that is right twice a day, sometimes a corporation also does the right thing, even if for the wrong reasons.
Nothing wrong with pointing out hypocrisy and bullshit, but criticizing something they did right? That's not how I operate. You are, of course, free to do things differently.
It's more like criticising a criminal when they are helping some grandma across the street, thereby treating them more harshly than the criminals that don't do that.
If you take their claim that they don’t use vulnerabilities in their products as true, then I don’t see a contradiction. If it isn’t true, then obviously there is a contradiction.
But considering every method that enables fingerprinting to be a vulnerability is your own opinion. There are definitely measurable signals that are based on a user’s behavior, rather than on data exposed by the browser itself.
the business answer is boring: you don't sit on a browser zero-day that your own product depends on. if it leaks from somewhere else, the blog post writes itself and the trust you've built with every privacy researcher and enterprise buyer evaporates. honestly the hiring page line alone, 'we found and reported X to Mozilla', is probably worth more than the fingerprinting edge they'd keep.
>> why would this company report this vulnerability to Mozilla if their product does fingerprinting?
Maybe because it's not as serious as they, and their title, made it out to be? Did you read it fully?
The identifier described is not stable beyond the process lifetime, and not stable across machines, profiles, or installations. The article itself says it resets on a full browser restart...

So this is not a magic forever-ID and not some hardware-tied supercookie. Now what should we do with that title, and with its authors?
While architecture astronauts are clutching pearls, I've built multiple profitable products with Laravel without caring the slightest about the internals, both before and after AI.
PHP was always all about just building stuff while ignoring code quality. Laravel is a natural extension of that approach. Let us live.
No, Symfony is singlehandedly keeping PHP relevant, to the point that every other framework depends on its packages, Laravel included.
Most people like you who don't care about code quality and want to "just build" another B2B SaaS unmaintainable pile of spaghetti are now purely relying on AI and not writing any code themselves anymore, so why use PHP at all instead of JS like all the other vibe coders?
> so why use PHP at all instead of JS like all the other vibe coders?
Because there is nothing remotely close to Laravel for JS. I don't want to think about auth, job queues, mailing, cache layers, auditing etc. I want an opinionated default from my framework that is thoroughly documented and part of the AI training corpus. Laravel gives that to me.
> Agile just finally embraced that specs are incomplete and can even be wrong because the writer of the spec does not yet really know or understand what they want. So they need working software to show the spec in action and then we can iterate on the results.
I agree, but what you describe is agile, not Agile (capital A).
Agile (capital A) is Scrum (capital S) where you have Backlog Grooming (patent pending) where the team clears any ambiguity to define a spec (ticket).
Deviating from said spec is seen as Scope Creep (gasp) and might lead to complaints during Sprint Review (trademark).
So yes, agile prefers working software over detailed spec. But typical manifestations of Agile (capital A) are exactly the opposite.
The US public discourse is so dehumanized today that anyone who is not "with them" is literally not a human anymore. Even within the country itself "the leftards" are considered an obstacle which can be removed if only enough force is applied.
Sending armed agents at protesters is seen as being the same thing as sending pest control to clear out beaver dams on the creek. Nobody cares what the beavers think, they are not human, they do not have feelings. They are simply a menace to be dealt with.
The supporters of imperialism are all about nonviolent protest and democratic principles if it seems feasible that these could bring about US foreign policy goals: https://news.ycombinator.com/item?id=47111067
Or, if an anonymous and uncorroborated source claims tens of thousands of said protestors were allegedly massacred.
If it doesn't, and the strategy now involves blowing up desalinization plants ( https://apnews.com/article/trump-iran-threat-desalination-pl... ) and invoking a humanitarian crisis on the level of a nuclear catastrophe, well... then they're a bit less concerned about human rights.
You will be even more horrified to learn that installing the entire list of deps of a project that would take a few seconds on my home laptop may take up to 20 minutes at some clients because many FS calls do a network round-trip.
We are not talking about exceptions either. This is pretty standard stuff when you work outside of the IT-literate companies.
At one client, they provided me with a part time tester, they neglected to give him the permissions to install git. Took 3 weeks to fix.
The same client makes us develop on Windows machines but deploy to Linux pods. We can't test directly on Linux, nor connect to the pods, only deploy to them. In fact, we don't even have the specs of the pods; I had to create a whole API endpoint in the project just to be able to fetch them.
Other things I got to enjoy:
- CTO storing the passwords of all the servers in a LibreOffice file
- lead testing in prod, as root, by copying files through ftp. No version control.
- sysadmin who had an interesting way of managing his servers: he remote-controlled one particular Windows machine using TeamViewer, which was the only one that could connect to them through ssh.
The list is quite long.
This makes you see the entire world with a whole new perspective.
I always thought that all devs should spend a year doing tech support for a variety of companies so that they get a reality check on what most humans actually have to deal with when working on a computer.
It's also literally factually incorrect. Pretty much the entire field of mechanistic interpretability would obviously point out that models have an internal definition of what a bug is.
> Thus, we concluded that 1M/1013764 represents a broad variety of errors in code.
(Also the section after "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions")
This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".
This is more of an article describing their methodology than a full paper. But yes, there's plenty of peer reviewed papers on this topic, scaling sparse autoencoders to produce interpretable features for large models.
There's a ton of peer reviewed papers on SAEs in the past 2 years; some of them are presented at conferences.
(Not GP) There was a well-recognized reproducibility problem in the ML field before LLM mania, and that's considering published papers with proper peer review. The current state of affairs is in some ways even less rigorous than that, and then some people in the field feel free to overextend their conclusions into other fields like neuroscience.
We're in the "mad science" regime because the current speed of progress means adding rigor would sacrifice velocity. Preprints are the lifeblood of the field because preprints can be put out there earlier and start contributing earlier.
Anthropic, much as you hate them, has some of the best mechanistic interpretability researchers and AI wranglers across the entire industry. When they find things, they find things. Your "not scientifically rigorous" is just a flimsy excuse to dismiss the findings that make you deeply uncomfortable.
Current LLMs do not think. Just because we anthropomorphize the repetitive actions a model loops through does not mean it is truly thinking or reasoning.

On the flip side, the idea that this is true has been a very successful indirect marketing campaign.
My point was not that I’m 100% convinced that LLMs can think or are intelligent.
My point was that we don’t have a great definition for (human) intelligence either. The articles you posted also don’t seem to be too confident in what human intelligence actually entails.
> There is controversy over how to define intelligence. Scholars describe its constituent abilities in various ways, and differ in the degree to which they conceive of intelligence as quantifiable.
Given that an LLM isn’t even human but essentially an alien entity, who can confidently say they are intelligent or not?
I’m very sceptical of those who are very convinced one way or the other.
Are LLMs intelligent in the way that humans are? I’m quite sure they aren’t.
Are LLMs just stochastic parrots? I don’t find that framing convincing anymore either.
Either way it’s not clear; just look at how this topic has been discussed daily in most frontpage threads for the last couple of years.