Perhaps it also conveys different type of meaning by having them in this context.
Errors[1] in community discussion threads like this are positive signals that I am human, not a bot. A couple of decades ago I would have been unhappy with myself for them; today, an accent and idiosyncratic writing are perhaps signals[3] that you are human.
[1] i.e., not proofreading them away, not introducing them deliberately.
[2] I can only see one typographical error (it->if) and many grammar errors; did I miss something?
[3] Not definitive, and not as a personal signature, since it can be easily faked/replicated, but that variation at scale is, for now, not seen in models. Today's model instances do not get unique personas, accents, or idiosyncrasies in writing that would make them unique.
Damn, maybe I can throw an agent at trying to unlock IMEI spoofing on my Unifi LTE modem. That one guy on Twitter who does all the LTE modem unlocking never replied to my tweet :(
I was recently introduced to the term "vcware", à la shareware or vaporware, to describe these products. "Don't use that, it's vcware; enshittification is coming soon."
1) wanting functionality that isn't provided and working around that
and
2) restoring such functionality in the face of countermeasures
The absence of functionality isn't a clear signal of intent, while countermeasures against said functionality are.
And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.
The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
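For what it's worth, honoring that signal is trivially easy: the (now-retired) DNT specification is just a single request header set to "1". A minimal sketch, with a hypothetical helper name:

```python
def respects_dnt(headers):
    """Return True if the client has signalled it does not want tracking.

    `headers` is a plain dict of request headers. HTTP header names are
    case-insensitive, so keys are normalised before lookup. Per the DNT
    spec, a value of "1" means the user opts out of tracking.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("dnt", "").strip() == "1"
```

A tracker that wanted to draw the line where the parent comment suggests would just gate its fingerprinting logic on this one check.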
Even if the intent is clear, I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.
Sure, my wording isn't perfect; I don't have a watertight definition ready to go. To my mind the spirit of the thing is that, for example, if a site has an HTTP endpoint that accepts arbitrary SQL queries and blindly runs them, then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an SQL query in a way that exploits string escapes in order to work around the restrictions the backend has in place, then that's technically an exploit (although an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
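To make the string-escape case concrete, here's a small illustrative sketch (in-memory sqlite3, made-up table and data): the naively interpolated query lets a quote in the input break out of the string literal, while the parameterized version treats the same input as one literal value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Naive interpolation: the embedded quote closes the string literal and
# the OR clause matches every row, leaking data the query shouldn't return.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Parameterized query: the same input is bound as a single literal value,
# so no row matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

The first query is the "cleverly crafted" exploit; the second shows the restriction actually being enforced.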
The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.
They should just add a "Security Console", with a black background and green text, and a simple shell interface for enabling/disabling flags that gate whether these requests are automatically denied or create a permissions popup. Anything dangerous starts disabled by default.
Short of crippling capabilities to save dumb users, the best we can do is make the process scary enough that Grandma won't do it without calling her grandson first.
Having stars isn't a positive metric; it's more that not having stars is a disqualifier, unless I want to use someone's brand-new toy.
My first scan of a GitHub repository is typically: check the age of the latest commit, check the star count, check the age of the project. All of these can be gamed, but this weeds out the majority of the noise when looking for a package to serve my needs. If the use case is serious, proper due diligence follows.
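That triage is easy to mechanize. A sketch of the three checks as a filter over repo metadata in the shape the GitHub REST API returns (`stargazers_count`, `pushed_at`, `created_at`); the thresholds are my own illustrative assumptions, not from the comment above:

```python
from datetime import datetime, timedelta, timezone

# Assumed cutoffs -- tune to taste.
MIN_STARS = 50
MAX_COMMIT_AGE_DAYS = 365
MIN_PROJECT_AGE_DAYS = 180

def passes_first_scan(repo, now=None):
    """Apply the three quick checks: recent commit, star count, project age."""
    now = now or datetime.now(timezone.utc)
    # GitHub timestamps are ISO 8601 with a trailing "Z".
    pushed = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
    created = datetime.fromisoformat(repo["created_at"].replace("Z", "+00:00"))
    return (
        now - pushed <= timedelta(days=MAX_COMMIT_AGE_DAYS)
        and repo["stargazers_count"] >= MIN_STARS
        and now - created >= timedelta(days=MIN_PROJECT_AGE_DAYS)
    )
```

Feed it the JSON from `GET /repos/{owner}/{repo}` and it reproduces the eyeball test: active, non-zero stars, not brand new.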
Guess: there is likely some repetition in articles in a series, but there is a ton in the discussion here, and that is what HN wants to avoid. Discussion on a link that bundles together the parts of a series helps avoid excessive rehashing in the comment sections.