
This is not a tool you can use and then assume the information is anonymized.

The way OpenAI describes it is ...

... concerning.

"Our goal is for models to learn about the world, not about private individuals. Privacy Filter helps make that possible." This means they're using sensitive PII to train models.

A smart AI will re-identify all the information -- including that in the 96% -- in a snap. That's already a solved problem.


I had a physics professor I worked with who had a Nobel Prize.

He didn't win it. It was won by a team of students / collaborators / mentees, who felt he deserved it. I can't disagree with them. He's among the nicest people in the world.

I don't think anyone meant it in the sense of "You're a Nobel Prize Winner," so much as "We couldn't have done this without your mentorship, and you deserve to hold onto this." He certainly doesn't consider himself to be a Nobel Prize winner.


This was painful to read. It becomes better and simpler with a basic signals & systems background:

- His breaking up images into grids was a poor man's convolution. Render each letter. Render the image. Dot product (see the sketch below).

- His "contrast" setting didn't really work. It was meant to emulate a sharpen filter. Convolve with a kernel appropriate for letter size. He operated over the wrong dimensions (intensity, rather than X-Y)

- Dithering should be done with something like Floyd-Steinberg: you spill quantization errors over to adjacent pixels (see the sketch at the end of this comment).

Most of these problems have solutions, and in some cases, optimal ones. They were reinvented, perhaps cleverly, but not as well as those standard solutions.
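
To make the first bullet concrete, here's a minimal sketch of glyph matching as a dot product, assuming Pillow and numpy; the character set, cell size, and function names are my own, not the original author's. Centering (subtracting the mean) keeps the densest glyph from always winning:

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    CHARS = " .:-=+*#%@"    # hypothetical character ramp
    CELL_W, CELL_H = 8, 16  # pixels per character cell

    def render_glyphs():
        # Rasterize each candidate character once, centered (zero-mean)
        # so dense glyphs don't dominate the dot product.
        glyphs = []
        for ch in CHARS:
            im = Image.new("L", (CELL_W, CELL_H), 0)
            ImageDraw.Draw(im).text((0, 0), ch, fill=255,
                                    font=ImageFont.load_default())
            g = np.asarray(im, dtype=float)
            glyphs.append(g - g.mean())
        return glyphs

    def ascii_art(img):
        # img: 2D grayscale numpy array. For each cell-sized tile,
        # pick the glyph with the largest centered dot product.
        glyphs = render_glyphs()
        rows = []
        for y in range(0, img.shape[0] - CELL_H + 1, CELL_H):
            row = ""
            for x in range(0, img.shape[1] - CELL_W + 1, CELL_W):
                tile = img[y:y+CELL_H, x:x+CELL_W].astype(float)
                t = tile - tile.mean()
                scores = [(t * g).sum() for g in glyphs]
                row += CHARS[int(np.argmax(scores))]
            rows.append(row)
        return "\n".join(rows)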

Bonus:

- Handle the above as a global optimization problem. Possible with 2026-era CPUs (and even more so, GPUs).

- Unicode :)
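
And for the dithering point, a minimal Floyd-Steinberg sketch (grayscale numpy array, values 0..255; my own naming, not the original author's):

    import numpy as np

    def floyd_steinberg(img):
        # Quantize each pixel to black/white, then spill the
        # quantization error onto not-yet-visited neighbors
        # with the classic 7/16, 3/16, 5/16, 1/16 weights.
        out = img.astype(float).copy()
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = 255.0 if old >= 128 else 0.0
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out.astype(np.uint8)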


Perhaps you're right but I won't believe you until you whip up a live-rendering proof of concept. It's a bit rude to dismiss somebody's cool work as "painful", with some hypothetical "improvements" that probably wouldn't even work.


Jeez nobody’s going to respect you more for writing like a jackass


It's probably much more exciting to implement stuff like this when you can experiment with your own ideas and figure out the solution from scratch than it is for someone who sees it as a trivial exercise in signal processing that they can't be bothered to implement.


Years ago, I picked my cell carrier because of this. When I ran out of high-speed data, it switched to O(200kbps), which is fine for email, basic web search, etc.

It was actually a bit ironic that, at the time, you could burn through the whole high-speed quota in seconds or minutes if you went to the wrong web page. Most carriers would cut you off or bill you an arm and a leg after that.


5G data roaming is hilarious for this. Verizon offered 500MB of high-speed data roaming per day in Canada before throttling down to ~128kbps. I ran one single speed test in the middle of Ottawa on Rogers 5G; it didn't even finish (it errored out at the end), and I got the text message going "You've run out of high speed data today. Do you want to buy another 500MB for $5?"

At least it's 2GB/day now. And my 5G roaming is off...


Roaming in some countries is like $10,000/gigabyte...

At that price, I dunno why they offer it at all. Are they just hoping to sue someone to get their whole house because they once watched some netflix overseas and forgot to use wifi?


Those were deals made back in the WAP days, when spending $1 a few times a day to check your business email made some semblance of sense, and they've been neglected ever since.


Companies should be required by law to nominate an explicit "credit limit" for every account, and customers should be allowed to reduce it to whatever they want. Morally there's no difference between a credit card with a $5,000 credit limit, and a cell phone plan where you can rack up $5,000 in charges if you do the wrong thing.


I think you're wrong, and you're underestimating the transformational impact of AdWords.

Free internet existed before paid internet, true, but mostly because people did things for other motives (like fun). AltaVista was a tech demo for DEC. Good information was found on personal web pages, most often on .edu sites.

Banner ads existed, but they were confined to the sketchy corners of the Internet. Think today's spam selling Viagra. Nobody credible wanted to be associated with them.

What Google figured out was:

1) Design. Discreet text ads didn't make them look sketchy. This discovery came about by accident, but that's a longer story.

2) Targeting. Search terms let them know what ads to show.

I can't overstate the impact of #2. Profits went up many-fold over prior ad models. This was Google's great -- and ultra-secret -- discovery. For many years, they were making $$$, while cultivating a public image of (probably) bleeding $$$ or (at best) making $. People were doing math on how much revenue Google was getting based on traditional web advertising models, while Google knew precisely what you were shopping for.

By the time people found out how much money Google's ad model was making, they had market lock-in.


John describes exactly what I'd like someone to build:

"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."

As a thought experiment:

* Pick a place where cost-of-living is $200/month

* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.

* Drop a load of computers with little to no software, and little to no internet

* Try reinventing the computing universe from scratch.

Patience is the key. It'd take decades.


Love this idea, and I'm wondering where that low-cost-of-living place would be. But genuinely asking:

What problem are we trying to solve that is not possible to solve right now? Do we start from hardware, at the CPU?

I remember an ex-Intel engineer once said: you could learn about all the decisions behind modern ISA and CPU uArch design, along with GPUs and how it all works together, but by the time you've done all that and could implement a truly better version from a clean sheet, you're already close to retiring.

And that's assuming you have the professional opportunity to learn all of this, implement it, fail, make mistakes, relearn, etc.


> Love this idea and wondering where that low cost of living place would be

Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects).

> What problem are we trying to solve that is not possible right now?

The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.

Continuing the thought experiment -- to be much more abstract now -- if we had placed an independent colony of humans on Venus 150 years ago, it's likely computing would have turned out very differently. If the transistor hadn't been invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.

Sharing technology back-and-forth a century later would be amazing.

Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more or less, which in turn allowed kinds of remote-access work unlike anything we have with the cloud.

And there were tech trees built around Lisp-based computers / operating systems, Smalltalk, and systems where literally everything was modifiable.

More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)

You can't predict the future, and having two independent futures seems like a great way to have progress.

Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.

> Do we start from hardware at the CPU ?

For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.

Zero software, except what's needed to bootstrap.


Software is bloated and unreliable. It's clearly a "local minimum".


If it's so bloated then just start cutting

Whatever expertise you need to prune a working system is less than the expertise you'll need to create a whole new one and then also prune it as it grows old


Absolutely not.

Software is bloated in part because it's built in layers. People wrap things over, and over, and over. Stripping down layers is nigh-impossible later. Starting from scratch is easy.

Starting from scratch fails in practice because you don't get feature parity in time short enough for VC (or grant) funding cycles.

If we built a tech tree around 200MHz / 32MB machines then, except for things like ML and video, we'd have a tech tree which did everything existing machines do, only 10x more quickly and in 0.1% of the memory. Machines back then were fine for word processing, spreadsheets, all the web apps I use on a daily basis (not as web apps), etc.

Need would drive people to rebuild those, but with a few fewer layers.


Perverse incentives are everywhere...


I've been writing an OS for over 10 years to try.

It's seriously not something you want to do if you want to get anywhere.

Then again, it's a lot of fun, maybe imagining where it could be some day if you had an army of slave programmers (because it still won't make money lol)


Continuing the thought experiment: There's an interesting sort-of contradiction in this desire: I, being dissatisfied with some aspect of the existing software solutions on the market, want to create an isolated monastic order of software engineers to ignore all existing solutions and build something that solves my problems; presumably, without any contact from me.

It's a contradiction very much at the core of the idea: should I expect the operating system my monastic order produces to be able to play Overwatch or open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.

Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.


>maybe Microsoft should just stop using React for the Start menu, that might be a good start.

Agreed, but again it's worth pointing out the obvious: I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU usage, and little memory usage, at least 99.99% of the time. Right now it isn't obvious to me that this is possible without significant investment.


An isolated monastic order in the hills around the Himalayas should ideally be completely isolated from Overwatch and .docx files.


Not saying these are perfect, but consider reviewing the work of groups like the Internet Society or even IEEE sections: boots on the ground to some extent, such as providing gear and training. Other efforts like One Laptop Per Child also leaned into this kind of thinking.

What could it mean for a "tech" town to be born, especially with the techniques and tools we have today? While the dream hasn't really borne out yet (especially at the village level), I would argue we could do even better in middle America with this thinking: small college towns. While they're a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA to solve regional problems could be a start).

Yes, it takes decades. My only thought on that is that many (dare I say most) people don't even have short-term plans, much less long-term ones. It takes visionaries with nerve and a will of steel to stay on such paths and make things happen.

Love the experiment idea.


Pick a university, and give them $1B to never use Windows, MacOS, Android, Linux, or anything other than homebrew?

To kick-start, give them machines with Plan 9, ITS, or an OS based on Lisp / Smalltalk / similar? Or just microcontrollers? Or replicate 1970-era university computing infrastructure (where everything was homebrew)?

Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?


They'll just face the same problems we solved decades ago and reinvent mostly similar solutions to the ones we already had.

In a few decades they'd reach our current level, but the rest of the world won't have idled during those decades, and we no longer need to solve the old problems.


Honestly, this sounds like a very cool science fiction concept.


A bit like Anathem.


Not quite the same but check out A Canticle for Leibowitz


Who needs good schools? Make it "The Summer of Code in Sardinia"


or "The Summer of code in Pyonyang"


Laughed out loud


I'd rather drop a load of musical instruments into said village but I guess I'm completely missing the point.


He might be describing Elbonia.


I want this job.


As a former EE, it's not just pay.

The cog-in-a-machine corporate culture is not fun. Tech culture is much healthier.

There's no upside to big electronics companies here.


My bill for LLMs is going up over time. The more capable, higher-context models dramatically increase my productivity.

The spend prices most of the developing world out -- a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.

And it does better work than programmers earning $6k-$10k in Africa, India, and Asia.

It's the mainframe era all over again, where access to computing is gated by $$$.


> The spend prices most of the developing world out -- a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.

No, but a programmer earning $10k per year can probably afford a $200 used ThinkPad, install Linux on it, build code that helps someone, rent a cheap server from a good cloud provider, advertise their new SaaS on HN, and have it start pulling in enough revenue to pay for a $200 Claude Max subscription.

> It's the mainframe era all over again, where access to computing is gated by $$$.

It's still the internet era, where access to $$$ is gated by computing skill :)


I always consider different options when planning for the future, but I'll give the argument for exponential:

Progress has been exponential in the generic. We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000, as the prior million, and so on, all the way back to multicellular life evolving over 2 billion years or so.

There's a question of the exponent, though. Living through that exponential growth circa 50AD felt at best linear, if not flat.


So you concede that there's nothing special about AI versus earlier innovations?


> Progress has been exponential in the generic.

Has it? Really?

Consider theoretical physics, which hasn't significantly advanced since the advent of general relativity and quantum theory.

Or neurology, where we continue to have only the most basic understanding of how the human mind actually works (let alone the origin of consciousness).

Heck, let's look at good ol' Moore's Law, which started off exponential but has slowed down dramatically.

It's said that an S curve always starts out looking exponential, and I'd argue in all of those cases we're seeing exactly that. There's no reason to assume technological progress in general, whether via human or artificial intelligence, is necessarily any different.


I think you're talking about much shorter timelines than I am.

That's all noise.


> We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000

I hear this sort of argument all the time, but what is it even based on? There’s no clear definition of scientific and technological progress, much less something that’s measurable clearly enough to make claims like this.

As I understand it, the idea is simply “Ooo, look, it took ten thousand years to go from fire to wheel, but only a couple hundred to go from printing press to airplane!!!”, and I guess that’s true (at least if you have a very juvenile, Sid Meier’s Civilization-like understanding of what history even is) but it’s also nonsense to try and extrapolate actual numbers from it.


Plotting the highest observable assembly index over time will yield an exponential curve starting from the beginning of the universe. This is the closest I’m aware of to a mathematical model quantifying the distinct impression that local complexity has been increasing exponentially.


Personally, my estimate of a rational valuation would be:

$1-2T with no legal risk.

$300B assuming a rational and uncorrupt government, which should, at some point, kick them back to non-profit status and convict people of fraud.

Of course, too-big-to-fail means this won't happen.

