sigbottle's comments | Hacker News

What exactly is the Platonic Representation Hypothesis?

You don't just "learn reality" by getting good at representations. You can learn a data set. You can learn a statistical regularity in things like human language. You can analyze the concept spaces of LLMs and compare them numerically. I agree with all that.
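To make "compare them numerically" concrete: one simple way to compare two models' concept spaces is to correlate their pairwise-similarity structure over the same concept list (a basic form of representational similarity analysis; the actual PRH papers use related but fancier metrics). A minimal sketch with hypothetical embedding matrices:

```python
import numpy as np

def rep_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Correlate the pairwise cosine-similarity structure of two
    embedding matrices (rows = the same concepts, columns = each
    model's own feature dimensions)."""
    def sim_matrix(e):
        e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize rows
        return e @ e.T                                    # pairwise cosine sims
    sa, sb = sim_matrix(emb_a), sim_matrix(emb_b)
    iu = np.triu_indices(len(sa), k=1)                    # off-diagonal pairs only
    return float(np.corrcoef(sa[iu], sb[iu])[0, 1])

# Toy example: two fake "models" embedding the same 4 concepts.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
b = a @ rng.normal(size=(8, 8))  # a mixed/rotated view of the same space
print(rep_similarity(a, b))
```

Note this says nothing about "objective reality"; it only measures whether two learned geometries agree with each other.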

What the hell does "learning an objective shared reality" mean?

This reminds me of EY saying that a Solomonoff inductor would learn all of physics from a few days of a 1920x1080 video stream. Either it's false (because it needs to do empirical testing itself), or it's true only if you presuppose that it already has a perfect model of all the interactions of the world and can decide between all theories a priori... so then why are we even asking whether it's a "perfect learner"? It already has a model for every possible interaction; there's nothing out of distribution. You might argue, "Well, which model is the correct one?" That's already the wrong question - empirical data is often about learning what you didn't know that you didn't know, not just about resolving in-distribution unknowns.
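For context, the Solomonoff mixture being invoked assigns each observation sequence the summed weight of every program that could have produced it, and "learning" is just conditioning this prior on the data seen so far:

```latex
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
```

where $U$ is a universal prefix machine and $x\ast$ means "output beginning with $x$". The objection above is that every hypothesis is already inside the sum by construction, so calling the inductor a "perfect learner" presupposes exactly the universal model space whose availability is in question.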

I just get an ick because people talk about this hypothesis as if "LLMs converge on shared objective reality => they are super smart and objective, unlike humans". LLMs can be smart. They can even be smarter than humans. It's also true that empiricism is king.


I developed from a very early age a sort of "always assume the worst about yourself" mentality.

I think part of it was influenced by social media (I was a tween debatelord). Part of it was self-improvement culture (only focus on yourself! get ahead! never blame the environment!). Part of it was genuinely depressing things in my life.

As an example, I was obsessed with "finding my passion" at some point. Looking back, I was looking for a way to say, "This thing I'm committed to is way more important than all the other things in my life, so I don't need to go do them". As another example, frequently I would go into epistemic spirals - I was aware of psychoanalysis, so clearly there's capability for deep self delusion. But how do I know the navel gazing isn't self delusion? How do I know framing it as "navel gazing" is not an attempt to cope? And infinite recursion ensues. Another example is constantly feeling like I needed to steelman opponents, and so I would do the utmost research and understand the "best" arguments for the opponent's side before responding.

Incidentally, I think this is why I loved computer science so much - because you often proved worst case guarantees. I had a deep disdain for heuristic solutions.

But this mentality is still bad. Let's take the steelman example. How could steelmanning your opponent possibly be a bad thing? Well, are you actually steelmanning them, or are you trying to find some sort of greater upper bound to their argument, then attacking that... for what? Efficiency? Feeling secure in yourself? Why not actually listen to them? Oh, but surely if they accept premises A, B, C, then D, E, F must follow! Do they, though? Is it possible they could not go down that route, and for valid reasons?

It's still a deep contradiction I work through, since to me personally, all of these things invoke a deep "you are not being remotely rational or moral" gut feeling when I do go down those routes. But I know that I need to sit more in grey zones and just.... live in the grey.

(I still love formal computer science and dislike heuristics. But it's much more balanced now.)


Curious, which app/ forum/ subreddit/ group were you a tween debatelord on, and in what years? (got a link, so we can see?) To what extent did your formation depend on that crowd and its cultural values?

Oh, and I should mention: the desire to hear everybody out, too. Incidentally, the first few times I had these types of revelations, of course I would swing completely to the opposite extreme.

Ah, life is complicated.


Apropos of nothing, “I was a tween debatelord” sounds like a b-movie title

That was the impression I got as well, but it seems like other people disagree.

It's always amusing to see what crimes people demand to have strict liability for, yes. "He posted a wrong location online, of course that'd disrupt the search for the wolf, right to jail, right away".

Centralization is the #1 solution. It works. It's "ugly", but it works.

You see even on this thread people begging for one single standard.

What actually happens with that one single standard?

- Behind it, you have a shitload of people implicitly optimizing for the general use case and hiding all of said complexity from you

- No need to worry about [semantic conflict](https://www.sigbus.info/worse-is-better)

Once you have centralization, "composition" is not so hard. You get to define all your edge cases, define how you see the real world. Nobody has their own way of doing things; there is only one way of doing things.

Of course, then comes the extension of the software. People will see the world differently. And we have not algorithmically figured out how domains themselves evolve. The centralization abstraction breaks because people disagree and have different use cases.

I don't see how you get around this fundamental limitation. Are you going to impose yet another secret standard on everybody to get the interoperability you want? If you had full control over the world, yes, things are easy.

I'm not saying this as a diss. I truly do believe centralization works. AWS? Palantir? Building the largest centralized platforms in history and having everybody go through your tooling, when executed carefully, is a dummy effective strategy. In the past, monopolies were effectively this too (though I'd say buying steel is much different than "buying" arbitrary turing-complete services to help deal with a wide variety of semantic issues, and that's precisely what makes the 'monopoly' model break in the 21st century). And hey, at least AWS is a pretty good service, insofar as it makes certain things braindead easy. Is it a "good" service, intrinsically or whatever? I don't know.


I'm not disagreeing but I was reminded of a counterexample: https://www.theregister.com/2026/01/29/birmingham_oracle_lat...

No I mean like, centralization is unfortunately the thing that just works.

I work at a company that thinks extremely deeply about interoperability issues and everybody is on the opposite side: it can be said that we were made as a response to xkcd 927, to try and solve the issue.

I think the company is right in that semantic decentralization with interoperability would be a good end goal, but I think just plain darwinism explains the necessity of the opposite.


> Although the council had planned to implement Oracle "out-of-the-box," it created several customizations including a banking reconciliation system that failed to function properly. The council struggled to understand its cash position and was unable to produce auditable accounts. It has spent more than £5 million on manual workaround labor.

Not a great example of a single centralised system. The errors came from trying to write custom reconciliation code between two systems, the ERP and the bank - perfect example of the problems OP raises.


Fair point, but AWS is also highly extensible, and while I'm not sure about Palantir, I guess it must be too, to a point? Maybe it's a classic case of good abstractions vs bad ones.

It might just be social. When I use an open-source HTTP library, much of the reason I use it is because someone has put in the work of making sure it actually works across a diverse set of software and hardware platforms, catching common dumb off-by-ones, etc.

Sure, the LLM can theoretically write perfect code. Just like you could theoretically write perfect code. In real life, though, maintenance is a huge issue.


Super interesting stuff, got any reading on this?

That... is a more plausible explanation I didn't think of.

Yes we collab with them!

Sorry, this is a bit of a tangent, but I noticed you also released UD quants of ERNIE-Image the same day it was released, which as I understand it requires generating a bunch of images. I've been working to do something similar with my CLI program ggufy, and was curious if you had any info you could share on the kind of compute you put into that, and whether you generate full images or look at latents?

Yes, we have started doing diffusion GGUFs, but it's in its infancy :) But yes, we do generate images to test quants out!

Is quantization a mostly solved pipeline at this point? I thought that architectures were varied and weird enough where you can't just click a button, say "go optimize these weights", and go. I mean new models have new code that they want to operate on, right, so you'd have to analyze the code and insert the quantization at the right places, automatically, then make sure that doesn't degrade perf?

Maybe I just don't understand how quantization works, but I thought quantization was a very nasty problem involving a lot of plumbing
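For what it's worth, the arithmetic core of quantization really is push-button; the nasty plumbing is knowing, per architecture, which tensors are safe to quantize and where to dequantize during inference. A minimal symmetric int8 sketch (illustrative only, not the actual GGUF/Unsloth pipeline):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map the weight range
    [-max|w|, +max|w|] onto [-127, 127] with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()  # bounded by about scale / 2
print(err)
```

Real pipelines layer on per-block scales, mixed precisions for sensitive tensors (embeddings, attention), and calibration data, which is where the per-architecture analysis comes in.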


That is true; GGUF does not support every architecture.

For the most recent example, as of April 16, 2026 (today):

Turboquant still isn't added to GGUF.


Forget the "Humans must always be in the loop for accountability" argument against AI, we already don't have such checks today!

Ha, the question is always "which humans"!

Interesting you single out commercial and government entities but not people. What defines the difference? Bureaucracy? Concentration of resources? Legal theory?

I guess I'm wondering why this line of thinking (in theory) doesn't turn into paranoia about everybody. I don't know much ethics or political theory or anything.


> … paranoia about everybody

It does. People drive these entities. People hide behind the liability shields and authority of these entities. Also notice that I generalized with the phrase “…and trusting anyone…”


The issue is power.

I'm not an expert in political theory or ethics either, but in my worldview, power relationships matter in these discussions. I believe power and responsibility should go hand in hand, and I hold entities to a standard that is proportional to their power to influence others' lives.

If an entity's power is decentralized, for example when it is democratically organized to some degree, then that disperses both power and responsibility.


You can tell that broad alignment between people is natural just by looking at the effort that corporations and governments make to undermine it. Alignment between people is perhaps not a state of nature, but it really is a pretty normal consequence of a fairly small amount of education and of middle-class existence that is left to itself (i.e. without brain-washing and deliberately working to create out-groups). If you're eating enough and have a few brain cells to rub together, then you definitely want that for your neighbors too because it promotes stability.

> broad alignment between people is natural

Uh, what? People have been killing each other over values misalignments since there have been people. We invented civilization in part to protect our farms and granaries from people who disagreed with us on whose grain was in said granaries.


We would never have even reached "farms and granaries" if alignment between people didn't happen pretty naturally

Fair enough. We are a social species. But those alignments occur in small groups. You don’t need effort by “corporations and governments” for nations of millions of people to schism. If anything, those large institutions drive broad-based alignment.

Methinks you've been sitting in your armchair too long.

Broad-based alignment doesn't come from nothing, but it is surprisingly easy to achieve when a population recognizes a shared stake. A synthesis between selfishness and altruism emerges when you consider who you can call a "neighbor".


> it is surprisingly easy to achieve when a population recognizes a shared stake

Sure. But it takes work for anything larger than a small, close-knit community. I’m pushing back on the notion that this comes naturally and is a default state. It’s not, at least not relative to people naturally forming in and out groups.

The armchair commenters are probably folks who have never organized a group of people before outside a commercial context.


You might be treating "neighbor" too literally. People understand the global nature of the limits on resources and by extension the world economy better every year. The boundary of who shares 'stake' grows likewise.

> boundary of who shares 'stake' grows likewise

But that shared stakeholding doesn't naturally drive alignment. You need journalists, fiction writers, organizers and delegates. Travel and curiosity. These each take effort, resources and organization. It's something we do well. But it isn't spontaneous in the way small-group kinship is; kinship literally emerges if you put people in proximity.


I'd say it's "typical" that one person witnessing another's plight will identify with them based on the similar conditions of struggle, oppression, etc. As you point out, the trick is to expose them to those scenes in the first place. But this is proximity just the same, in a social and experiential sense if not in a "my bed is within walking distance of yours" sense. So it is spontaneous given those caveats. The question, then, assuming camaraderie and kinship is the goal, is how do we expose people to each other's lives' conditions without the narrative spin machine altering the message to distance people from each other rather than bringing them closer together?

Couldn't read the next sentence before wading in, huh?

> Couldn't read the next sentence before wading in, huh?

Whatever the difference between naturalness and a state of nature, it has nothing to do with education or middle-class existence.


Critical bit:

> i.e. without brain-washing and deliberately working to create out-groups


And if my grandmother had wheels she’d be a bicycle. The process of creating an in group naturally creates out groups. The “brainwashing” OP describes is just as natural as social alignment through an innate drive for conformity.

Conformity, I think, follows the innate drive to coerce the nonconformant into compliance.

Sure. Push and pull. The point is that needs effort to work at larger scales. We don’t “naturally” organize into nations of three hundred million or a billion. To the extent we do, we also “naturally” go to war.

Again, I think the rewards of controlling others explains both, and not whatever handwavy natural/unnatural attributes might be identified.

There is a pretty interesting study of a large group of chimps. I don't remember where exactly, but they have been in a civil war for the last 15 years or so. Point is, it seems that there is some kind of innate group-formation process.

> You can tell that broad alignment between people is natural

It really isn't. The whole point of the market system is to collectively align people's actions towards a shared target of "Pareto-optimized total welfare". And even then the alignment is approximate and heavily constrained due to a combination of transaction costs (which also account for e.g. externalities) and information asymmetries. But transaction costs and information asymmetries apply to any system of alignment, including non-market ones. The market (augmented with some pre-determined legal assignment of property rights, potentially including quite complex bundles of rules and regulations) is still your best bet.


Please read David Graeber.

What you describe is factually not how human society formed.


AIUI David Graeber famously pointed out that people in small groups can form the equivalent of a "market" simply by exchanging favours ("I'll scratch your back if you scratch mine") in an informal gift economy, without any money-like token or external unit of account. That's quite in line with what I said.

Your understanding is mistaken. Graeber's "everyday communism" is not a market, and his whole larger point is that contorting everything into the lens of markets is simply ahistorical and unempirical.

I'd strongly suggest reading his books. They profoundly changed my understanding of how human institutions and society form.


Unless it's some sort of complete post-scarcity, it has to be understandable in market terms. What happens if people try to free-ride on the whole "communist" system? If they get excluded from its benefits, that's equivalent to enforcing some bundle of property rights.

> Unless it's some sort of complete post-scarcity, it has to be understandable in market terms.

No, it does not, and that's Graeber's whole point.

"Markets" are not some sort of physical law of the universe.

A simple example: it's the norm in hunter-gatherer societies to take care of people who will never make an equal contribution back in the transactional sense.

Because the social ties in those societies are not simply transactions.

If your model fails to accurately describe empirical reality, time to improve/expand the model.


These social ties are real (they are a kind of wealth, or social capital, for the persons involved) but they're also limited to very small social groups, the equivalent of a modern small village neighborhood or HOA. The point of the market is that it scales well beyond those.

Translating every aspect of human existence into some kind of “capital” is deeply unhealthy.

> it has to be understandable in market terms

I like economics and math too, but the whole discussion of markets is a terrible starting place for deriving results in ethics/psychology. If you insist though, notice that unions will happen unless some other organization is working to prevent them. What do you suppose this means? People are aligned with each other exactly because they've noticed their coworkers are not corporations or governments.

Although the two are entangled, politics is a more relevant framing than economics here. If people weren't broadly aligned on basic stuff, then autocrats, theocrats, kleptocrats and so on would simply not be interested in dismantling democracies. They make that effort because they must.


> the whole discussion of markets is a terrible starting place for deriving results in ethics/psychology.

Historically, we did essentially the opposite. We figured out many aspects of human ethics and psychology first, and deduced from them how and why markets work as they do.

> ... If people weren't broadly aligned on basic stuff, then autocrats, theocrats, kleptocrats and so on would simply not be interested in dismantling democracies. They make that effort because they must.

This implies that people are only weakly aligned in the first place, otherwise no such attempt at dismantling could ever succeed. That's not a very interesting claim; it does not refute the usefulness of some external mechanism to more directly foster aligned action. Markets do this with a maximum of decentralized power and a minimum of institutional mechanism.


> Historically

This is not the history, it is a mythology in opposition to the empirical evidence.

Which is why you should read Graeber.


It's the history of ideas. What Graeber says is ultimately aligned with this, as I pointed out in a sibling thread.

Yes, and your comment makes clear you haven't actually read Graeber and mischaracterized his work.

Anyhow, replying is clearly past the point of utility here.


You're not even wrong, as they say... I'm tempted to add 'seeing like a state' to your reading list.

"Understandable in market terms" doesn't mean the thing is actually understood, and in fact may be dangerously misunderstood.


Reddit is over there ->

Broad alignment =/= Wealth maximization.

The market re-aligned us with children working in sweatshops after we outlawed it at home, by convincing us it was OK if it was foreign kids and we got to share in pocketing the savings, not just the evil factory owner.

Yes I'm well aware. Of course that's not how things are advertised to people, and they absolutely hate it when this is pointed out to them. This tells me that deep down they don't actually agree with how the system operates.

Incentives and resources to promote said incentives.

> Interesting you single out commercial and government entities but not people. What defines the difference? Bureaucracy? Concentration of resources? Legal theory?

Not OP, but for me, kind family and friends, and various feel-good pieces of fiction and other writing, at least let me envision the possibility of a perfectly kind/dedicated/innocent/naive individual who is truly on my side 100%. But even that is mostly imagination and fiction... although convincing others of that isn't necessarily an argument worth making.

Commercial entities have a fundamental purpose of profit. While profit doesn't have to be a zero-sum game - ideally, everyone benefits in a somewhat balanced way - there's some fundamental tension, in that each party's profit is necessarily limited by the other party's.

Government entities have a fundamental purpose of executing the will of the state, which is rather explicitly not the same thing as the will of you as an individual.

Both commercial and government entities also tend to involve multiple people, which gets statistics working against you - did you really gather that many people who would put your needs above their own, with exactly zero "imposters"? Which in this context just means people with a bit of rational self-interest.

> I guess I'm trying to wonder why this line of thinking (in theory) doesn't turn to paranoia about everybody. I don't know much ethics or political theory or anything.

Just because you're paranoid, doesn't mean they aren't out to get you. Trust, but verify.

You might not be able to put absolute blind trust in anybody. I certainly can't. However, one can hedge one's bets, and diversify trust. Build social circles of people with good character, good judgement, and calm temperaments - and statistics will start working for you. It's unlikely they'll all conspire to betray you simultaneously, especially if you've ensured betrayal costs much and gains little. While petty and jealous people can indeed be irrational enough to betray under such circumstances, it'll be harder for them to create the kind of conspiracy necessary for mass betrayal that might cause significant enough damage to warrant proper paranoia. You might still have to watch out for gaslighters stealing credit (document your work!) and framing people (document your character!) and other such dishonest and manipulative behavior... but if everyone's looking out for the same thing, well, that's just everyone looking out for everyone else! That's a community looking out for each other, and holding everyone honest and accountable. Most find comfort in that, rather than the stress paranoia implies.

Put yourself in a room full of manipulators and schemers, on the other hand, and "paranoia about everyone" might be the only reasonable or rational response!


> But even that is mostly imagination and fiction... although convincing others of that isn't necessairly an argument worth making.

There was a Japanese visual novel in the 2000s about a girl who was your personal maid, and who was so devoted she would always take your side in any conflict, accepting and supporting you just the way you are, even if you were a horrid person to your friends. It turns out she was a ghost, or a kind of yokai, or something. Anyhoo, back on 2ch she attracted a fandom, and there was a second group of people on 2ch who labelled her a "useless person manufacturer", because if you actually had a person who always accepted you just the way you are and never pushed back, that could actually be a trap that prevents you from developing.

It's a theme that's relevant today when people have AI servitors that always glaze them. It puts even certain utopian AI fiction, like Richard Stallman's story "Made for You", into a whole new light.


My family accepts me just the way I am a bit too much. I can't bring myself to blame them, when past "reformist" pressures have been misguided/misapplied and backfired, but I recognize the trap. It'd also be hypocritical to blame them, when I also accept me just the way I am a bit too much! I'd like to think I'm decent enough to people, but I'm certainly more useless than I'd like to be. (Un?)fortunately, I'm not in a position to suffer, and I'm at least aware of the problem!

One of the ideas I've toyed with, even before all the AI hype, is a dumb, semi-adversarial servitor. Something to nag or taunt me about chores not done, to interrupt me when I'm doomscrolling, to use as a vessel for precommitment, to challenge me in various ways. I've been too lazy to build it thus far. Many tools overlap the problem space, so I shouldn't be using that as an excuse - perhaps I should give StayFocusd another shot.

Conflict and other stressors - in moderation, within the limits of one's ability to handle - are important for growth and health. A tree shielded from wind is weakened as it fails to develop stress wood and structural strength. A good debate can sharpen my thoughts and mind, walking to lunch keeps my cardiovascular system healthy, rising to life's various challenges gives me the security of knowing I can rise to the occasion and gives me more skills.


Which VN is this?

It was called Suigetsu

> each party's profit is necessairly limited by the other party's

Profit is obtained by maximizing traded benefits and minimizing costs. None of this requires taking anything away from any other party.


> Profit is obtained by maximizing traded benefits and minimizing costs.

Gain is obtained by the easiest means available. Your narrow definition of profit is seldom the easiest; cheating is far "superior", especially when it's legal for some.

> None of this requires taking anything away from any other party.

"required" and "preferred" (e.g. because it's far easier) are different like night and day.


Trade is just a combination of give and take. I give you X, and in exchange, take Y. Without the "take", it's not a trade, it's just a gift.
