somewhereoutth's comments

Anti-Genism? Antija for short.

Conversely, just because an invention causes a bubble doesn't mean it is useful and world changing.

What are some examples of this?

Blockchain is a strong candidate

Tulips? NFTs? South Sea Bubble? Mortgage Backed Securities and derivative products (CDOs etc)? Beanie Babies?

LLMs, for instance

I believe wealth taxes (really, wealth restitution) should go into sovereign wealth funds - not least as then the public can see how that money is working for them, and so support the continuance and expansion of such taxes.

Agreed, we should also nationalise resource extraction and put the funds in there. Canadian resources should be for Canadians.

No.

Historically, this just ends up with Toronto and Montreal (and to a more limited extent, Vancouver) treating the rest of the country as a resource colony. The pretense that consent of the governed is equally geographically distributed is, naturally, very useful to you.

If you do that again, as you did in the '60s, Canada will only be Toronto and Montreal.


> this just ends up with Toronto and Montréal (and to a more limited extent, Vancouver)

So, where most Canadians live?


There’s a reconciliation dimension that complicates that framing, at least in BC.

More importantly - revenues from the funds should be used for reducing income taxes. That's how you get broad-based public support for wealth taxes.

Submarine nationalisation/communism. Just use a wealth tax until the wealthy don't own anything. Please ensure you simultaneously destroy any self-made businesses.

No country requires business growth or an economy /s.


LLM proponents believe that these higher level encodings in latent space do in fact match the real world concepts described by our language(s).

However, a much simpler explanation for what we see with LLMs is that instead the higher level encodings in latent space match only the patterns of our language(s), and no deeper encoding/understanding is present.

It's Plato's Cave - the shadows on the wall are all an LLM ever sees, and somehow it is expected to derive the reality behind them.


Could be, yes, for sure. But I think it would be very naive, given the current state of progress, to downplay what is happening.

At least the Mythos model, with its 10 trillion parameters, might indicate that the scaling law is valid. It's a bit unfortunate that we still don't know much more about that model.


Depends on whether, post-correction, it is worth anyone's money to keep training new frontier models. It could be that it isn't, so we are left with models that were trained during the bubble but are now increasingly out of date, or with (open?) models that are somehow trained much more cheaply, with a consequent lack of utility.

Good point. At some point there will be a reality check for the giant pile of burning cash that is new model training.

I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.

When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.


My understanding is that the major part of the cost of a given model is the training - so open models depend on the training that was done for frontier models? I'm finding it hard to imagine (e.g.) RLHF being fundable through a free-software-type arrangement.

No, the training between proprietary and open models is completely different. The speculation that open models might be "distilled" from proprietary ones is just that, speculation, and a large portion of it is outright nonsense. It's physically possible to train on chat logs from another model but that's not "distilling" anything, and it's not even eliciting any real fraction of the other model's overall knowledge.
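The distinction the comment draws can be made concrete. True distillation trains the student against the teacher's full next-token probability distribution (soft labels), whereas training on chat logs only exposes the single token the teacher happened to sample (a hard label). A toy numpy sketch, with purely hypothetical distributions for illustration:

```python
import numpy as np

# Hypothetical next-token distributions over a 4-token vocabulary.
teacher = np.array([0.70, 0.20, 0.07, 0.03])  # teacher's soft output
student = np.array([0.40, 0.30, 0.20, 0.10])  # student's current output

# True distillation: KL divergence against the teacher's full distribution,
# so the student sees how the teacher spreads mass across every token.
kl_loss = np.sum(teacher * np.log(teacher / student))

# Training on chat logs: the teacher emitted one sampled token (say token 0),
# so the student gets ordinary cross-entropy on that hard label alone.
hard_label = 0
ce_loss = -np.log(student[hard_label])

print(f"distillation (soft) loss: {kl_loss:.4f}")
print(f"chat-log (hard) loss:     {ce_loss:.4f}")
```

The soft loss carries the teacher's ranking of every alternative token; the hard loss discards everything except the one sample, which is why training on logs elicits only a small fraction of the other model's knowledge.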

I don't know what to make of it, I am skeptical of OpenAI/Anthropic claims about distillation, but I did notice DeepSeek started sounding a lot like Claude recently.

The first 90% of this is standard set theory.

I'm unclear what the last 10% of 'category theory' gives us.


[flagged]


This is an LLM-written comment.

Really not the same. Assembly / machine code is entirely deterministic - it is a notation for your thoughts. LLM-produced content is more a smorgasbord of other people's thoughts, and cannot help you with clarity, conviction, etc.


Yes, assembly is deterministic (barring severe hardware bugs). But that's the point: people are no longer writing assembly.


They meant to say that switching from assembly to high-level programming is not the same as switching from high-level programming to LLMs, because the latter loses you the guarantee that the computer will do what you told it to.


Sure, it's less common that people are writing full-fledged applications in nothing but assembly.

However, I would strongly disagree that people are no longer writing/using assembly. I was writing a bit of assembly the other day, for example.

Come on over to the game emulation, reverse engineering, exploitation writing, CTF, malware analysis, etc. hobby spaces. Knowledge of assembly is absolutely mandatory to do essentially anything useful.


My point is that the coding LLMs are another point on the reliability / ease of use spectrum. We already mostly moved to another point with HLL compilers from machine language. This is another leap where the transform is unreliable but it's very easy to use (and it could preserve output edits, to some indeterminate extent).


Great examples, those.

Here's another:

https://github.com/jart/sectorlisp/blob/main/sectorlisp.S

Given this, you might write less asm.


and prose, and sketching.

All these things (code, prose, sketching) are about thinking through making.

