Hacker News | Mithriil's comments

Add the feature of doing a high five for the rare cases when it's actually good.

> instantly

Shor's and Grover's algorithms still require a massive number of steps...


I don't think they meant "in O(1) steps", I think they meant "the day someone figures out how to keep many thousands of qubits entangled while operating on them with gates will be the same day we have the first QC that can start breaking encryption in reasonable time". Where, of course, same day is also an exaggeration. But the general point is that we need a single breakthrough to achieve this, and it's very hard to estimate how long a breakthrough might take to appear.

Exactly

You could say it'd be a quantum jump in capabilities.


I would expect such a law to be lobbied to death.

Google's n-gram dataset link is outdated. You can get the data here: https://storage.googleapis.com/books/ngrams/books/datasetsv3...

The half-life idea is interesting.

What's the loop behind consolidation? Random sampling and LLM to merge?


No LLM in the loop. The consolidation pass is deterministic:

Pull the N most recent active memories (default 30) with embeddings, then compute pairwise cosine similarity with a 0.85 threshold. For each similar pair, check whether they share extracted entities:

- Shared entities + similarity 0.85-0.98 → flag as a potential contradiction (same topic, maybe different facts)
- No shared entities + similarity > 0.85 → redundancy (mark for consolidation)
- A second pass runs at a 0.65 threshold specifically for substitution-category pairs (e.g., "MySQL" vs "PostgreSQL" in otherwise-similar sentences); these are usually real contradictions even at lower similarity

Consolidation then collapses the redundancy set into canonical memories with combined importance/certainty. No LLM call, no randomness. Reproducible, cheap, runs in a background tick every ~5 minutes.

The LLM could improve this (better merge decisions, better entity alignment) but the tradeoff is cost and non-determinism. v1 is deterministic on purpose.

Source: crates/yantrikdb-core/src/cognition/triggers.rs and consolidate.rs next to it.
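The pair-classification rule above can be sketched in Rust. To be clear, this is a hypothetical reconstruction from the comment, not the actual yantrikdb-core source: the `cosine` and `classify` functions and the `Flag` enum are illustrative names, and only the thresholds (0.85 main, 0.85-0.98 contradiction band) come from the description.

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

/// Outcome of comparing one pair of memories.
#[derive(Debug, PartialEq)]
enum Flag {
    PotentialContradiction, // shared entities, similar but not near-identical
    Redundancy,             // no shared entities, highly similar: merge later
    Keep,                   // below threshold: leave both memories alone
}

/// Deterministic pair classification using the thresholds from the comment.
fn classify(similarity: f32, shared_entities: bool) -> Flag {
    if shared_entities && (0.85..=0.98).contains(&similarity) {
        Flag::PotentialContradiction
    } else if !shared_entities && similarity > 0.85 {
        Flag::Redundancy
    } else {
        Flag::Keep
    }
}
```

Because the rule is a pure function of (similarity, shared_entities), the whole pass stays deterministic and trivially reproducible, which matches the stated design goal.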


> with embeddings Pairwise cosine similarity, threshold 0.85

So your system is unable to differentiate between AWS and Azure (~0.95 similarity), and probably unable to consistently differentiate between someone saying they love something and someone saying they hate it.


Bayesian networks are a really general concept: they apply to any multidimensional probability distribution. A Bayesian network is a graph that encodes the independence relationships between variables. Ish.
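Concretely, the graph encodes the standard factorization of the joint distribution, where each variable depends only on its parents in the graph (this is the textbook form, not anything specific to the paper under discussion):

```latex
P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}(X_i)\bigr)
```

An edge-free graph gives full independence; a fully connected one encodes no independence at all, which is the "Ish" part.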

I have not taken the time to review the paper, but if the claim stands, it means we might have another tool in our toolbox for better understanding transformers.


Worry not, I came here full speed after the first paragraph to say the same thing.


Whether foreign companies pay for the tariffs or not is clear here. However, I want to point out that losing income from reduced trade is an impact of its own: an indirect way of paying for the tariffs, so to speak.


I think what people tend to forget when speaking of inevitability is that the scope of their statement is important.

*Existence* of a situation as inevitable isn't so bold of a claim. For example, someone will use an AI technology to cheat on an exam. Fine, it's possible. Heck, it is mathematically certain if we have a civilization that has exams and AI technology, and if that civilization runs forever.

*Generality* of a situation as inevitable, however, tends to go the other way.


> "Mastery, even partial, is one of the few genuine avenues toward agency."

Philosophical claims have been made around this point. See, for example, "The Moral Obligation to Be Intelligent", an essay by John Erskine.

So many problems would be solved if a fraction of people would be more inclined to understand what's in front of them.

