Hacker News | search_facility's comments

With current standardization, the issue of "page not working in a non-Chrome browser" is practically non-existent. Thank god, nowadays everything (pages) works everywhere in a very similar manner. I have been using Chrome, Firefox, Safari, and Opera with zero problems for the last 5+ years. The old days are gone.

But on the other hand, adding an LLM with strong guards (not here yet, but doable for popular attack vectors) into the human loop could drastically reduce the insider factor, imho.


No, it just replaces one vector with another.


probably even 8M params is too much :)


As long as you use the best parameters, it doesn't matter


Grab her by the pointer.


Well, this is a business model, not a coincidence... They are in the business of selling fresh hardware each and every year, consistently


Exactly!


Interesting! Can it be used in Google Colab to open temporary access to a Python server? ngrok can be attached this way in 6 lines of code
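For reference, the pattern in question looks roughly like this: stand up a local server inside the Colab runtime, then hand its port to a tunneling client. The `pyngrok` lines are the assumed part (they need the package installed and an auth token configured), so they are left commented out; the local-server half runs as-is:

```python
import http.server
import threading
import urllib.request

# Serve the current directory on an OS-assigned local port
# (inside Colab this is only reachable from the runtime itself).
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sanity check: the server answers locally.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
print(status)  # 200

# Assumed tunneling step (pyngrok installed + ngrok auth token set):
# from pyngrok import ngrok
# public_url = ngrok.connect(port)  # temporary public URL to the server

server.shutdown()
```

Whatever tool replaces ngrok here would slot into the same place: the local server does not change, only the line that exposes the port.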


that is a very good use case. I am definitely adding this to the roadmap, at least as a maybe.

Thanks for the question.


Our 3D visualization relies heavily on photons doing the heavy lifting of traversing 3D space in straight lines; people get, you know, accustomed to it. In fact, how we see things is fixed by physics, not by our brains - the brains just accommodate to reality

There are no such utility particles doing any heavy lifting in 4D, so there is nothing to accommodate to.


I think the idea is that the geometry of straight lines in 4D should be similar enough to picture using the same mental abilities.

How we see is frozen by not only physics, but also biology. We can't actually see in 3D, only in the 2D of our retinae (and the embedded 2D of light-exposed surfaces). That's true for both 3D and 4D objects. I suppose fish, with their electroreceptive abilities, might be the only animals that can sorta "see" in true, volumetric 3D.


biology certainly plays a role, but nature was first of all trying to capture a model for 3D physical interactions - physics first. And the final choice of two 2D sensors is close to optimal and minimally sufficient for 3D - so it cannot be similarly descriptive for 4D; it is just not fair to expect results on the same level, imho.

For meaningful 4D perception on a similar level, our body would need three volumetric sensors, separated, to define a volume with a 4D direction
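The "two 2D sensors buy one extra coordinate" argument can be made concrete with the standard pinhole stereo relation: depth is recovered from the disparity between the two retinal images. A minimal sketch, with toy numbers (the focal length and baseline below are illustrative assumptions, not measured values):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo triangulation: depth = f * B / d.

    Two separated 2D sensors recover exactly one extra coordinate
    (depth) from the horizontal shift between their images.
    """
    return focal_px * baseline_m / disparity_px

# Toy numbers: 700 px focal length, 6.5 cm eye baseline, 10 px disparity.
z = depth_from_disparity(700.0, 0.065, 10.0)
print(round(z, 2))  # 4.55 (meters)
```

By the same counting argument, recovering a fourth coordinate would require parallax along an additional independent axis - hence the extra, volumetric sensors in the comment above.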


Interesting research, but it still fascinates me why the AI devs behind current SOTAs ignore the possibility of adding numbers as first-class citizens to AI, like, for example, suggested here: https://huggingface.co/papers/2502.09741

clean separation matters; it's really strange to force models to mimic numbers and math via incredibly unfit token-mangling, imho


Seems the team and working conditions are worth mentioning twice, nonetheless.

Good that there are places to work with a normal knowledge culture, without artificial overfitting to "corporate happiness" :)


They are definitely not aware of the soviet reality that the "roof over head" usually was not in the place where a human wanted to live; same with the job. If, after university, it was decided (not by the student - by the state distributing the workforce) that a student would go work in a city on the polar circle, that meant the student would go live and work there, without sunlight, for the rest of his life! Not joking - a personal story, with the soviet collapse as the happy ending (moved to a normal place after that)


Relevant nickname then? ;)

