Hacker News | elaus's comments

I don't really see how the vacuum can effectively clean a whole room or flat using only a CNN on the current image in front of the robot. This would help detect obstacles, but a bumper sensor would do that as well.

All but the most basic vacuum robots map their work area and devise plans for cleaning it systematically. The rest just bump into obstacles, rotate a random amount, and continue forward.

Don't get me wrong, I love this project and the idea of building it yourself. I just feel like that (huge) part is missing from the article?


https://opencv.org/structure-from-motion-in-opencv/

Not saying that it's viable here to build a world map, since things like furniture can move, but some systems (e.g. warehouse robots) do use things like lights to triangulate, on the assumption that the lights on the tall ceiling are fixed and consistent.
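For illustration, a hedged sketch of what that light-based triangulation can look like: given world-frame bearings to two fixed ceiling lights at known positions, the robot's 2D position falls out of a small linear system. The function name and the known-heading/noiseless assumptions are mine, not from any particular product.

```python
import math

def triangulate(l1, l2, b1, b2):
    """Toy position fix from bearings b1, b2 (radians, world frame)
    to two fixed ceiling lights at known positions l1, l2.
    Assumes a known heading and noiseless bearings."""
    d1 = (math.cos(b1), math.sin(b1))  # unit ray toward light 1
    d2 = (math.cos(b2), math.sin(b2))  # unit ray toward light 2
    # Robot position P satisfies l1 = P + t1*d1 and l2 = P + t2*d2,
    # so t1*d1 - t2*d2 = l1 - l2; solve the 2x2 system via Cramer's rule.
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])

# A robot at (1, 2) sees one light due east at (5, 2) and one due north
# at (1, 6); the fix comes back close to (1.0, 2.0).
print(triangulate((5, 2), (1, 6), 0.0, math.pi / 2))
```

With noisy bearings from more than two lights you'd do a least-squares fit instead, but the geometry is the same.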


The classic Roombas from a decade or so ago worked without any sort of mapping or camera at all -- they basically did a version of the "run and tumble" algorithm used by many bacteria -- go in one direction until you can't anymore, then head off in a random new one. It may not be efficient, but it does work for covering territory.


Sounds like it would only work for a single room with not too many obstacles.

I guess the mapping capabilities vary greatly between vendors. I had a first-gen Mi Robot vacuum and it was amazing. It would map the entire floor with all the rooms, then go room by room in a zigzag pattern, then repeat each room, having no issues going from one room to another and avoiding obstacles. It also made sure not to fall down the stairs. Then later it broke and I bought a more noname model, and despite having a lidar tower, it didn't perform as well as the Xiaomi did. It worked for a single room, but anything more and it would get lost.


Eh, it worked fine in my multiroom apartment - again, this is how all first generation robot vacuums worked. Mine eventually died and I got a new one with lidar, and the main advantage is that with mapping I can specify areas to avoid, like a chair whose base tends to trap robot vacuums.


I think the only reason for mapping is to be able to block off 'no go' areas (no escaping out the front door!) and to be able to go home to the charger.

For the actual cleaning, random works great.


Surely mapping also helps reduce the time it takes to achieve the task?


A robot vacuum isn't time constrained. It literally has all day.


They make noise, and people work from home, so that might not be the case.

In addition, more working time means more wear and tear on parts.


You are right. The original Roomba was discussed on HN 3 months ago:

https://news.ycombinator.com/item?id=46472930


My previous robot vacuum did not do any mapping, but did always manage to find its way back to the charger. It'd just follow the walls until it saw the charger's IR beacon.

Clever design if you ask me. Doing a lot with a little.
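That strategy fits in a tiny state machine. A hedged sketch (the states, sensor names, and actions here are made up for illustration; real firmware is obviously more involved): hug the wall until the charger's IR beacon comes into view, then home in on it.

```python
from enum import Enum, auto

class DockState(Enum):
    WALL_FOLLOW = auto()  # hug the wall, keeping it on the right
    HOMING = auto()       # beacon in view: steer toward it
    DOCKED = auto()

def dock_step(state, bumped_right, beacon_seen, on_charger):
    """One control tick: returns the next state and a motor action."""
    if on_charger:
        return DockState.DOCKED, "stop"
    if beacon_seen:
        return DockState.HOMING, "steer_to_beacon"
    # Wall-following: veer away from the wall after a bump,
    # veer toward it when contact is lost.
    action = "arc_left" if bumped_right else "arc_right"
    return DockState.WALL_FOLLOW, action

state, action = dock_step(DockState.WALL_FOLLOW, bumped_right=True,
                          beacon_seen=False, on_charger=False)
# -> (DockState.WALL_FOLLOW, "arc_left")
```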


They're terrible. A $200 SLAM-equipped vacuum (open box, or something off eBay) will do in 15 minutes what those took an hour to do.


Apart from just detecting obstacles, we wanted to build a robot that is intelligent enough to take in semantic cues, like "this is a doorway, so I can go through it" or "this is a kitchen, so I should clean it this way", and so on.


There was a time when they were all what you consider basic, and they could still clean a whole room or flat.


It navigates by Brownian motion.


Quite a lot of the samples sound strange. Some even have echo or other weird artifacts. I hope they're not AI generated.


I agree. FreeCAD has become a tool that I just use without thinking about it. Earlier versions always made me question my choice and try out other software.


Most of those suggestions would be incredibly confusing for anyone not familiar with the concept.

Users expect to see exactly one new character (either the key pressed or an asterisk) when they type something. Seeing up to three characters appearing or disappearing after some time is, imho, worse than what we have today.


It really is boring and lifeless. Most of those vibe coded sites have senseless boilerplate UI like the "send feedback" link that opens "beautiful" UIs that are completely without function.


It seems easy enough to circumvent: "We're launching our product in 2 weeks, so let the AI create and 'warm up' 20 new HN users so they're ready to shill".

It's really not a problem that can be solved easily :(


If someone is going to put that much effort into it, let them. I think the idea here is to try to get some low-hanging fruit and see if that works "good enough". You'll never block all AI-generated accounts, but you may not have to in order to get the desired effect.

But if someone wants to plant 20 new accounts, grow them out with karma votes, so that they can game the voting, there are probably other ways to detect that.


The issue is that it’s not that much effort anymore.

We rely on friction for most of our social norms.


Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up their accounts two weeks in advance? Answer: a proportion that you're never going to see with that barrier in place.

With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.


With enough layers you will also weed out almost all of the good actors. Normal people are busy and have neither the time nor the patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.


Reddit has more friction to sign up, or to post while your account is new or low-karma.

The main subreddits will basically shadowban you until your account is aged and has more than X karma.


This is why I don’t create a Reddit account or post there: there are so many rules that dissuade new accounts. I don’t even bother to try.


Reddit is fantastic, to me. It's worth the struggle to get past the initial bullshit.

There are a lot of flaws, though. Their appeal system is very broken, for instance.


Which in itself is annoying, IMO. It creates a whole separate set of problems. You need karma, so people post in karma-farming subs to get a few crumbs. Then you get auto-banned from a dozen of the top subreddits preemptively for farming.

Reddit hasn't been as overrun by bots yet, for the most part, although how long they can hold out I don't know.


Maybe not overrun by spam, but the number of bots I see on popular subs is definitely not zero.


You don’t have a choice.

We live with GenAI, and the human to bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.

This idea in this thread that "more hoops" means "losing participation" keeps assuming that the community is unaffected by the macro trends.

It's weirdly positing that HN posts and users are somehow immune to, or unaffected by, those trends.


I think as humans it's very hard to separate content from its form. So when the form is always the same boring, generic AI slop, it's really not helping the content.


And maybe writing an article or keynote slides is one of the few places where we can still exercise some human creativity, especially when the core skill (programming) is already almost completely in the hands of LLMs.


I understand that you need an account to save/publish stuff on their servers, but I would really love a "guest mode" where I can just try it out, maybe even save to LocalStorage.


I too feel like the latest versions are a big improvement, and I've finally lost that feeling of slowing myself down just for the sake of using OSS.

But I still hope for a "blender moment" where a concerted effort gets rid of old cruft, improves UI/UX and jump-starts growth (also in developers/funding) and further improvements.


It's probably impossible for FreeCAD to catch up with the industry-standard CAD systems (SOLIDWORKS, NX, Fusion) unless they somehow pour a stupendous amount of money into their geometry kernel [1].

All major CAD systems use mature geometry kernels like Parasolid [2]. Parasolid was developed for 40 years and is still in active development. This is the piece of code that enables CAD systems to do things like computing an intersection of a G3 smooth fillet with embossed text, handling all corner cases.

FreeCAD runs on OpenCASCADE [3], which is both less sophisticated today and is slower to gain new features than Parasolid, being seemingly maintained by one person [4]. FreeCAD's geometry is hard limited by what OpenCASCADE can do.

This is the main difference from Blender. Blender ultimately operates on vertices, which doesn't require nearly the same level of inherent complexity. Blender isn't bottlenecked in what it can do like FreeCAD is.

[1]: https://en.wikipedia.org/wiki/Geometric_modeling_kernel

[2]: https://en.wikipedia.org/wiki/Parasolid

[3]: https://en.wikipedia.org/wiki/Open_Cascade_Technology

[4]: https://github.com/Open-Cascade-SAS/OCCT/commits/master/


As part of my donated work on Godot Engine, my approach has been to improve Manifold (https://github.com/elalish/manifold) into a usable geometry kernel.

I think I've succeeded: many CAD tools now use Manifold as their geometry kernel for 3D boundary meshes, and I was able to get Godot Engine and Blender to adopt it.

CAD tools that have adopted elalish/manifold: OpenSCAD, Blender, IFCjs, Nomad Sculpt, Grid.Space, badcad, Godot Engine, OCADml, Flitter, BRL-CAD, PolygonJS, Spherene, Babylon.js, trimesh, Gypsum, Valence 3D, bitbybit.dev, PythonOpenSCAD, Conversation, AnchorSCAD, Dactyl Web Configurator, Arcol, Bento3D, SKÅPA, Cadova, BREP.io, Otterplans, Bracket Engineer


You are correct that OpenCASCADE is less refined than Parasolid, but I would argue that most people just don't need it. Practically, FreeCAD is fit for all purposes, except those for which you require knowledge of what a geometry kernel even is - and then you know who you are and how to serve yourself.


I kinda wish Blender could just do CAD, honestly.

It feels like all those 3D modeling apps (3ds Max, Fusion, even ZBrush) share like 90% of their feature set, but you are forced to juggle between them (for video game dev at least) because of one or two arguably extremely niche capabilities.


It may look like they're all easily interchangeable because the UI and actions are similar (you have a viewport and can do extrudes, etc.), but fundamentally, they're all working on very different objects at their core. Blender and 3ds Max are the most alike, but ZBrush is an entirely different paradigm and so is parametric CAD. An extrude in Blender is massively different from a pad in FreeCAD.

Maybe, with a ton of time and effort, the Blender UI could be abstracted from most of the box-modeling approach and then pasted over a different paradigm, but it'd take tens of thousands of hours, I imagine.


You can do sculpting in Blender as well as parametric objects; similarly, you can emulate most of Substance Designer with shaders. Maybe just not _quite_ well enough, that's the thing.

It feels like we have been so, so close to a unified 3D content creation toolkit for many years now!


Blender is a mesh editor at its heart. That isn't suitable for CAD work.


>> I kinda wish blender could just do CAD honestly

Have you tried the "CAD sketcher" add-on? I think Blender should have similar functionality built-in, but for now this looks like a nice add-on.

Blender is a very very long way from being used as a general purpose CAD tool, and IMHO it should not strive to be that. But having this ability to do simple CAD designs without opening and learning a different program is cool.


That's just my opinion, but I think that game/cinema/whatever 3D modeling should lean more and more toward a CAD-like workflow.

If we want to bring those mediums to the next level.


Oh gosh, please no.


For me it's the eink display that makes them interesting. Being programmable or looking cool is nice, but for that I could also buy an Apple/Google/Samsung watch - that's not unique.


The Pebble display isn't e-ink, or unique amongst watches, it's an off-the-shelf MIP LCD from Sharp.

You can get the same thing in watches from Garmin, Coros, Polar, Suunto, Casio and probably more.


I think you're confusing Pebble with something else. All current models on the website as well as the OG pebble (according to Wikipedia) use eink displays.


https://en.wikipedia.org/wiki/Pebble_(watch)#Hardware

> The watch featured a 32-millimetre (1.26 in) 144 × 168 pixel black and white memory LCD using an ultra low-power "transflective LCD" manufactured by Sharp

Later generations are color, but it's the same tech. If you've ever used actual e-ink then it should be obvious enough that the Pebble displays are something else, it would be nowhere near responsive enough to keep up with pebbleOS's animations.



They're Sharp memory displays: functionally LCDs, but with retention memory under each pixel. They are not, and have never been, eink.

