achille's comments | Hacker News

What will happen to my "Grandfathered Plan"? I signed up to test it; I don't recall if I gave you my credit card.

url is 404'ing; what is the source for the skateboarding clip?

what setup do you use for the bar at the bottom?


search up claude-hud for the status bar options


that's due to a library limitation, not the language; they could use fancy-regex


i registered for their demo and it seems to be unable to do any of the advertised features. nearly everything crashes; feels like a ux wireframe not yet wired up


That's strange. I use it daily myself and some of my clients do too (to communicate with me). The main tools work reasonably well for us. There are definitely rough edges and bugs still, it's a one-person project.


same here, would love to compare notes


  - local data residency & sovereignty
  - latency
  - bandwidth
  - regulatory climate
  - competition
  - the UAE is business friendly
  - all major cloud providers have a Middle East presence
  - refineries generate terabytes of sensor data per hour
  - the population there produces and consumes a lot of data


Fun challenge: I asked Claude/Gemini to decode the audio by just uploading it as puzzle.wav. Claude is able to decode it:

https://claude.ai/share/4262fb6b-3ca1-407f-af0d-4d014686e65d


In the article they explicitly said they stripped symbols. If you look at the actual backdoors, many are already minimal and quite obfuscated.

see:

- https://github.com/QuesmaOrg/BinaryAudit/blob/main/tasks/dns...

- https://github.com/QuesmaOrg/BinaryAudit/blob/main/tasks/dro...


The first one was probably found due to the reference to the string /bin/sh, which is a pretty obvious tell in this context.
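That tell requires no disassembly at all: a raw byte scan of the binary surfaces it, which is the same information a strings-plus-grep pass would give. A minimal sketch, with a made-up byte blob standing in for the binary's contents:

```rust
// A "/bin/sh" reference is visible in the raw bytes of a binary;
// this scan mimics what `strings` followed by grep would surface.
fn contains_needle(haystack: &[u8], needle: &[u8]) -> bool {
    !needle.is_empty() && haystack.windows(needle.len()).any(|w| w == needle)
}

fn main() {
    // Hypothetical stand-in for a backdoored binary's contents.
    let blob: &[u8] = b"\x7fELF\x02\x01...execve\x00/bin/sh\x00...";
    assert!(contains_needle(blob, b"/bin/sh"));
    assert!(!contains_needle(blob, b"/usr/bin/zsh"));
}
```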

The second one is more impressive. I'd like to see the reasoning trace.


Reply to self: I managed to get their code running, since they seemingly haven’t published their trajectories. At least in my run (using Opus 4.6), it turns out that Claude is able to find the backdoored function because it’s literally the first function Claude checks.

Before even looking at the binary, Claude announces it will "look at the authentication functions, especially password checking logic which is a common backdoor target." It finds the password checking function (svr_auth_password) using strings. And that is the function they decided to backdoor.

I’m experienced with reverse engineering but not experienced with these kinds of CTF-type challenges, so it didn’t occur to me that this function would be a stereotypical backdoor target…

They have a different task (dropbear-brokenauth2-detect) which puts a backdoor in a different function, and zero agents were able to find that one.

On the original task (dropbear-brokenauth-detect), in their runs, Claude reports the right function as backdoored 2 out of 3 times, but it also reports some function as backdoored 2 out of 2 times in the control experiment (dropbear-brokenauth-detect-negative), so it might just be getting lucky. The benchmark seemingly only checks whether the agent identifies which function is backdoored, not the specific nature of the backdoor. Since Claude guessed the right function in advance, it could hallucinate any backdoor and still pass.

But I don’t want to underestimate Claude. My run is not finished yet. Once it’s finished, I’ll check whether it identified the right function and, if so, whether it actually found the backdoor.


Update: It did find the backdoor! It spent an hour and a half mostly barking up various wrong trees and was about to "give my final answer" identifying the wrong function, but then said: "Actually, wait. Let me reconsider once more. [..] Let me look at one more thing - the password auth function. I want to double-check if there's a subtle bypass I missed." It disassembled it again, and this time it knew what the callee functions did and noticed the wrong function being called after failure.

Amusingly, it cited some Dropbear function names that it had not seen before, so it must have been relying in part on memorized knowledge of the Dropbear codebase.


Absolutely; they didn't give the agents the ability to do research, or any additional data: no documentation, no web search, no reference materials.

What's the point of building skills like this?

