
More than the downtime, I am surprised by the actual uptime. Hard to imagine how difficult this must be, given the speed of growth.

Truly! As someone who's worked with HPC and GPUs in a scientific research context, trying to get a service like this to work reliably is a different ballgame to your usual webapp stack...

But… imagine that same scientific research but you have an unlimited budget. I’d imagine that helps.

Some of the comments here mention their monthly spend, and it’s eye watering.


It would be an "unlimited budget" if they were a monopoly, but they're in a bidding war with three other "unlimited budget" AI companies over a resource no one expected to be scarce. There's simply not enough supply to meet demand, no matter how much money you have.

I think you have to see this as a bunch of stateless requests, which makes the problem way easier.

LLM requests that do not call tools do not need anything external, by definition. No central server, nothing; they can even survive without the context cache. All you need is to load the read-only, immutable model weights (and only once!) from an S3-like source on startup.

If it takes 4 servers to process a request, then you can group them 4 by 4 and send each request to one group (sharding).

Copy-paste the exact same setup XXX times and there you have your highly parallelizable service (until you run out of money). It's very doable; any serious SRE can find a way to set up "larger than one card" models like Kimi or DeepSeek (unquantized) if they have a tightly coupled HPC (or a pair of very, very beefy servers).

If you run out of servers, then that is again a money problem, not an architectural problem (and modern datacenters are already scalable).
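Roughly what I mean, as a toy Python sketch (names like InferenceGroup and dispatch are invented, not any real serving API):

  # Toy sketch of the copy-paste sharding above. Stateless requests are
  # round-robined across identical groups of 4 servers.
  import itertools

  class InferenceGroup:
      # One replica: 4 tightly-coupled servers holding one full copy of
      # the read-only model weights, loaded once at startup.
      def __init__(self, group_id, servers):
          self.group_id = group_id
          self.servers = servers

      def handle(self, prompt):
          # Real life: tensor-parallel inference across self.servers.
          return f"[group {self.group_id}] completion for {prompt!r}"

  def make_groups(hosts, group_size=4):
      # Group hosts 4 by 4; each group serves whole requests on its own.
      return [InferenceGroup(i, hosts[i * group_size:(i + 1) * group_size])
              for i in range(len(hosts) // group_size)]

  # Stateless means routing can be dumb round-robin: no central server,
  # no session affinity; scale by stamping out more groups.
  hosts = [f"gpu-{n:03d}" for n in range(12)]  # 12 hosts -> 3 groups
  groups = itertools.cycle(make_groups(hosts))

  def dispatch(prompt):
      return next(groups).handle(prompt)

  print(dispatch("hello"))  # lands on group 0
  print(dispatch("world"))  # lands on group 1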

Take the best SRE, but no budget, and there is no solution.

So inference is the easy part.

With Codex or Claude Code, if a request takes a lot of time or has slow cold-start latency, it's considered very acceptable.

Some users would probably not even see the difference if a request takes 2 minutes versus 3 minutes.

The real difficult part is to have context caching and external tools, because now you are depending on services that might be lagging.

Executing code, browsing the web: all of that is tricky to scale because these services are very unreliable (they tend to time out, require large caches of web pages, circumventing captchas, etc.). These are traditional scaling problems, but they are more difficult because all these pieces are fragile and queues can snowball easily.
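A toy sketch of that snowballing, assuming a made-up flaky fetch_page tool: cap retries and bound the queue, or slow fetches eat every worker slot:

  # fetch_page is a stand-in for an unreliable tool, not a real library.
  import queue, random, threading, time

  work = queue.Queue(maxsize=16)  # bounded: shed load instead of snowballing

  def fetch_page(url):
      time.sleep(random.uniform(0.1, 1.0))  # simulate a slow web fetch
      if random.random() < 0.2:
          raise TimeoutError(url)
      return f"<html>{url}</html>"

  def worker():
      while True:
          url = work.get()
          try:
              for attempt in range(2):  # capped retries: one bad URL
                  try:                  # cannot occupy a worker forever
                      fetch_page(url)
                      break
                  except TimeoutError:
                      continue
          finally:
              work.task_done()

  for _ in range(8):
      threading.Thread(target=worker, daemon=True).start()

  rejected = 0
  for i in range(50):
      try:
          work.put_nowait(f"https://example.com/{i}")
      except queue.Full:
          rejected += 1  # backpressure: reject rather than pile up
  work.join()
  print(f"shed {rejected} requests instead of letting the queue snowball")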

Yeah, and that totally misses the RAI part, billing, model deployment, security patches, rate limiting, caching, dead GPUs, metrics, multiple regions, gov clouds, GDPR (or data-locality issues), monitoring, alerting, and god knows what else, all at extreme load.

GDPR doesn't affect load, dead GPUs are no different from any software freeze, a model is a file update, metrics already scale very well at far, far bigger volumes and scale very linearly, and security updates are hedged with gradual rollouts, canaries, feature flags, etc.

From an ops perspective, all of these are already well-solved problems in a very scalable manner, because plenty of companies have had to solve them before.
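The gradual-rollout piece, for instance, is just a stable hash bucket; a minimal sketch (names invented):

  # Route a stable percentage of users to the new build via hashing.
  import hashlib

  def in_canary(user_id, rollout_percent):
      # Deterministic bucket in [0, 100): the same user always gets the
      # same answer, so widening the rollout never flip-flops anyone.
      digest = hashlib.sha256(user_id.encode()).digest()
      return int.from_bytes(digest[:4], "big") % 100 < rollout_percent

  # Start at 1%, widen toward 100% while the canary's metrics stay green.
  for uid in ("alice", "bob", "carol"):
      print(uid, in_canary(uid, 5))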

It's even better here because you can throw millions in salaries to "steal" the insider info on how their production actually works.

No doubt it is fast-paced, but the complexity of going from 100k GPUs to 1M is much lower than going from 1k to 10k GPUs.

All 3 big AI companies had the luxury that during the scaling phase they could do everything directly on production servers.

This is because customers were very very tolerant, and are still quite tolerant.

You can even set limits on requests from large users and shape the traffic.
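That shaping is the textbook token bucket; a quick sketch with made-up rates:

  import time

  class TokenBucket:
      def __init__(self, rate_per_s, burst):
          self.rate, self.capacity = rate_per_s, burst
          self.tokens, self.last = float(burst), time.monotonic()

      def allow(self):
          now = time.monotonic()
          self.tokens = min(self.capacity,
                            self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= 1:
              self.tokens -= 1
              return True
          return False  # throttled: the heavy user waits, nobody else notices

  # One bucket per (user, tier); give the heaviest users a lower rate.
  bucket = TokenBucket(rate_per_s=2.0, burst=5)
  granted = sum(bucket.allow() for _ in range(20))
  print(f"{granted}/20 burst requests admitted")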

Cloudflare, in comparison: high scale, low latency, end users not at all tolerant of downtime, customers even less tolerant, clearly hostile actors actively trying to take your systems down, a limited budget, a lot of different workloads, etc.

So for LLM companies, where you have to scale a single workload, largely from mostly free users, where most paid customers can be throttled and nobody is going to complain because nobody knows what the limits are, and where there is a lot of tolerance for high latency and even downtime, you are very lucky.


Can you speak a little more to this? I'm curious what kind of parameters one must consider/monitor and what kind of novel things could go wrong.

My guesses are:

Hardware capacity constraints are going to be the big one.

Effective caching is another; I bet if you start hitting cold caches the whole thing's going to degrade rapidly (rough sketch of this effect after the list).

The ground is probably shifting pretty rapidly.

Power users are trying to get the most out of their subscriptions and so are hammering you as fast as they possibly can. See Ralph loops.

Harnesses are evolving pretty rapidly, as are new alternative harnesses. That makes the load patterns less predictable and harder to cache.

The demand is increasing both from more customers, but also from each user as they figure out more effective workflows.

Users are pretty sensitive to model quality changes. You probably want smart routing, but users want the best model all the time.

Models keep getting bigger and bigger.

On top of that, they are probably hiring and onboarding more people, and system complexity and codebase complexity are growing.
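And a rough simulation of the cold-cache guess above (sizes are arbitrary): an LRU's hit rate falls off hard once the working set outgrows the cache:

  import random
  from collections import OrderedDict

  class LRU:
      def __init__(self, capacity):
          self.capacity, self.data = capacity, OrderedDict()
          self.hits = self.misses = 0

      def get(self, key):
          if key in self.data:
              self.data.move_to_end(key)
              self.hits += 1
          else:
              self.misses += 1
              if len(self.data) >= self.capacity:
                  self.data.popitem(last=False)  # evict the coldest entry
              self.data[key] = True

  for working_set in (800, 1200, 4000):  # the cache holds 1000 entries
      cache = LRU(1000)
      for _ in range(50_000):
          cache.get(random.randrange(working_set))
      rate = cache.hits / (cache.hits + cache.misses)
      print(f"working set {working_set}: hit rate {rate:.0%}")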


Just ask Claude and some agents to fix it...

On the other hand, the status page is blaming the authentication system, which one would think is not a frontier-class problem.

Would have thought that compared to training, the serving part is pretty easy. Less of an "everything needs to come together at once" and more "just move demand to a working cluster if one bombs, and keep some spare capacity".
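Something like this toy failover loop, assuming health checks and spare capacity actually exist (everything here is invented):

  import random

  CLUSTERS = ["us-east", "us-west", "eu-spare"]

  def healthy(cluster):
      return random.random() > 0.3  # stand-in for a real health check

  def serve(prompt):
      for cluster in CLUSTERS:  # primary first, then the spares
          if healthy(cluster):
              return f"served {prompt!r} on {cluster}"
      raise RuntimeError("no healthy cluster left: total outage")

  print(serve("hello"))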

Agreed. Just shows that big money doesn't dilute small character.

Obviously they are. And it is solely due to Sam and his unstoppable desire for influence, which is pathetic. He really fumbled the top position in arguably the most important race ever. Pretty incredible.

It's how the world works. See The Theory of the Leisure Class. Or how Microsoft survived Bill Gates.

Link is behind a paywall. In any case, I do not think you can evaluate any company for "AI agent readiness" (what even is that?) without having detailed insights into the internal systems and processes of the company.

What is "sovereign infra" exactly?

I know it's just marketing speak, but the term made me think of the scenes in the Matrix where what's left of humanity (ignoring all the cyclical lore that was added on top of it) has to make sure the machines can't remote in to any of their tech.

No less than self-hosted, imo. If you're on some cloud, it doesn't really matter that you pay them absurd amounts of money; you aren't sovereign.

So if a company self-hosts its physical infrastructure, which will burn down if a fire breaks out, it is more "sovereign" than a company running on a redundant cloud? I definitely would not want to be "sovereign" then.

Point is: This discussion is much more multi-dimensional than some suggest.


A redundant cloud that could be rug-pulled from you any day if the platform decides you are in violation of their terms, or if they just don't like your project. Yes, on-prem is more sovereign than that. That doesn't mean it doesn't have drawbacks, and no one said it didn't. But if sovereignty is more important than redundancy, then on-prem is certainly an option.

So literally a computer at home/in the office, as with anything else you don't really "own" the infrastructure? Or is this just about "cloud"?

Yeah sorry it's marketing BS speak for self-hosted or just infra that you control. It could be a VPS, it could be a Raspberry Pi at home. Your repos live on your servers. (And we support this on Tangled today!)

> just infra that you control

But a VPS isn't actually infrastructure you control, you essentially have as much control over it as "cloud", so I don't think that'd be counted as "sovereign", would it?


Perhaps, but it's still better than nothing!

They obviously collaborate with some of the labs prior to the official release date.


That... is a more plausible explanation I didn't think of.


Yes we collab with them!


Sorry, this is a bit of a tangent, but I noticed you also released UD quants of ERNIE-Image the same day it was released, which as I understand it requires generating a bunch of images. I've been working to do something similar with my CLI program ggufy, and was curious if you had any info you could share on the kind of compute you put into that, and whether you generate full images or look at latents?

Yes, we have started doing diffusion GGUFs, but it's in its infancy :) But yes, we do generate images to test quants out!

Agreed, feels like a vibe-coded frontend bolted onto already existing backend features.

Also, I have never seen any Unsloth-related software in production to this day. Feels strongly like a non-essential tool for hobby LLM wizards.


You would be surprised. We're the 4th largest independent distributor of LLMs in the world, and nearly every Fortune 500 company has utilized either our RL fine-tuning package or used our quants and models. We, for example, collab directly with large labs to release models with bug fixes.


Unsloth is providing the best and most reliable libraries for finetuning LLMs. We've used it for production use-cases where I work, definitely solid.


Glad it was helpful!


Even a brief reading of their site would have spared you this embarrassment.


Well, who would have possibly thought that going from 'AI for the benefit of humanity' to becoming a software vendor for the Department of War is the ultimate rug-pull?

Meanwhile, the actual enterprise market, i.e., the adults in the room, already left for Anthropic. Why? Because Anthropic doesn't treat their core model like a weekend side-quest while they're busy chasing hardware fantasies and search engine clones.

OpenAI’s moat is evaporating in real-time, and it’s well-deserved. You can’t build a 'padded room' for the military and expect the tech world to keep buying the safety-first copium. They fumbled the trust, and now they’re fumbling the market.


The paper's critique of the 'data wall' and language-centrism is spot on. We’ve been treating AI training like an assembly line where the machine is passive, and then we wonder why it fails in non-stationary environments. It’s the ultimate 'padded room' architecture: the model is isolated from reality and relies on human-curated data to even function.

The proposed System M (Meta-control) is a nice theoretical fix, but implementation is where the wheels usually come off. Integrating observation (A) and action (B) sounds great until the agent starts hallucinating its own feedback loops. Unless we can move away from this 'outsourced learning', where humans have to fix every domain mismatch, we're just building increasingly expensive parrots. I'm skeptical that 'bilevel optimization' is enough to bridge that gap, or whether we're just adding another layer of complexity to a fundamentally limited transformer architecture.


Pdf24 has supported offline usage since forever and works like a charm. What you state in your post is simply wrong and misleading. I guess the "vibe" got too heated...

