Hacker News | lackoftactics's comments

It's supposed to work again based on today's news.

That's great news! The AI model ecosystem is changing so fast.

This is my pet project: a media server running with a hardened, opinionated configuration.

Features:

- ./setup.sh with a charmbracelet CLI
- gluetun running for the most sensitive container
- no public ports exposed
- self-healing
- Tailscale-only VPN network

website: uncompressed.media


Maybe worth posting as a Show HN.

Thanks! I'm finishing up some performance-comparison work looking at Rust vs. Go and plan a deeper write-up for that group. I hope to publish soon.

Almost impossible without VC backing like LiteLLM has.

Ridiculous statement. Most people don't need the long tail.

I might be wrong, but if you go after 200 integrations, keeping them all maintained is substantial work for a solo founder.

Real users need maybe 20 at most.

If that's the case and it doesn't need a crazy number of integrations, I agree with you 100%.

Yes, the number of meaningful providers might be around 20-30.

It's a heavily vibe-coded project: just a proxy, with terribly designed benchmarks. Basically vibe-coded benchmarks that mislead through ignorance, hitting a mocked, super-fast endpoint and not running LiteLLM at full power across multiple processes.

Beyond that, the "it's faster" claim is almost meaningless, since this workload will be IO-bound, not CPU-bound.
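To illustrate the IO-bound point, here's a toy sketch with hypothetical numbers (1 ms of proxy overhead, and either a mocked instant upstream or a realistic 200 ms provider call; none of these figures come from GoModel's actual benchmarks):

```python
import asyncio
import time

async def upstream(latency: float) -> str:
    # Stand-in for an LLM provider; `latency` models its response time.
    await asyncio.sleep(latency)
    return "ok"

async def via_proxy(latency: float, overhead: float = 0.001) -> str:
    # Hypothetical 1 ms of proxy processing on top of the upstream call.
    await asyncio.sleep(overhead)
    return await upstream(latency)

async def main() -> None:
    for latency in (0.0, 0.2):  # mocked instant endpoint vs. realistic provider
        t0 = time.perf_counter()
        await via_proxy(latency)
        total = (time.perf_counter() - t0) * 1000
        print(f"upstream {latency * 1000:.0f} ms -> total {total:.1f} ms via proxy")

asyncio.run(main())
```

Against the mocked endpoint, proxy overhead is essentially the whole measurement; against a realistic provider latency, it disappears into the noise.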


Which project are you talking about, GoModel or Bifrost?

GoModel. I see some red flags in the docs/benchmarks, but I could be wrong in my judgement here.

What I noticed: the website shows a diagram of the LiteLLM SDK communicating with GoModel's gateway proxy, the benchmarks are poorly designed, and the scope claimed in the README outstrips the project's depth.

I don't have professional experience with Go, so I won't comment on the quality of the code.

There are some genuinely good things about this project and the effort here, but with Bifrost solidly positioned at a version above 1.0.0 and so many other initiatives in this space, it's a tough market.


The LiteLLM SDK is intentionally on the website. You can "talk" to GoModel with it because both projects use an OpenAI-compatible API under the hood.

You can use it like this:

  from litellm import completion
  print(completion(
      model="openai/gpt-4.1-nano",
      api_base="http://localhost:8080/v1",
      api_key="your-gomodel-key",
      messages=[{"role": "user", "content": "hi"}],
  ).choices[0].message.content)

Thank you

A battle-tested talk from one of Poland's most seasoned Rubyists


I'm buying an Anthropic subscription. I know everything could change and they could also turn evil, but so far they've shown a willingness to be the good guy.


The least bad of the bad guys. There's still a long way to go before the red line of actually being good guys.


That's also fair.


Isn't it a well-known fact that Garmin has terrible sleep tracking? Wearables can't handle deep sleep at all; even Muse with EEG can't reliably predict it, so I wouldn't draw conclusions here.

A small curiosity: I recently learned that sleep trackers in commercial wearables are terrible for people with sleep disorders like apneas, UARS, etc. It makes sense, as this isn't a typical dataset, but it's worth knowing.

https://www.youtube.com/watch?v=6FAz7QGmlBM


>Dr. Miller’s theory revives the concept of analog computation. Unlike digital computers, which rely on discrete binary bits, analog systems process continuous information—waves interacting to produce a vast range of possible values.


Is it pseudoscience? For almost a decade I thought the ketamine/dissociation theory was the correct one.

