
I'm new to it and having trouble finding guides:

- how do I apply it as a coating? I want it ~1/16" to 1/8" thick and as hard as possible

- will turpentine dissolve or soften it?



I'm reading that turp does not dissolve it, which is ideal so I can mix paint on top of it.

It's a relatively soft plastic and I don't think you can realistically build a uniform, good-looking layer that's 1/8" thick, if that's what you mean. If you need that thickness, high hardness, and nice appearance, I think your best bet is just a sheet of glass or acrylic on top.

It can be used as a protective varnish, but that would be a very thin layer, probably 0.1 mm or something like that.


Does it not self-level under gravity like other resins?

It's solvent-based, so it won't set well in thick layers and it will shrink significantly as the solvent evaporates. You can do thick layers with solvent-free thermoset resins such as epoxy, but epoxy will yellow over time.

Purchase it as crystals and dissolve in acetone or ethanol to the desired concentration. It will self-level based on concentration; allow each layer to evaporate before applying the next.
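
For the arithmetic, a minimal sketch in Python; the 25% w/w target and the acetone density are illustrative assumptions, not numbers from this thread:

    # Hypothetical mixing math for a resin-in-solvent solution.
    # Assumption: acetone density ~0.79 g/mL; pick your own target w/w.
    def solvent_ml(resin_g: float, target_w_w: float,
                   solvent_density: float = 0.79) -> float:
        """mL of solvent needed to reach the target weight fraction."""
        solvent_g = resin_g * (1.0 - target_w_w) / target_w_w
        return solvent_g / solvent_density

    print(solvent_ml(25.0, 0.25))  # 25 g crystals at 25% w/w -> ~94.9 mL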

Oil painter here: this is news to me, and if it doesn't dissolve in Gamsol this is EXACTLY what I've been looking for for about two years.

I followed the link to flexographic ink, and now I'm wondering whether boutique fine art flexography could or should exist. Like lithography, but more plastic.

What's the application?

It is used to strengthen materials. For example, if plaster has crumbled, or the paint on a canvas has become flaky, or wood has rotted, Paraloid B-72 can be used to hold everything together. The issue is that, in practice, it is generally not reversible. Therefore one should always look first at varnishes that can easily be removed and reapplied, but sometimes only Paraloid can hold everything together.

I meant in regard to use with Gamsol, but thanks for the insight.

Which, unfortunately, is still slow, unusable garbage compared to frontier models.

Not at all; it's more than enough for a wide range of tasks. As for speed, that's just a function of how much compute you throw at it, which you actually control, unlike with closed-weights models.

Depends on your hardware.

Yep. QE was a monumental mistake that killed economic mobility. Asset owners vs wage earners.

>massive interest in local AI

Gosh, I just read a really hellish thread on what frontier LLMs will become as they're infected with advertising. I hope Apple manages to break local LLMs (and training?) into the public discourse.


I've been way out of the local game for a while now. What's the best way to run models for a fairly technical user? I was using llama.cpp on the command line before, with bash files for prompts.

Running llama-server (part of llama.cpp) starts an HTTP server on a specified port.

You can connect to that port with any browser, for chat.

Or you can connect to that port with any application that supports the OpenAI API, e.g. a coding assistant harness.
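
If it helps, here's a minimal client sketch using the OpenAI Python SDK against a local llama-server; the model path, port, and prompt are placeholders:

    # Start the server first, e.g.:
    #   llama-server -m ./some-model.gguf --port 8080
    from openai import OpenAI

    # llama-server exposes an OpenAI-compatible endpoint under /v1;
    # no real API key is required for a default local instance.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    resp = client.chat.completions.create(
        model="local",  # llama-server accepts any model name here
        messages=[{"role": "user", "content": "Hello from llama.cpp"}],
    )
    print(resp.choices[0].message.content)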


I haven't been using my Claude sub lately, but I liked 4.6 three weeks ago. Did something change?

Two weeks ago the rolling session usage plummeted to borderline unusable. I'd say I now get a weekly output equivalent to two session windows from before the change.

I didn't experience that at all. I know there are lots of rumblings around here about that, but I'm posting this to show this wasn't a universal experience.

https://marginlab.ai/trackers/claude-code/

Seems like there is evidence for that.


Even just in chats with Opus 4.6 I noticed hitting limits so much faster.

>Especially as local LLM continues to develop so fast.

I'm sorry, is there anything even close to Sonnet, much less Opus, that can be run on a 4080? Or 64 GB of RAM, even slowly?


Well, I reinstalled LM Studio today, some ~10 months since I last used it, just to test Gemma 4. On my PC with 32GB RAM and a 4070 Ti (12GB VRAM), it (Gemma 4 26B A4B Q4_K_M) loads and runs reasonably fast with no manual parameter or configuration tuning, just out of the box on a fresh install, and delivers usable results on the level I remember expecting from SOTA cloud models 12-16 months ago. It handles image input, too. I'm quite impressed with it, TBH. It's something I can finally see myself using, and yay, it even leaves some RAM and VRAM free for doing other stuff.


And the smaller Gemma 4 models can do audio too.

The Qwen models are also really good.


Look for the current crop of local Mixture of Experts models, where it seems like they've made inroads on the O(n^2) context attention cost problem. Several folks have mentioned Qwen, but there are many more of that ilk. Several of them actually score really high on benchmarks. But when I mess with one of them locally by hand myself (I have a 3090), it feels a bit like last year's Sonnet. They don't quite make the leaps of understanding you get from Opus.

* Weird thing of the day: https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...


You can run SOTA local MoE models very slowly by streaming the weights in from a fast PCIe 5 SSD. Kimi 2.5 (generally considered in the ballpark of current Sonnet, not Opus of course) has been measured at 2 tok/s on Apple M5 hardware, which is the best-case performance unless you have niche HEDT hardware with lots of PCIe lanes to attach storage to and figure out how to use that amount of parallel transfer throughput.
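
Back-of-envelope for why it lands in that range; every number below is an assumption for illustration, not a measurement:

    # Rough upper bound on tok/s when streaming MoE weights from an SSD.
    # Assumptions (illustrative): ~32B active params per token, ~4.5
    # bits/param after quantization, ~14 GB/s PCIe 5 sequential reads.
    active_params = 32e9
    bits_per_param = 4.5
    ssd_bytes_per_s = 14e9

    bytes_per_token = active_params * bits_per_param / 8
    print(ssd_bytes_per_s / bytes_per_token)  # ~0.8 tok/s ceiling

    # Keeping shared/frequently-hit experts resident in RAM is what
    # pushes real setups toward the ~2 tok/s figure quoted above.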


Qwen 3.5, Gemma 4


A ~$5000 MacBook can run open source models that are competitive with GPT-3.5 or Sonnet 3. So on nice consumer hardware you can have the original groundbreaking ChatGPT experience running locally.


Last I heard, Claude was the model powering Maven when it bombed that school. Most aren't up to date on that because Anthropic launders their culpability through Palantir. Anthropic is better at optics, not ethics.


No matter what you say, you know the truth yourself: the DoW wanted to go past Anthropic's red lines and they said no, while OpenAI said yes. This is as clear as day to everyone, and you are just lying to yourself to believe something else.


>render video assets without needing FFmpeg on the server.

Help me understand: is this able to do video with less compute, or does it offload compute to client browsers?

