It's a relatively soft plastic and I don't think you can realistically build a uniform, good-looking layer that's 1/8" thick, if that's what you mean. If you need that thickness, high hardness, and nice appearance, I think your best bet is just a sheet of glass or acrylic on top.
It can be used as protective varnish, but that would be a very thin layer, probably 0.1 mm or something like that.
It's solvent-based, so it won't set well in thick layers and it will shrink significantly as the solvent evaporates. You can do thick layers with solvent-free thermoset resins such as epoxy, but epoxy will yellow over time.
Purchase it as crystals and dissolve in acetone or ethanol to the desired concentration. It will self-level depending on concentration; allow each layer to evaporate fully before applying the next.
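If it helps, here's a tiny sketch of the mixing arithmetic for a weight/volume solution. The helper name and the concentration figures are my own illustration, not a recipe from this thread; conservators use a wide range of concentrations depending on the job.

```python
# Hypothetical helper: grams of Paraloid B-72 crystals for a target
# w/v% concentration (grams of resin per 100 mL of solvent).
# Percentages below are illustrative, not a recommendation.

def b72_grams(target_pct_wv: float, solvent_ml: float) -> float:
    """Grams of resin needed for a w/v% solution in solvent_ml of solvent."""
    return target_pct_wv / 100.0 * solvent_ml

# A thin varnish might be mixed dilute, a thicker consolidant stronger:
print(b72_grams(10, 250))  # 25.0 g of crystals into 250 mL of acetone
```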
I followed the link to flexographic ink, and now I'm wondering whether boutique fine art flexography could or should exist. Like lithography, but more plastic.
It is used to strengthen materials. For example, if plaster has crumbled, the paint on a canvas has become flaky, or wood has rotted, Paraloid B-72 can be used to hold everything together. The issue is that generally it is not reversible. Therefore one should always first look at varnishes that can easily be removed and reapplied, but sometimes only Paraloid can hold everything together.
Not at all, it's more than enough for a large range of tasks. As for slow, that's just a function of how much compute you throw at it, which you actually control unlike with closed weights models.
Gosh, I just read a really hellish thread on what frontier LLMs will become as they're infected with advertising. I hope Apple manages to bring local LLMs (and training?) into the public discourse.
I've been way out of the local game for a while now, what's the best way to run models for a fairly technical user? I was using llama.cpp in the command line before and using bash files for prompts.
I didn't experience that at all. I know there are lots of rumblings around here about that, but I'm posting this to show this wasn't a universal experience.
Well, I reinstalled LM Studio today after some ~10 months since I last used it, just to test Gemma 4. On my PC with 32GB RAM and a 4070 Ti (12GB VRAM), it (Gemma 4 26B A4B Q4_K_M) loads and runs reasonably fast, with no manual parameter or configuration tuning - just out of the box, on a fresh install - and delivers usable results on the level I remember expecting from SOTA cloud models 12-16 months ago. And it handles image input, too. I'm quite impressed with it, TBH. It's something I can finally see myself using, and yay, it even leaves some RAM and VRAM free for doing other stuff.
Look for the current crop of local Mixture of Experts models, where it seems like they've made inroads on the O(n^2) context attention cost problem. Several folks have mentioned Qwen, but there are many more of that ilk. Several of them actually score really high on benchmarks. But when I mess with one of them locally myself (I have a 3090), it feels a bit like last year's Sonnet. They don't quite make the leaps of understanding you get from Opus.
You can run SOTA local MoE models very slowly by streaming the weights in from a fast PCIe 5 SSD. Kimi 2.5 (generally considered in the ballpark of current sonnet, not opus of course) has been measured as 2 tok/s on Apple M5 hardware, which is the best-case performance unless you have niche HEDT hardware with lots of PCIe lanes to attach storage to and figure out how to use that amount of parallel transfer throughput.
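For a sense of where that tok/s figure comes from, here's a back-of-envelope sketch: every token has to read the active experts' weights off the SSD, so throughput is roughly bandwidth divided by active-weight bytes. The parameter count, quantization, and SSD speed below are my own illustrative assumptions, not measurements of any particular model or machine.

```python
# Rough tokens/sec estimate when streaming MoE weights from an SSD.
# All inputs are illustrative assumptions: active parameter count,
# bytes per parameter (quantization), and sustained SSD bandwidth
# vary by model and hardware.

def streamed_tok_per_s(active_params: float, bytes_per_param: float,
                       ssd_gb_per_s: float) -> float:
    """Each generated token reads the active experts' weights from disk."""
    bytes_per_token = active_params * bytes_per_param
    return ssd_gb_per_s * 1e9 / bytes_per_token

# ~32B active params at 4-bit (0.5 bytes/param) over a ~14 GB/s PCIe 5 SSD:
print(round(streamed_tok_per_s(32e9, 0.5, 14), 2))  # ~0.88 tok/s
```

Caching hot experts in RAM/VRAM is what pushes real numbers above this naive floor.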
A ~$5000 USD Macbook can run open source models that are competitive with GPT 3.5 or Sonnet 3. So on nice consumer hardware you can have the original groundbreaking ChatGPT experience that runs locally.
Last I heard, Claude was the model powering Maven when it bombed that school. Most aren't up to date on that because Anthropic launders its culpability through Palantir. Anthropic is better at optics than ethics.
No matter what you say, you know the truth yourself: the DoW wanted to go past Anthropic's red lines and Anthropic said no, while OpenAI said yes. This is as clear as day to everyone, and you're just lying to yourself to believe otherwise.
- how do I apply it as a coating? I want it to be ~ 1/6" to 1/8" thick and as hard as possible
- will turpentine dissolve or soften it?