Hacker News | tdeck's comments

Does this work in the browser? How will paths to different resources used by the web app work?

Works with curl. Maybe there's a case to either build a proxy for UDS that exposes them to a browser, or open a feature request with browser maintainers to support UDS.
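The proxy idea above can be sketched with nothing but the Python standard library: serve HTTP on a Unix domain socket, then bridge an ordinary TCP port to it so a browser could reach it at `http://127.0.0.1:<port>/`. The socket path, handler, and port below are made up for illustration; in practice people often reach for `socat` (e.g. `socat TCP-LISTEN:8080,fork UNIX-CONNECT:/tmp/app.sock`) or test directly with `curl --unix-socket /tmp/app.sock http://localhost/`.

```python
# Minimal sketch of a TCP-to-UDS bridge. All names/paths are illustrative.
import http.server
import os
import socket
import socketserver
import tempfile
import threading
import urllib.request

SOCK = os.path.join(tempfile.mkdtemp(), "app.sock")

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        # BaseHTTPRequestHandler expects a (host, port)-style client address,
        # but a UDS accept() yields a path string, so substitute a dummy pair.
        conn, _ = super().get_request()
        return conn, ("unix-socket", 0)

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a unix socket"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# The "web app" listening only on a Unix domain socket.
threading.Thread(
    target=UnixHTTPServer(SOCK, Handler).serve_forever, daemon=True
).start()

def forward(src, dst):
    # Copy bytes one way until the source closes, then half-close the sink.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def proxy_uds_to_tcp(sock_path):
    # Listen on an ephemeral localhost TCP port; pipe each connection to the UDS.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen()

    def accept_loop():
        while True:
            client, _ = listener.accept()
            upstream = socket.socket(socket.AF_UNIX)
            upstream.connect(sock_path)
            threading.Thread(target=forward, args=(client, upstream), daemon=True).start()
            threading.Thread(target=forward, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener.getsockname()[1]

port = proxy_uds_to_tcp(SOCK)
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(body.decode())  # hello from a unix socket
```

A browser pointed at that localhost port would see an ordinary HTTP origin, which is exactly why a proxy sidesteps the missing-UDS-support problem.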

What is the benefit of using HTTPS for this particular use case?

Some browser functions only work over HTTPS; localhost is the exception. So if you change localhost:5173 to myapp.vibe, it needs a valid certificate.

And localhost being the exception is often quite painful - I've run into several projects that worked just fine on localhost, and then were a pain in the neck to convert to run in secure contexts.

There's so little detail here. And for some reason the article makes a false dichotomy between being genuinely "warm and friendly all the time" on one hand, and being rude on the other.


Eternal sloptember?

Heavens forfend

I can't say it isn't, but I have been writing code since about 2004 and this is the first time I've become aware that this is a thing.

Is this the "prompt engineering" that I keep hearing will be an indispensable job skill for software engineers in the AI-driven future? I had better start learning or I'll be replaced by someone who has.

If you aren't telling your computer to ignore goblins, you're going to be left behind.

I'm goblinmaxxing myself.

Is GPT5.5 goblingooning fr?

We’re definitely not escaping the permanent goblin underclass with this one.

permanent goblin underclass

I wonder how much energy OpenAI spends each day on pink elephant paradoxing goblins. A prompt like that will preoccupy the LLM with goblins on every request.

That is a great point. The machine consumes energy adding goblins to every response, and then consumes energy removing goblins from every response. That is a great attack vector. If (wild imagination ensues) an adversary could do that x100 (goblins, potatoes, dragons, Lightning McQueen, etc.), they could render the machine useless/uneconomical from the standpoint of energy consumption.

In Terminator 7, everyone will carry goblin plush toys to defend themselves against the machines.

I mean probably not or they wouldn’t have shipped it, right?

Prompt engineering is mostly structured thought. Can you write a lab report? Can you describe the who, what, when, where, and why of a problem and its solution?

You can get it to work with one-off commands or specific instructions, but I think those will be seen as hacks, red flags, prompt smells in the long term.


If I could do those things, I wouldn't be using an LLM to write for me, now would I?

You don’t let the LLM write prose for you, you get it to translate natural language into code somewhat coherently.

In this instance I'm assuming most of the "goblin" references were in prose rather than in source code, so the goal of this particular prompt edit was directed toward making the prose better.

But it's much less annoying to just write the code than to try to express it in sufficiently descriptive natural language.

The converse is true for me, so YMMV.

skill issue

> Or I'll just buy as little as possible and buy used whenever possible.

You're forgetting that consuming newly created products is the only way to express yourself or gain a modicum of fleeting happiness. Also, if you're not consuming, you can't "vote with your dollar" which is of course the most effective way in history for ordinary people to hold the powerful accountable.


In high school my Spanish teacher told us that Crema Catalana was the Spanish name for Creme Brulee.

Last time I had to build an ORDER BY clause in MySQL, it didn't support query parameters in prepared statements, which is probably how this happens. It's not an excuse at all but the standard path of "just throw a ? (or whatever) in there and use bound params" doesn't work for order by (or at least it didn't at some time in the recent past). You would end up having to concatenate strings somehow or other.
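The limitation described above is general: prepared-statement placeholders bind *values*, never identifiers, so `ORDER BY ?` quietly orders by a constant string instead of the column. A common workaround is an allowlist of known-safe column names. Here is a sketch using Python's `sqlite3` (not MySQL, but placeholders behave the same way); the table, columns, and helper are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 30), ("bob", 25), ("carol", 35)])

# "ORDER BY ?" binds the string 'age' as a constant expression,
# so every row compares equal and no real sorting happens.
unsorted = conn.execute(
    "SELECT name FROM users ORDER BY ?", ("age",)).fetchall()

# Workaround: validate the user-supplied column against an allowlist,
# then interpolate the known-safe identifier into the SQL string.
ALLOWED_SORT_COLUMNS = {"name", "age"}

def fetch_sorted(column):
    if column not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {column}")
    return conn.execute(f"SELECT name FROM users ORDER BY {column}").fetchall()

print(fetch_sorted("age"))  # [('bob',), ('alice',), ('carol',)]
```

String concatenation is still happening, but only with identifiers the application itself supplied, which is what keeps the pattern safe from injection.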

Without all the unnecessary headings, "color", and constant "Let. That. Sink. In."-esque recaps this would be 2 paragraphs. Just let it be! Readers don't need the slop.

Can't we just normalise publishing whatever you put into the LLM instead? I'm sure the author typed things into their favourite AI assistant that regurgitated that long form, LLM-speak style version. I'm sure the original prompt has all the relevant content and was a lot more pleasant to read.

Can't wait for this style of prose to become an incredibly embarrassing faux-pas.


> Can't we just normalise publishing whatever you put into the LLM instead?

Something that has cropped up from time to time in the art world since at least the Dadaists was the idea that rather than distribute the artefact, you distribute the instructions to construct the artefact.


I think if we want to combat slop we have to be honest about why it happens, and the honest truth is that a blog post whose whole content was "I was only using the terminal and Claude code for a couple hours and it drained my whole battery wtf write me an article about this" would not be gaining traction. Some amount of polish and effort is needed, but we can still avoid the most annoying tropes.

Sounds like a viral Tweet. Why wouldn't it be able to gain traction?

It wouldn't gain traction as a blog post, which is my point. Maybe this person would rather have a blog than engage on Twitter.

I hate hate hate the style of writing LLMs make.
