Hacker News | ekelsen's comments

https://www.baen.com/Chapters/1439133476/1439133476___5.htm

Arthur C. Clarke's short story "Superiority" describes this dynamic perfectly.


Ahhhh the AI writing! The goggles, they do nothing!

Maybe the author should be more worried about AI letting us be lazy and making us forget how to write.


You're wrong.


It's hard to even tell any more. So many people are using AI that even the non-AI-using people are starting to write like that.


Read the entries in the blog that predate AI; the style is very clearly different.

I would be shocked if someone consciously or unconsciously adopted AI style so perfectly, so quickly. Changing your style is not easy, and if you're capable of it, this is probably not the style you'd pick.


I'm willing to bet hard money.

I went to some of the oldest writing on the blog; the style is completely different.


Sorry but your comment is off topic and not in the spirit of discussing the article, hence my downvote.

I’m sure you’re otherwise a lovely human, but man.. you gotta move on from this.


It's absolutely a discussion of the article. It's literally about how it was written.


The project is cool, but the LLM generated blog bothers my brain.


I find your (and my!) reaction to LLM generated text fascinating. It has a distinct smell, and I honestly can't really put words to why I find it repellent, I just know that I do.


It's overly verbose, the phrasing and sentence structure are very unusual for the topic, and it has the classic LLM slop tropes.


Are you sure this is AI? Normally when I read AI written stuff I zone out because it can go entire paragraphs without saying anything. The sentences here seem short and to the point.

Their previous posts, published before ChatGPT, seem similar enough. Although they have way more em dashes, and this one has none, almost like they were removed on purpose... lol

I don't know what is real anymore.


I'm fairly sure, not because I have proof, but because of all the "not this, but that!" clauses.

If you spend time generating text with LLMs, there is a style that you learn to recognize pretty quickly.

Also, to be clear -- I'm not saying that we shouldn't use LLMs to help us produce the best text/prose we can -- but letting them just generate a lot of the text doesn't lead to the best outcome imo.


I tend to feel the same way, although I'm actively trying to move past it. I'm OK at writing, but thanks to a combination of educational background and natural aptitude, I'm darned near illiterate at higher math. That puts me behind the 8-ball as an engineer, even though I've been reasonably successful at both hardware and software work. I tend to miss tricks that are obvious to my peers, but when I do manage to come up with something useful, I'm able to communicate with my peers and connect with my customers. While I don't need or want LLM assistance with writing, I can't deny that recent models have been a godsend for getting me out of trouble in the math department.

Now, here's somebody who's clearly strong on the quantitative side of engineering, but presumably bad at communicating the results in English. I consider both skill sets to be of equal importance, so what right do I have to call them out for using AI to "cheat" at English when I rely on it myself to cover my own lack of math-fu? Is it just that I can conceal my use of leading-edge tools for research and reasoning, while they can't hide their own verbal handicap?

That doesn't sound fair. I would like to adopt a more progressive outlook with regard to this sort of thing, and would encourage others to do the same. This particular article isn't mindless slop and it shouldn't be rejected as such.

Besides all that, before long it won't be possible to call AI writing out anyway. We can get over it now or later. Either way, we'll have to get over it.


> before long it won't be possible to call AI writing out anyway

Once we're there, we're there. Tree falling in a forest with no one around, etc. Once that happens then I'll stop reacting badly to it, but it hasn't yet (not without careful prompting anyway).


I cannot even figure out what the "modern" part is. Like, "netlist aware tracing" ... sounds like state of the art from the 80s at best.


+1


Same thing was true of his interview with Tony Blair. It was such a night and day difference between the two. Tony's skill, knowledge and polish saved the interview and made it enjoyable despite the interviewer.


I think it commutes even when one or both inputs are NaN? The output is always NaN.


NaNs are distinguishable. /Which/ NaN you get doesn't commute.


I guess at the bit level, but not at the level of computation? Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.


> Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.

Why? This is well specified by IEEE 754. Many runtimes (e.g. for Javascript) use NaN boxing. Treating floats as a semi-arbitrary selection of rational numbers plus a handful of special values is /more/ correct than treating them as real numbers, but treating them as actually specified does give more flexibility and power.


> Many runtimes (e.g. for Javascript) use NaN boxing.

But I've never seen them depend on those NaNs surviving the FPU. Hell, they could use the same trick on bit patterns that overlap with valid float values if they really wanted to.
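To make the trick concrete, here's a minimal sketch of nan-boxing in Python. This assumes a standard 64-bit IEEE 754 double, and the `nanbox`/`unbox` helper names are mine, not from any real runtime:

```python
import math
import struct

# A quiet NaN: exponent all ones (0x7FF), quiet bit set, low 51 bits free.
QNAN_BITS = 0x7FF8_0000_0000_0000
PAYLOAD_MASK = (1 << 51) - 1

def nanbox(payload: int) -> float:
    """Hide a small nonzero integer in the payload bits of a quiet NaN."""
    assert 0 < payload <= PAYLOAD_MASK
    return struct.unpack("<d", struct.pack("<Q", QNAN_BITS | payload))[0]

def unbox(x: float) -> int:
    """Recover the payload from the raw bits of the double."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits & PAYLOAD_MASK

boxed = nanbox(12345)
assert math.isnan(boxed)      # it still looks like a NaN to the FPU
assert unbox(boxed) == 12345  # and the payload round-trips
```

The point being made here: the boxed value only ever moves through loads, stores, and bit operations; it is never fed to the FPU's arithmetic units, so NaN payload-propagation rules never come into play.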


Can you show me where in the IEEE spec this is guaranteed?

My understanding is the exact opposite: that it allows implementations to return any NaN value at all. It need not be one of the inputs.

It may be that JavaScript relies on it and that has become more binding than the actual spec, but I don't think the spec actually guarantees this.

Edit: actually it turns out nan-boxing does not involve arithmetic, which is why it works. I think my original point stands, if you are doing something that relies on how bit values of NaNs are propagated during arithmetic, you are on shaky ground.
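The shaky ground is easy to demonstrate. A quick Python sketch (CPython on typical 64-bit hardware; the helper names are my own): you can construct two NaNs with distinct payloads, but after an arithmetic operation all you're guaranteed is *some* quiet NaN back.

```python
import math
import struct

def f64_from_bits(bits: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def bits_from_f64(x: float) -> int:
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Two quiet NaNs with different payloads.
a = f64_from_bits(0x7FF8_0000_0000_0001)
b = f64_from_bits(0x7FF8_0000_0000_0002)
assert math.isnan(a) and math.isnan(b)
assert bits_from_f64(a) != bits_from_f64(b)  # distinguishable at the bit level

c = a + b
assert math.isnan(c)  # guaranteed: the result is a NaN
# NOT guaranteed: which payload (if either) c carries. On x86-64 SSE you
# typically get one operand's payload back; on RISC-V you get the canonical NaN.
```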


See 6.2.3 in the 2019 standard.

> 6.2.3 NaN propagation

> An operation that propagates a NaN operand to its result and has a single NaN as an input should produce a NaN with the payload of the input NaN if representable in the destination format.

> If two or more inputs are NaN, then the payload of the resulting NaN should be identical to the payload of one of the input NaNs if representable in the destination format. This standard does not specify which of the input NaNs will provide the payload.


As the comment below notes, the word "should" means it is recommended, but not required. And there are indeed platforms that do not implement the recommendation.


Oh right sorry. That is confusing.


Don't have the spec handy, but specifically binary operations combining two NaN inputs must result in one of the input NaNs. For all of Intel SSE, AMD SSE, PowerPC, and ARM, the left hand operand is returned if both are signaling or both are quiet. x87 does weird things (but when doesn't it?), and ARM does weird things when mixing signaling and quiet NaNs.


I also don't have access to the spec, but the people writing Rust do, and they claim this: "IEEE makes almost no guarantees about the sign and payload bits of the NaN"

https://rust-lang.github.io/rfcs/3514-float-semantics.html

See also this section of Wikipedia: https://en.wikipedia.org/wiki/NaN#Canonical_NaN

"On RISC-V, most floating-point operations only ever generate the canonical NaN, even if a NaN is given as the operand (the payload is not propagated)."

And from the same article:

"IEEE 754-2008 recommends, but does not require, propagation of the NaN payload." (Emphasis mine)

I call bullshit on the statement "specifically binary operations combining two NaN inputs must result in one of the input NaNs." It is definitely not in the spec.


Blame the long and confusing language in the spec:

> For an operation with quiet NaN inputs, other than maximum and minimum operations, if a floating-point result is to be delivered the result shall be a quiet NaN which should be one of the input NaNs.

The same document says:

> shall -- indicates mandatory requirements strictly to be followed in order to conform to the standard and from which no deviation is permitted (“shall” means “is required to”)

> should -- indicates that among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or that a certain course of action is preferred but not necessarily required; or that (in the negative form) a certain course of action is deprecated but not prohibited (“should” means “is recommended to”)

i.e. it is required to be a quiet NaN, and recommended to be one of the input NaNs.


Thanks for the direct evidence that the output NaN is not required to be one of the input NaNs.


Unless you compile with fast-math ofc, because then the compiler will assume that NaN never occurs in the program.


private jets already solve these problems.


Which is already a bad signal for the article's argument. We already have a way to significantly reduce that travel time, and it's a niche.

Could Boom Supersonic or whoever actually survive selling only to a hundred Taylor Swifts? How are they going to keep the lights on for the 30 years it takes those jets to fully saturate the market?


But for the private jet market, the reduction would be huge. They're already paying a premium to save time. The top end will pay an even larger premium to save even more time.

I agree with you that for commercial flights, anything other than super long haul (which is technically very hard), the time-saving advantages are much less compelling.


sometimes not nearly so pleasant for them.


During the French Revolution, they tried to make a right angle 100 degrees and even computed new trig tables for this new standard. It obviously did not catch on :)

https://en.wikipedia.org/wiki/Gradian


There's no reason you can't have 400 degrees in a circle and therefore 100 for a right angle.

It's a degree scale: you can choose any number you want.
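Concretely, gradians are just another linear scale on the circle, and the conversions are one-liners. A quick sketch (the helper names are my own):

```python
import math

def deg_to_grad(deg: float) -> float:
    # 360 degrees and 400 gradians both cover the full circle.
    return deg * 400.0 / 360.0

def grad_to_rad(grad: float) -> float:
    # 200 gradians = pi radians (half a circle).
    return grad * math.pi / 200.0

assert deg_to_grad(90) == 100.0               # a right angle is 100 gradians
assert deg_to_grad(360) == 400.0              # full circle
assert math.isclose(grad_to_rad(200), math.pi)  # half circle, either way
```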


Indeed, gradians are a scale where a circle is divided into 400 equal parts. It really fucked me up a few times when I got a new calculator and wasn’t paying attention to what the little “grad” meant.


But I can't subdivide 400 in as many ways as 360. Think about the pie industry. They could be put out of business!!


I usually want to cut pies into 14 pieces. Some might want 11 or 13. (17 is just too many.) I demand that we implement a system where a circle is 2 * 3 * 4 * 5 * 7 * 3 * 11 * 13 = 360360 degrees, so that we can cut pies evenly at anywhere from 2 to 15 slices. If my baker cuts a slice at 25739 degrees, I want a refund! (I'll keep the pie, because the pie is obviously useless.)

(720720 might be OK too so we can cut 16 pieces, but honestly, if you're cutting 16 pieces, you're not going to measure. You're just going to divide pieces in half until you have 16. 360360 is the future.)
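The arithmetic in the joke actually checks out, and it takes only a few lines to verify (a throwaway sketch):

```python
# 360360 = 2^3 * 3^2 * 5 * 7 * 11 * 13, so it divides evenly
# into every slice count from 2 through 15...
assert all(360360 % n == 0 for n in range(2, 16))

# ...but not 16, which is what doubling to 720720 = 2 * 360360 buys you.
assert 360360 % 16 != 0
assert 720720 % 16 == 0

# And the baker really did short you: 14 even slices are 25740 "degrees" each.
assert 360360 // 14 == 25740
```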


Of course that's true, but that doesn't mean you should.


The Indiana pi bill mandated certain mathematical values be changed to the wrong value.

https://en.wikipedia.org/wiki/Indiana_pi_bill


“The bill, written by a physician and an amateur mathematician, never became law.”


Some pocket calculators from not too long ago supported this unit for some reason, along with radians and degrees. It's the third option on the "DRG" button.


Whenever I'm late to a meeting I blame it on the french revolutionary calendar.


Lousy Smarch weather


Google has the revenue to cover their spending and then some.

