Absolutely. Our heuristics for judging human output are useless with LLMs. We can either trust LLM output blindly or tediously pick over every word (guess which one people actually do). I've watched this cause havoc over and over at my job (I work with many different teams, one at a time).
AI signatures don't mean low quality; they just mean AI. And humans do use them (I have always used the common AI signatures myself). And yes, humans produce good-looking garbage, but much more commonly they produce bad-looking garbage. This is all tangential to the point.
> Our heuristics for judging human output are useless with LLMs.
We used to call the negative side of these heuristics "code smells", but I see no one has used that term yet in these comments. Code smells are what the post is referring to, and what LLMs get rid of.
The name of the brand is "Massey Ferguson" not "Massy Fergusson".
The reason I know that is not that I'm a farmer. It's that 20 years ago a bunch of friends and I wrote and performed a parody of the Gainsbourg/Bardot song "Harley Davidson" where the motorbike brand was replaced with the tractor one.
One problem is that when people delegate tasks to AI, they don't themselves learn anything from doing the task -- not just in the general sense of personal improvement, but in the very concrete sense of "what is it that was produced".
Before AI, when someone showed you a presentation or an Excel sheet, even if it was complete horseshit that they had made up, they knew what was in it: they knew more about it than you, by definition.
Now, not so much; people output things they know nothing about, and when they show it to you they are discovering it just as you are.
I have a tech support buddy who, while good, allows himself more arrogance than his skills deserve. I asked him what CRC errors were, and he said to ask AI, kindly providing me its output:
> CRC (Cyclic Redundancy Check) errors on Wi-Fi indicate that data frames were corrupted during transmission, often caused by high electromagnetic interference (EMI), physical layer issues, or faulty hardware. They cause packet loss, slow speeds, and intermittent connectivity. Common solutions include replacing cables, reducing interference, updating drivers, and adjusting radio power
This is all well and good except: read the response carefully. It never actually says what a CRC error is. This is the average AI user: they literally work on, build, and fix things without the slightest clue about what it is they're actually working on.
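For anyone in the same position as my buddy, the missing definition is simple: a CRC is a checksum the sender computes over a frame and appends to it; the receiver recomputes it and, on mismatch, reports a "CRC error" and drops the frame. A minimal sketch (the framing here is illustrative, not the actual 802.11 frame layout, which uses a CRC-32 FCS with its own byte conventions):

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    # Sender side: append a CRC-32 checksum of the payload to the frame.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    # Receiver side: recompute the CRC over the payload and compare it
    # to the transmitted one. A mismatch is reported as a "CRC error".
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received_crc

frame = frame_with_crc(b"hello wifi")
print(check_frame(frame))  # intact frame: True

# Interference flips a single bit in transit:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(check_frame(corrupted))  # False -- this is what gets counted as a CRC error
```

The AI's answer about interference and packet loss is downstream of this mechanism: corrupted frames fail the check, get discarded, and force retransmissions.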
But you also spend less time on those tasks, allowing you to do more of them. And if you still spend the same amount of time, use AI as an assistant, and do review the AI's work, then you can actually learn from it faster.
But I understand that many people don't do that and just finish their task with AI and then don't do anything anymore.
It's from the ground up at this point. I'm in my last Master's course at a very well known and expensive private university in the Northeast. When we have presentations it sometimes feels like maybe 10 to 20% of us actually know the material in our slides. I'm all for generating templates and whatnot, but when every bullet point has an em dash and you are stumbling over your words, reading the sentences verbatim and having a hard time expanding on them... that is not someone worthy of being a Master in their field, IMO. But these people pay full tuition, so I'm assuming they graduate and are all working amongst us.
> multiple scenes that specifically required a very thin depth of field
The images at the end of the post are indeed amazing, but I find it funny that we're so obsessed with shallow depth-of-field as a sign of "quality" and/or meaning.
For most of the history of moving pictures, cinema had the exact opposite problem: it looked for the deepest depth-of-field possible in order to make every part of the image count and not waste it to blurriness.
> we're so obsessed with shallow depth-of-field as a sign of "quality" and/or meaning.
Nicco here. I didn't use a shallow depth of field here for either reason. I wanted it because all of those scenes are memories of years ago compared to the main events. Thus, I wanted to give the feeling of details blurring out as memories fade. By contrast, I shot the main events at ~f8 on the Helios, so the background is quite sharp.
The advances in modern AF and focus-pulling systems have truly led to a world of consequences in amateur and even professional filmmaking. In a world where anyone can take half-decent video with the phone they always have, shallow depth of field is a sign of "I had dedicated hardware to take this". The chase for toneh: https://www.youtube.com/watch?v=aQ8VodC19-g
Not only do many see it as a sign of quality, it lets you ignore the set and stage more than ever. Imperfections? Anomalies? Bah, they're blurred out of recognition. Of course it can still be used mindfully and tastefully, but such nuance is ever more rare.
Most of my cameras, digital and film alike, are medium format. While I'm more of a photographer than someone who does much with video, it pains me to have to remind people regularly that just because I can get insanely shallow DoF with the creamiest bokeh they've seen doesn't mean it always makes sense to. There's a story to be told with foregrounds and backgrounds, and with how they can be used to guide the viewer.
> we're so obsessed with shallow depth-of-field as a sign of "quality" and/or meaning
It's not necessarily a sign of "quality", but it is something we see less often, which makes it more interesting. Phone cameras can't do optically shallow depth of field, for example; their portrait modes simulate it computationally.
And of course, the human eye also has a limited DoF range. It is interesting to see things in a way that we cannot directly perceive.
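The physics behind that scarcity is straightforward: for a subject well inside the hyperfocal distance, depth of field scales roughly as 2·N·c·s²/f², so a phone's tiny focal length yields a far deeper sharp zone than a full-frame lens at the same framing. A back-of-envelope sketch, with hypothetical but plausible numbers (a ~5 mm phone main camera vs. a 58 mm full-frame lens, both focused at 2 m; circle-of-confusion values assumed for each sensor size):

```python
def dof_mm(f_mm: float, N: float, subject_mm: float, coc_mm: float) -> float:
    # Thin-lens approximation of total depth of field for subject
    # distances well inside the hyperfocal distance:
    #   DoF ~ 2 * N * c * s^2 / f^2
    # where N is the f-number, c the circle of confusion, s the
    # subject distance, and f the focal length.
    return 2 * N * coc_mm * subject_mm**2 / f_mm**2

# Assumed example values, not measurements:
phone = dof_mm(f_mm=5.0, N=1.8, subject_mm=2000, coc_mm=0.002)
fullframe = dof_mm(f_mm=58.0, N=2.0, subject_mm=2000, coc_mm=0.03)
print(round(phone))      # ~1152 mm of in-focus zone
print(round(fullframe))  # ~143 mm
```

Roughly a meter of sharpness from the phone versus about 14 cm from the full-frame lens, which is why creamy backgrounds still read as "dedicated hardware".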
Whatever is harder to achieve has prestige due to rarity. When the rarity goes away, the lingering prestige makes the item highly popular before the prestige itself fades. Then the older form becomes rarer and is valued by some, not quite as prestige but as a sort of discerning choice.
White bread did this, as did purple dye, and synthetic materials.
It’s annoying because it’s scarcity for scarcity’s sake. The reality is that shallow depth of field constricts the actors and makes them unable to perform naturally. Blurring out the background is also just hyper-convenience for the audience, IMO: you’re telling them exactly where to look. It’s visual handholding.
The only reason why people think this is valuable is that it’s scarce, and scarcity is a terrible metric for art
Technically the images look great, very impressive. Production-wise I can also see how this could be useful for low-budget interior dialogue scenes where you don't want the set dressing to distract. It really draws focus to the actors and lets the director paint a more impressionistic backdrop.
The exterior shots I've got more mixed feelings about. I think these shallow lenses work best when you have a very controlled backdrop that can be deliberately staged. Using it in a wide outdoor shot feels like a real risk unless you're doing some Kubrickian blocking to make sure everyone is arranged just-so. Or you're making them stand stock-still.
Another question is, how is Iran going to enforce this?
It doesn't seem that Iran still has a navy that could board ships and force them to stop without actual violence.
What happens if a tanker decides to not pay and chance it? Will Iran sink it? That would constitute an act of war (a reprise of the war). Hard to pull off politically (even if it's easy to do technically).