>I write detailed specs. Multifile with example code. In markdown.
Then I hand it over to Claude Sonnet. Even with hard requirements listed, I found that the generated code missed requirements, had duplicate code, or even unnecessary data wrangling (mapping objects into new objects of narrower types when that was never needed), along with tests that fake results and work around failures just to pass.
Stop doing that. Micromanage it instead. Don't give it the specs for the whole system; design the system yourself (you can use it for help doing that), inform it of the general design, but then give it tasks, ONE BY ONE, to flesh it out. Approve each one, ask for corrections if needed, then go to the next.
Still faster than writing each of those parts yourself (a few minutes instead of multiple hours), but much more accurate.
Might as well just write the code yourself at that point. And as a bonus, end up with a much better understanding of the codebase (and way better code)
>Might as well just write the code yourself at that point
"We have this thing that can speed your code writing 10x"
"If it isn't 1000x and it doesn't give me a turnkey end to end product might as well write the whole thing myself"
People have forgotten balance. Which is funny, because the inability of the AI to just do the whole thing end to end correctly is what stands between 10 developers having a job versus 1 developer having a job telling 10 or 20 agents what to do end to end and collecting the full results in a few hours.
And if you do it the way I describe you get to both use AI, AND have "a much better understanding of the codebase (and way better code)".
Yes. The feature is quickly produced slop. Future LLMs will train on it too, getting even sloppier. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.
Was that about a country that had just gone on a genocidal rampage, killed millions all around Europe, and lost the war, after which it was divided into several parts, of which the USSR got to control one, and which still developed into an independent country less than a decade later?
Yes, but you're leaving out the other nine countries the Soviet Union occupied, immediately killing parts of the population to keep its conquests: Poland, Austria's Soviet zone, Hungary, Romania, Bulgaria, Czechoslovakia, Estonia, Latvia, and Lithuania.
By contrast, the US retreated. And also didn't start killing any population.
"Killing their population" as in executing some Nazi collaborators, of which there was no shortage at all, up to full cooperation? Like the ones involved in the Axis alliance and in the eastern front offensives that caused the deaths of millions of their own people?
>And also didn't start killing any population.
Yes, just Korea, Vietnam, Cambodia, and anybody who leaned national-sovereignty/left in Latin America and later the Middle East.
>Meta has about 10% more employees now than they did at the end of 2021.
So? They likely already had too many in 2021.
>They currently have less than half the employees of Google or Apple; only a third of Microsoft.
Technology-wise (hw/sw), they also have 1/10 the internal tech and public product breadth and scope of Google, Apple, or Microsoft. Maybe even 1/50. They do like 4-5 social media and chat apps (that they hardly ever update anymore), and some crappy VR stuff nobody cares for.
It seems you haven't done the due diligence on what the parent meant :)
It's not about "constructing a prompt" in the sense of building the prompt string. That of course wouldn't be costly.
It is about reusing llm inference state already in GPU memory (for the older part of the prompt that remains the same) instead of rerunning the prompt and rebuilding those attention tensors from scratch.
You not only skipped the diligence but confused everyone repeating what I said :(
That is what caching is doing: the LLM inference state is being reused. (Attention vectors are an internal artifact at this level of abstraction; effectively, at this level, the cached state is the prompt.)
The part of the prompt that has already been inferred no longer needs to be part of the input; it is replaced by the cached inference state. And none of this is stored as tokens.
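A minimal toy sketch of the idea (plain Python, not any real inference stack; `PrefixCache`, `encode_token`, and the fake per-token "state" are all illustrative stand-ins): the state for a shared prompt prefix is computed once and kept, so a later call only processes the new suffix. Real engines do this with attention key/value tensors in GPU memory.

```python
# Toy illustration of prefix (KV) caching: the "inference state" for a
# shared prompt prefix is computed once and reused, so only new tokens
# are processed on later calls. Real engines cache attention key/value
# tensors in GPU memory; here the "state" is just a list of fake vectors.

def encode_token(token: str) -> list[int]:
    # Stand-in for the expensive per-token attention computation.
    return [ord(c) for c in token]

class PrefixCache:
    def __init__(self):
        self.prefix: list[str] = []       # tokens already processed
        self.state: list[list[int]] = []  # cached per-token "KV" state

    def run(self, tokens: list[str]) -> tuple[int, list[list[int]]]:
        # Find how many leading tokens match the cached prefix.
        n = 0
        while (n < len(self.prefix) and n < len(tokens)
               and self.prefix[n] == tokens[n]):
            n += 1
        # Drop stale state past the shared prefix, compute only the suffix.
        self.state = self.state[:n]
        for tok in tokens[n:]:
            self.state.append(encode_token(tok))
        self.prefix = list(tokens)
        computed = len(tokens) - n  # tokens actually (re)processed
        return computed, self.state

cache = PrefixCache()
cache.run(["system:", "you", "are", "helpful"])  # full pass, 4 computed
computed, _ = cache.run(["system:", "you", "are", "helpful", "user:", "hi"])
print(computed)  # 2 -- only the new suffix was processed
```

The point of contention above maps onto this sketch: "constructing the prompt string" is cheap either way; what caching saves is re-running `encode_token` (i.e., rebuilding the attention state) for the unchanged prefix.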
If you have asd or adhd (not uncommon in programmers) it can be a definitive minus for well-being. But even if you don't, between office politics and idiotic corporate mandates, it can be draining.
Especially since, for the average office worker, you originally had an office of your own or at worst shared with one or two other people; then, starting from the '80s, you got a cubicle; then we got the hellish open-plan offices. You're asked to focus on a screen and a codebase in an environment full of distractions and activity around you.
And that's before we added any commute, and preparing for the commute, which can easily eat an additional 1-2 hours of your day, every day.
This is me. I'm not anti-social by any means, and I like people, but constant chatter around me drives me nuts. So I put my headphones on and now I'm unapproachable. It's tough.
This. And on top of that, headphones at office suck, at least for me.
They don't drown out enough, even with large, well-insulated cups. So you add noise cancelling, which drowns out more, but not everything. In fact it leaves some very annoying stuff that is suddenly actually audible, vs. being masked by the ambient noise without the headphones. And having noise cancelling on for 8 hours straight, for days in a row, actually creates significant pain in my ears. The next idea is music to drown out what's left, but that just distracts me too.
Remote is the only good way.
In fact, being remote means I have a "social interaction budget" for the family again, vs. it all having been used up during work hours (being an introvert).
> The next idea is music to drown out what's left but that just distracts me too.
You could try using white noise, either an app or if you have a Mac or iPhone they have native white noise generation (Accessibility -> Hearing -> Background Sounds iirc)
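For the DIY route, white noise is also trivial to generate yourself. A small sketch using only the Python standard library (the filename `noise.wav` and the amplitude/duration values are arbitrary choices):

```python
# Generate a short mono white-noise WAV using only the standard library.
import random
import struct
import wave

RATE = 44100      # samples per second (CD quality)
SECONDS = 2
AMPLITUDE = 8000  # well below the 16-bit max (32767) to avoid clipping

# White noise: each sample is independent and uniformly random.
samples = [random.randint(-AMPLITUDE, AMPLITUDE) for _ in range(RATE * SECONDS)]

with wave.open("noise.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit signed samples
    w.setframerate(RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Loop the resulting file in any player; uniform noise sounds a bit harsh, so pink/brown noise apps may be easier on the ears for all-day use.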
Maybe I'll have to try something new at some point. Fair. It's been a while.
I just googled this and what I found was this for example:
The Sony WH-1000XM3 is much better at canceling noise above 100Hz than the Bose is. However, because the Bose QC35 II can block out more sub-100Hz noise, it does a better job at killing unwanted car engines and low rumbles.
So it sounds like it's just going to be a different kind of noise that still comes through: instead of still hearing voices (but more clearly), I might hear more of the AC hum. Sounds like a wash, unfortunately. And one the company won't pay for ;)
One thing that immediately turned me off when finding the Sonys on Amazon: It says "Alexa". Sorry, immediate and 150% no thank you, see you, bye.
No, just try a pair from Amazon and return if you need to. I can mow the lawn with these on and it's nearly silent. There's a feature to recalibrate for air temp and ambient noise (use this every time you put them on). They are really good.
Wearing over-the-ear headphones all day can contribute to cranial pressure, tire out your jaw muscles, and strain your temporomandibular joint.
It can also encourage ear infections and clogging of the Eustachian tubes, because covering or plugging your ears slows down the self-cleaning process.
At first you won't notice, but after a decade, these problems will slowly creep up on you and fixing them is very expensive, because you're basically slowly deforming your bones.
I personally wouldn't let kids/teenagers use headphones that apply any amount of noticeable pressure.
Yep, ADHD and God knows what else here. Oddly enough, I am too gregarious, and it often gets me into a lot of trouble. So by being WFH, I am not surrounded by distractions, and I am much more productive.