Hacker News | tempest_'s comments

They do say that hearing loss in old age can speed degradation, or maybe it is just correlated.

It definitely speeds the effects of dementia and similar because your brain insists on filling in what you didn’t hear and it tends to be wildly negative, at least in my two experiences of having gone through it.

> Mental atrophy due to less learning/thinking, isolation, loss of meaning and purpose happens first.

Except early-onset Alzheimer's happens, and it happens to plenty of people for whom none of those are true.


I mentioned this could be a possible falsification of the idea. It's also possible there are multiple causes and the modality I mentioned is a cause for some. I'm not sure. There are definitely cases where isolation contributes to cognitive decline.

Exactly. My mom lost her job because of early onset. She was very social, read tons of books, etc. Now I'm happy she at least still knows who I am, but she can't put a sentence together.

Example: Claude Shannon

I feel like big / old companies thrive on process and are bogged down in bureaucracy.

Sure, there is a process to get a library approved, and that abstraction makes you feel better, but the guy whose job it is to approve it is not going to spend an entire day reviewing a lib. The abstraction hides what is essentially an "LGTM"; it just takes a week for someone to check it off their Outlook to-dos.

Maybe your experience is different.


I use CC, and I understand what caching means.

I have no idea how that works with an LLM implementation, nor do I actually know what they are caching in this context.


They are caching internal LLM state, which is in the 10s of GB for each session. It's called a KV cache (because the internal state that is cached are the K and V matrices) and it is fundamental to how LLM inference works; it's not some Anthropic-specific design decision. See my other comment for more detail and a reference.
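For a rough picture of what is cached, here is a toy single-head sketch (illustrative only, not Anthropic's actual implementation): each generated token appends one key row and one value row to the cache, and later steps attend over the cached rows instead of recomputing K and V for the entire prefix.

```python
import numpy as np

def attention_step(q, K_cache, V_cache, k_new, v_new):
    """One decoding step using a KV cache.

    Instead of recomputing K and V for the whole prefix on every
    new token, append this token's k/v to the cache and attend
    over everything cached so far.
    """
    K_cache.append(k_new)   # cached keys grow by one row per token
    V_cache.append(v_new)   # cached values likewise
    K = np.stack(K_cache)   # (seq_len, d)
    V = np.stack(V_cache)
    scores = K @ q / np.sqrt(len(q))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V      # attention output for the new token
```

The cached state exists per layer and per head and grows with context length, which is why a long session's cache runs to gigabytes.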

CC can explain it clearly, which is how I learned about how the inference stack works.

A lot of people are provided their access through work.

They don't actually pay the bill or see it.


Plenty of options for putting auto steer on a dumb tractor already exist.

Cheap ones too -- aliexpress has them.

But there's more to agtech than driving a tractor around; a lot of what these big integrated systems do (at the high end) is very data-driven: determining where and how to plant, irrigate, fertilize, etc. There's a lot of integration work beyond just making the tractor drive.


35 years in the tech industry has taught me one thing: long-standing incumbents are almost always more clueless and more full of shit than you think. What they do isn't as hard as they claim, and you can probably do better in a fraction of the time they spent, both because you don't have legacy systems to worry about and because technology and tooling have moved on.

Incumbents thrive on the myths about what they do being hard and impossible to replicate.

Yes, it is a lot of work to replace what you can get off the shelf today. But it isn't like the basic tech itself is all that hard to replicate step by step, if you accept that it takes time and that the first N development stages will give you something less feature-rich and polished. And if it is made open source, interoperability becomes easier to address.

Perhaps some of the analysis tools/services you can buy today will be hard to replicate, but I doubt it. And it is better to have slightly suboptimal results for a couple of seasons than to be on the receiving end of a hostage situation.

But yes, it is certainly a huge effort to get what you actually need.


The Pareto principle applies. For highly complex systems it’s easy to build most of what the incumbents have. It’s the last 20% where it is hard to catch up just because the incumbents have decades of a head start and have the momentum. And even more so here because it’s not just software. It’s very science and hardware heavy.

For farming, it’s even more tough because the market has a really uneven distribution. Usually the best place to tackle huge incumbents is in the midmarket. They’re big enough to need your automation, but they’re small enough to take a risk to save some money, and the features you haven’t built yet aren’t blockers for them.

But there’s basically no midmarket farming, all farms are pretty much either really big or really small.


Another clue is how hard they litigate. "Can't innovate, litigate" is a phrase for a reason.

> But there's more to agtech than driving a tractor around; a lot of what these big integrated systems do (at the high end) is very data-driven: determining where and how to plant, irrigate, fertilize, etc.

How difficult is this to implement outside of big ag-tech? I feel that a community of experienced farmers and programmers (or programmer-farmers) could tackle this.


It really depends.

The bigger agcorps have tons of integration.

The machines, from tractor to combine and everything in between, often feed data together to produce a holistic picture.

Things like:

- How much fuel was used
- Where your tractors and sprayers drove
- Soil samples and content
- How and where every bit of chemical and fertilizer was applied
- What weather hit your field
- How much you harvested, and its moisture content, for every bit of the field

It goes on and on.


> The bigger agcorps have tons of integration.

Yes, but how useful is the integration?

The sprayers/spreaders can be connected to a cheap computer to achieve most of what you describe.

I used to do literally that, but in aircraft. It must be easier and cheaper in tractors.


It's not complex if you have like three machines.

But if you're observing a fleet of 100+ machines you kinda need some integration and a central location. Which in turn connects to multiple other services like weather, crop markets, fuel prices etc.


I think that is a different market than the market for dumb tractors. There might be some overlap, but I doubt the people who want to fix their own tractors are the same as the corporations tracking 100 tractors across hundreds or thousands of fields.

I think this has all suddenly shifted with high-quality programming AIs available. How difficult is this to implement with Claude?

The software is certainly easier to build, but there's a lot of hardware involved here beyond the tractor. Claude is not necessarily going to make it easier to do soil sampling or measuring field conditions or yield outputs.

Farmers would be foolish to rely on an LLM, because farming margins are too low to make up for even a small, quick mistake. Many farms will profit 1% on investment over 1-2 decades, although year-to-year yield can vary 30%.
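A back-of-envelope simulation of why that leaves no room for error (all numbers illustrative, not from any real farm):

```python
import random

def seasons_to_breakeven(cost=1_000_000, margin=0.01, yield_swing=0.30,
                         bad_mistake=0.0, seed=42):
    """Rough sketch: a farm earning ~1% of invested cost per season,
    with that profit swinging +/-30% year to year. A single season's
    `bad_mistake` (a loss expressed as a fraction of cost) can take
    years of profit to claw back."""
    random.seed(seed)
    profit = -bad_mistake * cost    # start in the hole by the mistake
    seasons = 0
    while profit < 0 and seasons < 200:
        swing = random.uniform(-yield_swing, yield_swing)
        profit += margin * cost * (1 + swing)
        seasons += 1
    return seasons
```

With these assumptions, a mistake costing just 5% of invested capital takes roughly half a decade of typical seasons to recover from.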

What kind of sensors do those cheap kits come with?

A tractor is a big thing to have rolling around unsupervised. I would want a lot of safeguards. Blindly going from one GPS point to another sounds like a nightmare.


The cheapie aliexpress specials simply drive the line they're programmed to drive. They have GPS and a gyro to account for the slope of the land. You're supposed to stay in the tractor while they're operating as a safety measure... but this doesn't always happen in some parts of the world.
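The core of "drive the programmed line" is just cross-track-error correction. A minimal proportional sketch (hypothetical and simplified; real units use RTK GPS plus IMU tilt compensation, and this ignores heading dynamics entirely):

```python
def steer_correction(position, line_start, line_end, gain=1.0):
    """Proportional steering from cross-track error: how far the
    tractor sits left/right of the programmed AB line. Real systems
    add gyro/IMU tilt compensation so that GPS antenna lean on a
    slope doesn't read as cross-track error."""
    ax, ay = line_start
    bx, by = line_end
    px, py = position
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5
    # Signed perpendicular distance from the point to the AB line
    cross_track = ((px - ax) * dy - (py - ay) * dx) / length
    return -gain * cross_track  # steer back toward the line
```

A tractor sitting on the line gets zero correction; the further it drifts, the harder it steers back.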

30 years ago you had a hand throttle and clamped the wheel to drive the tractor in a line. Using GPS is a little bit safer than that. And I'm talking about Germany!

Here you go, local grain farmer (4,500 hectares, barley, grains) reviews a fully automated driverless swarm bot in boom spray configuration:

https://www.youtube.com/watch?v=ljEKN7CsjnM


Right, but that has nothing to do with a vendor making a dumb tractor. Why do we need to dismissively move the conversation away from TFA? The data-driven approach is made up of several parts, and we're looking at a specific part.

Making a dumb tractor for the use-case of dumb tractor is obviously a winning idea.

I just don't think you're going to effectively compete with big agtech by putting a bunch of parts in a box, shaking it, and hoping you end up with a beautifully integrated solution. Integration hell is the reason big commercial firms dominate when it comes to large integrated systems.


Why not? They sell telematics systems separately from cars. It’s possible to do this and it might not be too difficult depending on how the system is composed.

Precision ag is orders of magnitude more complicated of a system than vehicle telematics. Again, driving the tractor is the easy part, and you can already get cheap systems to do this.

Admittedly, I'm not a farmer nor an expert in data-driven farming, but giving a farmer the ability to precisely drive a tractor in a field for planting seeds, applying fertilizer, and the other steps would be a huge win. The settings used when doing that can easily come from bigFarmData gained from other sources. Can it be used even more precisely when everything is gathered/integrated by one company? That's a question I'm not by default saying yes to, but it seems like you do think that is true. Even if it is, does that make the difference between a farmer going broke or not because his DIY tractor behaved slightly differently than your solution? I'd posit that a farmer only being allowed to play the bigFarmData game through one expensive vendor, which also forces any repairs to be expensive, will cause farmers to struggle financially unnecessarily.

The economics of farming (at least in the US) are brutal. Scaling up is really the only way to make a living long term. Some of this is due to equipment cost (look up how much a combine costs), and some is due to competition. It's not unusual for a farmer to be land rich and cash poor.

If you want to see a couple of guys learning how to farm from scratch, visit https://www.youtube.com/@spencerhilbert. Spencer and his brother made a bit of money off games and YouTube and have been starting out on corn and hay, as well as raising beef. It gives pretty good insight into how pervasive tech is in farming and, despite that, how much of farming still relies on hard physical work.


I'll check out Spencer's channel. For a comedy perspective, there's Clarkson's Farm or Growing Belushi. Even though they are for entertainment, there's still a lot of info in those shows that shouldn't be written off.

However, I'm not as interested in being a farmer at that level. I'm much more interested in the homesteading aspect of farming. I'm not trying to feed the world, just me and mine and maybe some extra. So not just farming, but also some ranching with sheep/goats/chickens/pigs. I have friends doing this that I'm keeping an eye on. They had a head start, as their kids grew up in FFA and were already familiar with raising livestock and having it processed, which makes that part much less daunting.


I get that. Crop farming is so different than raising animals.

Good luck, but there’s a reason why subsistence farmers move to city slums as soon as they can.

Yes, because doing it low-tech and for money is backbreaking. But doing it for fun, with other sources of income, is a different story.

Very offtopic, but:

> raising beef

Is that cows? English isn't my first language, so I thought beef was the word just for the meat, what with the whole Normans-eating-while-Saxons-raise-them thing.


That would be a correct interpretation. Depending on how "cowboy" you want to go, there's plenty of slang. Raising hamburgers and steaks. Bacon seeds. Lamb chops. Just idiomatic sayings referring to the ultimate end products. I've heard all sorts of things to be cute.

Scale is a huge factor. It makes the most sense to invest in precision ag tech when you have enough acres that the investment pays off. At 5000+ acres, farms are using integrated systems that combine satellite data, on-tractor sensors, soil sensors, drone sensors, and in-field weather sensors, with a lot of science, to squeeze the most out of the land. At that scale, there's a lot of money invested in a season, and you aren't looking for a DIY project; you need a production-quality product with proven scientific rigor. You probably don't have the manpower for a DIY project anyway; you are relying heavily on automation and outsourcing. And at the low end, it is more effort to implement any of this than you'll get out of it.

So a DIY solution is aiming for somewhere in the center of the market -- enough scale that it makes sense to bother, but not enough money to avoid the headache of DIY. It might make sense for some mid-sized farms in developing economies, but it seems like a narrow window to me.


I suspect most farmers would prefer the DIY add-on version of these to the single-manufacturer integrated one. A modern smartphone and a set of I/O sensors seem like they could do pretty much the entire job.

The kid? :)

I had to scroll back up to see what this reply was to, to get the full chuckle, and yup, I was told frequently by my male parental unit that the top two reasons for having kids were chores and tax deductions. But there's a reason farm families leaned toward the large side: the more hands you had helping, the less hard things could be, though they were never easy.

I'm not really in the space, but all the CAD things I see lately are browser-based "cloud offerings."

I'm not sure if CAD work is just served by a basic graphics card at this point or if there is some server-side work going on.

The OS doesn't mean that much when every industry decided that Chrome was going to be their VM.


No one is using that cloud crap professionally. The bread and butter of the CAD world is Windows PCs with tons of RAM and certified GPUs.

> No one is using that cloud crap professionally.

I would bet there are at least some people using Onshape at their job. https://www.onshape.com/en/resource-center/case-studies/


Hardcore CAD systems like Solidworks or CATIA still aren’t browser based.

I suppose it depends on what you use it for (it doesn't look 1:1 from cloud to local), but it looks like both have offerings in the space

https://www.3ds.com/cloud

https://www.solidworks.com/product/solidworks-xdesign

but like I said, I just see what gets advertised at me in YouTube ads


A carrier battle group can easily be seen and tracked by commercial satellite constellations.

At minimum they travel with 6 or 7 ships, leave a wake a mile long, and only go tens of miles an hour; it isn't a speed boat.

Here is an Indian carrier (formerly Russian) on Google Maps, and the US ones are larger https://www.google.com/maps/place/14%C2%B044'30.3%22N+74%C2%...

I think people forget how many satellites are pointed at all parts of the planet. They are used for crop reporting and weather and all sorts of shit. It isn't the 1960s, where only the superpowers had them and they dropped rolls of film.


Satellites aren't pointed at "all parts of the planet". They're generally taking regular photos of known locations, when the right type of satellite passes over. That's where you get lucky shots like the one you noticed. Then that satellite has to orbit, and there isn't another one nearby just ready to take another photo. Then the carrier changes direction...

Sure, any single one, but there are many companies, some with hundreds of satellites in orbit at any given time, who will point one wherever you like if you pay them enough

Which is why you get things like this https://www.cnbc.com/2026/04/05/satellite-firm-planet-labs-t...

An aircraft carrier is not that fast; if you see it once, you know roughly what radius of circle it is going to be in for a while (ignoring the fact that they are likely going somewhere for a reason; it's not as if their job is to stay out of sight)
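The arithmetic behind that radius is simple (30 knots assumed here as a rough carrier top speed):

```python
import math

def search_area_km2(hours_since_fix, max_speed_knots=30):
    """Area of the circle a ship could be anywhere inside, given its
    last known position and a maximum speed (30 kt ~= 55.6 km/h)."""
    radius_km = max_speed_knots * 1.852 * hours_since_fix
    return math.pi * radius_km ** 2
```

Three hours after a fix the radius is about 167 km and the search area about 87,000 km² -- large, but still a tiny fraction of the Pacific's roughly 165 million km².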

edit: aha that company literally lists it on their website https://www.planet.com/industries/maritime/


This is literally the point: it's easy to tell them to point a satellite at Beirut and get pictures every 3 hours or whatever; it's much more difficult to tell them to point at a location in the middle of the Pacific Ocean... because you don't know the location in the first place.

Beirut doesn't move around a lot. Carriers do. While there are a lot of satellites pointing at the earth at any one moment, this isn't some kind of Hollywood super screen showing a real-time image of the entire Pacific. You just see whatever small patch the satellite happens to be pointing at.

And again, ignoring the part where america would probably start shooting down satellites.


>because you don't know the location in the first place

Do you seriously think China doesn't track US carrier movements?


Do you seriously think the US Navy doesn't avoid Chinese tracking? What kind of a question is that? Like, there's probably a magazine that lists the cruising destinations of most of the carriers, what ports they're going to stop at next, etc, because, you know, they're not at war and trying to maintain secrecy.

> Do you seriously think the US Navy doesn't avoid Chinese tracking?

How would they avoid having a Chinese satellite continuously track their movements? They have the capability to do that, and there is nothing the USA can do about it except shoot down all the Chinese satellites.

https://defencesecurityasia.com/en/china-three-satellites-tr...


US carrier groups probably pose the #1 strategic threat to the PRC in the Pacific. You can safely assume they throw whatever resources are necessary at the task of knowing their whereabouts.

I mean, you can try all you want, but there are limits to hiding a fleet of ships on the open sea. They are huge, emit immense heat signatures, and produce miles-long wakes while moving. As long as there are satellites overhead, they will be able to find them.

I suspect we might be talking past one another because we have different degrees of precision in mind: I'm not saying the Chinese could have a missile target lock on a carrier whenever they wanted, much less in wartime. Far from it. But I highly doubt you can reposition a carrier group without them catching wind of it within hours.


This is the sort of arms race that is going to change every year. I just read an article claiming that China has launched a system of satellites that use non-visual means to track ships in the Pacific (via... emissions or radar or something?), and China can certainly afford to put a bunch of them in orbit.

It's not impossible to track a carrier group via satellites, but it's not trivial either. You can't just open up a GUI, click on a satellite, and hit the button that says "follow this carrier", because satellites orbit and fly around the earth, and the ships can alter course when you don't have eyes on them, and so on and so forth.

And yeah, as you point out, there's a big difference between having a satellite picture showing a probable carrier group at X and Y coordinates and being able to actually strike the thing.



Now I’m contemplating just how small and light an instrument carried on a Starlink-style satellite could detect a large ship. A smallish COTS telescope, e.g. a Celestron 8SE ($1,700 retail), could easily see a ship from Starlink constellation altitude.

Never mind that the Starlink radio arrays are, well, radio arrays that quite effectively cover the whole planet. If you think of each satellite as a radio telescope, its resolution is crap and probably cannot disambiguate a carrier group from anything else (at least according to disclosed specs). But it would be quite interesting to build a synthetic aperture array out of multiple satellites. This would rely on emissions from the ships themselves, but I bet it could be done and could locate ships quite nicely.
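The diffraction-limit arithmetic behind both claims (altitude, wavelengths, and aperture sizes are all rough assumptions):

```python
import math

def ground_resolution_m(wavelength_m, aperture_m, altitude_km=550):
    """Rayleigh-style diffraction limit: angular resolution
    ~ 1.22 * lambda / D, projected to the ground from orbit."""
    theta = 1.22 * wavelength_m / aperture_m
    return theta * altitude_km * 1000

# 8" (0.20 m) optical telescope at 550 nm: ~1.8 m on the ground,
# easily enough to spot a ~330 m carrier.
optical = ground_resolution_m(550e-9, 0.20)

# A single ~0.5 m Ku-band (~2.5 cm wavelength) antenna: tens of km
# of ground resolution, so it cannot resolve a ship on its own.
radio = ground_resolution_m(0.025, 0.5)
```

For the synthetic aperture idea, D becomes the baseline between cooperating satellites rather than a single dish diameter, which is why interferometry on the ships' own emissions could work where one antenna can't.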


Seeing it isn't the issue. Scanning the oceans is the issue.

Carrier groups also don't emit radio when they aren't interested in being detected!


Citation needed.

All of those can be achieved with replaceable batteries.


Are you claiming it's not cheaper to embed batteries?

Citation needed. It seems pretty clear that a mechanism to allow a user to access a battery will increase complexity, making all the other properties harder to achieve.

Fairphone managed to do it, I'm sure companies with more budget than them can figure it out.

Not waterproof, and definitely big for its capacity.

Yes, hence why I'm sure companies with 100x the budget can do better.

You're asking for proof that effective waterproof phones with removable batteries exist?

https://m.gsmarena.com/results.php3?chkRemovableBattery=sele...


You're proving the point.

1) iPhones for example are ip68 rated while those are just ipx8/9

2) Do you want to be limited to the universe of those search results? Do you want to buy a Sony Xperia?

You can't make batteries directly replaceable at the same quality and price; there are tradeoffs. Obviously waterproof non-embedded batteries exist, just like you could make a removable battery as slim as an embedded one, with massive tradeoffs: its capacity will be terrible. No one is surprised a removable battery can be waterproof; the point is that there are tradeoffs.


I don't see those options in the search results either way

In any case, we heard the same sort of rationalization for getting rid of the headphone jack, so color me extremely skeptical -- yes, of course there are going to be trade-offs, but what a coincidence that headphone jacks, replaceable batteries, and SD card slots have all gone by the wayside, which just so happens to allow for upselling Bluetooth and cloud storage.


> just ipx8/9

Do you actually need it? For what?


Kinda weird to argue for longer life via battery replacement and against longer life via contaminant protections. My phone is regularly covered in chalk dust, sawdust, water, …

1 mm thickness is a fine trade-off

No, the list was "Cheaper, higher battery capacity, water proof, smaller, stronger". I don't think it's all that controversial to say that there are engineering tradeoffs to be made here. You can make a waterproof phone with a removable battery, but you can't make a waterproof phone with a removable battery that is as good or better than an iPhone in every other respect too. If you could, iPhones would already have removable batteries.

> If you could, iPhones would already have removable batteries.

A crazy take since apple has very clearly made anti-consumer moves in the past.

If having a baked-in battery caused there to be 1% more iPhone sales, which would they choose?

You were likely nodding along when Jobs was out there telling people they were holding the phone wrong.


My point is that if it's all of those things (crucially, including cheaper), then it's a Pro-Apple move to manufacture iPhones that way. There would be no downside. To the extent they make anti-consumer moves at all (which I'll cede for the sake of keeping this brief), they do so because those moves are pro-Apple.

The crazy take is thinking that a design choice that causes there to be 1% more iPhone sales is an anti-consumer move.

Planned obsolescence is anti-consumer and increases sales. So yes, anti-consumer design can increase sales volume; that is often the point.

Replaceable batteries let you use your phone longer, which means people take longer to buy a new phone, reducing iPhone sales. Such anti-consumer moves require regulation to fix, since there is no incentive for the company to be pro-consumer here.


That relies on the questionable assumption that consumers don't understand the overall value proposition.

The point is that the incentives are not pointing towards "make better phone" they are pointing towards "sell more phones"

Sometimes "better phone" drives "sell more phones"

Sometimes it doesn't.


Very often it does, certainly more often than a government regulation results in a better product.

Can you explain your reasoning? Is there some minimum sales threshold required, and 2 million iPhones wouldn't meet it?

If people buy more of a product, that's because it's better in some way. Maybe it's cheaper, or maybe it's better quality.

Oh yes, the famous Galaxy XCover 7 Pro. People are camping out in the rain waiting for their release because replaceable batteries are under such high demand.

So we're moving the goalposts from "these features can coexist" to "such a phone has to be popular"? Why don't you skip to the end and tell me where they're going to end up?

If phones with those features are not for sale, how can you draw any conclusion about their popularity? I've yet to meet a single person who says, "I sure am glad I can't use fingerprint unlock on my iPhone anymore," but obviously it's not worth leaving the entire ecosystem.

Recall also that building Android phones barely makes any money, so it's not exactly a business teeming with disruption


It'll increase the size of the case by a small amount but a battery cell is a battery cell... Rip open an old device and you'll see.

It really really really depends on how you are using it and what you are using it for.

I can get LLMs to write most of the CSS I need by treating it like a slot machine and pulling the handle till it spits out what I need; this doesn't cause me to learn CSS at all.


I find it a lot more useful for diving into bugs involving multiple layers and versions of third-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so in essence I wouldn't have learned anything diving deep into the issue), but it was able to do so much more efficiently than me cross-referencing code across multiple commits on GitHub, docs, etc.

This allows me to focus my attention on important learning endeavors, things I actually want to learn and am not forced to simply because a vendor was sloppy and introduced a bug in v3.4.1.3.

LLMs excel when you can give them a lot of relevant context; they behave like an intelligent search function.


Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. It is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.

The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.

Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking, because it reduces the cognitive load needed to infer the types of expressions in dynamic languages in your head. The reduction in wasteful cognitive load is precisely the point.

Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.


I agree with your definition of programming (and I’ve been saying the same thing here), but

> It's annoying when a distracting and unessential detail derails this conversation

there are no such details.

The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into our own), but corrective and transformative models exist (abstraction).

> No one argues that we should throw away type checking,…

That’s not a good comparison. Type checking helps with cognitive load when verifying correctness, but it increases it when you’re not sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.

As long as you can pattern-match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.


> there is no such details.

Qua formal system, yes, but this is a pedantic point, as the aim - the what - of a system is more important than the how. This makes the distinction between domain-relevant features and implementation details more conspicuous. If I wish to predict the relative positions of the objects of our solar system, then in relation to that end and that domain concern, it matters not whether the underlying model assumes a geocentric or heliocentric stance (that, tacitly, is the deeper value of Copernicus's work; he didn't vindicate heliocentrism, he showed that a heliocentric model is just as explanatory and preserves appearances equally well, and I would say that this mathematical and even philosophical stance toward scientific modeling is the real Copernican revolution, not all the later pamphleteer mythology).

Of course, in relation to other ends and contexts, what were implementation details in one case become the domain in the other. If you are, say, aiming for model simplicity, then you might prefer heliocentrism over geocentrism with all its baroque explanatory or predictive devices.

The underlying implementation is, from a design point-of-view, virtually within the composite. The implementation model is not of equal rank and importance as the domain model, even if the former constrains the latter. (It's also why we talk about rabbit-holing; we can get distracted from our domain-specific aim, but distraction presupposes a distinction between domain-specific aim and something that isn't.) When woodworking, we aren't talking about quantum mechanical phenomena in the wood, because while you cannot separate the wood from the quantum mechanical phenomena as a factual matter - distinction is not separation - the quantum is virtual, not actual with respect to the wood, and it is irrelevant within the domain concerning the woodworker.

So, if there is a bug in a library, that is, in some sense, a distraction from our domain. LLMs can help keep us on task, because our abstractions don't care how they're implemented as long as they work and work the way we want. This can actually encourage clearer thinking. Category mistakes occur in part because of a failure to maintain clear domain distinctions.

> That’s not a good comparison. Type checking [...]

It reduces cognitive load vis-a-vis understanding code. When I want to understand a function in a dynamic language, I often have to drill down into composing functions, or look at callers, e.g., in test cases, to build up a bunch of constraints in my mind about what the domain and codomain are. (This can become increasingly difficult when the dynamic language has some form of generics, because if you care about the concrete type/class in some case, you need even more information.)

This cognitive load distracts us from the domain. The domain is effectively blurred without types. Usually, modeling something using types first actually liberates us, because it encourages clearer thinking upfront about the what instead of jumping right into how. (I don't pretend that types never increase certain kinds of burdens, at least in the short term, but I am talking about a specific affordance. In any case, LLMs play very nicely with statically-typed languages, and so this actually reduces one of the argued benefits of dynamic languages as ostensibly better at prototyping.)

> As long as you can pattern match to get a solution [...]

Indeed, and that's the point. LLMs work so well precisely because our abstractions suck. We have a lot of boilerplate and repetitive plumbing that is time-consuming and tedious and pulls us away from the domain. Years of programming research and practice have not resolved this problem, which suggests that such abstractions are either impractical or unattainable. (The problem is related to the philosophical question of whether you can formalize all of reality, which you cannot, and certainly not under one formal system.)

I don't claim that LLMs don't have drawbacks or tradeoffs, or require new methodologies to operate. My stance is a moderate one.


Yes but that’s why you ask it to teach you what it just did. And then you fact-check with external resources on the side. That’s how learning works.

> Yes but that’s why you ask it to teach you what it just did.

Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.


I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.

This isn’t necessarily a bad thing. I know a little css and have zero desire or motivation to know more; the things I’d like done that need css just wouldn’t have been done without LLMs.

This exactly. My CSS designs have noticeably gotten better without me, the writer, getting any better at all.

But were you trying to learn CSS in the first place?
