Probably more that they are compute constrained. In his latest post Ben Thompson talks about how Microsoft had to use their own infrastructure and displace outside users in the process, so this is probably an effort to free up compute.
This is so spot on and I’ve been harping on this for about two years based on my own professional experiences. The surprising thing, though, is that upper management is ostensibly fine with incompetent people using AI to produce things that are clearly not accurate, even though those people have no idea whether the output is accurate or not. I believe this is because upper management themselves believe AI is much more accurate in its current form than it actually is. It’s not clear what, if anything, will change this, but I believe many organizations are rotting from within because they no longer have stringent requirements.
This raises a good point. The analogy in the article implies that eventually there will again be a need to know how to write code at a large scale and nobody will know how to do it. I don’t think the analogy holds if you think of AI as a sort of orchestration and abstraction layer, which, at the end of the day, is what all software development tools are.
But I do think there’s another thing going on quietly in corporate America currently that will have major ramifications for companies that have prioritized using AI and that is a loss of technical excellence in general.
I can’t put my finger on it but sometime around 2023 or so there was a noticeable falloff in technical competence at companies I work with because the higher ups went all in on a generative AI future. No longer were they investing in training new hires and having rigorous certification standards. Instead people were encouraged to use AI tools to answer questions and would regularly pass off the output to more knowledgeable workers for refinement. These people clearly had no idea whether what they were sending out was accurate or not but it looked and felt like real work.
I think there will be a consolidation across the tech industry, AI will not be a differentiator, and only those who are actually competent will succeed. But right now AI is allowing a lot of incompetence to go undetected throughout a lot of organizations.
This article is the first that I’ve seen that hits on a theme I’ve been pondering for a long time. And that is that everyone super bullish on AI assumes there are enormous real-world gains to be had by existing companies automating software tasks (building software, using software), essentially moving bits around. I’ve been very skeptical of this.
We’ve had the ability to automate work between systems with bots, and even scheduled jobs that automate certain repetitive tasks with APIs, for years. Yet there’s been a relatively small uptake of bots and there is still a very large market for vendors and SIs who can improve existing processes.
I think it’s pretty clear why this disconnect exists between “regular people” and the people Patel describes as having software brain. And that is that the nature of LLMs is that they are limited to the digital world. At the core they really only do one thing: take some text, overlay it against digital representations of the world, and try to find the one that most closely matches.
The built-in assumption is that they will get better and better, starting in the call center but climbing the corporate ladder to replace everyone’s jobs like Michael J. Fox in The Secret of My Success. But I’m skeptical. Automation always starts in call-center customer service use cases because that is one of the few areas where the humans involved are supposed to follow a script and take actions entirely inside software applications. But it always seems to stall out. Because once you move from jobs where a script can be provided to ones where ambiguity is a constant factor and judgments and decisions have to be made that don’t have exact precedents, you need humans. LLMs are backwards looking. Humans can consider things that have happened previously as guidance but, critically, can also imagine a future state where things operate differently, all while considering multiple competing factors, all of which are unique to that situation. They don’t always do it well, but LLMs are incapable of doing it in any case.
People hate AI because it doesn’t really do that much for a regular person, is massively hyped by people who look shadier and less credible by the day, and carries some vague threat of destroying civilization, or at least making you homeless.
But- I have zero doubt that if these same companies did actually excel at providing real world value, nobody would care about the negative implications. For example if they produced robots that could automate aspects of your life like cleaning your house, getting your groceries and doing it very inexpensively, I have no doubt the popularity would be off the charts and a bona fide bubble would ensue.
This is plainly true. First off, Waymo is one of the few companies successfully using AI to operate real-world objects at enormous complexity and risk. Talk to anyone who just used Waymo for the first time and they will be almost euphoric. It’s amazing technology with overwhelming utility. There are also several examples of companies with less-than-stellar images that consumers were told to boycott, but most users couldn’t have cared less. Uber in its earlier days and Facebook coming out of the Cambridge Analytica scandal come to mind.
This, and also constantly saying stupid things like “yes that is a great observation and that’s how the pros do it for this very reason!” in response to a specific question that doesn’t apply to anything anyone else is doing.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
My greatest frustration with AI tools is along a similar line. I’ve found that people I work with who are mediocre use it constantly to sub in for real work. A new project comes in? Great, let me feed it to Copilot and send the output to the team to review. Look, I contributed!
When it comes time to meet with customers let’s show them an AI generated application rather than take the time to understand what their existing processes are.
There’s a person on my team who is more senior than I am and should be able to operate at a higher level than I can who routinely starts things in an AI tool but then asks me to take over when things get too technical.
In general I feel it has allowed organizations to promote mediocrity. There are just so many distortions right now, but I do think those days are numbered; there will be a reversion to the mean and teams will require technical excellence again.
I’ve been astonished at how bad the battery is in my base iPad I bought last year.
Granted, I’m switching to it from an iPhone 17 Pro Max, but still: the thing goes from 100% to 80% overnight without being used, and a 40-minute Zwift ride routinely drains 15-20%. Makes me much more reluctant to buy another, as it has to be tethered to a charger.
Also have to consider that it’s now private, which removes the pressure of having to show any semblance of a profit or, critically, share usage or advertising statistics, which could be (and probably are) down dramatically since the acquisition. Being private allows the fictitious storyline to persist that “we’re doing great and everyone is using our products.”
There are many, many, many companies much older than Palantir operating in the beltway that do this. Having TS/SCI cleared resources who can work in SCIFs isn’t in itself a differentiator. Besides, that type of security level would make it very difficult to make use of their products in the first place.
You're missing "better than what they had". It was, as I understand it, a big innovation just to bring some post-2010s webdev to the UI experience.
A relevant comparison would be that SpaceX didn't build fancy rockets and there were a lot of similarly old players in the space. They still took it over pretty thoroughly.
One of the most telling experiences while following this company was a town-hall type of discussion between Karp and, I believe, the former BP CEO. In it, the CEO gushes about how vital Palantir has been in transforming operations and ironing out inefficiencies. But as he continues to talk it becomes apparent he has absolutely no idea what was done or how it helped.
Then the motives became very clear to me: Palantir wants to sell more software by creating an image of a secretive panacea, while the C-level wants to create an image that they are forward-thinking and using cutting-edge tools to transform operations. It’s a two-way fortuitous grift, but I have no doubt the investors pouring money into it have also gotten ensnared, and it’s grown from questionable sales tactics to a full-blown bubble.
When the former CFO becomes CEO and starts talking about the potential of a vendor's black box, it calls into question everything else they've said, like thinking a journalist's coverage is accurate until they blunder a topic you're familiar with.