I know HN has a lot of devs, but I'm pretty sure none of us go straight to GitHub to request a refund via a bug report. I'm assuming they notified customer service first and were rebuffed, then filed the bug.
Wouldn’t this use your internet data, though? Isn’t the point of these tools to send locally without being limited by internet speeds and without having to use your mobile data?
An alternative reading is that after 13 years dedicated to a single project, the original author is simply burnt out on it, but a new maintainer can start with fresh passion that will last a number of years.
Just because someone gets tired of working on something eventually doesn't mean everyone else will immediately feel the same way.
Did you read the notice on the GitHub site? I think he clearly states that he wanted to continue working on the project, but could not justify it after sources of funding failed to materialize.
Sure, but a new maintainer might have different needs. The original maintainer doesn’t have the time now to do the work for free, since they have to also have a job to pay the bills. A new maintainer might have more free time, at least for a while…
I assume you would just unfollow them if you want to stop seeing their posts? It sounded more like the person I was talking to was more concerned with follower counts being accurate, which doesn’t seem relevant for feed algorithms.
> Whereas for an agent it will happily include details that are not literally in its chain of thought as justifications for its decisions.
Humans do this too, ALL THE TIME. We rationalize decisions after we make them, and truly believe that is why we made the decision. We do it for all sorts of reasons, from protecting our ego to simply needing to fill in gaps in our memory.
Honestly, I feel like asking an AI its train of thought for a decision is slightly more useful than asking a human (although not much more useful), since an LLM has a better ability to recreate a decision process than a human does (an LLM can choose to perfectly forget new information to recreate a previous decision).
Of course, I don’t think it is super useful for either humans or LLMs. Trying to get the human OR LLM to simply “think better next time” isn’t going to work. You need actual process changes.
This was a rule we always had at my company for any after incident learning reviews: Plan for a world where we are just as stupid tomorrow as we are today. In other words, the action item can’t be “be more careful next time”, because humans forget sometimes (just like LLMs). You will THINK you are being careful, but a detail slips your mind, or you misremember what situation you are in, or you didn’t realize the outside situation changed (e.g. you don’t realize you bumped the keyboard and now you are typing in another console window).
Instead, the safety improvements have to be about guardrails you put up, or mitigations you put in place to prevent disaster the NEXT time you fail to be as careful as you are trying to be.
Because there is always a next time.
Honestly, I think the biggest struggle we are having with LLMs is not knowing when to treat it like a normal computer program and when to treat it like a more human-like intelligence. We run across both issues all the time. We expect it to behave like a human when it doesn’t and then turn around and expect it to behave like a normal computer program when it doesn’t.
This is BRAND NEW territory, and we are going to make so many mistakes while we try to figure it out. We have to expect that if you want to use LLMs for useful things.
Plan for a world where we are just as stupid tomorrow as we are today. In other words, the action item can’t be “be more careful next time”, because humans forget sometimes (just like LLMs).
That’s a great way of putting it, I’ll remember that one (except when I forget...)
I am pretty sure you will remember it during your next learning review… as soon as you get in that learning review, it is suddenly very easy to remember all the things you forgot to do.
Humans don't do this all the time. I think you are conflating things to further this false idea that there is no distance between human thinking and the behavior of LLMs. The kind of rationalization humans sometimes do generally happens over a period of time. Humans are also not "rationalizing" their actions all the time. Also, when humans do what you call "rationalizing," it is to serve some kind of interest, beyond responding to a prompt.
This is obviously slightly exaggerated, but I do feel like this whenever people dismiss Kubernetes as either too complicated or not needed.
The response I always got when suggesting Kubernetes is "you can do all those things without Kubernetes"
Sure, of course. There are a million different ways to do everything Kubernetes does, and some of them might be simpler or fit your use case more perfectly. You can make different decisions for each choice Kubernetes makes, and maybe your decisions are more perfect for your workload.
However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.
Kubernetes is a complicated solution to a complicated problem. A lot of companies have different problems and should look for different solutions. But if you are facing this particular problem, Kubernetes is the way to go. The trick is to understand which problem you are facing.
Kubernetes can be a sign that you are making things more complicated than they should be, too early. But if you actually have made things complicated enough (whether through essential or accidental complexity) that you have problems that k8s is good at solving, I really hope you have it instead of some hand rolled solution.
I feel the same way about commercial APM tools. Obviously in a perfect world, you would have software so simple and fast that they’re unnecessary. Maybe every month or two someone has to grep some logs that are already in place. Once you’ve gotten yourself into a situation where this is obviously not true, having Datadog, New Relic or similar set up (or using k8s instead of 100 unversioned shell scripts by someone who doesn’t work there anymore) will make your inevitable distributed microservice snafu get resolved in hours rather than a longer business-risking period.
> But if you actually have made things complicated enough [...]
The only problem I see in this case is that complexity doesn't come all at once. By the time you reach a problem that k8s is good at solving, you've probably already accidentally made a k8s alongside your piece of software.
In my (quite short) SWE career, I've seen software evolve, even projects with a proper design stage. Maybe I just don't have enough experience to have seen a properly designed project, but I don't know what I don't know, after all.
> all of those choices have been made and agreed upon
Have they really? I have a few apps deployed on k8s and I feel like every time I need something, it turns out it doesn't do that and I'm into some exotic extension or plugin type ecosystem.
Something as simple as service autoscaling (this was a few years ago) was an adventure into DIY. Moving from google cloud to AWS was a complete writeoff almost - just build it again.
I'm sure it captures some layer of abstraction that's useful but my personal experience is it seems very thin and elusive.
This, and because of that, claiming your app "runs in kubernetes" is completely meaningless.
Concretely: Take your app. With one button click, or apt-get install ??? on all your machines, configure k8s. Now, run your app.
The idea that this could work has been laughable for any k8s production environment I've seen, which means you can't do things like write automated tests that inject failures into the etcd control plane, etc.
(Yes, I know there are chaos-monkey things, but they can't simulate realistic failures like kernel panics or machine reboots, because that'd impact other tenants of the Kubernetes cluster, which, realistically, is probably single tenant, but I digress..)
If your configuration is megabytes of impossible to understand YAML, and is also not portable to other environments, then what's the point?
(I understand the point for vendors in the ecosystem: People pay them for things like CNI and CSI, which replace Linux's network + storage primitives with slower, more complicated stuff that has worse fault tolerance semantics. Again, I digress...)
> If your configuration is megabytes of impossible to understand YAML, and is also not portable to other environments, then what's the point?
If almost all your configuration is about getting Kubernetes set up, and not about your application setup inside Kubernetes, there probably isn't a point. But being able to use roughly the same config inside different Kubernetes clusters is quite good.
But I've never seen portable kubernetes configs (except for vendor software that probably wouldn't be needed outside of kubernetes).
If you just tell kubectl to dump your pod configs, then load them on some other cluster, that definitely won't work.
If you use the management software that generated the pod setup somewhere else, that probably won't work either because the somewhere else is going to be missing the CSI and CNI you targeted. Even if those match, it'll be missing the CRDs. God help you if you want to run two programs on one Kubernetes, and there's a CRD versioning conflict in their two dependency sets.
> Moving from google cloud to AWS was a complete writeoff almost - just build it again.
Yep. Kubernetes is not just kubernetes when moving between clouds, it becomes a very opinionated product (for better or worse) with lots of vendor addons. Could someone that is familiar with one pick up on the other? Sure! But there are gotchas. And then kubernetes on prem adds the hardware lifecycle piece, and potential data locality issues, etc.
There are differences across vendors, but there’s a way to build with k8s where the benefit far outweighs the cost.
We run a bunch of services in two very different cloud vendors (one of which used to be DIYed with kubeadm), and also on dev machines with k3s. Takes a while to figure this out and to draw the kustomize boundaries in the right place, but once you do, it’s actually really nice.
Two things work in our favor:
- we’ve been at this for around 8 years, so we didn’t have to deal with all the gotchas at once
- we aggressively avoid tech that isn’t universal (so S3 is OK, but SQS or DynamoDB is not; use haproxy instead of ingress controllers; etc)
> Kubernetes is not just kubernetes when moving between clouds, it becomes a very opinionated product (for better or worse) with lots of vendor addons.
I think this is gradually getting better. Networking with Gateways is better than with Ingress in this sense. Things like autoscaling groups need to get better, as they are (or were a couple of years ago) very bespoke.
I wouldn’t really call it “DIY” per se, k8s has the resource API and you can create whatever scaling policies you want to with it, but I do see how that’s not obvious when it’s advertised as ‘batteries included’
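As a concrete illustration (all names hypothetical), a scaling policy built entirely from the standard resource API is just a `HorizontalPodAutoscaler` pointing at a Deployment:

```yaml
# Hypothetical HPA: keeps a Deployment named "web" between 2 and 10
# replicas, targeting ~70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

No vendor extensions involved, though custom-metric scaling (queue depth, request latency) is where the ecosystem gets more bespoke.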
> However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.
Yep! I am now using k8s even for small / 'single purpose' clusters just so I can keep renovate/argo/flux in the loop. Yes, I _could_ wire renovate up to some variables in a salt state or chef cookbook and merge that to `main` and then have the chef agent / salt minion pick up the new version(s) and roll them out gradually... but I don't need to, now!
Agree. For years I had developed my own preferred way of deploying Rails apps large and small on VMs: haproxy, nginx, supervisord, ufw, the actual deploy tooling (capistrano and other alternatives) and so on... and if those tools are old or defunct now it's because my knowledge of that world basically halted 8 years ago because I've never had to configure anything but k8s since then.
I've used it every day since then so I have the luxury of knowing it well. So the frustrations that the new or casual user may have are not the same for me.
I just feel like "you can do this with Kubernetes" is a slippery slope.
"You can do X with Y, so use Y" is a great way to add a dependency, especially if it is "community vetted" already.
Sometimes simple is better - you don't need to add anything that implements some of your logic as a dependency to stay DRY or whatever you want to call it.
It really feels like we are drowning in self-imposed tech debt and keep adding layers to try and hold it for just a while longer.
Now that being said, there is no reason not to add Kubernetes once a sufficient overlap is achieved.
Kubernetes handles so many layers you are going to need for every app, though… deployments, networking, cert management, monitoring, logging, server maintenance, horizontal scaling… this isn’t a slippery slope, it is just what you need.
You have to pick and then configure those components, just like you would have had to pick and configure apps doing those things if you were not using k8s, so the only thing k8s actually brings to the table is a common configuration format (yaml).
The thing about Kubernetes is it's a standardization of deployment. Kubernetes is complicated because deploying software is complicated. You might try to YAGNI hand wave it away, but as the article points out, over time you end up building Kubernetes anyway.
You can use k8s on $2/mo DigitalOcean projects. It probably even works on the free tier of a lot of providers.
And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed, which has the benefit of easy version control.
I don't get why people are so bent on hating Kubernetes. The mental cost to deploy a 6-line deployment yaml is less than futzing around with FTP and nginx.
Kube is the new LAMP stack. It's easier too. And portable.
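For reference, a near-minimal Deployment is a bit longer than 6 lines, but not by much (the name and image are placeholders):

```yaml
# A minimal Deployment sketch: one replica of a stand-in container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
        - name: hello
          image: nginx:1.27   # stand-in image
          ports:
            - containerPort: 80
```

`kubectl apply -f deploy.yaml` and the cluster converges on it; the same file goes straight into version control.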
If you're talking managed kube vs one you're taking the responsibility of self-managing, sure. But that's no different than self-managing your stack in the old world. Suddenly you have to become Sysadmin/SRE.
This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. GKE, EKS, AKS, OCP, etc., portability between them is nowhere near guaranteed.
It is if you stick to standard Kubernetes resources, and it has gotten even easier with better storage class and load balancer support. All of the cloud providers now give you default storage classes and ingresses when you provision a cluster on them, so you can use the exact same deployment on any of them and automatically get those things provisioned in the right way out of the box.
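A small example of what that claim looks like in practice: a PersistentVolumeClaim that omits `storageClassName` picks up whatever default StorageClass the provider installed, so the same manifest provisions the right backing disk on each cloud:

```yaml
# Portable PVC sketch: no storageClassName, so the cluster's default
# StorageClass is used (EBS-backed on EKS, Persistent Disk on GKE, etc).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```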
>It is if you stick to standard Kubernetes resources
"If you stick to standard C..."
No one does, that's the issue. Helm charts that only support certain cloud providers, operators and annotations that end up being platform specific, etc.
>now give you default storage classes and ingresses
Ingress is being deprecated, it's Gateway now! Welcome to hell, er, Kubernetes.
If you're using it after it's dead, you're at risk of further problems of this nature that aren't in the underlying nginx reverse proxy but in the code wrapping it.
That's one reason I've always used Traefik as my Ingress (I work mostly with K3S, which uses it by default). Which appears to have had its own security issues too, but it still looks like an implementation issue, not a weakness designed in by the spec.
On EKS I'm using whatever AWS has brewed up to integrate ELB/ALB, but I'll tend to trust it ... though maybe I shouldn't, given all the troubles I have with other integrations like secrets management.
Would love to use Gateway! Every time I spin up a new cluster it goes like this:
- New cluster setup, time to use gateway! Yay!
- Oh crap, like 80% of the helm charts and other existing configurations I need for the software I'm trying to deploy STILL don't use gateway, this new API that's been out for... like half a decade at least.
- Even core networking things like Istio/Envoy only have limited gateway support compared to ingress
- Sigh. Ingress again.
It's been like this since gateway's inception and every time I check the needle has moved like 2% towards gateway. So I'm looking forward to year 2050 when I can use gateway!
The problem, as CNCF knows, is that if they pushed Gateway and deprecated Ingress, the world would revolt due to the amount of work involved to migrate stuff. Therefore, they leave it up to "the people" to do the extra work themselves, who have no incentive to do so since for many usecases it's not materially better.
I use Kubernetes every day, and have worked with dozens of helm charts, and have yet to encounter cloud specific helm charts. Are these internal helm charts for your company?
Obviously you can lock yourself in if you choose, but I have yet to see third party tools that assume a specific provider (unless you are using tools created BY that provider).
At my previous spot, we were running dozens of clusters, with some on prem and some in the cloud. It was easy to move workloads between the two, the only issue was ACLs, but that was our own choice.
I know they are pushing the new gateway api, but ingresses still work just fine.
"When deploying a JFrog application on an AWS EKS cluster, the AWS EBS CSI Driver is required for dynamic volume provisioning. However, this driver is not included in the JFrog Helm Charts."
"JFrog validates compatibility with core Kubernetes distributions. Some Kubernetes vendors apply additional logic or hardening (for example, Rancher), so JFrog Platform deployment on those vendor-specific distributions might not be fully supported."
I'm a Kubernetes user and advocate, but to call it "portable" just tells me you've never actually tried to deploy a similar thing on multiple different clouds. Even the standardized Kubernetes resources behave differently due to various cloud idiosyncrasies. You can of course make the situation easier, but to call it entirely portable is probably a misnomer.
I don't think you made that argument, but could a valid conclusion of your comment be that, because Kubernetes is so ubiquitous, using it frees you from being a Sysadmin/SRE?
> If you can solve the same problem in a simpler way without using k8s
I think I disagree with this, or at least the implication. While it is true you can solve EACH OF THOSE PROBLEMS INDIVIDUALLY in a simpler way than Kubernetes, the fact that you are going to have to solve at least 5-10 of those problems individually makes the sum total more complicated than Kubernetes, not to mention bespoke. The Kubernetes solutions are all designed to work together, and when they fail to work together, you are more likely to find answers when you search, because everyone is using the same thing.
I think it is fair to say k8s is not a zero cost abstraction, but nothing you use instead is going to be, either, and when you do run into a situation where that abstraction breaks, it will be easier to find a solution for kubernetes than it will for the random 5 solutions you pieced together yourself.
Ephemeral user accounts were agreed upon before that: the OG container.
Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user.
We may be headed back there. I have seen some experiments leveraging the Linux kernel's BPF and sched_ext to fire off just the right-sized compute schedule in response to sequences of specific BPF events.
Future "containers" may just be kernel processes and threads... again. Especially if enough human agency looks away from software as AI makes employment for enough people untenable. Why would those who remain want to manage kernels and k8s complexity?
Imo it's less that we agreed on k8s specifically and more that we agreed to let people use all the free money to develop whatever was believed to make the job easier; but if the jobs go away then it's just more work for the few left.
> Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user.
Docker, yes, but kubernetes is way more than that the instant you have more than one physical machine node. (If you only have one node in any deploy, sure, it's likely overkill, but that seems like a weird enough case to not be worth too much ink.)
If you silently replaced all my container images with VM images and nodes running containers with nodes running VMs, I think the vast majority of all my Kubernetes setup would be essentially unchanged. Heck, replace it all with people with hands on keyboard in a datacenter running around frantically bringing up new physical servers, slapping hard drives in them, and re-configuring the network, and I don't think the user POV of how to describe it would change that much.
I've seen some places advertise it but I have not tried it.
But, honestly, more generally in my head I wasn't thinking much about it since I consider that as a "cost optimization" thing than a "core kubernetes function." E.g. the addition (or not) of limits is just a couple lines, compared to all the rest of the stuff that I'd be managing specification of (replicas, environment, resource baseline, scheduling constraints, deployment mode...) that would translate seamlessly.
(And there are a lot of parts of kubernetes that annoy me, especially around the hoops it puts up to customize certain things if you reaalllly actually need to, but it would never cross my mind in a hundred years to characterize it as just a wrapper around cgroups etc like the OP.)
Something often underappreciated is that, in the possible future you're describing, you can use all of these new fangled "what's old is new again" approaches by continuing to just use Kubernetes. Kubernetes is, in a way, designed to replace itself.
"Kubernetes is beautiful. Every Concept Has a Story, you just don't know it yet... So you use a Deployment... So you use a Service... So you use Ingress... So..."
lol, the big problem with kubernetes is that none of the choices have been made, it's not opinionated at all, there are no conventions. It's all configuration and choices all the way down. There's way too much yaml, and way too many choices for every tiny component, it's just too much.
I do run a k3s cluster for home stuff...
But I really wish I could get what it provides in a much simpler solution.
My dream solution would effectively do the same as k3s + storage, but with a much simpler config, zero yaml, zero choices for components, very limited configuration options; it should just do the right thing by default.
Storage (both volume and s3), networking, scale to zero, functions, jobs, ingress, etc... should all just be built in.
Well... we have k8s for that... I do not wish to take k8s away from those who like it, I am asking for a new solution that's very opinionated, and as close to zero config as practical.
I have limited ram and want scale to zero for apps that use a lot of ram but that I only use one at a time, like game servers, or things that can be done overnight while I sleep, like media encoding.
The main reason I went to k8s, is for the not having to think about what machine will have enough resources to run an app, just throw it at the cluster and it figures out where there's capacity.
And, I want hardware failing/getting replaced to be a non issue.
edit: I wanted to add that my hobby is not systems admin, I want it to be as hands off as possible. Self-hosting is a means to an end. I have so far saved over $200/month in subscriptions by replacing subscriptions I was using with self-hosted alternatives. I can now use that money on my actual hobbies.
Yeah, I spent quite a bit of time learning Kubernetes, but now I'd use it to host a static webpage on a single server, over alternatives. It's so awesome.
I am not the person you asked this question to, but I would probably do the same so I will answer:
Once you get used to it, it just makes managing things simple if you always use it for everything. I have a personal harbor service that I run on my local cluster that has all my helm charts and images, and I can run a single script that sets up my one-node cluster, then run a helm install that installs cert-manager and my external-dns, and now I can deploy my app with whatever subdomain I want and I immediately get DNS set up and certs automatically provisioned and rotated. It will just work.
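To illustrate that flow (app name and issuer name are made up, and it assumes cert-manager and external-dns are already installed and configured): the per-app config can be as small as one Ingress with two annotations, and both DNS and TLS follow from it:

```yaml
# Hypothetical Ingress: external-dns creates the DNS record from the
# annotation, and cert-manager provisions/rotates the TLS cert.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # hypothetical issuer name
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 8080
```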
1. Assuming managed service, it frees me from host OS management. So basically the same proposition, as good old "PHP+MySQL" hosters. You upload your website, they make sure it works. But without limitations and with much better independence.
2. It allows me to configure everything using standard manifests. I need to provision the cluster itself initially, then everything could be done with gitops of various automation levels. I don't need to upload my pages via FTP. My CI will build OCI image, publish it to some registry, then I'll change image tag of my deployment and it'll be updated.
3. It allows me to start simple, and extend seamlessly in the future. I can add new services. I can add new servers. I can add new replicas of existing services. I can add centralized logging, metrics, alerts. It'll get more complicated, but I can manage the complexity and stop where I feel comfortable.
4. One big thing that's solved even with the simplest Kubernetes deployment is zero-downtime releases. When I update the image tag of my deployment, by default Kubernetes will start a new pod, wait for it to pass its readiness checks, redirect traffic to the new pod, let the old pod stop gracefully, and then remove it. With every alternative technology, configuring the same requires quite a bit of friction, which naturally restricts you to deploying new versions only at blessed times. I've come to trust Kubernetes enough that I don't care about deployment time; I can deploy a new version of a heavily loaded service in the middle of the day and nobody notices.
5. There are various "add-ons" to Kubernetes which solve typical issues. For example, an Ingress Controller lets the developer describe the Ingress of the application: a set of declarative HTTP routes which will be visible outside and reverse-proxied to the service inside. The simplest route is https://www.example.com/ -> http://exampleservice:8080, but there's a lot more to it; basically you can think of it as an nginx config done differently. Another example is the certificate manager: you install it, configure it once to work with letsencrypt, and you forget about TLS, it just works. Another example is the various database controllers; for instance, cloudnativepg lets you declaratively describe Postgres. The controller will create a pod for the database, initialize it, create a second pod, configure it as a replica, perform continuous backups to S3, monitor availability and promote the replica if necessary, and handle database upgrades. A lot of moving parts (which might be scary, tbh), all driven by a simple declarative configuration. Another example is monitoring solutions, which let you install a prometheus instance and configure it to capture metrics from everything in the cluster, along with some useful charts in grafana, all with very little configuration.
6. There are various "packages" for Kubernetes which essentially package some useful software, usually in a helm charts. You can think about `apt-get` but for a more complicated set of services, mostly pre-configured and typically useful for web applications. The examples above are all installable with helm, but they add new kubernetes manifest types, which is why I called them "add-ons", but there are also simpler applications.
Just for the record, I don't suggest that to everyone. I spent quite a bit of time tinkering with Kubernetes. It definitely brings a lot of gotchas for a new user, and it also requires quite a bit of self-restraint for experienced users not to implement every devops good practice in the world. Sometimes maybe you don't even want to start with ingress; I saw a cluster which used a manually configured nginx reverse proxy instead, and it worked for them. You can be very simple with Kubernetes.
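The zero-downtime rollout in point 4 above comes down to two stanzas in the Deployment: a rollout strategy and a readiness probe (names, image tag, and health path here are hypothetical):

```yaml
# Sketch of a zero-downtime rollout: never drop below the desired
# replica count, and only route traffic to pods passing readiness.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all 3 old pods until replacements are ready
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v42   # hypothetical image
          readinessProbe:                       # gates traffic to new pods
            httpGet:
              path: /healthz
              port: 8080
```

Changing the image tag and re-applying this is the whole deploy.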
Honestly the main problem is people using k8s for something that's like... a database, and an app, and maybe a second app, that all could be containers or just a systemd service.
And then they hit all the things that make sense in big company with like 40 services but very little in their context and complain that complex thing designed for complex interactions isn't simple
But if you want some redundancy, k8s lets you just say run 4 of this, 6 of this on these 3 machines. At least I find it quite straightforward.
The database is more complex since there is storage affinity (I use cockroachDB with local persistent volumes for it) - but stateful is always complicated.
Most of the time you don't need redundancy. You need regular backups for exceptional circumstances. And k8s gives you more complexity, and more problems through more moving parts, to give you the possibility of using a feature you'll never need, and if you do start to use it it'll probably be instead of fixing performance problems downstream
Are we talking for personal projects where there are no expectations, or small startups where you don’t have much scale but you still care about down time and data loss?
Personal projects are one thing, but even the smallest startup wants to be able to avoid data loss and downtime. If you are running everything on one server, how do you do kernel patches? You need to be able to move your workload to another server to reboot for that, even if you don’t want redundancy. Kubernetes does this for you. Bring in another node, drain one (which will start up new instances on the new node and shift traffic before bringing down the other instance, all automatically for you out of the box), and then reboot the old one.
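A sketch of that drain dance (node names and the patch command are illustrative; this assumes a running cluster and SSH access to the node):

```shell
# Bring the replacement node into the cluster first (provider-specific), then:
kubectl cordon old-node          # stop scheduling new pods onto it
kubectl drain old-node --ignore-daemonsets --delete-emptydir-data
# Workloads are recreated on the other node(s); once old-node is empty:
ssh old-node 'sudo apt-get upgrade -y && sudo reboot'
kubectl uncordon old-node        # or decommission it entirely
```

The drain step respects PodDisruptionBudgets, which is what keeps the traffic shift graceful rather than abrupt.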
Again, you could do all of this with other tech, but it is just standard with Kubernetes.
> but even the smallest startup wants to be able to avoid data loss
Seems true at a glance!
> and downtime.
Maybe less so - I think there’s plenty out there, where they’re not chasing nines and care more about building software instead of some HA setup. Probably solve that issue when you have enough customers to actually justify the engineering time. A few minutes of downtime every now and then isn’t the end of the world if it buys you operational simplicity.
A while back when the agents got hyped I was looking into the whole "give it a VM / docker container" I realized the safest and simplest option was just to give it its own machine.
Then I realized giving it root on a $3 VPS is functionally equivalent. If it blows it up, you just reset the VM.
It sounds bad but I can't see an actual difference.
No argument there. The Toyota 5S-FE non-interference engine is a near-indestructible 4-cylinder that's well documented, popular, and you can purchase parts for pennies. It has powered 10 models of Camrys and Lexuses and is battle proven. You can expect any mechanic who has been a professional for the last 3 years to know exactly what to do when it starts acting up. 1 out of 4 cars on the road have this engine or a close clone of it.
It's not what any reasonable person would use for a weedwhacker, lawnmower, pool pump or an air compressor.
Sure, but to extend your metaphor, Kubernetes HAS smaller engine models that you can use in those situations, and still gain all the benefits of being in the same ecosystem. You can use K3s, for example, and get all the benefits without having a giant engine in your weedwhacker.
It downloaded itself on my phone as well. I thought it was some quirk with the Apple Watch sync because I used to have headspace installed at some point and that automatically shows up on the Apple Watch but deleting an app on the iPhone doesn’t always delete the corresponding Apple Watch app. So if you open headspace on the Apple Watch I assumed it redownloaded itself on the iPhone.
same. i get blasted with ads for this app on whatever platform, never installed it myself. the amount of promotions + this = my underdeveloped brain is so ready to assume the worst here. been a while since i used my pitchfork & i'm here for the riot.
if it is, in fact, something nefarious at play that would be a pretty crazy 2026 era exploit. but i'm certain it's a bug/artifact of some sort that, for whatever reason, affects this specific app.
Maybe the developer was using Headspace as part of the test data and it bled into production?
It's hard to imagine what Headspace would hope to achieve if this were an exploit executed by them. It's so salient that it makes no sense to do on purpose. At least some portion of Apple employees and their families are going to be affected by this, and it would escalate to the legal department immediately.
when "explaining a thing, no more assumptions should be made than are necessary."
could be an ios bug, or a bug with the notification library they use. any other apps behaving similarly?
considering the possibility this was on purpose, they would risk getting banned from the appstore. no, they are not big enough to avoid that. so it's unlikely this was intentional.
I feel like there are (at least) three main critiques of AI, and I wish we could debate them separately, because I think they each have different resolutions.
The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
The other two critiques are trickier. The second is the environmental impact of AI, and the response is difficult. Doing work to make it more efficient, and continuing to develop cleaner energy sources, is paramount. Taxes and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. It has to be non-negotiable if massive energy usage for AI is to continue.
The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
I guess there is another issue, related to the last one, which is how we deal with the ability to use AI to mislead and commit fraud at scale. How do we deal with not being able to tell what was actually said or done by a human and what is AI pretending to be human? How do we avoid and mitigate the ability of AI to generate massive amounts of custom content used to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that it takes a lot of effort and time to do certain things that can now be done instantly, thousands of times.
>The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This was the argument about robots. It did not pan out. No taxes materialized. Robots and automated machines have not shared their productivity. In fact, things like self-checkout have shifted the labor load to the customer instead of the company.
>We have the technology to produce energy in sustainable ways, but it is expensive
AI Datacenters should be completely sustainably self-powered. Full stop. We did not spend decades bringing down the cost of power only to have it all hoovered up by robber barons who "need" it to be the first immortal AI God. We did not install water treatment plants to bring down our water usage rates just to feed the machine spirit.
>How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
> Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
Well then we should maybe ask ourselves why RealityTV gets more views than well written work.
If you lost your $60,000 a year job due to this, do you really believe a basic income funded by it will make up that loss? It won't. Basic income in the US is usually proposed at $12k per year, which would add another $3 trillion to the budget. Do you think you can even get that just taxing these companies? I don't.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
People who complain that UBI doesn't make mathematical sense don't realize our current economy doesn't make mathematical sense either. All the prosperity we in the developed world enjoy comes at the cost of extracting wealth from the rest of the world and of governments taking on ever more debt.
The modern (social or economic) history of China, Europe, Russia, UK, US are all good case studies. In aggregate, I think they underscore the reality of the system. Every year we now have high profile people coming out of the system screaming about how insane it is: bankers, traders, politicians, military intelligence. If you had to boil it down to a single book debunking late 20th century pax Americana international macro-economics, it's hard to go past Confessions of an Economic Hitman, although not written formally. I've personally had chapter one verified by an Indonesian diplomat. Alternatively, take the quippy summary of a world-recognized capitalist, George Soros: Classical economics is based on a false analogy with Newtonian physics.
Fair warning: I’m quite ignorant in terms of economics, so this is a naïve way of looking at it.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
The cost of living would certainly rise somewhat but the point is that UBI is redistributive: the same absolute amount to everyone raises low incomes by a larger percentage than high incomes. Long term effects are hard to predict but in the short term it would mean the poor doing slightly better while the middle class is slightly worse off. The non-working (owning) class would be mostly unaffected as assets are insulated from inflation.
Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down.
But personally I think the biggest benefit of UBI would be the reduction in number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause.
And even if you did get the 60k and never can find work again are you gonna be happy about the next door neighbor working for 120k and getting his 60k on top?
All the proposals I’ve seen would set the marginal tax rate on the 120 so high that his earnings would end up more like 40k from the 120k job and then he gets his 60. So, still some benefit to working, but a very progressive tax rate on higher earnings. Not sure I agree with this, but that is what I’ve seen.
Your neighbor would get $60K UBI but their tax bill would go up by $80K because the government needs tax revenue to pay the UBI.
For high levels of UBI it’s not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good unless you do math.
I mean the numbers. 12k per year is peanuts. You cannot live off that and to do it we'd be nearly doubling the budget (that's old data, it's probably not that portion of the budget anymore).
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech which means restarting my career. Getting a job right now is not easy in any field especially not in anything like a living wage. If I did not have my parents I would be on the streets right now, thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing.
Oh, yeah $12k would not do it. For a UBI to work we would have to shift a significant portion of the concentrated wealth. I too was laid off long enough ago that by now I would be in a bad spot without help, also no mortgage or anything, and I don't travel or go out much. UBI of any degree would do something, but it would have to be much higher than $12k to tread water just due to rent alone. Aside from UBI we would likely need to decouple housing from profit, it has the same problem as healthcare. Demand for it is inelastic to a certain degree, everyone needs somewhere to live.
> do you really believe a basic income funded by it will make up that loss? It won't.
Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there, it just needs to be appropriately redistributed
This is one of the most horrifying comments I've ever read on this website. It's practically a dare to engage in civil war or violent revolution. People fundamentally experience life as relative - as changes. You can't "deprogram" intrinsic human nature. You can just wait 80 years for everybody who's not used to the new hell to die.
24k puts you near poverty level. $1k per month will cover food expenses, it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are expensive.
UBI will never happen in the US so it's a pointless argument. Americans will have plenty of pawn shops and short-term loan services to help them, though.
How is not wanting to live in poverty using the poor as a foil? How is it hypocritical/fake to care about people who are in situations that I don't want to be in? Isn't that just logical?
> $12k a year is plenty. You’ve just been raised above your natural standard
I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up.
> $12k a year is plenty. You’ve just been raised above your natural standard
> I get where you're coming from.
You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and I would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible.
With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance since there is no way in hell we could afford to pay the deductible.
Their numbers are wrong. But their fundamental argument, I believe, is degrowth: that we are living beyond our means and need to lower our expectations of living standards to live sustainably. It's a philosophically-appealing argument. It's also wrong, unless you're comfortable with the inevitable violence and likely population destruction that would need to ensue from an honest degrowth agenda.
Just as hyperloop was designed as a techbro pie in the sky notion to kill high speed rail, basic income as an idea is designed to kill more realistic attempts to shore up welfare, e.g.
* A job guarantee like we had during the great depression
* Lowering retirement age
* Raising the minimum wage
* Expanding medicare to everyone
It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby!
Ah, you're worried it can't do that? Maybe it's mostly smoke and mirrors then.
The historic origins of UBI are from political parties that wanted most of those same things, too, especially raising the minimum wage and expanding medicare to everyone.
A strong minimum wage makes UBI more attractive. More people will want jobs in addition to UBI. UBI is also seen as a market force to naturally drive minimum wage up, because UBI offers workers more choices: more opportunities to build a startup or take a sabbatical instead of work 40 hours. The labor market has to compete with that "opportunity cost" in ways it doesn't need to care about today. It would increase liquidity in the labor market and in terms going all the way back to even Adam Smith, make the market more free. Wages would better reflect demand for the work if laborers had more choices at more times in their lives where and how much to work.
Medicare for Everyone and universal health care make UBI simpler. Health risk is always going to be variable, and insurance-like risk pooling will always be a good idea for society: it defrays costs in bad years from surpluses in good ones, and defrays costs from unhealthy people by considering how many people are kept healthy. UBI could be designed to try to cover much of health care, but it is never going to be as efficient as a pooled single payer. If a country already has universal health care, the conversations about UBI get a lot simpler. It is a lot easier to sell as a flat universal grant. Your health care can be provided by a complex risk pool and smart accountants doing a lot of smart math on your behalf. Your UBI can be just a flat number. Simpler: you can think about how you spend your UBI without having to consider your predicted health outcomes in that period of time. UBI's flat universal value can be set on benchmarks that don't need complex amortization schedules and risk analysis.
The Canadian Social Credit Party, formed to espouse UBI, was one of the keys to building Canada's universal health care, and their priority was that first, then UBI. That still seems the best priority order to me.
So the problem with 3 out of 4 of your challenges is that, right now, they mean young people need to work more to achieve them. Money is an issue, but money by itself cannot solve it; it really needs to be backed by more people working. That's not going to happen; in fact, fewer people will work.
So without AI, the path forward is obvious: those 3 will become worse. Lowering retirement age, raising minimum wage, and expanding medicare won't happen without AI. They can't.
We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else.
> It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff.
So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does, it provides capital-based labor. You buy a robot. Robot costs a lot, but operational expenses are marginal, energy and (maybe) "tokens". Add solar power, and let's say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot.
Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve.
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
> Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
Historically, that "we'll figure something out" has usually meant the economical wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd.
I'd prefer if history would not repeat itself for once.
> Historically, that "we'll figure something out" has usually meant the economical wipeout of ...
Uh, historically everything has usually meant the economical wipeout of large parts of the population. It still means that in most third world countries. Economic power is not the huge differentiator here.
Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name. If you're giving people more money in exchange for nothing (or nothing of any value to anyone, as in the case of a job guarantee), it's effectively indistinguishable from UBI.
"When our grandparents built the hoover dam, the lincoln tunnel and the triborough bridge with a job guarantee that was just money for nothing - UBI with extra steps."
^ this would be an accurate representation of your opinion then?
That job guarantees exceptionally produce useful things doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth.
> doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth
One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
>Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
Creating busywork doesn't strike me as a particularly worthwhile endeavor, compared to idleness.
> the Hoover dam is also not the typical example of the kinds of projects guaranteed job programs generate
NASA arguably ran its post-Apollo pre-Artemis period as a jobs program. Again, there will be waste. But there will also be waste with UBI. My suspicion is peoples’ tendency towards purposelessness will exceed bureaucrats’ tendency towards uselessness. That’s a loose hypothesis. But in its balance lies which system is more competitive (and satisfying).
>My suspicion is peoples’ tendency towards purposelessness will exceed bureaucrats’ tendency towards uselessness.
The question we need to answer is, given infinite labor (limited over time, but unlimited given unlimited time) is there infinite meaningful work that a government can allocate it to? Eventually you will have built all the dams, tunnels, and bridges that you can usefully build. Historically what tends to happen is that work that isn't strictly necessary gets allocated. Roads that are fine get repaved, etc. I don't see how needlessly wasting energy and resources is better than paying people to spend their time however they see fit.
Like the post above says, there are multiple issues at play with AI. The same can be said about universal income.
The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine etc. There are a lot of people who would enjoy doing these things but are too time poor to do so.
Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid.
If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly, the absolute wealthiest should reduce the most, but on a global scale, it is hard to defend that those in the top 10% of incomes should retain their position.
The proposal for how much a universal income should pay is a variable to be argued itself. I can certainly see it being argued for at a lower level than ultimately desired since something is better than none.
In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population.
Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either.
There is also a likeability problem. Altman and, shockingly, to a lesser degree, Musk have terrible brands. When folks see those people at the top of these companies, folks who have been publicly saying they're going to cause massive job losses and cause human extinction or whatnot, they're going to hate the companies irrespective of the actual risk of job losses or environmental impacts.
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
The fourth aspect to discuss is how do we want to restrict the influence of AI companies on politics? Will we allow the CEOs to implement Thiel's vision of a world run as a company with CEOs at the top via massive monetary influence on political decision making, effectively abolishing democracy? If they really manage to replace 50% of the workforce with AI, their influence over everything from regulation to elections to social security networks as well as foreign policy will be enormous.
There is another critique that is not specific to AI but I think is bigger than all of these: that a relatively small number of large companies, and the small number of very wealthy people who control those companies, have an outsize influence on many aspects of society. AI is the poster child for this right now, but tech companies in general are also reviled, and more generally all kinds of companies (media, fossil fuels, etc.) are targets of opprobrium.
From this perspective, the main irritation of AI is that it is the biggest, most intrusive case of "some rich guy is messing with my life". This is driven largely from the willingness of a small number of rich people to lose large amounts of money shoving AI down everyone's throats in the hope that that will eventually lead to them recouping those losses.
I believe a significant amount of AI criticism is really about this, and that means we need to resolve the overall issues of wealth inequality and economic skewing. People would be much less angry about AI if its development and ownership were more diffuse, and if the patterns of its use were more directly connected to its current observable abilities, rather than based on what some group of insiders thinks about how much its stocks may go up in the future.
I think you're missing one of the major reasons people are against "AI": the jerks at the top. When obviously nefarious people are lining their pockets and not bothering to even pretend to care about the people around them, it's no surprise they're hated.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn't bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world.
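The back-of-envelope math above is easy to check. A minimal sketch, using the same assumed inputs (~$100B annual AI revenue, ~330M US population - both rough figures, not official data):

```rust
// Per-capita share of a revenue pool, per month.
fn per_capita_monthly(annual_revenue: f64, population: f64) -> f64 {
    annual_revenue / population / 12.0
}

fn main() {
    let monthly = per_capita_monthly(100e9, 330e6); // ~$25
    let daily_cents = 100e9 / 330e6 / 365.0 * 100.0; // ~83 cents
    println!("${monthly:.0}/month, {daily_cents:.0} cents/day");
}
```

Same ballpark as the figures quoted: a full 100% tax on that revenue funds on the order of a dollar a day per person.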
This is based on the assumption that AI is going to take all our jobs. If this is true, then as more jobs are absorbed by AI the revenue would increase.
As more jobs are absorbed by AI the revenue would increase, but not dramatically, since AI competition will lower prices to the cost of serving tokens, which is close to zero. We'll get more stuff done and the world would be better off. Even if we don't invent new jobs, we can compete for existing jobs harder; for example, we can have 3 times more of X (say, restaurants) at 1/3 the client base on average. It doesn't mean that everyone will be of a higher social status than average, which is what people actually want and which is mathematically impossible.
I don't think the last two critiques are good critiques at all. The environmental impact is a function of our energy sources, not our energy uses. Complaining about energy and water when we have effectively infinite energy beamed down to us on a planet whose surface is 70% water seems silly.
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real main critique is for AI jobs that are a one-to-one replacement, your taxi driver, your dock worker etc. I don't think UBI is a viable solution (I used to) but nothing replaces the community and status that a real job gives you. This is going to be a tough one.
In my opinion the main, and really only, issue: AI is a necessity. Everything from war (including defense departments), to jobs, to rental advertisements, to food packaging, to restaurant reviews, to news, to education, to programming, to architecture, to politics ... will have to change due to AI. Not changing them is not really an option. Everything needs to be figured out here.
A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it.
Everyone, including governments will have to adapt.
And to add insult to injury, everything comes from the US and it's really expensive.
I think you may be going too far, as in your critiques assume the tech is further along than it actually is. There are three fundamental problems for mass AI adoption/AGI:
1. Lack of memory/continuity
2. Lack of agency
3. Lack of self-awareness
Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:
4. Lack of compute
To get anywhere near AGI we need massive context windows. The whole thing is a mess.
I think people really confuse their imagination and expectations with reality. There's so much talk about AGI and mass layoffs. Then there is my experience.
I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust, which returns a boolean depending on day of week and time of day. The logic looked OK to me, but tests were failing. Notably, my real-world-data-derived tests were succeeding, while the brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, and so on. They also updated tests. We went from one failure to another, while they confidently reassured me that "this is the fix", that they had found the "crucial bug", etc.
Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute force test. Just a simple loop with asserts and printlns to see what is failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine tuned the test to actually check what it was supposed to be checking and voila. The "fast" thinking machine episode took me 2 hours and only produced frustration. Sorry I should learn to speak the language - AI reduced my development velocity :)
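The "brute force test" described is about five lines. A hypothetical sketch (the actual function isn't shown, so both the predicate and the opening-hours rule here are made-up stand-ins):

```rust
// Day encoded 0 = Monday .. 6 = Sunday; hour in 0..24.
// Assumed rule for illustration: open Mon-Fri, 9:00-17:00.
fn is_open(day: u8, hour: u8) -> bool {
    day < 5 && (9..17).contains(&hour)
}

fn main() {
    // Enumerate every (day, hour) pair, compare against an
    // independently written oracle, and print any mismatch.
    for day in 0u8..7 {
        for hour in 0u8..24 {
            let expected = day <= 4 && hour >= 9 && hour < 17;
            let got = is_open(day, hour);
            if got != expected {
                println!("mismatch at day={day} hour={hour}: got {got}");
            }
            assert_eq!(got, expected);
        }
    }
    println!("all {} cases pass", 7 * 24);
}
```

For a domain this small (168 cases), exhaustively checking against a second, independently written version of the logic pinpoints the disagreement immediately - which is exactly what the flailing LLM sessions failed to do.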
The only poverty I see coming is from collapse of quality after these dumb machines are used to replace people, who actually know what they are doing.
They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating your hilariously small context window?
Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?
That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone.
> Do you realize that "memory" requires eating your hilariously small context window?
I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can, and commercially available systems do, load it into context on demand when it's needed for some problem or another.
The LLM only currently has the illusion of these things. Hence the bubble.
I know that you (or anyone), as a human being, don't merely have the illusion of these things.
This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.
Your LLM does not actively engage with the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say, "Oh hey SpicyLemonZest, I was thinking and had an idea the other day," because it has nothing between each query.
> How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Just as food for thought: looking back into history, during the late 1920s, mass production had a critical impact on Art Deco [1]. Artists were divided on the question of whether mass-produced art (using new industrial methods) could have a quality similar to hand-crafted art. It is clear that different people will have different opinions on the subject.
The technology is not there yet, but one example of mass production from AI would be book adaptations into movies. I'm sure there are many other hard-to-predict examples that might empower people, degrade art quality, improve art quality, divide people, or maybe bring people together.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
Nice, but completely unrealistic. The whole reason AI is/will be adopted by companies like wildfire is to cut costs and increase profits. If they have to pay taxes equivalent to what they were paying in labor (or anywhere close to that), then AI is for nothing. Businesses will never agree to it. So this will never happen unless there is some sort of social revolution that completely remakes the system.
Why not just cut out the company entirely and go after something much easier to audit: capital gains taxes.
This is money obtained for doing nothing (i.e., like UBI but for the wealthy).
It should be taxed heavily and in inverse proportion to how long the investment was held. For example, HFT gains get a 90% tax. Gains held over 10 years gets a 10% tax. Fit an appropriate curve in the middle.
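One possible shape for the "appropriate curve in the middle" is an exponential decay between the two endpoints given above. To be clear, the 90% and 10% endpoints come from the comment; the exponential shape and the function name are assumptions made purely for illustration.

```python
# Illustrative sketch only: one candidate curve for a holding-period
# capital gains tax, decaying from 90% (instant trades) to 10% (10+ years).
import math

def capital_gains_rate(years_held: float) -> float:
    """Tax rate as a fraction of the gain, by holding period."""
    if years_held >= 10:
        return 0.10
    # Decay constant chosen so rate(0) = 0.90 and rate(10) = 0.10.
    k = math.log(0.90 / 0.10) / 10
    return 0.90 * math.exp(-k * max(years_held, 0.0))
```

For example, a gain held one year would be taxed at roughly 72% under this curve, and five years at roughly 30%; a linear or piecewise schedule would work just as well, the point being only that the rate falls monotonically with holding time.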
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with.
In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability?
Needing fewer offices, fewer people driving to those offices, less A/C and heating for those offices, and fewer resources building those offices could offset the energy usage of AI.
We can just turn all the office buildings into datacenters; they already look like heating vents! Cover them in solar panels on the outside to cover the windows, and done!
Universal basic income is not an adequate replacement for a good career. Universal unconditional prosperity might be one, but it's not clear whether AI can really do that.
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
The main critique of AI is that it's a dumb hallucinating parrot. It can't do genuine human quality work at all, outside of extremely narrow domains like basic translation and copyediting. Even for Q&A, while it can be useful by quickly accessing a huge storehouse of learned knowledge, the vulnerability to hallucinations means that human expert verification will always be required.
I'll note that there can be multiple main critiques coming from an incoherent set of viewpoints, since this is public opinion we're talking about.
Between "AI doing creative work", if you believe in it, and "fraud", there's all the low-key filler material that's sub-creative and sub-fraudulent. There's a similarity between the phrase "it was made with AI" and phrases like "I didn't bake your cake myself, it came from a store" or "sorry, it's just a cheap plastic one". So part of AI's image is that it's a flourishing new source of disappointment.
The concern I hear the most (which I don't think is common among the general public) is the existential risk one (that an AI may be created that drastically exceeds human intelligence, and that it may accidentally be incentivized to take actions that destroy most or all of human civilization).
> concern I hear the most (which I don't think is common among the general public) is the existential risk one
Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.
The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023.
> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?
You're hearing a lot about the jobs problem largely because the media's job is to scaremonger. There have been multiple studies that have concluded AI's impact on jobs/layoffs is negligible: https://budgetlab.yale.edu/research/evaluating-impact-ai-lab...
This has become some kind of truism that has little actual evidence. It was said it would happen in 2022, 2023, 2024, 2025, and is still being said but it isn't true. The models got better this year but they've always gotten better. And none of the recent improvements have been as dramatic as say GPT3 to GPT4. I feel crazy. People are saying the sky is falling but every researcher or person who analyzes these issues professionally says the opposite.
In that sense the general public is less superstitious than many technologists. Some of the general public might anthropomorphize too hard, which is pretty tame compared to the belief in an alien AI intelligence sprouting and killing us, accidentally or intentionally.
As far as the paperclip problem is concerned, we’ve already had that problem for a long time now in the form of good old fashioned human institutions.
No, you're backwards. The first point is definitely the most important and most tricky.
UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible.
We are allowed to regulate businesses. We simply don't.
How would I know what's best? Anything to protect people. There are infinite ways to write regulations/measures, with varying degrees of results, difficulty, and severity. Obviously, while UBI might be a good idea or not in general, in this context it is on the low-results, low-severity, high-difficulty side (probably every solution is on the high-difficulty side, though).
I think frontier AI research should be outlawed until such time as there's a broad consensus on how society ought to deal with it. This would have to be coordinated internationally to be effective, but I think that would be achievable if the US sent a credible signal by forcibly shutting down any one of the major labs.
Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself?
As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it.
There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium.
Yes, that is true, but it's not my point. I am not saying it'd be impossible to find people who are doing it. My point is that there will always be a group of people, who'd be willing to do potentially dangerous things as long as those things are possible and are believed to provide some sort of advantage. For that reason, those people would either be in decision making positions or have a good enough offer to decision makers. Speaking of uranium - I don't think AI is anything like it (although the AI industry propaganda really wants us to believe that), but even there we have examples of countries that were pursuing nuclear weapons both successfully and unsuccessfully as well as countries that could have them, but choose not to. So the ban itself isn't necessarily the main point here.
It means that if something is physically possible, someone will be doing it, regardless of legal, moral, or social barriers. False on its face? Not that long ago, global public opinion was mortified at the news that newborn twins in China had been genetically modified. I am old enough to remember the outrage in the late 90s as the world watched the first cloned sheep grow up, get sick, and die. It was possible to do, so someone had done it.
The point is: with the use of law, morality, and social pressure, we can moderate the frequency and scale of some phenomena, but we cannot stop them. I think this idea is what prevents some bans. "If the Chinese can do it, and we stop ourselves from doing it, they will gain an advantage and we will lose." Substitute "the Chinese" with whoever is the opponent at any given point in time and you have a rather plausible explanation for why things were the way they were.
Substantially increase the capital gains tax and cap inheritance to only enough wealth for a single generation to get by without performing any real work.
This might also come with the benefit of reducing the number of nepo-imbeciles running things now.
The easiest (and hardest) solution ever: ban "offshores". For example, a UN session convenes and passes an extremely simple resolution: all members institute a minimum tax (say 15%) for literally everyone and everything, and all non-complying countries are completely sanctioned à la Iran. This would take less space than an A4 page and be super easy to draft. But of course it is absolutely impossible to ever pass; just a fantasy.
This may seem counter-intuitive, since we are talking about taxing billionaires and megacorps to the tune of >90%, but a minimum tax would actually make those people and companies part with their gains, unlike a 90% tax, which will never work. Every single company in the world above a certain size, and every single millionaire and above, is employing criminal schemes to evade paying taxes, avoid declaring funds, and game the economy in other ways. By enforcing a certain minimal tax and banning all "drains", the amount of money accrued would be astronomical. There would be no need for 90%; instead, the crazy 50-60% taxes paid by the fools (the actual middle class) could be meaningfully reduced to the levels of half a century ago, and there would still be money left.
It is quite telling that so many comments here are about UBI as a solution. UBI is a billionaire proposed solution, or distraction. Yeah, of course they want to keep control of the surplus and just have a sustenance spigot for the former workers.
> We are allowed to regulate businesses. We simply don't.
If workers are defunct, what are businesses? Also defunct. Business owners can't gloat about not needing workers while at the same time claiming that their businesses have a right to life. What is a business owner sitting on a completely automated set of assets? Smaug sitting on his hoard of gold.
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base?
Well-paid software developers have scoffed at or been ignorant of worker organizing for, maybe, forever? "But I have a good paycheck and equity..." Now what?
But it's not a clear way to solve the issue. UBI, even if enacted tomorrow, doesn't stop the enormous crash of the middle-class, and the fallout of that. Maybe it will stop some people from literally dying - that's "solved"? It's a small buffer at the very worst end of a gigantic problem. The word "solve" is totally ridiculous.
Okay you’re right. In some sense of the word it is straightforward. But I still think it is not straightforward compared to most things.
I can get more muscle mass at the gym. That is straightforward. Only a few things make it not easy.
But “share the productivity with society at large”... you have to collapse so many more variables.
- How to organize political resistance against AI tech billionaires
- How to not get co-opted by counter-measures by AI tech billionaires
- How to resist false promises (backed by nothing) that some AI tech billionaire will enact UBI for everyone so everything will be fine (those with all the power can withdraw whatever they want at any point)
- How to deal with white collar competition in the interim period before automation: everyone using AI and nodding along with it[1] just to not “fall behind”
- How to potentially fight against a small minority (AI tech billionaires) but that now might have enough megawatts to turn their stochastic parrots against any dissenters
Your first critique has a massive hole in it, because it assumes that every person whose job is replaced by "AI" is actually going to be done as well by the "AI" as it was by the human.
To the extent that this is even going to happen at all, that assumption is laughably false.
Yes; LLMs can answer some questions well (but unreliably) and with the right setups, can be rigged to perform some tasks well (but unreliably).
There is no way they are ready to take over a single full-time job. If any employer tried, the number of errors in the performance of that job would jump by a huge amount, because LLMs are not reliable and cannot be made so.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Problem for jobs is that there are 200 countries and all the earnings will go to a few. Universal basic income for everyone? Or just the US?
Who gets to keep their house locations in a new fair world? The person whose parents bought in the right place 50 years ago? Who pays the money these models earn, if nobody clicks ads or does a job? What is income for if we don’t work and can just ask the AI for everything we want?
What happens when the super smart AI comes up with “better” (more fair, consistent, etc) answers than you think you have to questions like the above? What if they end up socialist? Do we force it (and invite risk it escapes and fights us for the greater good) or give in to the presumably more thorough reasoning?
The USA will never have UBI, period. So any idea that includes any mention of it is an absolute non-starter. Outside of the USA, perhaps, but for us that is never happening.
This is the "safety" messaging that OpenAI and Anthropic keep harping on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of billions of dollars per year already.
The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also: read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line.
I'm not being paranoid, President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.
Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system tasked with making recommendations for hiring, promotion, and demotion.
Yeah, AI-enabled surveillance capitalism is likely to be every bit as bad as what people imagine China is doing with their social credit scores.
And the scary thing is that you can probably easily sell it to Democratic voters if you track racism scores for people, so you can filter people out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it is "bipartisan" and it'll be wildly popular.
Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this.
And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content".
I understand your points, but I think what scares people is that the solutions you propose are disregarded by our politicians. At least in the US, both politicians and the large donors funding them seem to be more and more allergic to anything resembling a universal basic income, and they do their best to scare people away with fearmongering about “communism”. The US is also doing a hard U-turn away from environmental protection and is trying to frame environmental conservation as radical and harmful. Other countries might be doing better on these fronts, but it’s definitely not a good sign that the US doesn’t seem to be on board with your first two solutions.
In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
I completely agree that governments and power brokers will disregard these solutions unless forced.
However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose.
I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI certainly aren't going to stop developing and pushing it, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that in the AI future the wealthy wouldn't need the masses for labor the way they do now, they would still need to avoid being killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not easy to do even with an army of AI robots, and it would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
The very same CEOs are extremely against social support, against any taxes on themselves, and against any governmental agencies that help or protect people.
How can this possibly be the easiest path in the world of Thiel, Musk, Trump, Vance, and Palantir, with the Overton window having moved toward economic conservatism for years?
Picasso famously said "Computers are useless, they can only give you answers."
You can't put things back in the bag. Perhaps the true underlying social problems are:
1. There are too many humans and not enough jobs.
2. The capitalist system only rewards profit seeking and cost externalization.
3. Our democratic representation myth is dead and buried.
4. Even in the developed world, middle-class security is gone.
So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal, economic-rationalism-driven means of governance and self-organization without turning into an autocracy or reinforcing the negative nationalist bloc-level thinking that will tie us into the same old human-thump-human stone age ape-ism and environmental cost externalization?
Perhaps AI can help in areas like improved education, improved media, proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
I am not sure inflation will work exactly the same way in a world where AI/robots do all the work.
Inflation is driven by scarcity. More demand for a fixed/limited resource drives up the price. Historically, every good and service humans bought followed this pattern, so we didn’t even have to consider an alternative.
Already in our current economy, however, we have seen a good portion of our economy shift to things that do not have this characteristic. For example, take something like a video streaming service. The marginal cost for additional demand is small enough to be almost negligible; if everyone in the world decided they wanted a Netflix subscription, there wouldn’t suddenly be a shortage of streams or a run on episodes of The Great British Bake Off. They would have to build more datacenters, but the cost per additional user is tiny compared to almost every other traditional good that came before.
If AI and Robots start doing all work, then this would spread to more of the economy. The increase in productive capacity would severely reduce the limitations that have historically driven inflation. We obviously have to invest in building robots and AI, but once we have enough robots they would be making more of themselves and we would be limited by natural resources, but we could use robots to get more of those, too… and we could focus on clean energy, since we would have plenty of robots to do that work, too.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
If the Epstein class wouldn't go for something like this in a world where they needed workers to produce, the idea that they will when we are surplus to requirements is inconceivable.