What irks me about Amodei is his insistence, in his public communications and speeches, on the role of AI in defense and in providing a strategic advantage over "the enemies of the US". I'm not sure how much of it is political talk to appease this particular administration, but it seems more prominent and reiterated than I'd like.
Critical, but maybe not sufficient. Hassabis claims to manage two separate workdays every day: the first spent in meetings in DeepMind's office, the second, until late at night, studying new papers. So not just high IQ but incredible energy too. And finally, as I understand it, a highly competitive attitude.
If everyone picks red everyone lives, nobody needs saving by picking blue. Picking blue obliges others to pick blue to prevent your death, risking their own life in turn. Red is the moral option.
There is no topic in which you'll get 100% of people to agree with you, and this is no different. There will always be people who choose blue. Arguing that you could ever get 100% of people to pick red is a coping mechanism to deal with the knowledge that your choice to pick red will result in some deaths (i.e., unless blue wins).
That isn't to say I categorically judge anyone who would choose red.
If there's good reason to believe a majority, and especially a supermajority, would choose red over blue, then choosing red is indeed the only rational choice, and convincing others to do the same is the only way to save lives.
What I like about the question is that it can be used to measure whether a society is low trust (majority red) or high trust (majority blue).
However, where I take issue with the article is the assertion that it's impossible to get a blue majority, especially in the face of polling that suggests such a majority already exists. The article's claim that choosing red is the only moral choice seems at best to be self-delusion.
The utility of choosing red, and the morality of convincing others to follow suit, grows the larger the expected pool of red gets, sure. But while choosing blue carries less and less personal downside the larger the expected blue majority (much as with red), the morality of choosing blue peaks the closer you get to an even split, since it's the product of the potential lives saved by going blue and the likelihood that your individual vote pushes it over the edge.
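To make that "product of lives saved and pivot likelihood" concrete, here's a toy model of my own (not from the article): assume some number of other voters who each pick blue independently with probability p, and score a blue vote by the chance it tips blue into a majority times the expected number of blue voters it would rescue. The electorate size and independence assumption are mine, purely for illustration.

```python
# Toy model: expected moral payoff of a blue vote, assuming n_others
# independent voters who each pick blue with probability p_blue.
from math import comb

def pivot_probability(n_others, p_blue):
    """Probability that blue is exactly one vote short of a majority
    among n_others voters, so that your blue vote tips the outcome."""
    k = n_others // 2  # with your added vote, blue reaches a strict majority
    return comb(n_others, k) * p_blue**k * (1 - p_blue)**(n_others - k)

n = 100  # hypothetical electorate size (an assumption for illustration)
for p in (0.2, 0.4, 0.5, 0.6, 0.8):
    lives_saved = p * n  # expected blue voters rescued by a blue win
    payoff = pivot_probability(n, p) * lives_saved
    print(f"p_blue={p:.1f}  pivot={pivot_probability(n, p):.2e}  payoff={payoff:.2e}")
```

Running this, the payoff is vanishingly small far from 50/50 and peaks just above an even split, which is the point: your blue vote matters most exactly when the outcome is genuinely in doubt.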
Personally, I'd choose blue. I'd rather sacrifice myself than be party to the deaths of billions of people, so if there's even some hope at convincing the majority to go blue, I'd feel obligated to stay with it even if pre-polling suggests things initially tip toward red. I'd also be a bit wary of living in a society now devoid of anyone willing to self-sacrifice. I'm not convinced most people choosing red give that any thought.
> However, where I take issue with the article is the assertion that it's impossible to get a blue majority, especially in the face of polling that suggests such a majority already exists.
The people saying they'd vote blue would never actually do it. People support lots of altruistic things in the abstract, but almost nobody does it when it involves real risk and sacrifice. The cost of saving a kid in Africa by donating malaria medicine and insecticidal nets is only about $5,000. How many people do you know who will cancel their Hawaii vacation and donate that money to an African charity?
Every time you choose to take a vacation, or get a tricked-out MacBook Pro, etc., you are in a real way choosing to allow some kid in Africa to die. But you do it anyway.
I've watched a couple of the recent Alcaraz/Sinner matches, knowing next to nothing about tennis. The first serve mostly ends up in the net and is repeated; most points end on the serve, the return, or the third shot. Longer rallies are rare, and a good chunk of the time (more than half, I'd say) the game is stopped. Boring as hell.
Example stats:
Sinner vs Alcaraz, Wimbledon 2025
238 points total, of which 150 (63%) came in rallies of 1-3 shots and 20% in rallies of 4-6 shots.
Some people that pick red might just think that everyone will pick red. Then everyone survives. In a way it could reflect a positive view of humanity. Plus they wouldn't die. You'd need to pick red to make sure you're still alive to kill them all.
> Some people that pick red might just think that everyone will pick red.
Yes, isn't that always the excuse? "The others are bad so I'm justified in being bad".
But seriously, I am way more uncertain than I first thought. Basically the only reason to choose blue is because you know that a lot of people will do it, so that in order to save their life you choose to put yours at risk.
The guy is clearly an obsessive hyper-perfectionist: he tells of (or boasts about) taking a culinary obsession from reproducing fine-dining dishes (when most people are content mastering a few decent recipes) to building automated curing chambers and butchering whole animals. It's kind of obvious that this personality leads from any random objective into the deepest of rabbit holes, where everything is studied and annotated with the utmost precision. Funny as a clinical case, though I'm not sure I'd like to be around someone like this :)
Point is that sub-millimeter precision when measuring rings is doing absolutely nothing to further his shooting skills to take down a tasty deer. To the contrary. Time is limited, and every minute spent perfecting this automation was not spent improving shooting skills by, you know, shooting. In other words, this may well have made him a worse shooter than he could have been. Nothing wrong with it, but let's call it what it is.
A perfectionist defines a goal and then finds the perfect path to get there. He was just giving in to distractions and "perfectionist" is the wrong label.
It's not about submillimetre precision (OP here), it's about knowing if you can shoot well. The most common deer stalking certification in the UK (DSC1) involves three shooting tests from 20, 70, and 100m - if I don't care about 8/10 vs 9/10 shots from 25 yards, there is no way I am putting a shot within a 4" circle from 100 metres.
> every minute spent perfecting this automation was not spent improving shooting skills by, you know, shooting
I mention in the post that I had access to the range only 1-2 evenings a week, so there was no way I could improve my skills outside of these few hours.
> if I don't care about 8/10 vs 9/10 shots from 25 yards, there is no way I am putting a shot within a 4" circle from 100 metres.
Totally with you there. Though isn't what counts in the end how close you were to the center? If it looks to your eye like the shot was in the 3rd ring, what does it matter that it "technically" wasn't because half a millimeter crossed into the next ring? It was surely a much better shot than one fully in the next ring, unless you actually want to go to the Olympics or are otherwise competing in the sport.
Don't get me wrong, I totally respect the challenge of automating the counting but that this actually helped your progress still seems doubtful to me.
> I mention in the post that I had access to the range only 1-2 evenings a week, so there was no way I could improve my skills outside of these few hours
Ok, fair. Though we can surely agree that while the automation-building you did with all this extra time improved your skills, they were coding skills rather than shooting skills? (Which, again, is fantastic!)
I'll bite: no, I don't think so. If the examples aren't cherry-picked, and by "image model" we mean just the ability to generate pictures, this looks like parity with human excellence; there isn't much room for further improvement. The images don't just look real, they look tasteful: the model isn't just generating a credible image, it's generating one that shows the talent of a good photographer/designer/artist.