Without social media the majority of the populace would be completely misinformed on everything and the current Iran war would have 60%+ support like the Iraq war did, how is that possibly a better world?
While old-style mass media could move in lockstep, the lack of that mechanism seems to just produce a flood of counter-narratives to counter-narratives. It provides the illusion of being informed while actually leaving people more confused.
As they say, I would rather be uninformed than misinformed.
Not "every parent knows this"; lots of parents fiercely oppose their kids being banned from access to decentralized information and communication sources. Would you prefer your kids get all their information from textbooks written by Ghislaine Maxwell's father, and all their news from sources owned by zionist-aligned billionaires?
>Allowing children to smoke and drink from age 12 would be a social disaster, it's not even an argument - obviously - the 'prohibition' works - and in that case, there's nary any negative externality.
The negative externality is the huge number of young adults damaging their bodies with excessive alcohol consumption in college because they never learned to drink healthily. The US, with its late legal drinking age, has a far bigger problem with youth alcohol abuse than European countries where youth are introduced to alcohol earlier.
Given that alcohol is carcinogenic, there is no such thing as "drinking healthily".
That point aside, alcoholism rates in the Eastern EU are much higher than in the US. And Russia and Belarus lead the world. I don't think a younger drinking age correlates very well with reduced rates of alcoholism.
Not really, though. The drinking age is 18 in Sweden, and they have a far worse rate of hazardous drinking than the US; same for Finland, and to some extent the UK, where there are slightly fewer restrictions.
The legal age for alcohol is 18 in France.
This idea of 'US binging' doesn't really hold much water, though one could very well argue that 21 is simply 'too old'. The fact is, these are as much cultural issues as anything else.
Same with Japan, they are 'polite drunk', it's not even quite the same thing.
Take the argument and apply it to smoking, or to cocaine and fentanyl, and you'll see it doesn't really work out.
It really depends.
The US could have a lower drinking age, perhaps 'permitted with parents at 16', but it would also need a much more responsible culture overall. It's hard.
How is it even remotely as destructive as casinos, where the odds are always against you? From a probability perspective Polymarket is much more fair, as you actually have positive expected value if you have an information advantage.
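To make the expected-value claim concrete, here is a minimal sketch with hypothetical numbers (the 60%/50-cent figures and the roulette-style 47.4% win chance are illustrative assumptions, not data from Polymarket or any casino):

```python
# Illustrative only: expected value of a $1-payout binary share.
# A prediction market can give positive EV to a better-informed trader;
# a casino bet has a fixed house edge, so EV is always negative.

def expected_value(true_prob: float, price: float) -> float:
    """EV per share: win $1 with probability true_prob, pay `price` up front."""
    return true_prob * 1.0 - price

# Hypothetical: you believe the event is 60% likely,
# but the market prices the share at 50 cents -> EV is about +$0.10.
ev_market = expected_value(0.60, 0.50)

# Roulette-style even-money bet: roughly a 47.4% win chance
# for the same 50-cent stake -> EV is about -$0.026.
ev_casino = expected_value(0.474, 0.50)

print(f"market EV: {ev_market:+.3f}, casino EV: {ev_casino:+.3f}")
```

The point being illustrated: the sign of the EV flips depending on whether your probability estimate can beat the price, which is impossible by construction against a house edge.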
Yup. Gambling is already bad but somewhat contained within the walls of a casino or race track.
When you put money on the line for events that can be influenced, things get dark fast.
Seemingly harmless sports betting is rife with stories about mob members going to extreme lengths to change the outcome of a sporting event.
It's not hard to see how badly a bet like "so-and-so won't show up to event X" can go. The bets don't have to be about killing or injuring people for people to end up killed or injured over them.
Casinos are regulated and have to be much more transparent. In Europe and Australia, slot machines are required to have an approximate minimum RTP (return-to-player) of 95%. With sports betting, bookmakers can't just make up results. It's a dirty business that preys on the weak, but it isn't hiding that.
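For anyone unfamiliar with RTP, a quick sketch of what that 95% floor means in practice (the $1,000 session size is an assumed number for illustration):

```python
# What a 95% minimum RTP (return-to-player) means: on average,
# each dollar wagered returns 95 cents, so the house keeps 5 cents.

def expected_loss(total_wagered: float, rtp: float) -> float:
    """Average amount a player loses over many spins at a given RTP."""
    return total_wagered * (1.0 - rtp)

# Hypothetical session: wagering $1,000 at a 95% RTP machine
# costs about $50 on average.
print(round(expected_loss(1000.0, 0.95), 2))  # 50.0
```

The regulated floor doesn't make the machine fair, but it caps how aggressively it can extract money, which is the transparency point being made above.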
Polymarket prediction results can be swayed by whales and differ from reality.
Example: they ruled Tesla's unsupervised full self-driving a real, usable feature around February 2026. It still doesn't exist and won't. That decision is far from the only one of its kind.
I will make this argument in favor of casinos, which is that at least we have coevolved with them. They've been around for centuries. We collectively recognize the dangers. We are not collectively blindsided by them.
Individually, yeah, by all means they can prey on people. But they're on the list of things that have been preying on people for centuries, like alcohol, and all kinds of other things. Ring-fencing casinos has a track record of some success at containing them.
I mean, sure, I'd love to wake up tomorrow to find the human race has advanced to the point where not a single individual gambles anymore and the whole industry vanishes in a puff of smoke. I am as far into the belief that casinos are immoral as it is practically possible to be. But they are at least a known and knowable risk.
These prediction markets are blindsiding us. We could put up with them for another few decades until we coevolve with them, or we could just, you know, not. Just end them now. Plus, prediction markets have a certain meta-ness that casinos largely lack, which will keep them fresh and coevolving new ways to prey on us. Casinos have basically reached their final form; prediction markets could take decades more to get there, and it's possible there is no stable endgame with them at all. Or, again, we could just end them here and now.
I don't understand this logic. If people with an information advantage get to cheat and win, then everyone without that advantage gets screwed. I struggle to see how this is even remotely "fair". It's like playing poker, except some players get to see everyone's cards.
Even humoring your logic raises the question: why is monetizing an "information advantage" valuable to society?
>By pawning it off to AI to solve, you have learned nothing, not even how to prompt correctly as test questions are usually formulated well enough that AI doesn't need prompt massaging to get it.
If you got AI to produce a working solution, you solved the problem. In the real world nobody who's paying you cares about the method as long as you deliver results. Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
The part you're missing is that the evaluator already knows the answer. They're not checking whether you can arrive at the correct answer, but whether you know how to arrive at it. If "arriving at the correct answer" just means retrieving data from a Bayesian database using a Markov chain, you have only demonstrated that you provide no value in the chain and should indeed get a mark of zero, or get recycled.
>The part you're missing is that the evaluator already knows the answer. They're not looking that you can arrive at the correct answer, but that you know how to arrive at the correct answer.
The university evaluator is not the one paying you; the one paying you is your boss or customer. It doesn't matter how highly your university professor thinks of you: if you can't solve difficult problems as fast because your university never taught you to solve hard problems with AI, you're going to be at a competitive disadvantage in the workforce when you graduate.
I don't think the entire purpose of schools is to teach you how to answer some specific set of questions; they want to improve your knowledge and skills in various domains, and the questions are merely a roundabout way to assess that. If you can answer the questions but lack the knowledge, you're missing the most important part.
For most students the "purpose" of studying computer science at university is to get a better job and make more money. And for the people for whom this isn't the case, they're generally smart and motivated enough to learn the extra details they're interested in by themselves.
There's also no reason to learn to read and write! First graders could just point their phone at some text and have it read to them, or dictate to their phone to achieve the reverse. Why learn to swim, walk, run? Machines can do that for you too!
For now there are plenty of people who are significantly more capable than AI models. Someone who fully outsources to machines will never join that club.
You have to evaluate students on their own skills before you continue their education, because at some point AI models won't be able to help them. Anyone can use some LLM to pass the first few months of undergraduate engineering disciplines, but if you got through that and haven't learned a thing, you're completely fucked. Worse, you won't even notice the point at which AI starts to fail until you get your test results.
Once the above is not true anymore, education is pointless anyways. However for now AI can at best replace the worst performers and only in some areas.
>You have to evaluate students on their own skills before you continue their education, because at some point AI models won't be able to help them.
If at some point AI models won't be able to help them, then give them assignments that reach the point where AI alone isn't enough, so they'll only be able to solve them if they learn whatever is necessary. This is what's meant by "making assignments harder". Students who learn to solve harder problems with AI will be more competitive in the workforce than students who only learn to solve easier problems by themselves. Because AI already allows people to solve harder problems than they could unassisted, but it's a skill that needs to be learned.
As an example, with AI, it'd be a reasonable assignment to ask students to write a working C compiler from scratch. Without AI that'd be completely beyond the reach of the vast majority of students.
That's great for autodidacts, but most students will be stumped by a complicated problem if you don't slowly walk them up an incline first.
Also what do you think is an appropriate assignment for first graders where "AI is not enough"? Are we supposed to give them problems meant for engineering majors?
The things you are saying at best apply to a few select areas of education and you are hyperfocusing on them. What you are neglecting is that a lot of education focuses on teaching tool use: reading and writing is a tool, CAD software is a tool, AI is a tool, even language is a tool. For many people the best way to learn to use tools is being taught by another human being. That human being has to evaluate their progress somehow. If a first grader uses their phone to have text read to them, this tells me very little, except maybe that they can at least understand spoken language to a degree.
Using LLMs effectively, especially without essentially becoming the LLM's meat puppet, requires a set of skills many 10th graders still struggle with: putting what you mean into words, extracting meaning from text, and thinking critically about the information you are fed.
Finally there's the matter of philosophy, ethics, and politics, which also happen to be on the curriculum in some places. Are you going to let a LLM argue for you? If you have never learned to evaluate your own beliefs and turn them into something coherent that you can communicate to others, and instead let the LLM argue on your behalf, then congratulations: you have just un-personed yourself because you refused to let others help you become an actual individual in society. You're a sack of meat hooked up to a machine. ... It's probably obvious I feel strongly about this in particular.
At the end of the day, we can at least agree that people should learn to read and write? For now?
> Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
What hard problems could students solve with AI that require the students to be especially trained? It seems you are thinking of GPT-3-style "prompt engineering". That's a thing of the past. Students can just copy the assignment into the LLM; they don't need to be taught to do that.
There are a number of assumptions in what you say that don't necessarily hold.
1) That school is simply about landing a job.
2) That there is a value in students knowing how to have the AI do problems for them.
3) That the follow-on effects of manually solving difficult problems are discountable compared to the direct output of the work.
I would say you're absolutely correct that people pay for the result and don't really care how you got there. But that's a pretty shallow rationale, one which overvalues the ability to be the conduit from the source of requirements to the final output, and undervalues the individual's ability to think for themselves when faced with the challenges of technological, geopolitical, or simply uncontrolled personal circumstances.
"The conduit", whom you seem to believe is the one with the marketplace advantage, is exactly the person I would say is the most vulnerable. Not because getting the AI to produce deliverables is without value, but because it's quickly becoming a task that doesn't need an intermediary at all. Those magicians who can prompt/agent/mcp/etc their way to positive results are being actively challenged by the very AI producers our conduit people now depend on: removing the need for intermediaries would be a great competitive advantage for any AI vendor able to achieve it. And insofar as intermediaries create output from LLMs, they won't be very well differentiated: the output tends toward the common wisdom, lest the AI be accused of hallucination or of being overly agreeable. But when everyone is using AI for everything, the opportunities will be in arbitraging what the common wisdom misses, filling in the cracks that any responsible AI would simply never venture to consider. Our conduit person will be at a decided disadvantage, because it takes real thought to know when it's best to color within the lines and when it's best not to.
And that's really it. A good education is teaching you about the process of thought and becoming practiced at thinking. I would expect a better educated, thinking person to more easily adapt and make use of technology such as generative AI to solve problems more so than a person that just knows how to deal with today's prompting needs. The thinking person will be able to understand the bigger picture to better get a consistent and high quality series of results than the person just getting results as needed.
And that's really it. The output of a good education is you as a thoughtful and knowledgeable person: the output on the page is merely a means to that end. But if you focus solely on the answer on the page as the only important thing... you're really evaluating the AI, not the person who acted as intermediary.
In other words, if a person following your advice comes in for a job, simply ask them in the interview which AIs they used, then sign contracts with those vendors instead; you'll get a better bang for your buck by cutting out the middleman.
If American billionaires couldn't exist, then America would be even poorer and less developed than Europe, the entire tech industry wouldn't exist, and the country would be entirely at the mercy of China. Because nobody is going to start a business in a country that violently confiscates their wealth just for being successful. The envy of people like yourself is a deep moral illness that destroys civilizations if left unchecked.
Have you actually spent an appreciable amount of time outside of the US? Europe isn't the place of destitution and squalor you imply. I highly suggest it, to widen your perspective at least. Maybe then you'll see it's quite the reverse in many cases.
It's the exact same thing they did with Google BigQuery, which initially was an absolutely amazing piece of technology before they smothered it with more and more limits and restrictions. It's like they're putting SREs first, customers second.
All those apply to OpenAI+Codex too, but they're far more generous with limits than Anthropic, and with granting fresh limits to apologize when they fuck up.
Especially since Codex faced the same issue but the team decided to explicitly default to only ~200k context to avoid surprises and degradation for users.