
Controversial opinion perhaps, but I don't think the cards or the game itself took him to fluency.

Probably the social contact.

I mean, N2 (JLPT levels run from N5, competent beginner, to N1) is really quite advanced.

Being N2 is far further than many will ever make it into learning Japanese; to arrive at N2 is very impressive. I think N3 is typically the minimum for work in Japan (outside of lower-end jobs or things like TEFL).

But JLPT is heavy on theory and light on practice.

It makes sense to me that someone with very little practice but pretty advanced grammar and vocabulary (including Kanji and spelling) would rapidly pick up fluency if they got a reason to speak.

Not to discount the MtG effect, but N2 is approximately CEFR B2, which is fluent. It's just that N2 doesn't assess fluency, meaning you can get there with near-zero confidence in conversational Japanese.


My Japanese isn’t good enough (I feel like I could pass N3 if I wanted to, but I do not find exams fun, and it won’t benefit my career, so I don’t) to comment on how the MtG rules text reads in Japanese, but I can say that English MtG rules text is so grammatically constrained that I’d say it barely qualifies as English at all, so I could easily imagine someone who could read MtG English rules text perfectly but be totally unable to even hold a simple conversation in English.

And if anything, Japanese is even worse for this. Natural Japanese is a highly contextual language, so I would expect card rules text to stray even further from natural language due to the requirement for total unambiguity.


Also, N2, even N1, is generally not remotely fluent. Plenty of Chinese speakers can pass N1 and still fail to hold a conversation.

Further, it's easy to pass N2 and/or N1 and still not be able to read most novels or follow most movies once they get into things like legal proceedings, military strategy, or science, all things that people can easily do when actually fluent.


When it comes to the practical results, it doesn't matter. Japan is a society that values rubber stamps over actual competency/performance. If you can present an N1 certificate to an employer, you're more likely to be hired than someone who's fluent without it (assuming they aren't Japanese).

Source: live and work in Japan


> CEFR B2 which is fluent

That certainly is controversial. I don't think many people would consider anyone who is fluent to only be B2.


Fluent means different things to different people (and in different languages!).

As I understand it, B2 means one has a solid, functional proficiency in the language. They can converse/listen/read/write in diverse situations, without needing to switch to a different language or to prepare in advance.

They're very likely, however, to make mistakes, say things in non-idiomatic ways, etc., although this is expected to be minor enough not to affect the ability to understand them.

In order to get to C1 and above, one needs a deeper understanding of the language (phrases, idioms, connotations, registers, etc.) and a broader set of situations they can handle, e.g., a philosophical discussion. And of course, errors are expected to be rarer.

So, literally speaking, B2 is rather fluent, since the language is "flowing" out of them and they're not stopping to think every other word (which is, as far as I understand, a common interpretation of flüssig in German).

But as "fluent" speakers should know, words come with expectations beyond the literal meaning :P


Yes I know it's an odd claim.

But as far as I recall, B2 is when you start seeing native speakers failing the exam without preparation, with C2 becoming a legitimate challenge for native speakers.

I believe the same threshold exists at N2, but it's obscured because the test is so Kanji-focused, without much assessment of fluency.


I agree. Magic-ese is a language on its own. It's close to English, but not quite. What is an "intervening if clause" in English, for example? Learning the rules of Magic will leave you confused about a natural language if you didn't know any better.

However, gaining the linguistic mastery to explain such complex rules systems, let alone practising small talk with the person across from you, helps you master a real language.


Agreed, especially in some subpopulations. I think the strong case today is to research its use in people with Down's syndrome (accelerated development of Alzheimer's disease due to 3 copies of the APP gene). Target it only to people with those 3 copies.

It's legitimate, as people with DS have a hugely increased risk of AD, and their increased risk seems likely related specifically to amyloid. Many with DS can consent to it.

In general, my completely gut feeling is that once we can target tau, we might find that targeting amyloid is also needed to fully curtail progression.


Agreed, using these anti-amyloid treatments at an early stage in people with a high risk of early onset Alzheimer's disease, such as those with Down's syndrome, is likely the most robust test available of the theory that Aβ amyloid is directly causative of this disease.

Trials are currently ongoing. The results should start trickling out in a few years...


I think an Olympiad format is better. But the financial incentive is such that it might be near impossible to stop leaks.

I.e., a panel comes up with a series of problems.

Like Advent of Code or Project Euler, but more complex and constrained.

Benchmark outcomes could be performance points plus measures of cost and time to solution (well, token count really).

It's run a couple of times per year.

It avoids overfitting.

Over time the tasks can become more complex if needed.

If they benchmax it into being able to complete full products from spec with robust implementations, amazing.


SWE-bench was created to replace olympiad coding benchmarks. I think past olympiad coding benchmarks were much less representative of real-world coding than something like SWE-bench, which is derived from real units of labor.

Further, olympiad style benchmarks are arguably easier to contaminate / memorize unless you refresh it regularly; but that goes for SWE-bench too.


I was picturing one-shot performance only for the benchmark, on novel real-world tasks. I.e., the score you got in April on the March olympiad isn't relevant.

Simple enough that anyone could run it with a regular subscription.

Really, unless we can get the providers to ditch the gameable benchmarks, they won't.

But industries love nothing more than a benchmark they can manipulate.


I think on HN at least, people enamoured of Claude are the vocal majority.

The view of Claude on HN is extremely positive, and nearly every thread will have a highly positive comment "that is not an ad".

I think people are seeing others just irked by the constant stream of what feels like ads and reading it as Claude being somehow disliked.


They are an interesting prospect but their use isn't quite as claimed.

They are extremely vulnerable to the same drones humans are.

It's more along the lines of: this is a patch where we're not expecting active fighting, and this robot can act as a deterrent and surveillance.

Cheaper and simpler than a loitering ISR drone, but narrower in domain.

I believe for a while Samsung developed similar systems for the demilitarised zone in Korea. Those could be static, as they were hard-wired in.


> They are extremely vulnerable to the same drones humans are.

I am not confident about this. A human is disabled by a few small shrapnel fragments in soft tissue. It is possible to build a far more protected robot that needs a direct hit to disable. That robot could also be very agile: e.g., doing an evasive jump at the last moment before being hit.


I think you just pitched a Robot Wars revival for 2026.

This article shows them being used for offense.

https://edition.cnn.com/2026/04/20/europe/robots-ukraine-bat...


I love this, but the licensing is a shame.

Does the NC in CC4.0 BY-NC-SA mean I couldn't for example sell a device using this?

What frustrates me about this is that it's such a narrow design space; if I decided I wanted a 5x5 font, there are very few ways to do that.

I get that this probably isn't copyrightable but at least make your license sensible.


You're partially right: compared to placebo, only about 5% more people are pain-free when taking paracetamol.

Paracetamol got its start as a replacement for the more effective but much more dangerous, and eventually withdrawn, drug phenacetin.

Why don't people notice that it's such a small benefit over nothing? Because the placebo effect is quite good for pain, and pain is usually transitory anyway. If you have a tension headache you're probably going to try to relax, turn away from the screen, or even have some caffeine, and those are more effective than paracetamol!


Where did you pull this 5% from? There are gazillions of studies showing higher or lower efficacies for different kinds of pain. That's along with the inaccuracies about phenacetin (whose MOA is metabolising into paracetamol).

You will indeed find various figures for various pain types; all are far worse than ibuprofen's.

Here is an example from the Cochrane library

> For the IHS preferred outcome of being pain free at two hours the NNT for paracetamol 1000 mg compared with placebo was 22 (95% confidence interval (CI) 15 to 40) in eight studies (5890 participants; high quality evidence), with no significant difference from placebo at one hour.

An NNT of 22 means that, in absolute terms, 1 in 22 more people met the positive endpoint criteria than with placebo. This figure is usually quoted as 20% for placebo and 25% for paracetamol, giving an NNT of 20.

The NNT of 22 gives 1/22 ≈ 4.5%.

https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD...
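To make the arithmetic explicit, here's a minimal sketch of the NNT calculation (the 20%/25% response rates are the commonly quoted figures mentioned above, not values taken from the Cochrane review itself):

```python
# NNT (number needed to treat) = 1 / ARR, where ARR is the absolute risk
# reduction: the treatment's response rate minus the placebo response rate.

def nnt(treatment_rate: float, placebo_rate: float) -> float:
    """Number of people who need treatment for one additional positive outcome."""
    absolute_risk_reduction = treatment_rate - placebo_rate
    return 1 / absolute_risk_reduction

# Commonly quoted: ~20% pain-free on placebo, ~25% on paracetamol 1000 mg.
print(nnt(0.25, 0.20))   # roughly 20: one extra pain-free person per 20 treated

# Going the other way, an NNT of 22 corresponds to a ~4.5% absolute benefit:
print(1 / 22 * 100)
```

Note the NNT is an absolute measure, so it depends on the placebo response rate as well as the drug's effect, which is why figures vary so much between pain types.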


"Pain free" is a long way from "the pain is manageable". Pain is an understudied subject where we have too little knowledge; even the use of the word "manageable" is an indication of this.

That's very true, but the same metric is applied to all the medications you compare; that's what's important. You also get a baseline idea of what's good by guessing what you'd accept.

Episodic tension-type headache tested with ibuprofen vs placebo: NNT of 14 [0]. (Btw, that's not great itself.) But it's better than paracetamol's often-quoted figure of 20.

Here's why I say it's not great. Why don't you guess some reasonable NNTs for, say, moderate depression treated with SSRIs, or no relapse in schizophrenia treated with an antipsychotic. Now guess the NNT for a statin to prevent a first heart attack.

SSRIs for moderate depression: about 10. Antipsychotics to prevent schizophrenia relapse over 2 years: NNT = 3 (excellent). Statins to prevent a first heart attack: 200! (This one always shocks me.) Statins have a clear role, of course.

[0] https://thennt.com/nnt/ibuprofen-treatment-episodic-tension-...


Ibuprofen is better at reducing fever and managing headaches.

Paracetamol is the safer version of phenacetin. You used to be able to buy aspirin, phenacetin and caffeine (APC), which was very popular when marketed, but soon you were told never to give children aspirin for a fever, so we used paracetamol. Then phenacetin was withdrawn and paracetamol took its place in APC-style products (like Alka-Seltzer XS, or just the popular caffeine-paracetamol combos).

Paracetamol came in as safer but similar, yet nowhere near as effective. It captured both the market feeling of its pros and its cons: we interpreted it as safer than alternatives (especially aspirin for children, due to Reye syndrome), but also dangerous, which might be why the OP's view was that ibuprofen is safer.

The NNT (the number of people who'd need to take it for one extra person to benefit) to be headache-free after 2 hours is about 12-20 for paracetamol, but only 7-10 for ibuprofen.

It's quite surprising that paracetamol became the de facto analgesic given it performs so poorly, but that was historical inertia. Plenty of people argue that if we were to start over, we would not make paracetamol OTC.


The Wiki page for phenacetin says its mechanism of action is being metabolized into paracetamol. IDK about your "nowhere near as effective".

It was withdrawn for sometimes being metabolized into another, toxic and carcinogenic, molecule.


Here is a summary of the Cochrane evidence on paracetamol: "widely used and ineffective" [0].

It's a paradox no?

Paracetamol is the presumed only active metabolite, and that is why paracetamol rapidly replaced phenacetin.

There is a quirk, though: phenacetin actually delivers paracetamol to your brain and spine (where it primarily reduces pain) faster than an oral dose of paracetamol.

Similarly, IV paracetamol is far more effective than oral paracetamol.

Phenacetin was also considered mildly addictive, and induced a gentle euphoria and then sedation. (We still see sedation after paracetamol in children and the elderly.) But in general use we don't see these effects with paracetamol; why did phenacetin produce them more strongly? Probably the higher peak levels around nerve endings.

Both of these effects want an explanation if phenacetin is just a paracetamol prodrug and not directly analgesic.

[0] https://web.archive.org/web/20240721144157/http://www.eviden...


Interesting.

I guess it tracks with personal experience. I find paracetamol is OK for fevers/generic cold symptoms but absolutely useless for a headache; ibuprofen is the only thing that shifts them.

Well it's the only thing that shifts them now I'm in a country where I can't buy soluble aspirin and codeine OTC.


I end up using paracetamol often for pain because it's what's to hand.

What annoys me is that so many people have your experience and are effectively gaslit about the fact that it so often performs so poorly.


Reminder: don't take medical advice from someone who can't write correctly.

Very interesting though that the original article makes no comment on efficacy. It's all about metabolic safety which is not contentious.

Have you seen my doctor's handwriting?

To hear HN tell of it, Claude pays for itself 3× over.

Something tells me that, cognitively, it's making us misjudge how much more productive it's making us.

It's clearly massively increasing output, but did the market already soak up all that productivity and now it's not compensated?

If your salary is 50k and Claude makes you 2x as productive, why aren't you earning 100k?

Why is it that anyone can't afford $200/mo if it's truly increasing worker productivity?

There seems to be a paradox here.

Personally I switched to Z.ai and GLM quite some time ago. I've not noticed any decrease in quality or quantity of my work.


Agree about psychological impact outpacing likely actual impact, but that's a relatively temporary phenomenon as we all adapt to the new way things work.

Productivity-wise, employment is far more than code-production productivity in a vacuum, and productivity gains are rarely captured by employees (see the famous chart on worker productivity, where that correlation changed around 1970). I wouldn't expect to see much in the next 1-2 years besides noticing effective teams increasing their velocity of features.

I think people in forums like complaining about things and aren’t representative of the broader set of people who are just using the tools, so no real paradox. For vast majority of tech jobs, $200/mo is still an absolute steal in terms of what these tools offer. Only the dullest of companies would not realize this.

Fwiw in the 80s-90s computers also didn’t really register in productivity metrics. Qualitative changes occur long before accurate measurement catches up.


Because most people work for someone else and don't decide their own salaries. It's not doubling productivity, but even a 10-20% boost to productivity for a team of engineers means that, as a business, even $1k per month per seat is perfectly acceptable. For consumers and hobbyists that basically kills access.

Yeah, the more people who use it, the less competitive edge you have. Benefits get devalued, and you're back to square one.

> Personally I switched to Z.ai and GLM quite some time ago. I've not noticed any decrease in quality or quantity of my work.

> Something tells me congitively it's making us misjudge how productive it's making us.

This could be happening to you, too.


I spent ages trying to work out if it would be possible to find a copy of the 2021 Encarta or Britannica.

Pre-LLM and post-COVID, and perhaps the best we can hope for before AI taints all the info.

One of my prized possessions as a child was a CD-ROM-based encyclopedia (well before the internet was common). I don't know why I liked it so much, but on a rainy afternoon I'd pull up some of my favourite articles and read and learn more from them.



I know exactly what you mean — I had the same experience with CD-ROM encyclopedias. There’s something about just browsing and falling into articles that’s hard to replicate.

Part of the motivation here was to bring that kind of exploration back, but with the original 1911 text and structure.


Do you happen to use a language model to translate or format your comments?

Just me. I spent a lot of time thinking about this, so I like talking about it.

The final release of Encarta was in 2009.
