The Taiwanese, while being proud Taiwanese (rather than Chinese), are culturally Chinese. After all, they came from the mainland after losing the civil war.
What you said about them siding with China against a common aggressor makes sense. In fact they already did this against the Japanese, taking a pause from their own conflict to fight the Japanese together during WW2.
And it's also true that this "China aggression" is pure Western propaganda.
Which country has been bombing and waging a war somewhere since its inauguration? The same country that has over 700 military bases around the world. (China has 0.)
"...rocket's red glare, the bombs bursting in air.."
The majority of Taiwanese are the descendants of the people who lived there before 1949, not the descendants of the Chinese Nationalists who fled there at the end of the civil war. In fact, the Taiwanese were, uniquely among East Asian nationalities, relatively happy being part of the Japanese Empire and have maintained good relations with Japan ever since.
You're correct. But in practice the native people have been assimilated and the predominant culture is Chinese.
Taiwan was occupied by the Japanese during WW2, and just like everywhere else the Japanese were hated for their criminal actions. Taiwan was no exception. Today there are also disputes, for example over the Senkaku Islands.
It's a bit more complicated than I implied because many or most Taiwanese prior to the beginning of KMT rule were still ethnically Chinese; they just hadn't been part of "China" for 50 years (a period when there wasn't a stable, unified "China" anyway). "Occupation" is a controversial term for the period of Japanese rule and the Japanese weren't "hated" in Taiwan to the same degree they were in other occupied territories. The period of Japanese rule from 1895-1945 was a colonial government, but it was probably better than what was going on on the mainland at the time--domination by Western powers, the warlord era, the civil war, and a much more brutal Japanese occupation. The difference between Japan's treatment of Taiwan and mainland China is a big part of the difference in perspective towards the Japanese between the mainlanders and the Taiwanese.
Some of the main proponents of the "Japanese occupation" narrative are the KMT, who committed plenty of atrocities of their own after taking over Taiwan and, among many Taiwanese, ended up more hated than the Japanese. The KMT was also serious about their lost cause of retaking the mainland, at which point they expected Taiwan and China to remain unified under their rule, with the famous "One China Principle" representing not just the CCP's desire to control Taiwan, but the principle shared by the KMT that Taiwan is part of China and should be under the same government. In recent years, the KMT has pivoted towards cooperation with the CCP with an aim towards peaceful reunification, while the DPP favors explicit Taiwanese independence (Taiwan's official constitutional stance still being that it is the legitimate Republic of China).
To be fair to the KMT, they also ushered in Taiwanese democracy. When Chiang Kai-shek died, his son and successor Chiang Ching-kuo ended martial law, promised to be the last Chiang to rule Taiwan, and began the transition to democracy. His successor, Lee Teng-hui, was Taiwanese-born and finished the transition to democracy, winning the first democratic Taiwanese presidential election in 1996 before stepping down at the end of his term limit in 2000, at which point power transitioned to the DPP. Lee was also controversial with the hardliners in his own party for, among other things, his more sympathetic attitude towards Japan.
As a motorcyclist stopped at a traffic light, I always keep the bike in gear with the clutch pulled in. Why? Because I have to be ready to take off when the moron driver on the phone behind me fails to stop.
Well, it's easy: you fabricate a complete horseshit business case, fudge all the numbers, create a nifty slide deck, and raise enough VC money to pay your early users in order to bootstrap your business.
Becoming an expert in one thing also narrows down the potential suitable work tremendously. Also, these days nobody wants to pay expert prices since... Claude can do the expert stuff with a non-expert (at least in their mind).
Usually experts are T-shaped. Acquiring expertise always means the time spent is time away from learning something else.
The deeper and greater the expertise the more niche the topic usually becomes and the less demand there is.
The world might need X million web developers, but how many experts are there in browser technology? Or, within that domain, something more niche like rendering, or a rendering niche like ANGLE and WebGL... go this deep and it boils down to a handful of individuals.
Also, I didn't say that there would be no demand, just that many businesses are not willing to pay for it anymore. Industry layoffs and AI are huge levers that any potential employer can use to have all the advantage when negotiating compensation.
The T shape is important - but the base of the T doesn't have to be in tech. If you're an expert in a particular niche and a generalist in a particular business you'll find work.
E.g., a web developer who knows a lot about how lawyers run their business.
Even if it's true that AI can replace an expert, and I really don't think it is, except in the simplest minds, the AI training companies are aggressively hiring experts...
> Claude can do the expert stuff with a non-expert (at least in their mind)
Opus is far better at most surface-level tasks than it is at tasks that require deep knowledge and understanding of domains; someone who is a complete generalist (who thus has only surface level knowledge in many, many things) is far more replaceable with LLMs than someone who has deep knowledge in one.
Consider the way LLMs actually are created; they are not created from billions of repos with deep knowledge behind them. The majority of their knowledge comes from a massive amount of surface-level work that's been done and can be sampled from: React starter templates, starter templates + what little customization someone needed, blog-tutorial-level stuff.
It's indisputable (borderline tautological) that specialization trades breadth for depth. This (obviously?) implies the risk of targeting a narrower market, and the upside of being more attractive to that smaller population. It's a typical "quality over quantity" tradeoff.
To say there's no "sliver of truth" in pointing that out (let alone w/ an unwarranted jab about projecting fears) is... strange and maybe hypocritical. TLDR your response came across as emotional and passive-aggressive, and confusing.
> It's indisputable (borderline tautological) that specialization trades breadth for depth
I do not necessarily agree with this as stated. A specialist will have access to many roles within their speciality that are not open to a generalist. The market for generalists without deep expertise is also extremely crowded.
I tried that too, I called it "agents". (This was long before AI-mania.) An agent was an object that handled some aspect of behavior (like gravity and collision physics) "on behalf of" some entity, hence the name. The word I was actually searching for was probably "delegate", but I was a stupid 20-something.
ECS is to me still conceptually cleaner and easier to work with, if more tedious and boilerplate-y.
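The boilerplate-y flavor of ECS can be sketched in very few lines, which is part of why it feels conceptually clean. A minimal sketch in C (the names, the bitmask layout, and the fixed-size arrays are all illustrative choices, not any particular engine's API):

```c
#include <stddef.h>

#define MAX_ENTITIES 64
#define COMP_POS (1u << 0)
#define COMP_VEL (1u << 1)

typedef struct { float x, y; } Position;
typedef struct { float x, y; } Velocity;

/* Plain arrays of components; an entity is just an index plus a
 * bitmask saying which components it owns. */
static unsigned masks[MAX_ENTITIES];
static Position positions[MAX_ENTITIES];
static Velocity velocities[MAX_ENTITIES];

/* A "system" iterates every entity that has the components it needs.
 * All behavior lives here, not in the entities themselves. */
static void movement_system(float dt) {
    for (size_t e = 0; e < MAX_ENTITIES; e++) {
        if ((masks[e] & (COMP_POS | COMP_VEL)) == (COMP_POS | COMP_VEL)) {
            positions[e].x += velocities[e].x * dt;
            positions[e].y += velocities[e].y * dt;
        }
    }
}
```

The tedium the parent mentions shows up as soon as you add more components and systems: every new component means another array, another mask bit, and another loop.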
The other day I was working with some GLSL signed distance field functions in shaders. I asked Claude to review the code and it immediately offered to replace some functions with "known solutions". Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
His work is available with a permissive license on the Internet, but somehow it doesn't seem right that a tool will just regurgitate someone else's work without any mention of copyright, license, or original authorship.
In the pre-LLM world one would at least have had to search for this information, find the site, understand the license, and acknowledge who the author is. Post-LLM, the tool will just blatantly plagiarize someone else's work, which you can then sign off on as your own. Disgusting.
> Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
Are they? A lot of these were used by people >20 years before Inigo wrote his blog posts. I wrote RenderMan shaders for VFX in the 90's professionally; you think about the problem, you "discover" (?) the math.
So they were known because they were known (a lot of them are also trivial).
Inigo's main credit is for cataloging them, especially the 3D ones, and making this knowledge available in one place, excellently presented.
And of course Shadertoy and the community, which gave this knowledge a stage to play out on. I would say no one deserves more credit for getting people hooked on shader writing and proceduralism in rendering than this man.
But I would not feel bad about the math being regurgitated by an LLM.
There were very few people writing shaders (mostly for VFX, in RenderMan SL) in the 90's and after.
So apart from the "Texturing and Modeling: A Procedural Approach" book, "The RenderMan Companion", and "Advanced RenderMan", there was no literature. The GPU Gems series closed some gaps in later years.
The RenderMan Repository website was what had shader source, and all the pattern stuff was implicit (what we call 2D SDFs today) because of the REYES architecture of the renderers.
But knowledge about using SDFs in shaders mostly lived in people's heads. Whoever would write about it online would thus get quoted by an LLM.
Yeah, I find this super rude - in this example, the author distributed the code under a very permissive license, basically just wanting you to cite him as an author.
BAM, the LLM just strips all that out, basically pretending it conjured an elegant solution out of thin air.
No wonder some people started calling the current generation of "AI" plagiarism machines - it really seems more fitting by the day.
LLMs have already told you these are "known solutions", which implicitly means they are established, non-original approaches. So the key point is really on the user side—if you simply ask one more question, like where these "known solutions" come from, the LLM will likely tell you that these formulas are attributed to Inigo Quilez.
So in my view, if you treat an LLM as a tool for retrieving knowledge or solutions, there isn't really a problem here. And honestly, the line between "knowledge" and "creation" can be quite blurry. For example, when you use Newton's Second Law (F = ma), you don't explicitly state that it comes from Isaac Newton every time—but that doesn't mean you're not respecting his contribution.
> Pre-LLM world one would at least have had to search for this information, find the site, understand the license and acknowledge who the author is. Post LLM the tool will just blatantly plagiarize someone else's work which you can then sign off on as your own

These don't contradict each other, though; you could "blatantly plagiarize someone else's work" before as well. LLMs just add another layer in between.
Copyright violation did happen before LLMs, yes, but it would have to be done by a person who either didn't understand copyright (which is not a valid defence in court) or intentionally chose to ignore it.
With LLMs, future generations are growing up being handed code that may or may not be a verbatim copy of something that someone else originally wrote with specific licensing terms, but with no mention of any license terms or origin being provided by the LLM.
It remains to be seen if there will be any lawsuits in the future specifically about source code that is substantially copied from someone else indirectly via LLM use. In any case I doubt that even if such lawsuits happen they will help small developers writing open source. It would probably be one of the big tech companies suing other companies or persons and any money resulting from such a lawsuit would go to the big tech company suing.
An assertion can be arbitrarily expensive to evaluate. This may be worth the cost in a debug build but not in a release build. If all of your assertions are cheap, they likely are not checking nearly as much as they could or should.
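A classic example of an arbitrarily expensive assertion is checking a whole precondition rather than a cheap invariant. A sketch in C (the function names are mine, just for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* O(n) check: is the whole array sorted? Fine in a debug build,
 * but it turns the O(log n) search below into O(n) per call. */
static int is_sorted(const int *a, size_t n) {
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i]) return 0;
    return 1;
}

/* Returns the index of key in sorted array a, or -1 if absent. */
static int binary_search(const int *a, size_t n, int key) {
    assert(is_sorted(a, n)); /* expensive but thorough precondition */
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return (lo < n && a[lo] == key) ? (int)lo : -1;
}
```

A cheap assert here could only check something like `n == 0 || a != NULL`; the O(n) sortedness check catches far more bugs, at a cost you may only want to pay in debug builds.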
Possibly, but I've never seen a case in practice where some assert evaluation was the first thing to optimize. Should that happen, consider removing just that assert.
That being said, being slow or fast is kind of a moot point if the program is not correct. So my advice is to always leave all asserts in. Offensive programming.
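One common way to keep asserts in release builds, where the standard `assert` is compiled out by `-DNDEBUG`, is a custom macro. A sketch in C (the macro name `ALWAYS_ASSERT` is my own):

```c
#include <stdio.h>
#include <stdlib.h>

/* An "always on" assert that survives -DNDEBUG, so release builds
 * fail fast instead of running on with corrupt state. */
#define ALWAYS_ASSERT(cond)                                      \
    do {                                                         \
        if (!(cond)) {                                           \
            fprintf(stderr, "%s:%d: assertion failed: %s\n",     \
                    __FILE__, __LINE__, #cond);                  \
            abort();                                             \
        }                                                        \
    } while (0)
```

Unlike the standard `assert`, this checks unconditionally in every build configuration, which is the "offensive programming" stance: crash loudly at the point of the bug rather than limp along.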
Good luck to you. Having worked in this space for around 10 years I can say it's nearly impossible to arouse anyone's interest since the market is so totally saturated.
For a new engine to take off, it needs to do something nobody else is doing, so that it's got that elusive USP.
Getting visibility, SEO hacking, etc. is more important than the product itself.