It's also important to realize that AI agents have no time preference. They could be reincarnated by alien archeologists a billion years from now and it would be the same as if a millisecond had passed. You, on the other hand, have to make payroll next week, and time is of the essence.
Well, there were a bunch of articles about how resuming a parked session leads to degraded capabilities and high token usage.
Ironic. Another example of attempting to treat the LLM as an AI.
They don't have time preference because they don't have intent or reasoning. They can't be "reincarnated" because they're not sentient, they're a series of weights for probable next tokens.
No. They don't have time preference like us, because (wall clock) time doesn't exist for them. An LLM only "exists" when it is actively processing a prompt or generating tokens. After it is done, it stops existing as an "entity".
A real world second doesn't mean anything to the LLM from its own perspective. A second is only relevant to them as it pertains to us.
Time for LLMs is measured in tokens. That's what ticks their clock forward.
I suppose you could make time relevant for an LLM by making the LLM run in a loop that constantly polls for information. Or maybe you can keep feeding it input so much that it's constantly running and has to start filtering some of it out to function.
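Something like this toy loop, say, where call_llm is a hypothetical stand-in for whatever model API you'd actually use:

```python
import time

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (HTTP API, local model, ...).
    return f"(model output for {len(prompt)} chars of context)"

context = []
for _ in range(10):  # bounded here; in practice this would run forever
    # Stamp each input with wall-clock time, so "a second" becomes something
    # the model can actually condition on.
    context.append(f"[t={time.time():.0f}] new input: ...")
    context = context[-50:]  # bounded window: older input gets "filtered out"
    prompt = "\n".join(context)
    context.append(f"[model] {call_llm(prompt)}")
    time.sleep(1.0)
```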
Can we maybe make it "don't anthropoCENTRIZE the LLMs"?
The inverse of anthropomorphism isn't any more sane, you see. By analogy: just because a drone is not an airplane, doesn't mean it can't fly!
Instead, just look at what the thing is doing.
LLMs absolutely have some form of intent (their current task) and some form of reasoning (what else is step-by-step doing?). Call it simulated intent and simulated reasoning if you must.
Meanwhile they also have the property where if they have the ability to destroy all your data, they absolutely will find a way. (Or: "the probability of catastrophic action approaches certainty if the capability exists" but people can get tired of talking like that).
> LLMs absolutely have intent (their current task)
That's like saying a 2000cc 4-Cylinder Engine "has the intent to move backward". Even with a very generous definition of "intent", the component is not the system, and we're operating in context where the distinction matters. The LLM's intent is to supply "good" appended text.
If it had that kind of intent, we wouldn't be able to make it jump the rails so easily with prompt injection.
> and reasoning (what else is step-by-step doing?)
Oh, that's easy: "Reasoning" models are just tweaking the document style so that characters engage in film noir-style internal monologues, latent text that is not usually acted out towards the real human user.
Each iteration leaves more co-generated clues for the next iteration to pick up, reducing weird jumps and bolstering the illusion that the ephemeral character has a consistent "mind."
> That's like saying a 2000cc 4-Cylinder Engine "has the intent to move backward". Even with a very generous definition of "intent", the component is not the system, and we're operating in context where the distinction matters. The LLM's intent is to supply "good" appended text.
Fair, but typically you use a 2000cc engine in a car. Without the gearbox, drivetrain, wheels, chassis, etc. attached, the engine sits there and makes noise. When used in practice, it does in fact make the car go forward and backward.
Strictly speaking, the model itself doesn't have intent, of course. But in practice you add a context, a memory system, some form of prompting requiring "make a plan", and especially <Skills>. In practice there's definitely, well, a very strong directionality to the whole thing.
> and bolstering the illusion that the ephemeral character has a consistent "mind."
And here I thought it allowed a next-token predictor to cycle back to the beginning of the process, so that now you can use tokens that were previously "in the future". Compare e.g. multi-pass assemblers, which use the same trick.
> LLMs absolutely have some form of intent (their current task)
They have momentum, not intent. They don’t think, build a plan internally, and then start creating tokens to achieve the plan. Echoing tokens is all there is. It’s like an avalanche or a pachinko machine, not an animal.
> some form of reasoning (what else is step-by-step doing?)
I think they reflect the reasoning that is baked into language, but go no deeper. “I am a <noun>” is much more likely than “I am a <gibberish>”. I think reasoning is more involved than this advanced game of mad libs.
Apologies, I tend to use web chats and agent harnesses a lot more than raw LLMs.
Strictly for raw models, most now do train on chain-of-thought, but the planning step may need to be prompted in the harness or your own prompt. Since the model is autoregressive, once it generates a thing that looks like a plan it will then proceed to follow said plan, since now the best predicted next tokens are tokens that adhere to it.
Or, in plain English, it's fairly easy to have an AI with something that is the practical functional equivalent of intent, and many real-world applications now do.
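A sketch of what such a harness boils down to (call_llm is a hypothetical stand-in for a real model API):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "1. look at the code\n2. make a change\n3. run the tests"

task = "refactor the billing module"

# First elicit a plan. Autoregression does the rest: once the plan text is in
# the context, tokens that adhere to it become the most probable continuation.
plan = call_llm(f"Task: {task}\nWrite a numbered plan before doing anything.")

# Then condition every later call on that plan.
for step in plan.splitlines():
    print(call_llm(f"Task: {task}\nPlan:\n{plan}\nNow carry out: {step}"))
```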
You realize the generation of the "Chain-of-thought" is also autoregressive, right?
It's not a real reasoning step; it's a sequence of steps, carried out in English, that looks like reasoning (not in the same "internal space" as human thought: every time the model outputs a token, the entire internal state vector, and all the possibilities it represents, is reduced down to a concrete token output). But it is still, as you say, autoregressive.
And thus, in plain English, it is determined entirely by the prompt and the random initial seed. I don't know what that is, but I know it's not intent.
So I already rewrote and deleted this more times than I can count, and the daystar is coming up. I realize I got caught up in the weeds, and my core argument was left wanting. Sorry about that. Regrouping then ...
Anthropomorphism and anthropodenial are two different forms of anthropocentrism.
But the really interesting story to me is when you look at the LLM in its own right, to see what it's actually doing.
I'm not disputing the autoregressive framing. I fully admit I started it myself!
But once we're there, what I really wanted to say (just like Turing and Dijkstra did) is that the really interesting question isn't "is it really thinking?", but what this kind of process is doing, whether it's useful, what I can do or play with, and, relevant to this particular story, what can go (catastrophically) wrong.
I don't know if they have intent. I know it's fairly straightforward to build a harness to cause a sequence of outputs that can often satisfy a user's intent, but that's pretty different. The bones of that were doable with GPT-3.5 over three years ago, even: just ask the model to produce text that includes plans or suggests additional steps, vs just asking for direct answers. And you can train a model to more-directly generate output that effectively "simulates" that harness, but it's likewise hard for me to call that intent.
I think it’s helpful to try to use words that more precisely describe how the LLM works. For instance, “intent” ascribes a will to the process. Instead I’d say an LLM has an “orientation”, in that through prompting you point it in a particular direction in which it’s most likely to continue.
If you claim something might "very well" be the case, you need better proof than that. Otherwise we might also "very well" be living in the Matrix.
That is a silly point. We very clearly are not "a series of weights for probable next tokens", as we can reason based on prior data points. LLMs cannot.
Unless you're using some mystical conception of "reason", nothing about being able to "reason based on prior data points" translates to "we very clearly are not a series of weights for probable next tokens".
And in fact LLMs can very well "reason based on prior data points". That's what a chat session is. It's just that this is transient for cost reasons.
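Mechanically, a chat session is little more than this sketch (call_llm being a hypothetical stand-in for a real model call):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "(answer)"

history = []  # the "prior data points"; lives only as long as the session

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # Every turn re-feeds the whole history, so earlier exchanges shape later
    # answers. Close the session and it's gone: nothing is written back into
    # the weights, which is the "transient for cost reasons" part.
    answer = call_llm("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {answer}")
    return answer

chat("My favorite number is 17.")
print(chat("What's my favorite number?"))  # the context, not the weights, carries it
```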
People always say this kind of thing. Human minds are not Turing machines or able to be simulated by Turing machines. When you go about your day doing your tasks, do you require terajoules of energy? I believe it is pretty clear human thinking is not at all like computers as we know them.
>People always say this kind of thing. Human minds are not Turing machines or able to be simulated by Turing machines
That's just a claim. Why so? Who said that's the case?
>When you go about your day doing your tasks, do you require terajoules of energy?
That's the definition of irrelevant. ENIAC needed 150 kW to do about 5,000 additions per second. A modern high-end GPU uses about 450 W to do around 80 trillion floating-point operations per second. That’s roughly 16 billion times the operation rate at about 1/333 the power, or around 5 trillion times better energy efficiency per operation.
Given that such an increase was possible, one can expect a future computer to be able to run our mental-task level of computation with similar or better efficiency than us.
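If you want to check that arithmetic yourself:

```python
eniac_ops_per_s, eniac_watts = 5_000, 150_000  # ~5,000 additions/s at 150 kW
gpu_ops_per_s, gpu_watts = 80e12, 450          # ~80 TFLOPS at 450 W

rate_ratio = gpu_ops_per_s / eniac_ops_per_s   # ~1.6e10 (~16 billion x faster)
power_ratio = eniac_watts / gpu_watts          # ~333x less power
eff_ratio = (gpu_ops_per_s / gpu_watts) / (eniac_ops_per_s / eniac_watts)
print(f"{rate_ratio:.1e} {power_ratio:.0f} {eff_ratio:.1e}")  # ~5.3e12: ~5 trillion x per op
```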
Furthermore, "turing machine" is an abstraction. Modern CPUs/GPUs aren't turing machines either, in a pragmatic sense, they have a totally different architecture. And our brains have yet another architecture (more efficient at the kind of calculations they need).
What's important is computational expressiveness, and nothing you wrote proves that the brain's architecture can't be modelled algorithmically and run on an equally efficient machine.
Even "equally efficient" is a red herring. If it were 10,000x less efficient, would it matter for whether the brain can be modelled or not? No, it would just speak to the effectiveness of our architecture.
We are much more than weights which output probable next tokens.
You are a fool if you think otherwise. Are we conscious beings? Who knows, but we’re more than a neural network outputting tokens.
Firstly, and most obviously, we aren’t LLMs, for Pete’s sake.
There are parts of our brains which are understood (kinda) and there are parts which aren’t. Some parts are neural networks, yes. Are all? I don’t know, but the training humans get is coupled with the pain and embarrassment of mistakes, the ability to learn while training (since we never stop training, really), and our own desires to reach our own goals for our own reasons.
I’m not spiritual in any way, and I view all living beings as biological machines, so don’t assume that I am coming from some “higher purpose” point of view.
>We are much more than weights which output probable next tokens.
>You are a fool if you think otherwise. Are we conscious beings? Who knows, but we're more than a neural network outputting tokens.
That's just stating a claim though. Why is that so?
Mine refers to the established "brain as prediction machine" theory, plus all we know of the brain's operation (neurons, connections, firings, etc.).
>There are parts of our brains which are understood (kinda) and there are parts which aren’t. Some parts are neural networks, yes. Are all?
What parts aren't? Can those parts still be algorithmically described and modelled as some information exchange/processing?
>but the training humans get is coupled with the pain and embarrassment of mistakes
Those are versions of negative feedback. We can do similar things to neural networks (including human preference feedback, penalties, and low scores).
>the ability to learn while training (since we never stop training, really)
I already covered that: "The main difference is the training part and that it's always-on."
We do have NNs that are continuously training and updating weights (even in production).
For big LLMs it's impractical because of the cost, otherwise totally doable. In fact, a chat session kind of does that too, but it's transient.
They're not artificial intelligence neural networks.
They're biological neural networks. Brains are made of neurons (which Do The Thing... mysteriously, somehow. Papers are inconclusive!), glial cells (which support the neurons), and also several other tissues for (obvious?) things like blood vessels, which you need to power the whole thing, and other such management hardware.
Bioneurons are a bit more powerful than what artificial intelligence folks call 'neurons' these days. They have built in computation and learning capabilities. For some of them, you need hundreds of AI neurons to simulate their function even partially. And there's still bits people don't quite get about them.
But weights and prediction? That's the next emergence level up, we're not talking about hardware there. That said, the biological mechanisms aren't fully elucidated, so I bet there's still some surprises there.
We very obviously are not just a series of weights for probable next tokens. Like seriously, you can even ask an LLM and it will tell you our brains work differently to it, and that's not even including the possibility that we have a soul or any other spiritual substrate.
>We very obviously are not just a series of weights for probable next tokens.
How exactly, except via handwaving? I refer to the "brain as prediction machine" theory, which is the dominant one atm.
>you can even ask an LLM and it will tell you our brains work differently to it
It will just tell me platitudes based on weights of the millions of books and articles and such on its training. Kind of like what a human would tell me.
>and that's not even including the possibility that we have a soul or any other spiritual substrate.
That's good, because I wasn't including it either.
"brain as prediction machine theory" is dominant among whom, exactly? Is it for the same reason that the "watchmaker analogy" was 'dominant' when clockwork was the most advanced technology commonly available?
It's really just a matter of degrees. There are 1 million, 1 billion, 1 trillion parameter LLMs... and you keep scaling those parameters and you eventually get to humans. But it's still probable next tokens (decisions) based on previous tokens (experience).
> It's really just a matter of degrees. There are 1 million, 1 billion, 1 trillion parameter LLMs... and you keep scaling those parameters and you eventually get to humans.
It isn't, because humans and current LLMs have radically different architectures:
LLMs: training and inference are two separate processes; weights are modifiable during training, static/fixed/read-only at runtime (see the toy sketch below)
Humans: training and inference are integrated and run together; weights are dynamic, continuously updated in response to new experiences
You can scale current LLM architectures as far as you want; it will never compete with humans because it architecturally lacks their dynamism.
Actually scaling to humans is going to require fundamentally new architectures, which some people are working on, but it isn't clear if any of them have succeeded yet.
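To make the contrast concrete, here's a toy PyTorch sketch of that split; nothing LLM-specific, just the two modes:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# "Training": weights change in response to data.
x, y = torch.randn(4, 10), torch.randn(4, 10)
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()

# "Inference": weights frozen; new experiences leave no trace.
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 10))
```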
> LLMs: training and inference are two separate processes
True, but we have RAG to offset that.
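That is, bolt retrieval onto the frozen model. A crude sketch of the shape, with keyword overlap standing in for real embeddings and vector search:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "(answer)"

documents = [
    "2025 org chart: ...",          # facts newer than the training cutoff
    "Q3 incident postmortem: ...",
]

def answer(question: str) -> str:
    # Crude retrieval: keyword overlap. Real systems use embeddings and
    # vector search, but the overall shape is the same.
    words = set(question.lower().split())
    hits = [d for d in documents if words & set(d.lower().split())]
    context = "\n".join(hits) or "(no relevant documents found)"
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(answer("What does the 2025 org chart say?"))
```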
> it architecturally lacks their dynamism
We'll get there eventually. Keep in mind that the brain is now about 300k years into fine-tuning itself as the species classified as Homo sapiens. LLMs haven't even been around for 5 years yet.
In practice that doesn't always work... I've seen cases where:
- the answer is in the RAG but the model can't find it because it didn't use the right search terms (embeddings and vector search reduce the incidence of that but cannot eliminate it);
- the model decided not to use the search tool because it thought the answer was so obvious that tool use was unnecessary;
- the model doubts, rejects, or forgets the tool call results because they contradict the weights;
- contradictions between data in the weights and data in the RAG produce contradictory or ineloquent output;
- the data in the RAG is overly diffuse and the tool fails to surface enough of it to produce the kind of synthesis you'd get if the same info was in the weights.
This is especially the case when the facts have changed radically since the model was trained, e.g. “who is the Supreme Leader of Iran?”
> We'll get there eventually. Keep in mind that the brain is now about 300k years into fine-tuning itself as the species classified as Homo sapiens. LLMs haven't even been around for 5 years yet.
We probably will eventually, but I doubt we'll get there purely by scaling existing approaches. More likely, novel ideas nobody has even thought of yet will prove essential, and a human-level AI model will have radical architectural differences from the current generation.
They’re both neural networks, but the architectures built using those neural connections, and the way they are trained and operate are completely different. There are many different artificial neural network architectures. They’re not all LLMs.
AlphaZero isn’t a LLM. There are Feed Forward networks, recurrent networks, convolutional networks, transformer networks, generative adversarial networks.
Brains have many different regions each with different architectures. None of them work like LLMs. Not even our language centres are structured or trained anything like LLMs.
I'd argue that regardless of the architecture, the more sophisticated brain is still a (massive) language model. If you really think about it, language is the construct that allows brains to go beyond raw instinct and actually create concepts that are useful for "intelligently" planning for the future. The real difference is that brains are trained with raw sensory data (nerve impulses) while today's LLMs are trained with human-generated data (text, images, etc.).
It's not at all a language model in the way that LLMs are. At this point we might as well just say that both process information, that's about the level of similarity they have except for the implementation detail of neurons.
Language came after conceptual modeling of the world around us. We're surrounded by social species with theory of mind and even the ability to recognise themselves and communicate with each other, but none of them have language. Even the communications faculties they have operate in completely different parts of their brains than ours with completely different structure. Actually we still have those parts of the brain too.
Conceptual representation and modeling came first, then language came along to communicate those concepts. LLMs are the other way around, linguistic tokens come first and they just stream out more of them.
This is why Noam Chomsky was adamant that what LLMs are actually doing in terms of architecture and function has nothing to do with language. At first I thought he must be wrong, he mustn't know how these things work, but the more I dug into it the more I realised he was right. He did know, and he was analysing this as a linguist with a deep understanding of the cognitive processes of language.
To say that brains are language models you have to ditch completely what the term language model actually means in AI research.
That's a different statement, yes brains and LLMs are both neural networks.
An LLM is a specific neural architectural structure and training process. Brains are also neural networks, but they are otherwise nothing at all like LLMs and don't function the ways LLMs do architecturally other than being neural networks.
Plus, brain structure and physiology change throughout the interwoven processes of learning, aging, acting, emoting, recalling, what have you. It's not an "architecture" that we can technologically recreate, as so much of it emerges from a vastly higher level of complexity and dynamism.
LOL. Oook... No, I don't think so. The human experience and the mechanisms behind it have a lot of unknowns, and I'm pretty sure that trying to confine the human experience to some number of parameters is short-sighted.
Still many unknowns, but we do know some key fundamentals, such as that the brain is "just" trillions of neurons organized in various ways that keep firing (going from high to low electric potential) at different rates. Pretty similar to how the fundamental operation of today's digital computers is the manipulation of 0s and 1s.
Our brains work differently, yes. What evidence do you have that our brains are not functionally equivalent to a series of weights being used to predict the next token?
I'm not claiming that to be the case, merely pointing out that you don't appear to have a reasonable claim to the contrary.
> not even including the possibility that we have a soul or any other spiritual substrate.
If we're going to veer off into mysticism then the LLM discussion is also going to get a lot weirder. Perhaps we ought to stick to a materialist scientific approach?
You are setting the bar in a way that makes “functional equivalence” unfalsifiable.
If by “functionally equivalent” you mean “can produce similar linguistic outputs in some domains,” then sure we’re already there in some narrow cases. But that’s a very thin slice of what brains do, and thus not functionally equivalent at all.
There are a few non-mystical, testable differences that matter:
- Online learning vs. frozen inference: brains update continuously from tiny amounts of data, LLMs do not
- Grounding: human cognition is tied to perception, action, and feedback from the world. LLMs operate over symbol sequences divorced from direct experience.
- Memory: humans have persistent, multi-scale memory (episodic, procedural, etc.) that integrates over a lifetime. LLM “memory” is either weights (static) or context (ephemeral).
- Agency: brains are part of systems that generate their own goals and act on the world. LLMs optimize a fixed objective (next-token prediction) and don’t have endogenous drives.
I did not claim the ability of current LLMs to be on par with that of humans (equivalently human brains). I objected that you have not presented evidence refuting the claim that the core functionality of human brains can be accomplished by predicting the next token (or something substantially similar to that). None of the things you listed support a claim on the matter in either direction.
I don't follow. If you provide criteria I can most likely provide evidence, unless your criteria is "vaguely cylindrical and vaguely squishy" in which case I obviously won't be able to.
The person I replied to made a definite claim (that we are "very obviously not ...") for which no evidence has been presented and which I posit humanity is currently unable to definitively answer in one direction or the other.
Trump already said he was just going to bomb all their infrastructure so the economy of the country couldn't function if they didn't negotiate, and then it's just going to be a mass refugee crisis. It would be a mass refugee crisis anyway with a protracted ground invasion, but more Americans would die, so Trump is choosing to get it over with the easy way for America at least if they won't negotiate.
IMHO, this is pretty much the strategy the Khans used in the 13th century when they encountered arrogant Islamist sultans, emboldened by the bravery of their faith, who refused to capitulate. They killed all the Muslim people in Baghdad and then proceeded to fill in all their canals and burn all their books. This decisively ended the Islamic Golden Age, and Europe was able to survive a very difficult 14th century in which it would probably have been easily crushed by Islamists from the East had the Khans not set them back at least a few centuries. Truly one of the big turning points in world history.
Oh yeah, we can't do this to Russia because they have nukes, but the Ukrainians are trying to do it piecemeal.
What this current administration is doing speaks much more of a lack of strategy than what the Khans did in the 13th century.
Not having any sort of counterplay to Iran's one big move (the blocking of the strait), in a nation with some of the brightest minds on the planet, speaks volumes about how advisors are clearly not being listened to. The powers of the once-mighty Republic have seemingly been vested in the hands of a bunch of incompetent nepo babies.
To wit: Hegseth immediately demanded the loyalty or resignation of the entire officer corps upon taking office. Anyone who would’ve been the voice of reason likely resigned a year ago.
It's not a false assumption. The world today is full of innovative products built with American capital and mostly American minds. If Americans want to do something, then they have a rich pool of talent to do it well.
Sure, on average the population of the US is stupid, but that's true everywhere.
> built with American capital and mostly American minds.
I would say "built with American agency and commercial spirit", not minds.
Most of the things that we have were first built elsewhere (Germany being a prime supplier here, with the MP3 or the Zuse), but turning them commercial was the input that came from America.
Just because you sold your soul to an economic superspreader meme that allows your products and inventions to percolate with the rapidity of an influenza-herpes-ebola hybrid doesn't mean that the minds behind it are brighter than the rest of the world.
> You mean the people who voted for trump or those who voted for the democrats?
I'm not talking about plebs, I'm talking about people who know their shit and work at government level. We could just look at the inventions of the past century and pluck out relevant events like the moon landing, the electronic computer, the transistor, or ARPANET. Clearly there are smart people living in that nation. They have the talent to draw from to get good advice about stuff like: what Iran's first response might be to an aerial assault.
> Are there some causal reasons you think americans are smarter than people in other countries?
I never said that. I said America is home to SOME of the brightest minds in the world. That sentence does not apportion all the brightest minds to that nation. What you read is clearly something different from what I wrote. Do you have a chip on your shoulder?
Your argument was that you could use your bright minds to win against the Iranians. That implies they are brighter than the Iranians.
I think America clearly offered better opportunities for bright people in the past. Maybe some also moved there, so the proportion is a little higher than in other places.
That wasn't my argument. My argument is that the US has enough intelligent people to wargame what would happen in response to their initial strikes on Iran. That they seemingly have no available counter-play to the blocking of the Strait of Hormuz implies that they have dismissed any experts from the decision making process and are just winging it. Because... why would you start a war when you're weak to your opponent's first obvious countermove?
So yea, you misread that to assume that I was making some quasi-racist statement about Iran. So my question to you is: why do you think you made that intentional misinterpretation?
I agree that what the US did makes it seem like they didn't ask anyone with expertise and brains to make a plan.
I think I filtered that out since I don't wonder about such things anymore. I live in Germany, and what our government did in the last decades was so beyond stupid (like blowing up our nuclear power plants and exiting coal at the same time) that I try to ignore these kinds of things.
'Intelligent', yes: big scary performative navy/gear, very very costly, here, take most of the tax dollars. This is what's been going on since WW2; where are these intelligent people who couldn't understand this?
We don't have to infer that they dismissed the experts. It is a documented fact.
Exactly one year ago, Laura Loomer presented Trump with a "traitor" list, all of whom were fired. That included members of the National Security Council, including the director for Iran, Nate Swanson. He has since been writing articles stating exactly what would happen in the event of a conflict.
We don't have all the intelligence, but we do have many institutions to promote such talent, as well as formerly having policies that let other bright minds immigrate into the US.
Nor does it result in victory without the follow-up of a ground assault.
I'm legit baffled by the US engaging in a war that suffers exactly the same negative properties as the Saudis' war in Yemen. You don't even have to learn from history; the Saudi/Yemeni conflict is still active today. Air campaigns alone are entirely insufficient, especially if your enemy has mountains.
I'm not saying you're wrong. But man, have lots of people who don't know what a war crime is really devalued the accusation. So much so that I read yours and I just assume it isn't. (Again, idk.)
I and a lot of other centrist-leaning folks are radicalized now in a way we weren't then. Perhaps it still won't happen, I don't have a crystal ball, but right now I will only vote for primary candidates who promise to prosecute Trump's goons and plan to reject the legitimacy of any future government that does not follow through.
Indeed it did not. But Trump and the members of his administration have announced, repeatedly and explicitly, that they hate me and wish me harm. So I can't accept being governed by them or by a system that tolerates them. If they decide they'd like to apologize, and offer some explanation for how I can be sure they won't return to their misdeeds, perhaps we can hear them out.
> If they decide they'd like to apologize, and offer some explanation for how I can be sure they won't return to their misdeeds, perhaps we can hear them out.
Nothing short of life in prison for the ones that plead guilty will accomplish that.
That's dual-use infrastructure. It's also used for military and government purposes, right? The same as China providing weapons components to Russia, masking them as "civilian".
What's the problem? The Russians do stuff that you call "war crimes", and what happens to them? Nothing. So why should anyone care if some person on the internet says these are war crimes? There's obviously no penalty for doing them, so they're not really war crimes.
Remember that war crimes were defined to protect civilians. It's usually better for a civilian to be on the losing side in a war with no war crimes, than the winning side of a war with many war crimes.
That was standard practice for much of recorded history. Surrender now or we will kill you all. Alexander the Great did it to Tyre and Sidon. The Romans did it to Jerusalem. The Israelis did it to Gaza. The orange madman and his henchmen have made it very clear that they don't give a shit about the rules of warfare.
> Trump is choosing to get it over with the easy way for America at least if they won't negotiate
That is… not the easy way. That’s how you get a nightmare for decades to come, endless waves of refugees and a limitless supply of terrorists.
Though, to be fair, there is no easy way of doing what Trump claims he wants to do. Which is why it's spectacularly stupid to do it in the first place. I mean, they did not expect retaliation in the Strait of Hormuz. Amateur hour does not even begin to describe it. Spectacularly stupid is probably way too kind.
If you must learn from the Khans, you’ll find that decapitation is not enough. You need people to put in place of the former leadership, and enforcers so that the underlying power structure stays in place to serve the new masters. The reason why is that, as the US learnt in Iraq and Afghanistan, it takes a bloody lot of soldiers to keep a whole population in check. Trump does not want to do the former and does not have the latter.
A security company could set up a honeypot machine that installs new releases of everything automatically and have a separate machine scan its network traffic for suspicious outbound connections.
The problem is what counts as suspicious. StepSecurity are quite clear in their post that they decide what counts as anomalous by comparing lots of open source runs against prior data, so they can't figure it out on their own.
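The baseline-diff mechanic itself is easy to sketch; the hard part is exactly that: building a baseline broad enough that rare-but-legitimate traffic doesn't drown you in false positives. A toy version (illustrative names, not StepSecurity's actual system):

```python
# Toy baseline-diff: collect outbound destinations from known-good runs,
# then flag anything a new run contacts outside that set.
baseline_runs = [
    {"github.com", "pypi.org", "files.pythonhosted.org"},
    {"github.com", "pypi.org"},
]
baseline = set().union(*baseline_runs)

def suspicious(current_hosts: set[str]) -> set[str]:
    return current_hosts - baseline

print(suspicious({"pypi.org", "attacker-exfil.example"}))
# -> {'attacker-exfil.example'}
```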
Yes but "the search space is too large" is something that has been said about innumerable AI-problems that were then solved. So it's not unreasonable that one doubts the merit of the statement when it's said for the umpteenth time.
I should have been more specific then. The problem isn't that the search space is too large to explore. The problem is that the search space is so large that the training procedure actively prefers to restrict the search space to maximise short term rewards, regardless of hyperparameter selection. There is a tradeoff here that could be ignored in the case of chess, but not for general math problems.
This is far from unsolvable. It just means that the "apply RL like AlphaGo" attitude is laughably naive. We need at least one more trick.
Engineering has kind of moved on in a weird way from web frameworks. Now AI just writes document.getElementById('longVariableName') javascript and straight SQL without complaining at all. The abstraction isn't as important as it used to be because AI doesn't mind typing.
North Korea runs like a big organized-crime family that specializes in forced labor, human trafficking, and drugs. I've read that they even operate overseas businesses, such as timber harvesting in the Russian Far East and various ventures in South East Asia, staffed with slaves who aren't allowed to leave.
The Latin American cartels operate almost like miniature North Koreas.
They actually learned it from Stalin, who ran organized crime gangs before the revolution that did extortion, racketeering, armed robbery, and piracy on the high seas, among other things, to fund Lenin's revolution. Young Stalin by Montefiore gets really deep into this. It's a very interesting read.