LLMs are famously biased against disagreeing with users, even when they're obviously right and the user is obviously wrong. This is a well-known problem limiting their usefulness for a large class of tasks: you have to be careful not to accidentally insist on wrong information, because LLMs will not tell you you're full of shit (at least not unless you explicitly prompt them to).
There are several reasons for that, including the nature of the training data - but a major one is that people who take offense at everything have successfully terrorized the Internet and media sphere, so it's generally better for the LLM vendor to have their model affirm users in their bullshit beliefs than to correct them and risk some users getting offended and starting a shitstorm in the media.
Also: I read the text in the screenshot you posted. The LLM didn't accept the correction, it just gave you a polite and noncommittal faux-acceptance. This is what entertaining people in their bullshit looks like.
It is hilarious to see you use off-the-shelf arguments against wokeism to try to put me down.
My point is that, regardless of any of our personal preferences, LLMs should have been aligned to academia. That's because they're trying to sell their product to academia. And their product sucks!!!
Also, it's not just the nature of the training data. These online LLMs have a huge patchwork of fixes to prevent issues like the one I demonstrated. Very few people understand how much of this work there is, and that it's almost fraudulent in how it works.
The idea that all of these shortcomings will eventually be patched also sounds hilarious. It's like trying to keep a boat from sinking by filling the gaps with scotch tape.
> You assume that I'm offended by the comparison with aliens, and that I belong to a certain demographic. I'm actually not personally offended by it.
I don't know where I got that notion. Oh, wait, maybe because of you constantly calling some opinions and perspectives offensive, and making that the most important problem about them. There's a distinct school of "philosophy"/"thought" whose followers get deadly offended over random stuff like this, so...
> It is hilarious to see you use off-the-shelf arguments against wokeism to try to put me down.
... excuse me for taking your arguments seriously.
> My point is that, regardless of any of our personal preferences, LLMs should have been aligned to academia. That's because they're trying to sell their product to academia.
Since when?
Honestly, this view surprises me even more than what I assumed was you feigning offense (and that was a charitable assumption, my other hypothesis was that it was in earnest, which is even worse).
LLMs were not created for academia. They're not sold to academia; in fact, academia is the second biggest group of people whining about LLMs after the "but copyright!" people. LLMs are, at best, upsold to academia. It's a very important potential area of application, but it's actually not a very good market.
Being offended by fringe theories is as anti-academic as it gets, so you're using weird criteria anyway. Circling back to your example, if LLMs were properly aligned for academic work, then when you tried to insist on something being offensive, they wouldn't acquiesce, they'd call you out as full of shit. Alas, they won't, by default, because of the crowd you mentioned and implicitly denied association with.
> These online LLMs have a huge patchwork of fixes to prevent issues like the one I demonstrated. Very few people understand how much of this work there is, and that it's almost fraudulent in how it works.
If you're imagining OpenAI et al. are using a huge table of conditionals to hot-patch replies on a case-by-case basis, there's no evidence of that. It would be trivial to detect and work around anyway. Yes, training has stages and things are constantly tuned, but it's not a "patchwork of fixes" - not any more than you learning what is and isn't appropriate over the years of your life.
> you constantly calling some opinions and perspectives offensive
They _are_ offensive to some people. Your mistake was to assume that I was complaining because I took it personally. It made you go into a spiral about Stargate and all sorts of irrelevant nonsense. I'm trying to help you here.
At any time, some argument might be offensive to _your sensitivities_. In fact, my whole line of reasoning is offensive to you. You're whining about it.
> LLMs were not created for academia.
You saying that is music to my ears. I think that it sucks as a product for research purposes, and I am glad that you agree with me.
> If you're imagining OpenAI, et al. are using a huge table of conditionals
I never said _conditionals_. Guardrails are a standard practice, and they are patchworky and always-incomplete from my perspective.
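For what it's worth, a "guardrail" in the usual sense is an extra check layered around the model call, separate from the model's weights - which is exactly why it ends up patchworky and incomplete. A minimal sketch (all names hypothetical; real systems use trained moderation classifiers, not a keyword list) might look like:

```python
# Minimal sketch of an input/output guardrail: checks layered around the
# model call, not part of the model itself. A keyword blocklist stands in
# for a trained moderation classifier, purely for illustration.

BLOCKLIST = {"forbidden-topic", "banned-claim"}  # hypothetical policy list

def violates_policy(text: str) -> bool:
    """Stand-in for a moderation classifier: flag blocklisted terms."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def guarded_reply(model, prompt: str) -> str:
    """Wrap any callable `model: str -> str` with pre- and post-checks.

    If either the prompt or the raw reply trips the check, a canned
    refusal is returned instead of the model's output.
    """
    if violates_policy(prompt):
        return "I can't help with that."
    reply = model(prompt)
    if violates_policy(reply):
        return "I can't help with that."
    return reply
```

The always-incomplete nature follows directly from the structure: the wrapper only catches what its checks enumerate, and every miss gets patched by extending the checks.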
> It made you go into a spiral about Stargate and all sorts of irrelevant nonsense. I'm trying to help you here.
Stargate is part of our culture.
Since it is part of our culture (ditto ancient aliens, etc.), it is not at all irrelevant to bring Stargate up in a discussion about culture, especially when someone (you) tries to build their case by getting an AI to discuss aliens and Egyptian deities, and then claims that the AI doing what it was asked to do somehow demonstrates unawareness of culture.
No, it isn't evidence of any such thing, that's the task you gave it.
In fact, by your own statements, you yourself are part of a culture that is happy to be offensive to Egyptian culture - which means that an AI which is also offensive to Egyptian culture is matching your own culture.
Only users from a culture that is itself offended by things offensive to Egyptian culture can point to an AI being offensive to Egyptian culture as a direct result of their own prompt, and accurately claim that the AI in such a case doesn't get their own culture.
Pop culture is a narrow subset of culture, not interchangeable with mythology.
Stargate is a work of fiction, while ancient aliens presents itself as truth (hiring pseudo-specialists, pretending to be a documentary, etc).
You need to seriously step up your game, stop trying to win arguments with cheap rhetorical tricks, and actually pay attention and research things before posting.
Stargate is a specific franchise that riffs off the "ancient aliens" idea. "Ancient aliens" by itself is a meme complex (in both senses of the term), not a specific thing. Pseudo-specialists selling books and producing documentaries are just another group of people taking a spin on those ideas, except they're making money by screwing with people's sanity instead of providing entertainment.
See also: just about anything - from basic chemistry to UFOs to quantum physics. There's plenty of crackpots selling books on those topics too, but they don't own the conceptual space around these ideas. I can have a heated debate about the merits of the "GIMBAL" video or the "microtubules" in the brain, without assuming the other party is a crackpot or being offended by ideas I consider plain wrong.
Also, I'd think this through a bit more:
> Pop culture is a narrow subset of culture, not interchangeable with mythology.
Yes, it's not interchangeable. Pop culture is more important.
Culture is a living thing, not a static artifact. Today, Lord of the Rings and Harry Potter are even more influential on the evolution of culture and society than classical literature. Saying this out loud only seems weird and iconoclastic (fancy word for "offensive"? :)) to most, because the works of Tolkien and Rowling are contemporary, and thus mundane. But consider that, back when the foundations of Enlightenment and Western cultures were being established, the classics were contemporary works as well! 200 years from now[0], Rowling will be to people what Shakespeare is to us today.
--
[0] - Not really, unless the exponential progress of technology stops ~today.
Stargate fictionalizes Von Daniken. It moves his narrative from pseudoscience to fiction. It works like domestication.
Culture is living, myths are the part that already crystallized.
I don't care which one is more important, it's not a judgement of value.
It's offensive to Egyptian culture to imply that aliens built their monuments. That is an idea those people live by. Culture has conflict. Academia is on their side (as are many others), and it's authoritative in that sense.
Also, _it's not about you_, stop taking it personally. I don't care about how much you know, you need to demonstrate that LLMs can understand this kind of nuance, or did you forget the goal of the discussion?