Hacker News | e12e's comments

In Norse mythology, "the nine realms" encompass the entire world - but there's no definitive list of which realms constitute the nine.

In the center, humans inhabit Midtgård. The gods live in Åsgard (home of Valhall), and the Jotun in Jotunheim.

Then there's also Helheim or Hel - for the dead, Alfheim for the elves, Svartalfheim for the dwarves...

https://commons.wikimedia.org/wiki/Category:Locations_in_Nor...


> It lacked even basic features like the ability to create a topic/channel.

Pretty sure you can define a new topic from all clients.

On mobile you currently need to use the web interface to create a channel.

I don't know why an agent would need multiple channels - but it sounds like one solution would be to have the agent create channels via the API?
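As a hedged sketch of what "create channels via the API" could look like: assuming the platform is Zulip (which uses the topic/channel terminology above - an assumption on my part), channels ("streams") can be created through the documented `POST /api/v1/users/me/subscriptions` endpoint. This only builds the request payload; actually sending it requires a real server URL and credentials.

```python
import json

def channel_creation_payload(name: str, description: str = "") -> dict:
    """Build the form payload Zulip's subscriptions endpoint expects:
    a JSON-encoded list of streams to subscribe to (creating any that
    don't exist yet). Names here are illustrative, not from the thread."""
    return {
        "subscriptions": json.dumps(
            [{"name": name, "description": description}]
        )
    }

# An agent could POST this (e.g. with `requests` and basic auth using
# its bot email + API key) to https://<your-server>/api/v1/users/me/subscriptions
payload = channel_creation_payload("agent-scratch", "Created by the agent")
print(payload["subscriptions"])
```

The same payload shape works for subscribing to existing channels, so the agent doesn't need to special-case "create" vs "join".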


Seems to gloss over other kinds of contamination beyond GPL code: code from pirated textbooks, the problem of the entire language model being trained on copyrighted data, and the possibility of the training data containing various copyrighted code.

> Code from pirated text books

Anthropic "solved" this by intermingling the texts extracted from pirated books (illegal) with texts extracted from the physical books they bought and destroyed (legal), so no one can clearly say if the copyrighted material it spits out came from a legal source or not. Everyone rejoiced.


The intermingling argument is actually central to the Bartz settlement structure. The settlement required destruction of the pirated dataset specifically because commingled training data creates an unresolvable provenance problem. For deployers building on Claude, EDPB Opinion 28/2024 requires a documented assessment of the foundation model's training data legal basis before deployment. "We cannot tell which outputs came from which source" is not a satisfactory answer to a regulator running that assessment. I wrote about it before here: https://legallayer.substack.com/p/i-read-every-edpb-document...

I've seen copyright notices that explicitly forbid use for AI training. Would this "transformation" argument still hold in such cases?

For example:

No Generative AI Training Use

For avoidance of doubt, Author reserves the rights, and grants no rights to, reproduce and/or otherwise use the Work in any manner for purposes of training artificial intelligence or machine learning technologies to generate text, text to speech, voice, or audio including without limitation, technologies that are capable of generating works in the same style or genre as the Work, unless individual or entity obtains Author’s specific and express permission to do so. Nor does any individual or entity have the right to sublicense others to reproduce and/or otherwise use the Work in any manner for the purposes of training artificial intelligence or machine learning technologies to generate text, text to speech, voice, or audio without Author’s specific and express permission.


Only if you also manage to purchase and destroy the source material, I suppose? In Anthropic's case it wouldn't have worked if they'd stolen or rented the books and then destroyed them; in the judge's eyes it was legal because the books were legally purchased, then destroyed.

> books they bought and destroyed (legal)

They're only legal if training is fair use - and even then I don't think it's immediately clear what the legal status of verbatim regurgitation of copyrighted code, or code protected by patents, would be?

AFAIK I (as a human developer) can't just copy code out of a textbook and then claim copyright and charge for a license to it?


> They're only legal if training is fair use

The judge seems to have said it was legal because they "transformed" the books in the process (destroying them after digitizing).

> Ultimately, Judge William Alsup ruled that this destructive scanning operation qualified as fair use—but only because Anthropic had legally purchased the books first, destroyed each print copy after scanning, and kept the digital files internally rather than distributing them. The judge compared the process to “conserv[ing] space” through format conversion and found it transformative. - https://arstechnica.com/ai/2025/06/anthropic-destroyed-milli...


Interesting - so locally run models, like Google Gemini, are then likely pirated by this interpretation - because the model is distributed? Ditto open-weight models?

> All of this is built around the simple idea that real friendships happen when you actually meet in person.

I understand the sentiment - but this would make it useless for my closest friends - we live in different cities and countries now - and it would take years to fill in the social graph. We would all have to travel and meet everyone else.

I suppose this is alleviated by the talk to a friend of a friend feature - but does sound like it partially excludes friends with limited mobility.


Seen from another angle: this encourages you to make new friends locally.

And yes, also one more excuse to visit faraway friends.


Do you use a docking station and an external display?

I don't. I never got into external displays because I travel a lot and write code in strange places.

Not OP, but an external display currently only works via HDMI directly (M2 MacBook Pro).

Interesting - but how do I patch, upgrade and build my own iso?

The source repository isn't very enlightening?

> The actual repository here hosts the source code for Lightwhale, and is not of any interest for most people.

> https://bitbucket.org/asklandd/lightwhale/src/master/


It appears to be outdated (last commit from 2 years ago), and version 3.0 seems not to be there.

Yes, sorry. I don't use the master branch, and Bitbucket doesn't easily let me delete it. But wait, you confirmed that "3.0" isn't there, but somehow missed that "3.0.0" is? =)


> this is a great example of a driver that should have been running in userspace a long time ago, just like how Windows has been moving in that direction.

Hasn't Windows (the NT lineage) moved solidly in the opposite direction? It used to be that you could reload/restart the video card ("GPU") driver if it crashed?


No, it's the opposite. WDDM and DirectX are constantly being updated and have been improving GPU crash recovery, live driver updates, power management, and abstractions for features like video encoding and storage DMA, among many other things. In Linux it is taking ages: the first proposal for DRM to support 2010-era WDDM features came in 2021, and it still does not exist. Graphics is one of the few places where parts of Microsoft still innovate. Not in the sense of having great code, but they put in the work to coordinate these changes among the handful of vendors. If only someone hosted more steak dinners for Linux.

I think this conflates two different eras/layers. NT 4 famously moved the window manager/GDI/graphics subsystem into kernel mode, so that’s probably the “opposite direction” history. But modern GPU-driver recovery is WDDM/TDR, and it very much still exists: WDDM splits the display driver into user-mode and kernel-mode components, and TDR resets/recovers a hung GPU/driver instead of requiring a reboot.

https://learn.microsoft.com/en-us/windows-hardware/drivers/d... https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

I also update NVIDIA drivers regularly on Windows 11 without rebooting, though that’s install-time driver reload rather than exactly the same thing as TDR.


And I don't recall any practical way to recover from a crashed NT 3.x GUI subsystem.

It would presumably still function as a server and could be gracefully shut down remotely, but in the absence of anything like Remote Desktop or EMS[1], you'd be hard pressed to get much local troubleshooting done without rebooting the system anyway.

Also, as an NT user since 3.1, and a daily user of 3.5 and 3.51, I don't recall the GUI ever actually hanging or crashing (other than as a side effect of a bugcheck, which, by definition, is a crash triggered by code running in kernel mode).

That's one of the main reasons I was an early and enthusiastic NT user: while I can't say its performance was any better than "good enough", and then only on hardware that was at least comfortably above average in terms of CPU speed and RAM capacity, it was remarkably stable compared to every other PC OS I had used at the time.

Which, to be fair, would have been limited to MS-DOS, 16-bit Windows 2.x and 3.x, and OS/2 2.0 at the time. It remained true throughout the lifespan of Windows 9x and OS/2 (at least through 3.0, the last version I used). Neither FreeBSD nor Linux was as reliable once you added at least a basic X11 environment to reach rough feature parity - and while X11 did allow recovery after crashing, insofar as it can be restarted without rebooting the system, it still took all your GUI applications and xterm windows down with it when it crashed.

[1] https://en.wikipedia.org/wiki/Emergency_Management_Services


I think there's room for a distinction between "not using metrics" and "not using data".

Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.


Posting inspired by this tweet on X:

>> I was in Ukraine drone HQ last year and they were using Palantir tech to blow up Russian tanks dude

> And I am a Ukrainian drone pilot on the frontline. We use the Delta Battlefield Management System, fully developed in Ukraine. Not American Peter “Antichrist” Thiel bullshit.

https://x.com/laser_kiwi_ua/status/2046446354558251100


The post in question (also linked in TFA):

https://x.com/PalantirTech/status/2045574398573453312

