Hacker News | anyfoo's comments

Wow. Did anyone else have some serious trouble with this?

The first color was obvious to me, as it was designed to be (it even tells you if you intentionally misclick). But at the very next color, the first "test color", I literally facepalmed and said "oh my god" out loud.

It was so, so hard for me to decide. I really just wanted to pick a non-existent "teal" option. Both "blue" and "green" felt wrong and equally right at the same time.

It just got harder from there. At the end, it told me that my threshold is "bluer than 80% of the population", but honestly, I don't think that's really true in my case. I was so ambivalent, my choices really felt random to me very quickly.


Unless instances are sparse, higher code density is of course always better, because of the instruction cache (and the micro-op cache, if this doesn't get "peephole optimized" away or something like that; I know nothing about the micro-op cache).

But yeah, it may not make a real impact yet anyway.


This would have blown me away back in the late 80s/early 90s.

(Or maybe not, if it doesn't perform better than random, I haven't actually tried it out yet. Some more examples would have been nice!)

I wonder how far you could push this while still staying period correct, e.g. by adding a REU (RAM Expansion Unit), or even a GeoRAM (basically a REU on steroids).

SuperCPU would also be an option, but for me it's always blurring the line of "what is a C64" a bit too much, and it likely just makes it faster anyway.


How fast is the “new” Commodore 64?

Have not heard much about it since launch. Although, now that I look, it seems they are just shipping now.

https://www.commodore.net/product-page/commodore-64-ultimate...


RAM can be increased to 16 MB and CPU speed to 48 GHz.

I’m sorry how many Hz???

The 64 Ultimate goes to 64 MHz, the Ultimate 64 cartridge goes to 48 MHz "only".

Which reminds me that I just lost the game.

I also lost the game not too long ago, but before that, I think I didn't actually lose it for a decade or more? And losing it wasn't even because it was mentioned anywhere; I genuinely just thought of it by myself, after forgetting about it for so long.

So my sincerest apologies if my comment just made any readers lose their long streak in the game.


Damnit, I am pretty sure I had a few-year streak going until just now. Welp, off to the grind again, I suppose.


I've lost it a lot lately, for some reason, after what I suppose was my third multi-year victory streak.

Like, five or so losses this year.


Same here, oddly enough, and every time besides this one was without anyone else mentioning it.


I think once you've lost the game once, it's much easier to lose it again relatively shortly after. It takes some long-term distraction (and nobody mentioning it) to forget about it again.


Yep, just lost after I think >5 years. But not because of your comment; because of the GP comment.


damn. multiyear streak ruined. i even managed to forget i was playing.

i just lost the game.


Nah, I won't be fooled again. I won a long time ago and never looked back.

https://xkcd.com/391


Wow, maybe 10+ years running here since I last lost..


Damn!


A big problem I have with SSH certs is that they are not universally supported. For me, there is always some device or daemon (for example tinyssh in the initramfs of my gaming PC, so that I can unlock it remotely) that only works with "plain old SSH keys". And if I have to distribute and sync my keys onto a few hosts anyway, that takes away the benefits.


Adding to this: while certs are indeed well-supported by OpenSSH, it's not always the SSH daemon used on alternate or embedded platforms.

For example, OpenWRT uses Dropbear [1] instead, which does not support certs. Also, Java programs that implement SSH, like Jenkins, may do so using Apache Mina [2]; although the underlying library supports certs, it is buggy [3] and requires the application to add the UX to support them.

[1] https://matt.ucc.asn.au/dropbear/dropbear.html

[2] https://mina.apache.org/sshd-project/

[3] I've been dealing for years with NullPointerExceptions causing the connection to crash when presented with certain ed25519 certificates.
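For readers who haven't used the feature being discussed: OpenSSH's certificate workflow (the thing Dropbear and Mina handle poorly) looks roughly like this. This is a minimal sketch; the file names and the "alice" identity are made up for illustration.

```shell
# Create a CA key and a user key (ed25519, no passphrase, illustration only)
ssh-keygen -t ed25519 -f ca_key -N '' -C 'example CA'
ssh-keygen -t ed25519 -f user_key -N '' -C 'example user'

# Sign the user's public key with the CA; this writes user_key-cert.pub
ssh-keygen -s ca_key -I alice -n alice -V +52w user_key.pub

# Inspect the resulting certificate (principals, validity, signing CA)
ssh-keygen -L -f user_key-cert.pub
```

The server then only needs to trust the CA (via `TrustedUserCAKeys` in sshd_config) instead of syncing individual public keys, which is exactly the benefit that breaks down once one daemon in the chain doesn't speak certs.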


You can just replace Dropbear with OpenSSH on OpenWRT. That was one of the first things I did, since Dropbear also doesn't support hardware-backed (sk) keys. Just move Dropbear to port 2222 and disable the service.

I reenabled DB on that alt port when I did the recent major update, just in case, but it wasn't necessary. After the upgrade, OpenSSH was alive and ready.


Upgrade to a better one in initramfs?


Might actually be a positive instead of a negative. Gaming use cases should not have any effect on security policies; these should be as separate as possible. Different auth mechanisms for your gaming stuff and your professional stuff ensure nothing gets mixed.


Hah? It being my gaming machine has nothing to do with the problem. It’s also my FPGA development machine, though it gets used less for that. It only happens to be the only Linux workstation in my home (the others are Macs or OpenBSD).


If you care about security, I recommend investing in a separate computer for developing hardware and software, and another for downloading games.

You can set up your security any way you like, but nothing beats an air gap in terms of security and simplicity.


Remote unlock is also useful when you're not gaming, so that feels like the wrong aspect to focus on.


I feel like that was super common. Apart from changing the volumes of entire channels (e.g. changing the level of Line In vs. digital sound), volume was a relatively “global” thing.

And I’m not sure if that was still the case in 1997, but most likely changing the volume of digital sound meant the CPU having to process the samples in realtime. Now on one hand, that’s probably dwarfed by what the CPU had to do for decompressing the video. On the other hand, if you’re already starved for CPU time…
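To make the per-sample CPU cost concrete: software volume control is just arithmetic applied to every sample. A toy sketch (not any player's actual code) for 16-bit signed PCM:

```python
def scale_pcm16(samples, volume):
    """Scale 16-bit signed PCM samples by a volume factor in [0.0, 1.0],
    clamping to the valid int16 range. Every single sample must be
    touched, which is the real-time CPU cost mentioned above."""
    out = []
    for s in samples:
        v = int(s * volume)
        out.append(max(-32768, min(32767, v)))
    return out

print(scale_pcm16([0, 16384, -32768, 32767], 0.5))
# → [0, 8192, -16384, 16383]
```

At 44.1 kHz stereo that's ~88,000 multiplies per second, which is trivial today but was a measurable fraction of a mid-90s CPU that was also decoding video.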


I mentioned this in another thread, but it was definitely noteworthy to me that it did this, since I was used to other programs not doing so, for example Winamp. I would also have thought Windows' Media Player did not do this, but I can't remember for certain.


Winamp had a software equalizer with a preamp, which was noteworthy. Are you sure changing the volume did not mean changing the preamp level in Winamp?

If you turned off the preamp (could be directly done in the EQ window I think), what did the volume control actually do?


Maybe we're not understanding each other correctly here.

It's 30 years ago now, but my recollection is that Winamp did not change Windows' global volume.

I am less certain, but I thought Windows' own Media Player similarly also did not change Windows' global volume.

What I definitely recall correctly is being surprised that RealPlayer would change Windows' global volume, and this would not have been so noteworthy to me unless it was unusual compared to the other applications I typically used.


No, I get you. I'm saying that Winamp might have been "special" because it had a software equalizer, and its volume control might have actually changed the preamp level. This would be fairly unusual for other apps of its time. I also wondered what would happen if you turned the preamp off with its big shiny button: whether that would let the volume control change the global volume instead, or whether it would disable the volume control entirely.

What I'm saying is: I still feel (perhaps wrongly, quite possibly so) that in 1997, changing the global volume was more common, and that even being able to change app-specific volumes required some non-trivial features from the app.


Awesome.

Side note: virtual 8086 mode was protected mode, or rather, implied protected mode. A task could run in virtual 8086 mode where, to the task, it (mostly) looked like it was running in real mode, when in actuality the kernel was running in full protected mode.

Note that the "kernel" was never DOS. It could often actually be a so-called "memory manager", like EMM386, and the actual DOS OS (the entire thing, including apps, not just the DOS "kernel") would run as a sole vm86 task, without any other tasks. The memory manager was then serving DOS with a lot of the 386's 32-bit goodness through a straw, effectively.

It's very bizarre by today's (or even that era's) OS standards, and it evolved that way because of compatibility.
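To make the "looks like real mode" part concrete: in real mode (and hence inside a vm86 task, where the CPU emulates this behavior) addresses are formed by a simple shift-and-add, not through protected-mode descriptors. A toy sketch of the translation:

```python
def real_mode_addr(segment, offset):
    """8086 real-mode address formation: segment * 16 + offset, yielding
    a 20-bit (1 MiB) physical address. In virtual 8086 mode the CPU does
    this same translation for the task, while the kernel underneath runs
    with full protected-mode segmentation and paging."""
    return ((segment << 4) + offset) & 0xFFFFF

# The BIOS data area lives at 0040:0000
print(hex(real_mode_addr(0x0040, 0x0000)))  # → 0x400
# Addresses past 1 MiB wrap around (the classic A20 quirk): FFFF:0010 → 0
print(hex(real_mode_addr(0xFFFF, 0x0010)))  # → 0x0
```

That 1 MiB ceiling is exactly why memory managers existed: DOS could not address the extra RAM directly, so the 32-bit layer underneath had to dole it out through mechanisms like EMS and DPMI.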


Is it so bizarre from today's perspective? Virtualization and hypervisors are commonplace.


The virtualization itself is not the bizarre part. The bizarre part is that the actual OS is 16-bit and runs as the singular "task" of a thin 32-bit layer that merely calls itself a "memory manager". The details of that machinery (segmentation, DPMI, ...) are quite a sight to behold. And it's all because of how PCs evolved at that time, and because we needed to keep running DOS while still making use of all the extra memory that wouldn't fit into its address space.


macOS also uses compression in the virtual memory layer.

(It's fun to note that I try to type out "virtual memory" in this thread, because I don't want people to think I talk about virtual machines.)


I'm getting tired of typing this, but swap space is not just to increase available virtual memory. If you upgrade from 8 GB to 24 GB, then with proper swap space usage, you have 16 GB that could be used for additional disk cache.

Sure, you're still better off with 24 GB overall compared to 8 GB + swap, whether you add swap to your 24 GB or not, but swap can still make things better.

(That says nothing about whether the 2x rule is still useful though, I have no idea.)
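On Linux, the split between page cache and everything else is visible directly in /proc/meminfo, which is where you'd check whether swapping out idle anonymous memory is actually buying you cache. A small sketch (assumes a Linux host; field names per proc(5)):

```python
def meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, rest = line.split(':', 1)
            info[key] = int(rest.strip().split()[0])
    return info

m = meminfo()
print(f"Total RAM:  {m['MemTotal'] // 1024} MiB")
print(f"Page cache: {m['Cached'] // 1024} MiB")
print(f"Swap total: {m['SwapTotal'] // 1024} MiB")
```

If "Cached" grows after rarely-touched anonymous pages get swapped out, that's the effect being described: swap trading idle memory for disk cache.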


There's a chance that those servers might run more efficiently with some swap space, for the reasons mentioned many times in this thread. Swap space is not just for overcommitting.


The theories are repeated often, but I have never seen any empirical data to back them up, assuming one sets the options I mentioned. These anecdotes usually come from servers with default settings, no attempt to tune them for the intended workloads, and no capacity planning for application resources. Even OS maintainers are starting to recognize this and have created daemons such as tuned for people who never touch settings. The next evolution will be dynamic adjustments from continuous bpf traces. I just keep it simple and avoid the circular arguments altogether.


Oh sure, it might or might not make a significant difference at all. Chances are, if you do a lot of I/O on a large (or very large) amount of data, and you also have a lot of rarely used but resident anonymous memory, then swap space should help, as that anonymous memory can get paged out in favor of disk cache, but I have no idea how common that is.


Yeah, I mean, I know what you mean, but this is where it gets into circular reasoning. I will always have operations groups move the workload to a node with more memory if that is what is needed. In my case, having swap on disk would require it to be encrypted, due to contracts requiring any customer data touching a disk to be encrypted, but I avoid that altogether and just add more memory. If 2 TB of RAM isn't enough, then they get 3 TB, and so on. We pushed vendors and OEMs to grow their motherboard capacity. At some point application groups just get more servers.


Yeah, that seems like a reasonable approach for your case!

