It was only 2 weeks ago that I donated my HP LaserJet MFP, which was manufactured in 2010. It was still 100% functional and had never required a repair. Even the paper rarely jammed.
My father has always insisted on purchasing the latest inkjet at Costco, and he has probably burned through 5 printers in the 16 years I had mine. OEM-compatible toner cartridges were ubiquitous, and each lasted around 18 months for my low-volume needs.
I would never dare purchase anything but a LaserJet. HP has been so very good to me in engineering, support, and reliability. I considered a Brother printer, but without any valid reason to leave HP behind, I stuck with them again for a new model. No regrets!
Wouldn’t say zero but if you are printing most kinds of documents the laser is the way to go.
On the other hand, I am an inkjet enthusiast who prints photos and art reproductions, and you can get great results if you: get a good printer, use OEM ink, use quality paper, and perfect your technique. Cheap inkjets do amazingly good work for what you pay, but they will have you tearing your hair out… and knock it off with the third-party ink.
And when you occasionally want high-quality photo prints, outsource the printing to Walmart, Walgreens, Office Depot, Shutterfly, etc. Pickup in 1 hour in some cases.
To be honest, I'm just a user here, but it's only recently (like a week ago?) that you could ask Copilot to edit an existing PR. Historically it had to open a new one (that merged back into the original PR), or it had to have created the PR to begin with. I can see this unintentionally happening as part of the improvement to edit existing PRs.
I think you’d find Dr. Richard Scolyer’s story really relatable. He’s an Australian cancer expert who, along with his colleague, is using himself as "patient zero" for a world-first treatment for his own brain cancer. They’re basically doing the research and the treatment in parallel to find a new way forward: https://www.abc.net.au/news/2025-10-30/dr-richard-scolyer-sp...
His story gave my mom hope when her cancer metastasized to her brain in 2025.
Cancer cells in the brain sit in a nutrient-rich environment for growth, and at the same time the brain is a dangerous place to treat, both for removing tumors and for preventing regrowth. The five-year survival rate is less than 5%.
Dr. Richard Scolyer was diagnosed in 2023 and is still with us today. I hope he succeeds in his work.
Yes, we moved to GitHub Codespaces, and it has generally been good.
Pros: one-click setup for devs jumping between projects, once you get the devcontainer setup process working (it takes some fiddling and trial and error).
It has felt good to wrap some older projects in a devcontainer; once it's working, you can feel comfortable that the environment is stable, and moving everyone to new environments has been easy.
Keeping a haywire dev/npm script away from your main machine is also good, but I know it’s not foolproof.
Cons: Codespaces CPUs are the usual slow cloud variety, so you need to pay more, and single-threaded perf won't be as good as your laptop's, which is a real shame. I think GitHub's competitors would have better CPUs.
Very rarely, Codespaces can have a technical issue that leaves your environment inaccessible, so you can't do your work. And to keep it from sleeping during the day due to inactivity you may leave it running most of the day, but it forces a shutdown after 12 hours or so, so very long dev sessions can be interrupted.
GitHub also dropped support for using JetBrains IDEs, which was not cool, so it's just VS Code. That's usable, but I would have preferred other IDEs.
If Codespaces team is reading would love to see some improvements here.
GitLab's write-up mentions a dead man's switch where "The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine."
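The pattern described, monitoring two channels and firing only when both are lost at the same moment, can be sketched abstractly. This is a minimal illustration of the general dead man's switch logic, not the actual malware code; all names here are invented:

```python
# Abstract sketch of a "dead man's switch" check: the trigger fires only
# when EVERY monitored channel is unreachable simultaneously. Purely
# illustrative; function and channel names are hypothetical.
def dead_mans_switch(channel_checks, on_trigger):
    """channel_checks: zero-arg callables returning True if the channel is reachable."""
    if not any(check() for check in channel_checks):
        on_trigger()      # all channels down at once -> fire
        return True
    return False          # at least one channel alive -> do nothing

# Simulated usage: first one channel is still up, then both are down.
fired = []
dead_mans_switch([lambda: True, lambda: False], lambda: fired.append("boom"))
assert fired == []                # one channel reachable: no trigger
dead_mans_switch([lambda: False, lambda: False], lambda: fired.append("boom"))
assert fired == ["boom"]          # both lost simultaneously: trigger fires
```

The "simultaneously" requirement is what makes it a dead man's switch rather than a simple connectivity check: losing one channel is tolerated, losing all of them is treated as evidence of takedown.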
Neat! How do you handle state changes during tests? For example, in a todo app the agents are (likely) working on the same account, in parallel or even across subsequent runs, so some test data has been left behind, or data is perhaps not set up for a test run.
I’m curious if you’d also move into API testing too using the same discovery/attempt approach.
This is one of our biggest challenges, you're spot on! What we're working on to address this includes a memory layer that agents have access to, so state changes become part of their knowledge and are accounted for while conducting a test.
They're also smart enough to not be frazzled by things having changed, they still have their objectives and will work to understand whether the functionality is there or not. Beauty of non-determinism!
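In minimal form, a memory layer like the one described might just record observed state changes so a later run knows what earlier runs left behind. This is purely a sketch under my own assumptions about the approach; the class and method names are invented, not their actual API:

```python
# Hypothetical sketch of an agent memory layer: state changes observed during
# test runs are recorded, so later runs can account for leftover data instead
# of assuming a clean account. All names are invented for illustration.
class AgentMemory:
    def __init__(self):
        self.observations = []   # chronological record of state changes

    def record(self, entity, change):
        self.observations.append({"entity": entity, "change": change})

    def known_state(self, entity):
        # Everything prior runs noted about this entity.
        return [o["change"] for o in self.observations if o["entity"] == entity]

memory = AgentMemory()
memory.record("todo:groceries", "created by run 1")
memory.record("todo:groceries", "marked done by run 2")
# A later agent consults memory rather than assuming pristine test data:
history = memory.known_state("todo:groceries")
assert history == ["created by run 1", "marked done by run 2"]
```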
Could you share the session url via the feedback form if you still have access to it?
That's really strange, it sounds like Webhound for some reason deleted the schema after extraction ended, so although your data should still be tied to the session it just isn't being displayed. Definitely not the expected behavior.
Some of the slowdown will come from not indexing the FK columns themselves, as they need to be searched during updates / deletes to check the constraints.
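As a concrete illustration (using SQLite via Python; the table and column names are invented for the example), deleting a parent row forces the engine to find all referencing child rows, which means a full scan of the child table unless the FK column is indexed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO parent VALUES (1), (2)")
conn.executemany("INSERT INTO child (parent_id) VALUES (?)",
                 [(1,)] * 3 + [(2,)] * 2)

# Without this index, the DELETE below must scan the whole child table to
# find rows referencing parent 1; with it, the check is an index lookup.
conn.execute("CREATE INDEX idx_child_parent ON child(parent_id)")

conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
assert remaining == 2   # only the rows referencing parent 2 survive
```

Most databases auto-index the referenced (parent) key, since it must be unique, but not the referencing (child) column, so the index on the FK side is the one you typically have to add yourself.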