Yeah, this. 1password does the same thing for any browser it detects when installed for the native desktop integration from the chrome extension.
Not 100% across the spec but this wouldn't functionally do anything until you install the related extension? e.g., it's pinned to nominated `allowed_origins`
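For context, Chrome's native messaging works off a host manifest the desktop app drops on disk, and the `allowed_origins` list is what pins it to a specific extension ID. A rough sketch of what one looks like (all names, paths, and the extension ID here are illustrative, not 1Password's actual values):

```json
{
  "name": "com.example.password_manager",
  "description": "Example native messaging host",
  "path": "/opt/example/native-host",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://abcdefghijklmnopabcdefghijklmnop/"
  ]
}
```

So as you say, the manifest sitting there is inert until an extension with a matching ID is installed and calls `chrome.runtime.connectNative`.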
Yeah, I guess the issue here is whether you think installing the extension should set up the integration or installing the thing being integrated should set it up. I'm inclined to think it's the extension's responsibility, but I don't think it's a severe data issue.
Had the same issue on Arch, though I survived it OK (6.19.11-zen1-1-zen). Maybe it's a zen kernel thing; it only pegged 2-3 cores and the others were OK, so I could jump in and kill it.
I abused this term a bit to help data scientists and data engineers understand why they should take an interest in each other's skill sets. I used to liken it to a Formula 1 driver (scientist) and the car / pit crew (engineers).
Sure, you can maybe be a great driver without caring about the car or the crew, but that approach is definitely going to have its limits. Likewise, at the end of the day the crew is there to make the driver shine, and needs to be invested in understanding how the driver operates.
It creates a much better sense of culture and collaboration, and overall better products, when everyone can see the part they play and how important the relationship is to their peers.
Ah, this explains ChatGPT (and probably Copilot) performance behind corporate firewalls such as Zscaler.
Between the network latency and low-end machines, there is an enormous lag between ChatGPT's response and being able to reply, especially when editing a canvas.
I've been sitting there for up to a minute plus waiting to be able to use the canvas controls or highlight text after an update.
Yeah, I took the Ubuntu / Fedora perf for granted as well. Recently switched back to Arch on a whim on one low-end machine and one high-end machine, and both run like lightning compared to Ubuntu 24.04 / Fedora 40.
Expected the difference with Ubuntu, as it packs more out of the box for enterprise behaviours, but not so much with Fedora. With Arch I've had no freezes, faster startup and shutdown, and a generally more responsive desktop.
Though a rolling release, it also has fewer moving parts - only having to deal with the main repo + flatpak (and a select few AUR PKGBUILDs) is nice compared to Ubuntu, where I had to layer deb repos + PPAs + flatpak + brew to get my tooling in place without scripting my own git-driven installers.
One thing that tripped me up on every distro: the defaults for TLP (vs power-profiles-daemon) seem hyper conservative wrt performance, probably by design. I never bothered digging in, just switched back to PPD, but TLP definitely prioritises power savings above all else.
I've been on Manjaro (arch based) for a few years now. I only ever installed it once and regularly update it. I've had some minor issues over the years but was able to resolve them. Mostly updates are without issues and when they aren't usually the fix is a google search away and pretty straightforward.
And of course just about everything has been updated many times at this point. Latest kernel, gnome, etc. Nice when a bunch of Intel driver performance improvements landed a few years ago. I got them right away after that kernel got released and noticed a slight difference. A few months ago, I noticed a few more improvements with performance when a bunch of btrfs fixes landed.
It's a good reason to stick with rolling releases. And since the Steam Deck uses Arch, getting Steam running on this was ridiculously easy. I'd use it professionally except I have a MacBook Pro M1, which is really nice, and the Samsung laptop I run Manjaro on is not great, to put it mildly.
I check once in a while, but the different laptops out there all involve compromises, and none of them really come close to Apple. They all do some things well only to drop the ball on other things. You can have a fast laptop but not a quiet one. You can have a nice screen but then the keyboard or touchpad is meh. Or the thing just weighs a ton.
I think that was the point with the Surface Pro 4 in the article. It's a bit crap in terms of performance, but the form factor is nice-ish. Of course the touch support isn't great, which is no different with Manjaro. Except of course you do have access to all the latest attempts to address that.
Question: what is the reason for the silent copy when append exceeds the original slice cap?
It's a footgun avoided by reading the spec and (maybe) remembering it in practice, but it feels like it would be safer to throw a compile error and force the user to deal with it when they try to exceed the cap of the underlying array?
The alternative is defensively using len() and cap() for slice ops, in which case erroring out feels more ergonomic.
Because you would not have any growable vector/list structure otherwise.
The real problem is that Go merrily lets you have copy-and-append operations on a slice (good) and subviews of a slice so that you can share subsets of the data without copying it (good), at the same time (very bad: any operation on either will lead to confusion).
In most languages, subslicing gives you something of another type that can't be modified (or at least not accidentally). But in Go, if I call a function `do_smth_with_slice(xs []byte)`, there is no way for me to know whether this function expects `xs` to be mutable or not.
I'm sure that e.g. a C++ function taking an `std::span` can do some forbidden magic to still modify the underlying data, but at least the original intent is made clear by the argument type.
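A small sketch of the aliasing confusion being described here (the `upper` helper is made up for illustration):

```go
package main

import "fmt"

func main() {
	// A slice and a subview of it share the same backing array.
	xs := []byte("hello")
	sub := xs[1:4] // "ell" -- no copy, just a window into xs

	// Mutating through the subview is visible in the original...
	sub[0] = 'E'
	fmt.Println(string(xs)) // "hEllo"

	// ...and nothing in a signature taking []byte tells the
	// caller whether the callee will mutate like this or not.
	upper(sub)
	fmt.Println(string(xs)) // "hELLo"
}

// upper uppercases its argument in place; the signature alone
// doesn't reveal that it mutates.
func upper(b []byte) {
	for i := range b {
		if b[i] >= 'a' && b[i] <= 'z' {
			b[i] -= 'a' - 'A'
		}
	}
}
```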
> Question: what is the reason for the silent copy when append exceeds the original slice cap?
Because Go slices play double duty as vectors. And that is the usual behaviour of a vector.
And the issue is the opposite situation, when appending does not exceed the original slice cap. The entire point of the slice trick is to force a resize (and thus a copy) on append.
> it feels like it would be safer to throw a comp error and force the user to deal with it when a user is trying to exceed the cap of the underlying array?
It would be safer to have not confused slices and vectors in the first place, but half-fixing that confusion sounds even worse: your suggestion would keep the worst parts of both and require hand-rolling the rest every time.
Erroring on appending to a slice would require checking every call to append for an error. I'd find it more surprising for append to error on resize since append implies a growable array.
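The cap-dependent behaviour in question, plus the full slice expression trick for deliberately forcing a copy, in a short sketch:

```go
package main

import "fmt"

func main() {
	base := make([]byte, 3, 5) // len 3, cap 5
	copy(base, "abc")

	// Within cap: append reuses the backing array, so the
	// result still aliases base.
	a := append(base, 'd')
	fmt.Println(&a[0] == &base[0]) // true: same backing array

	// Exceeding cap: append silently allocates a new array and
	// copies, so writes through b no longer alias base.
	b := append(a, 'e', 'f') // len 6 > cap 5
	fmt.Println(&b[0] == &base[0]) // false

	// The full slice expression s[low:high:max] caps the result,
	// forcing the very next append to reallocate instead of
	// clobbering whatever sits after high in the shared array.
	safe := base[:2:2]     // len 2, cap 2
	c := append(safe, 'X') // must copy; base is untouched
	fmt.Println(string(base), string(c)) // "abc abX"
}
```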
Working on a Python gig targeting an airgapped Windows server: a different OS architecture from everything else on hand, restrictions on building and deploying VMs, poor dev/prod workflows, etc.
After a couple of months of app dev, we ran into weeks of grinding through building a reproducible process (including a sneakernet step) for Python dependency resolution and binary builds to get the payload dropped on the server. It was brittle and a bit overwhelming.
Go was "big enough" at the time to be a reasonable next step and after a few weeks of porting we had a cross-compilation process up and were done.
Go actually _shits me to tears_ as a language compared to Python, Rust, or even Lua, but as a back-end dev it's still where I do most of my work, as I'm nearly guaranteed to be able to (i) predictably solve something in a way I can share with a team and (ii) get it deployed without too much pain.
Same plan as this + parent comment. I struggle with the collection + curation eventually overtaking the utility of the thing, especially with "valuable information" captured in RSS readers, bookmarks, etc. Feels more like FOMO at times.
I have a paper notebook that lets me be more present in different scenarios and only capture stuff like actions or critical facts. Likewise key data around policy numbers, cost codes, etc. go into a markdown file.
I used to be a voracious note-and-knowledge capture type between Standard Notes and Obsidian. I'd keep both a daily rolling log of everything I did + cross-ref and expand key items into broader knowledge capture.
It probably has some intangible benefit I didn't pick up on but overall it just felt like a waste of energy compared to letting stuff go, sometimes having to google stuff again, and putting effort into building things and talking to people.
I also started to wonder: what is the value? Documentation, code comments, and communication all help to take an idea or understanding from Me -> People. Hoarding notes and digital content for myself really didn't accomplish much other than demanding attention.