I really do enjoy working on the site; it's great to have an outlet and playground for ideas, and to do things just for fun. There never was (and never will be) any commercial angle for this. As I said in a footnote in the "Sloppy Copies" post, I have other motives for writing code, and I appreciate that I'm very fortunate to have the opportunity to do that.
There's always been a tendency amongst the "priesthood" of any in-group to hoard knowledge and use it to maintain their position. So, regarding the "democratizing" of creating software - I mostly agree with you, and also agree that it's probably a good thing. I think it's pretty neat that someone without any coding experience can create their own bespoke tooling to solve a problem. I have caveats and concerns, but that's a topic for another day.
I also agree with the "that's art" part of your comment. I learned to program by reading other people's code, learned to build infrastructure by watching what my peers were doing, and learned to play an instrument by listening to and copying musicians I admired. Heck, I play in a covers band!
The problem is that this isn't just someone being inspired to create their own thing and put their own spin on it, which could be cool.
Even "nice idea, I'm going to do that and see if I can charge for it" isn't really an issue, free market and all that. This is cloning and copying on an automated, industrial scale, apparently sometimes for malicious, criminal purposes too.
I posted more or less the same thing in a comment over on lobste.rs[1] - being able to create your own bespoke software tools, without any developer experience is (mostly) a really cool thing.
This isn't someone being inspired to build something: it's the automated "drive-by" cloning and the scammy, dubious nature of these clones that bothers me, along with the copying of personas & identities to spam them across social media.
You're about the 5th person now in as many days who has recommended Elixir when I mentioned I was building a project in Ruby. I'll definitely have to check it out for my next project (whatever that may be!)
Can you expand on why you found it so appealing, or point me at any "holy crap, this is awesome" things I should look at first?
Not the guy, but I used Rails at my old job for a year and a half, and used it in some personal projects. I looked into Elixir (and Phoenix) during this time, and Phoenix felt like it was designed for more modern websites, whereas RoR was built for older ones and tries to adapt to handle modern ones. When you want to do something more responsive, Elixir feels designed for it, but in Rails it feels like you're doing something unorthodox, or something added as an afterthought. Obviously this isn't quite accurate, but it's the vibe I got.
Elixir is also a very cool language in a lot of ways. I wouldn't go all in on Elixir/Phoenix, but that's because there's not a huge demand for it, at least where I reside. I would 100% consider it for some smaller projects though, if the choice stood between it and Rails, and I wouldn't mind having to get more comfortable with Elixir.
Edit: I haven't used Rails 8, and stopped following the ecosystem a while before it came out, so I'm not sure how this feels nowadays. I *really* enjoy the Rails backend, though; the frontend stuff never quite clicked.
Counterpoint on the "going all-in": we have a 7 year old Elixir/Phoenix project that currently sits at ~100K LOC and I couldn't be happier.
It has been absolutely wonderful building this with Elixir/Phoenix. Obviously any codebase in any language can become a tangled mess, but in 7 years we have never felt the language or framework were in our way.
On the contrary: I think Elixir (and Phoenix) have enabled us to build things in a simple and elegant way that would have taken more code, more infrastructure, and more maintenance in other languages/frameworks.
Not OP, but I made the move from Ruby/Rails to Elixir years ago, so I'll try to answer from my perspective.
Elixir is a functional programming language based on the "BEAM", the Erlang VM. We'll get back to the BEAM in a moment, but first: the functional programming aspect. That definitely took getting used to. I remember being _very_ confused in the first few weeks. Not because of the syntax (Elixir is quite Ruby-esque) but because of the "flow" of code.
However, when it clicked, it was immediately clear how easy it becomes to write elegant and maintainable code. There is no global state in Elixir, and using macros for meta-programming is generally discouraged. That means it becomes very easy to reason about a module/function: some data comes in, a function does something with that data, and some data comes out. If you need to do more things to the data, you chain multiple functions in a "pipe", just like chaining multiple tools on the bash command line.
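As a rough illustration for Ruby folks (my own analogy, not part of the comment above): Elixir's pipe operator `|>` threads a value through a chain of functions, and Ruby's `Object#then` (2.6+) gives a similar feel:

```ruby
# Elixir's pipe, data |> strip() |> split() |> ..., threads a value
# through successive functions. Ruby's Object#then approximates the flow:
result = "  hello elixir  "
         .then { |s| s.strip }                    # "hello elixir"
         .then { |s| s.split }                    # ["hello", "elixir"]
         .then { |words| words.map(&:capitalize) }
         .then { |words| words.join(" ") }
# result == "Hello Elixir"
```

Each step takes the previous step's output as its only input, which is exactly why the flow stays easy to reason about.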
The Phoenix framework applies this concept to the web, and it works very well, because if you think about it: a browser opening a web page is just some data coming in (an HTTP GET request), you do something with that data (render an HTML page, fetch something from your database, ...) and you return the result (in this case as an HTTP response). So the flow of a web request, and your controllers in general, becomes very easy to reason about and understand.
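To make that "data in, data out" shape concrete in Ruby terms (my sketch, not the commenter's): a bare Rack-style handler is literally a function from request data to response data, no framework required:

```ruby
# A web request as "data in, data out": the request environment comes in
# as a hash, and a [status, headers, body] triple comes out. A Phoenix
# controller action has the same shape, just with framework plumbing.
handler = lambda do |env|
  body = "You requested #{env["PATH_INFO"]}"
  [200, { "content-type" => "text/plain" }, [body]]
end

status, _headers, body = handler.call("PATH_INFO" => "/hello")
```

The handler has no hidden state: give it the same request hash, get the same response triple.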
Coming back to the BEAM, the Erlang VM was originally written for large scale (as in, country size) telephony systems by Ericsson. The general idea is that everything in the BEAM is a "process", and the BEAM manages processes and their dependencies/relationships for you. So your database connection pool is actually a bunch of BEAM processes. Multi-threading is built-in and doesn't need any setup or configuration. You don't need Redis for caching, you just have a BEAM process that holds some cache in-memory. A websocket connection between a user and your application gets a separate process. Clustering multiple web servers together is built into the BEAM, so you don't need a complex clustering layer.
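The "you don't need Redis" point is simply that a long-lived in-process structure can act as the cache. A minimal thread-safe sketch of that idea in Ruby (my own illustration; on the BEAM this state would live in a process's own heap rather than behind a lock):

```ruby
# A tiny in-process cache: the value for a key is computed once and
# then served from memory, guarded by a Mutex for thread safety.
class InMemoryCache
  def initialize
    @store = {}
    @lock  = Mutex.new
  end

  def fetch(key)
    # ||= only evaluates the block when the key is missing
    @lock.synchronize { @store[key] ||= yield }
  end
end

cache = InMemoryCache.new
cache.fetch(:answer) { 40 + 2 }              # computes and stores 42
cache.fetch(:answer) { raise "never runs" }  # served from memory
```

No external service, no serialization, no network hop: the cache lives and dies with the application.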
The nice thing is that Elixir and Phoenix abstract most of this away from you (although it's very easy to work with that lower layer if you want to), but you still get all the benefits of the BEAM.
Something I never quite understood: the difference between a BEAM process and an operating system process. The OS has launched one (in theory) BEAM Erlang VM runtime process with N threads; are we saying "process" here to emulate the OS process model internally within the BEAM's own OS process, when really we're talking about threads? Or a mix of threads and other processes? I'm imagining the latter, even across the network, but am I at least on the right track here?
A BEAM process is not an OS thread. The way I understand it, a BEAM process is just a very small memory space with its own heap/stack, and a message system for communication between BEAM processes.
The BEAM itself runs multiple OS threads (it can use all cores of the CPU if so desired), and the BEAM scheduler gives chunks of processing time to each BEAM process.
This gives you parallel processing out of the box, and because of the networking capabilities of the BEAM, also allows you to scale out over multiple machines in a way that's transparent to BEAM processes.
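A rough Ruby analogy (mine, and very loose): think of each BEAM process as a mailbox serviced by its own unit of execution. Real BEAM processes are far cheaper than OS threads and are preemptively scheduled by the VM, but the message-passing shape looks like this:

```ruby
# Two "processes" talking only via messages, with no shared mutable
# state between them -- the core of the BEAM model. Here a Thread plus
# a Queue stands in for a (much lighter) BEAM process and its mailbox.
mailbox = Queue.new
replies = Queue.new

worker = Thread.new do
  loop do
    msg = mailbox.pop          # block until a message arrives
    break if msg == :stop
    replies << msg.upcase      # do some work, reply with a message
  end
end

mailbox << "hello"
mailbox << "world"
mailbox << :stop
worker.join
```

The worker owns its state entirely; the only way in or out is a message, which is what makes the model scale across machines.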
When I first started out with Elixir, it was the overall architecture that first sold me on it. It is remarkably robust: my impression is that you could more or less yank the RAM modules out of the server while it is running, and the last thing to crash would be Elixir. And it is absolutely top of its class when it comes to parallel processing and scalability, not only in how it does it internally, but also in how it abstracts this in a way that just makes sense when you are working with it.
When it comes to web development specifically, what really got me hooked was LiveView from the Phoenix framework. It keeps a persistent WebSocket connection to the client, which it uses to push DOM updates directly. Instead of the usual request/response cycle on the client side, the server holds the state and just pushes the diff to the browser. It just made so much sense.
I am/was a huge Ruby fanboy, and I used Rails a lot and loved it (though had some criticisms around too much "magic"). I made the jump to Elixir/Phoenix around 8 years ago, and have loved it. Phoenix to me basically "fixed" all the things I didn't like about Rails (basically opacity and hard-to-find-where-it's-happening stuff due to someone metaprogramming aggressively). I will admit that I've been a functional programming fan for a very long time too. I always write my ruby code in a functional style unless there's a good reason not to (which is increasingly rare).
I still love and use ruby a ton for scripting, and still reach for Sinatra for super simple server needs, but Phoenix is my go-to stack these days.
I've also found the Elixir community to be amazing in the same ways the Ruby community is/was. It's not all roses; for example, there aren't as many libraries out there. Distribution is also not awesome, so for example I currently use ruby or rust when writing CLIs. But for anything distributed (especially web) Phoenix is amazing.
This is a self plug, but I did a conference talk introducing Ruby veterans to Elixir/Phoenix some years ago. It's probably aged a bit, but should still be pretty accurate. https://www.youtube.com/watch?v=uPWMBDTPMkQ
The original conference talk is here (https://www.youtube.com/watch?v=sSoz7q37KGE), though the made-for-youtube version above is better because it's slightly updated, and I didn't run out of time :-)
The "one-person framework" thing is a big draw. I'm amazed at how productive I was in it, and it's not just at the code level. Even though I've been doing sysadmin/devops/architect work for over 25 years now, it's just so damn nice not to have to think about e.g. standing up an HA PostgreSQL cluster or Redis, and deployment is largely a solved problem.
Author of the article here (hi! Anxiously watching my Grafana stack right now...)
I've only just noticed that on the Rails homepage, and while I acknowledge everyone's chasing that sweet sweet AI hype, I gotta say that's... disappointing[1]. The reason I fell in love with Ruby (and by extension, Rails) is because it enabled me as a human to express myself through code. Not to become a glorified janitor for an LLM.
[1]=Well, I had a stronger response initially but I toned it down a bit for here...
Definitely. It really makes me wish it was getting more attention - and I know I'm late to the party having only picked it back up over a year after Rails 8 was released! It's just such a smooth experience and I haven't found anything like it that compares.
The thing that really impresses me is how it's become a "one person framework"[1] and thanks also to the "batteries included" approach, you can run everything with zero external service dependencies. I have no problem with managing other services like a cache or DB, but it's just so damn nice to be able to focus on the code and not have to context switch!
Author here, thanks for posting this! Any questions, comments or "You're wrong and this is why" let me know :) I do find myself wondering about the future of Rails (and I guess the wider Ruby ecosystem) though. I'm definitely in the "you can prise it from my cold, dead hands" camp but after years of watching them both slide down developer surveys it does make me concerned.
I'm kinda attached to "odd" outsider technologies like the Amiga and BeOS (which does make me wonder if there's a common thread there) so am used to seeing old packages and documentation gradually fade away but that's clearly not something that points to a sustainable future.
There's enough of the core components still active, and after 20-odd years you could just say "it's done" (as I allude to in the Wrap Up), but I do wonder how many here would start a new project on Rails or make a Ruby platform a critical part of a new start-up?
If you'd like to experiment with running your own AS in private address space, connecting to a friendly network of geeks over wireguard tunnels, check out DN42 https://dn42.dev/Home.
It's a great way to explore routing technologies and safely experiment with your own AS, running the same protocols as the "real" Internet, just in private space.
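By way of a sketch (every value below is a placeholder; see the DN42 wiki for the registry process and your actual assigned ranges), a DN42 peering over WireGuard boils down to a config along these lines:

```ini
# Hypothetical WireGuard peering config for DN42 -- all keys, endpoints
# and addresses here are placeholders, not real values.
[Interface]
PrivateKey = <your-wireguard-private-key>
ListenPort = 51820
# Tunnel addresses come from your registered DN42 allocation

[Peer]
PublicKey  = <peer-wireguard-public-key>
Endpoint   = peer.example.net:51820
# DN42 lives in private space: 172.20.0.0/14 for IPv4, fd00::/8 ULA for IPv6
AllowedIPs = 172.20.0.0/14, fd00::/8
```

A BGP daemon such as BIRD then runs over the tunnel to exchange routes with your peer, just as it would between "real" Internet ASes.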
If you do get set up, give me a shout (https://markround.com/dn42), I'd be happy to peer with you if you want to expand beyond the big "autopeer" networks :)
There's a sci-fi story in there about an android manual injecting routes to get around a failed limb.
It's been fun explaining to our cloud engineers that BGP is pretty useful in AWS. Most had never touched it after they got their CCNP/CCIE. My networking cred went up a bit.
That was what I was thinking of (but worded it badly in the middle of my rant!)
If I wanted to intercept all your traffic to any external endpoint without detection, I would have to compromise the exact CA that signed your certificates each time, because it would be a clear sign of concern if e.g. Comodo started issuing certificates for Google. Of course, as long as a CA is in my trust bundle, the traffic could be intercepted; it's just that the CT logs would make it very clear that something bad had happened.
The whole point of the logs is that they're tamper-evident. If you think the certificate you've seen wasn't logged you can show proof. If you think the logs tell you something different from everybody else you can prove that too.
It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering
> We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering
Why would they use the one approach that leaves a verifiable trace? That'd be foolish.
- They can intercept everything in the comfort of Cloudflare's datacenters
- They can "politely" ask Cloudflare, AWS, Google cloud, etc. to send them a copy of the private keys for certificates that have already been issued
- They either have a backdoor, or have the capability to add a backdoor in the hardware that generates those keys in the first place, should more convenient forms of access fail.
> Why would they use the one approach that leaves a verifiable trace?
It is NSA practice to avoid targets knowing for sure what happened. However their colleagues at outfits like Russia's GRU have no compunctions about being seen and yet likewise there's no indication they're tampering either.
Although Cloudflare are huge, a lot of transactions you might be interested in don't go through Cloudflare.
> the hardware that generates those keys in the first place
That's literally any general purpose computer. So this ends up as the usual godhood claim, oh, they're omniscient. Woo, ineffable. No action is appropriate.
Your "I bet they're God" stance is even more naive. They're not God, they've got a finite budget both in financial terms and in terms of what will be tolerated politically.
Of course spooks expend resources to spy on people, but that's an expenditure from their finite budget. If it costs $1 to snoop every HTTP request a US citizen makes in a year, that's inconsequential so an NSA project to trawl every such request gets green lit because why not. If it costs $1000 now there's pressure to cut that, because it'll be hundreds of billions of dollars to snoop every US citizen.
That's why it matters that these logs are tamper-evident. One of the easiest ways to cheaply snoop would be to be able to impersonate any server at your whim, and we see that actually nope, that would be very expensive, so that's not a thing they seem to do.
That's never been my stance because there's a difference between mass surveillance and targeted surveillance. If you understood that then you wouldn't be getting lost and making silly references to "God".
I don't believe that the NSA is omniscient. I believe they have 95% of data on 95% of the population through mass surveillance, and 99.9% of data on 99.9% of people of interest through targeted surveillance.
You think abusing public CAs for mass surveillance is a genius idea, and that its lack of real-world abuse proves that mass surveillance just doesn't happen - full stop.
Unfortunately you fail to consider that if they tried to do this just once, they would be detected immediately, offending CAs would be quickly removed from every OS and browser on the planet, the trust in our digital infrastructure would be eroded, impacting the economy, and it would likely all be in exchange for nothing.
On the other hand if you're trying to target someone then what's the point of using an attack that immediately tips off your target, that requires them to be on a network path that you control, and that's trivially defeated if they simply use a VPN or any sort of application-layer encryption, like Signal? There is none.
The first quote was about them having nearly unlimited power for targeted surveillance and the second was about not having such power for mass surveillance. You keep confusing them.
Just stick to your original claim that I responded to - I addressed it in the second half of my previous comment which you glossed over.
There's no "nearly" in your statement. "a backdoor, or have the capability to add a backdoor in the hardware that generates those keys" is the same God powers claim again. If you now want to water it down with enough caveats it's nothing, this reminds me of how people go from "In lab conditions we can do a timing attack on the electronics from a FIDO key" to imagining that outfits like this just routinely bypass FIDO and so it's worthless.
It's very difficult and expensive to attack our encryption technologies, and so it's correspondingly rare. We are, in fact, winning this particular race.
Encryption actually works not because surveillance is now utterly impossible but because it's expensive. How you went from my pointing out that there's no evidence of this mass surveillance to the idea that I'm claiming these outfits don't conduct targeted surveillance at all I cannot imagine.
> How you went from [...] to the idea that I'm claiming these outfits don't conduct targeted surveillance at all
Again, I didn't. You concluded that the lack of evidence of public CA abuse indicates lack of surveillance, full stop, as if that's the only viable way of conducting surveillance. Here's a reminder:
> It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering
That's a reasonable observation with an unsupported and faulty conclusion. It doesn't even matter whether you meant mass surveillance (preceding context) or targeted surveillance here because the conclusion is bunk either way. I discussed that earlier but you keep glossing over it in favor of these absurd tangents.
There is also Spectranet[1] and clones for the Sinclair Spectrum, which allows for a much richer Internet-connected experience. It can load and boot remote programs from a server which allows you to get quite creative and produce sites like my TNFS server[2]. You can also try it out from an emulated Spectrum in a web browser at https://jsspeccy.markround.com if you don't have the original hardware lying around to see the sort of stuff you can build!
There's also Telnet clients so you can access old-school BBSes, and a variety of interesting "bridges" that grant access to Gopher or even parse websites. Quite amazing to access the modern Internet on an 8-bit machine from the early 80s that originally loaded games from cassette tape :)
Once you have telnet you just get an SDF account and do anything you want with a Unix shell.
And, if you fire up Emacs, you are god. IRC, email, Jabber, Mastodon, gopher, gemini, a calculator, a Lisp environment, playing Z-machine games with Malyon (and running full v5 and v8 games, unlike the Speccy which could only handle v3 ones)...
That's a far cry from creative copying of ideas.