Hacker News | zackmorris's comments

It's partially because the internet only grants us free storage (noun), not free compute (verb).

Which is fundamental to so many XY problems, including why cloud services are so byzantine instead of just providing isolated secure shells with full root access within them. And why distrust is a growing force in the world instead of, say, unconditional love.

I always dreamed of winning the internet lottery so that I could help dismantle the systems of control which currently dominate our lives. Which starts with challenging paradigms from first principles. That looks like asking why we only have multicore computing in the cloud and not on our desktops (which could be used to build our own cloud servers).

When we're missing an abstraction layer, that creates injustice and a power drain from the many to the few. Some examples:

- CPU -> multicore MIMD (missing) -> GPU (based on the subset SIMD instead of MIMD upon which graphics libraries could be built)

- UDP -> connectionless reliable stream (missing) -> TCP (should have been a layer above UDP, not beside it)

- UDP/TCP -> P2P (NAT and other limitations block this and were inherited by IPv6 as generational trauma) -> WebRTC (redundant if we had P2P that "just works")

- internet connection -> symmetric upload/download speed (blocked for legal reasons under the guise of overselling to reduce cost) -> self-hosted web servers (rare due to antitrust issues stemming from said legal reasons)

- internet connection -> multicast (missing due to suppression of content-addressable memory/hash trees/DHTs) -> self-hosted streaming (negates the need for regions and edge caching)

I had high hopes for Google and even Tesla (for disrupting the physical world). But instead of open standards, they gave us proprietary vendor lock-in: Google Workspace (formerly G Suite) and NACS instead of J1772 (better yet both). Because of their refusal to interoperate at the lowest levels, there is little hope that they will do the real work of solving the hard problems at the highest levels.

For example, I just heard that China has built thousands of battery swap stations to provide effectively instant charging for electric vehicles, whereas that's something that Tesla can't accomplish because they chose to build Supercharger stations instead.

Once we begin to see the world this way, it's impossible to unsee it. It calls into question the fundamentals (like scarcity) which capitalism is based upon, and even the concept of profit itself.

From a spiritual perspective, I believe that this understanding is what blocks me from using my talents to use the system for personal gain to win the internet lottery. The people who own the systems of control don't have this understanding, and even view its basis in empathy as a liability. So we sacrifice the good of the many for the good of the few and call that progress.


Friendly reminder that antitrust enforcement and deregulation are incompatible


Hacker Mews


Looksmaxxing really has gone mainstream huh


Thought it was all the Rust catgirls.


Sounds like a lovely co-op building, or perhaps a retirement community for aging hackers.


In his 1999 book "Entering Space: Creating a Spacefaring Civilization", Robert Zubrin mentioned checking the math for Bussard ramjets with Dana Andrews in their 1988 joint paper "Magnetic Sails and Interstellar Travel", and found that they aren't capable of reaching more than a few percent of the speed of light before drag overcomes propulsion:

https://www.goodreads.com/book/show/54728.Entering_Space

https://www.researchgate.net/publication/236447908_Magnetic_...

That's not to say that they don't work. But they'll probably be used primarily for braking to enter orbit around destination stars.

Probably the only way to reach a high fraction of the speed of light is to construct a giant laser to beam energy to a spaceship (which uses a reflector to receive light pressure momentum) and leave it behind orbiting the origin star. That's the premise of the Breakthrough Starshot project, which is ambitious with today's technology. But with self-replicating makerbots, building one may not be a big deal.

https://en.wikipedia.org/wiki/Breakthrough_Starshot

Unfortunately the force of light pressure (by F=2P/c for full reflection) is only about 2/3 of a kg or 1.5 lbs per GW, so a TW or greater would be needed for practical thrust. However, light pressure becomes the most efficient form of propulsion above about 25% to 50% c, if fusion or antimatter is used to create a gamma ray rocket.
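As a sanity check on those numbers (not from the original comment, just the F = 2P/c formula for a perfectly reflecting sail, in SI units):

```python
# Radiation-pressure force on a perfectly reflecting sail: F = 2P/c
c = 299_792_458.0        # speed of light, m/s
g = 9.80665              # standard gravity, m/s^2

P = 1e9                  # 1 GW of beamed power
F = 2 * P / c            # force in newtons
print(F)                 # ~6.67 N
print(F / g)             # ~0.68 kgf, i.e. roughly 1.5 lbf per GW
```

So a TW-class beam yields only on the order of a thousand pounds of thrust, which is why the sail has to be extremely light.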

Personally, I find it unlikely that aliens use these methods. I think that they probably worked out how to build neutrino lasers, since they don't burn up objects in their wake, perhaps by scaling superradiant Bose Einstein condensates:

https://arxiv.org/abs/2412.11765

In embarrassingly oversimplified layman's terms, I think that works by recruiting the macro-scale quantum state of the condensate (an increased cross-section, measured in barns) to overcome the short interaction distance of the weak force. Or by cooling the atoms to such an extent that they don't have the energy to hold themselves apart anymore, which accelerates their decay. I'm sure my explanations are wrong somehow.

Soon we may be able to investigate stuff like gravitational waves and how the fabric of spacetime may be able to rebound above flat to create tiny ripples that allow mass energy to escape black holes, for example. I know that current theories don't state it quite that way, but if we consider stuff like the no-hair theorem and black box thought experiments, it's hard to see how Hawking radiation could exist without the wavelike nature of spacetime. We can even experiment with it on a relatively large scale by measuring the Casimir force. If we can rebound space, then we can play with stuff like negative energy and Alcubierre drives.

I looked up a Dyson sphere made from Mercury and it would be 1.5 mm thick, so aliens almost certainly aren't building them. But Dyson rings and swarms are probably a thing.

Some people in the tinfoil hat crowd think that UFOs can move 4th dimensionally and just travel orthogonally to our space and appear somewhere else. Theoretically, that might only require the energy difference (delta v) between planets. That hinges on if gravity spans higher dimensions and also touches on the multiverse. I'm way outside my wheelhouse so I'll stop blabbering about that.

In all honesty though, I question whether aliens travel. I think civilizations ascend about 10 years after they implement AI, or annihilate themselves in a Great Filter, their equivalent of WWIII. We're already staring the secrets of the universe in the face with automated theorem provers. And FUD around that and other accelerating tech drives people to become Luddites and elect amoral people who would gladly see the world burn for profit. So things could go either way really.

In my heart, I feel like we have a childlike understanding of consciousness. It probably transcends 4D spacetime. It's not hard to imagine aliens scaling what was learned from the CIA Gateway Program and doing stuff like FTL message passing via remote viewing. At that point FTL teleportation comes into the realm of possibility, sort of like in Dune.

If so, then aliens are probably everywhere, know about us, and maybe had a hand in our evolution. They probably live in what we think of as a Matrix, where years could go by for every second of our time. Another interpretation might be that they're able to return to source consciousness and exist as one, rather than in separation like we do. Maybe they periodically choose to reincarnate in us to study what transitioning to a spacefaring civilization looks like.

I probably shouldn't have bothered writing all of this, but it's Sunday, and I also really don't want to do my taxes.


> …and I also really don't want to do my taxes.

What a coincidence! April 15th really is the worst day of the year, isn’t it?


Just wanted to mention genetic algorithms (GAs), popularized by John Koza and others.

The post uses a 4-instruction program as an example, which has about 256^4 or 4 billion combinations. Most interesting programs are 10, 100, or 1000+ instructions long, which is too large a search space to explore by brute force.

So GAs use a number of tricks to explore the search space via hill climbing without getting stuck at local optima. They do that by treating the search space as a bit string, then randomly flipping bits (mutation) or swapping substrings between two parents (crossover, the analog of sexual reproduction) to hop to related hills in the search space. Then the bit string is converted back to instructions and tested to see if it performs the desired algorithm.

The bit string usually encodes the tree form of a Lisp program to minimize syntax. We can think of it as if every token were encoded in bits (like Huffman encoding, inspired by Morse code). For example, the tokens in a (+ 1 2) expression might have the encodings 00, 01 and 10, so the bit string would be 000110, and with three 2-bit slots we can quickly explore all 2^6 = 64 raw bit strings (only a fraction of which decode to valid programs).

Note that many of the bit strings like (+ + 1) or (2 1 +) don't run. So guard rails can be added to reduce the search space, for example by breaking out early when bit strings throw a compiler exception, or using SAT solvers or caching to weed out nonviable bit strings.
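As a toy illustration of the machinery described above (token encoding, mutation, crossover, and a guard rail that rejects malformed programs), here is a minimal sketch. The tiny 3-token language and all names are invented for the example; this is nowhere near Koza's actual system:

```python
import random

TOKENS = ['+', '1', '2']                     # 2-bit codes: 00, 01, 10

def decode(bits):
    """Turn a 6-bit genome into a token list like ['+', '1', '2']."""
    toks = []
    for i in range(0, len(bits), 2):
        idx = int(bits[i:i+2], 2)
        if idx >= len(TOKENS):               # code 11 is unused -> invalid
            return None
        toks.append(TOKENS[idx])
    return toks

def fitness(bits, target=3):
    toks = decode(bits)
    # Guard rail: reject genomes that aren't a well-formed (op a b)
    # program, like (+ + 1) or (2 1 +)
    if toks is None or toks[0] != '+' or any(t == '+' for t in toks[1:]):
        return float('-inf')
    value = int(toks[1]) + int(toks[2])
    return -abs(value - target)              # 0 is a perfect score

def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i+1:]

def crossover(a, b):
    i = random.randrange(1, len(a))
    return a[:i] + b[i:]

random.seed(0)
pop = [''.join(random.choice('01') for _ in range(6)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                     # keep the fittest half
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(10)]

best = max(pop, key=fitness)
print(best, decode(best))                    # e.g. '000110' -> ['+', '1', '2']
```

A real GA would use tournament selection, variable-length genomes, and a far richer instruction set, but the mutate/crossover/guard-rail loop is the same shape.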

We could build a superoptimizer with GAs, then transpile between MOS 6502 assembly and Lisp (or even run the MOS 6502 assembly directly in a sandbox) and not have to know anything about how the processor works. To me, this is the real beauty of GAs, because they allow us to solve problems without training, at the cost of efficiency.

I don't think that LLMs transpile to Lisp when they're designing algorithms. So it's interesting that they can achieve high complexity and high efficiency via training, without even having verification built in. Although LLMs with billions of parameters, trained on trillions of tokens and running on teraflops GPUs with GBs of memory, may or may not be viewed as "efficient".

I suspect that someday GAs may be incorporated into backpropagation to drastically reduce learning time by finding close approximations to the matrix math of gradient descent. GAs were just starting to be used to pseudorandomly produce the initial weights of neural nets around 2000 when I first learned about them.

Also quantum computing (QC) could perform certain matrix math in a fraction of the time, or even preemptively filter out bit strings which aren't runnable. I suspect that AI will get an efficiency boost around 2030 when QC goes mainstream. Which will probably lead us to a final candidate learning algorithm that explains how quantum uncertainty and emergent behavior allow a physical mind to tune into consciousness and feel self-aware, but I digress.

Because modern compilers don't do any of this, and we aren't accustomed to multicore computing, from a sheer number-of-transistors perspective we're only getting a tiny fraction of the computing power that we might otherwise have if we designed chips from scratch using modern techniques. This is why I often say that computers today run thousands of times slower than they should for their transistor budgets.


Andreessen's criticism of introspection, and Musk's criticism of empathy, are projections of their fear of being disconnected from spirit (primarily the notion that we're all one).

Some of us eventually find ourselves in situations that defy logical explanation. I've witnessed my own thoughts and plans rippling out into the world and causing external events to unfold. To the point that now, I'm not sure that someone could present evidence to me to prove that our inner and outer worlds aren't connected. It's almost as hard a problem as science trying to solve how consciousness works, which is why it has nothing to say about it and leaves it to theologians.

The closest metaphysical explanation I have found is that consciousness exists as a field that transcends 4D spacetime, so our thoughts shift our awareness into the physical reality of the multiverse that supports its existence. Where one 4D reality is deterministic without free will, 5D reality is stochastic and may only exist because of free will. And this happens for everyone at all times, so that our individuality can be thought of as drops condensed out of the same ocean of consciousness. One spirit fragmented into countless vantage points to subjectively experience reality in separation so as to not be alone.

Meaning that one soul hoarding wealth likely increases its own suffering in its next life.

That realization is at odds with stuff like western religion and capitalism, so the wealthy reject it to protect their ego. Without knowing that (or denying that) ego death can be a crucial part of the ascension process.

My great frustration with this is the power imbalance.

Most of us spend the entirety of our lives treading water, sacrificing some part of our prosperity for others. We have trouble stepping back from that and accepting the level of risk and/or ruthlessness required to take from others to give to ourselves. We lose financially due to our own altruism, or more accurately, due to amoral people taking advantage of that altruism.

Meanwhile those people win financially and pull up the ladder behind them. They have countless ways, means and opportunities to reduce suffering for others, but choose not to.

The embrace or rejection of altruism shouldn't be what determines financial security, but that's the reality we find ourselves in. Nobility becomes its opposite.

That's what concepts like taxing the rich are about. In late-stage capitalism, a small number of financial elites eventually rig the game so that others can't win, or arguably even play.

It's the economic expression of the paradox of tolerance.

So the question is, how much more of this are we willing to tolerate before the elites reach the endgame and see the world burn?


Note that we had the technology to do this affordably as of about 2008, when lithium iron phosphate (LiFePO4) batteries became widely available for about $10-12 each (I had to look that up). They were definitely available at low cost ($6) by 2018:

https://web.archive.org/web/20180201203013/https://www.18650...

Looks like sodium-ion (Na-ion) 18650 batteries at 1.5 Ah have about half the capacity of LiFePO4 18650s at 3.5 Ah, and are about twice the price, so let's call them 4x the price per unit of energy stored:

https://www.radicalrc.com/item/Sodium-Na-Ion-Battery-18650-3...

https://ogsolarstore.com/products/sodium-ion-cells-3-1v-batt...

https://coulombsolutions.com/product/12v-sodium-ion-battery/

Battery prices halve about every 4 years:

https://pv-magazine-usa.com/2024/03/06/battery-prices-collap...

So we can project that Na-ion batteries will have the same price per kWh as today's LiFePO4 in perhaps 8 years, or around 2034, if not sooner. That will negate the lithium supply chain bottleneck so that we're limited to ordinary shortages (like copper).
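The parity math is just two halvings of a 4x gap, which a quick back-of-envelope check confirms (the 4x price ratio and 4-year halving time are the rough estimates above):

```python
import math

price_ratio = 4.0        # Na-ion vs LiFePO4, per kWh stored (rough estimate)
halving_years = 4.0      # observed battery price halving time

# Closing a 4x gap requires log2(4) = 2 halvings
years_to_parity = halving_years * math.log2(price_ratio)
print(years_to_parity)   # 8.0 -> roughly 2034 if counting from ~2026
```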

500 W bifacial solar panels are available for $100 each in bulk, so there's no need to analyze them since they're no longer the bottleneck. A typical home uses 24 kWh/day, so 15-20 panels at a typical 4.5 peak sun hours per day (about 4.5 kWh/m2/day of insolation) provide enough power to charge batteries and still have some energy left over, at a cost of $1500-2000. Installation labor, electricians/licensing, inverters and batteries now dominate cost.
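A rough sizing check on those panel counts (the 75% derate factor is my own assumption covering inverter, charging and weather losses; everything else comes from the figures above):

```python
# Rough off-grid sizing check; assumed values are marked
daily_load_kwh = 24.0        # typical US home
panel_watts = 500.0
sun_hours = 4.5              # peak sun hours/day (= 4.5 kWh/m^2/day insolation)
derate = 0.75                # assumed: inverter, charging, soiling, clouds

per_panel_kwh = panel_watts / 1000.0 * sun_hours * derate   # ~1.69 kWh/day
panels_needed = daily_load_kwh / per_panel_kwh
print(round(panels_needed))  # ~14 panels; 15-20 leaves headroom for bad days
```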

The sodium ion battery market is about $1 billion annually, vs $100 billion for lithium ion. It took lithium about 15-20 years to grow that much. So whoever gets in now could see a 1-2 orders of magnitude return over perhaps 8-15 years. I almost can't think of a better investment outside of AI.

-

I've been watching this stuff since the 1980s and I can tell you that every renewable energy breakthrough coincides with a geopolitical instability. For the $8 trillion the US spent on Middle East wars since 9/11, we could have had a moonshot for solar+batteries and be at 90+% coverage today. Not counting the other $12 trillion the US spent on the Cold War. Fully $20 trillion of our ~$40 trillion US national debt went to funding endless war, with the other $20 trillion lost on trickle-down tax cuts for the ultra wealthy.

We can't do anything about that stuff in the short term. But we can move towards off-grid living and a distributed means of production model where AI, 3D printing, permaculture, and other alternative tech negate the need for investment capital.

In the K-shaped economy, the "if you can't beat 'em, join 'em" phrase might more accurately be stated "if you can't join 'em, beat 'em".


This is great!

Forkrun is one of a vanishingly small number of projects written since the 1990s that get real work done as far as multicore computing goes.

I'm not super-familiar with NUMA, but hopefully its concepts might be applicable to other architectures. I noticed that you mentioned things like atomic add in the readme, so that gives me confidence that you really understand this stuff at a deep level.

My use case might eventually be to write a self-parallelizing programming language where higher-order methods run as isolated processes. Everything would be const by default to make imperative code available in a functional runtime. Then the compiler could turn loops and conditionals into higher-order methods since there are no side effects. Any mutability could be provided by monads enforcing the imperative shell, functional core pattern so that we could track state changes and enumerate all exceptional cases.

Basically we could write JavaScript/C-style code having MATLAB-style matrix operators that runs thousands of times faster than current languages, without the friction/limitations of shaders or the cognitive overhead of OpenCL/CUDA.
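The core idea, that a side-effect-free loop is mechanically equivalent to a higher-order call the runtime can parallelize, can be sketched in plain Python today (the hypothetical language would perform this rewrite automatically in the compiler):

```python
from multiprocessing import Pool

def work(x):
    # A pure function: no shared state, so it can run on any core
    return x * x + 1

if __name__ == '__main__':
    xs = list(range(1000))

    # Imperative form: a loop whose only effect is building the result
    out_loop = []
    for x in xs:
        out_loop.append(work(x))

    # The same computation as a higher-order method, parallelized for free
    with Pool() as pool:
        out_par = pool.map(work, xs)

    assert out_loop == out_par
```

Because `work` is pure, the rewrite from the loop to `pool.map` cannot change the result, which is exactly the guarantee const-by-default semantics would give the compiler.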

-

I feel that pretty much all modern computer architectures are designed incorrectly, which I've ranted about countless times on HN. The issue is that real workloads mostly wait for memory, since the CPU can run hundreds of times faster than load/store, especially for cache and branch prediction misses. So fabs invested billions of dollars into cache and branch prediction (that was the incorrect part).

They should have invested in multicore with local memories acting together as a content-addressable memory. Then fork with copy-on-write would have provided parallelism for free.

Instead, CPU progress (and arguably Moore's law itself) ended around 2007 with the arrival of the iPhone and Android, which sent R&D money to low-cost and low-power embedded chips. So the world was forced to jump on the GPU bandwagon, doubling down endlessly on SIMD instead of giving us MIMD.

Leaving us with what we have today: a dumpster fire of incompatible paradigms like OpenGL, Direct3D, Vulkan, Metal, TPUs, etc.

When we could have had transputers with unlimited compute and memory, scaling linearly with cost, that could run 3D and AI libraries as abstraction layers. Sadly that's only available in cloud computing currently.

We just got lucky that neural nets can run on GPUs. It would have been better to have access to the dozen or so other machine learning algorithms, especially genetic algorithms (which run poorly on GPUs).

Maybe your work can help bridge that gap.


I appreciate the high praise re: forkrun.

forkrun's NUMA approach is largely based on the idea that, as you said, "real workloads mostly wait for memory". The waiting gets worse under NUMA because accessing memory from a different chiplet or a different socket means accessing data that is physically farther from the CPU, and thus has higher latency. forkrun takes a somewhat unusual approach to dealing with this: instead of taking data in, putting it somewhere, and reshuffling it around based on demand, it places incoming data directly on the correct NUMA node's memory, creating a NUMA-striped global data memfd. On NUMA systems, forkrun duplicates most of its machinery (indexer + scanner + worker pool) per node, and each node's machinery is only offered chunks from the global data memfd that are already in node-local memory.

This directly aims to solve (or at least reduce the effect from) "CPUs waiting for memory" on NUMA systems, where the wait (if memory has to cross sockets) can be substantial.


I don't know why you or your parent commenter got downvoted, but I use that as evidence that the end is very near.

With the current geopolitical climate and the arrival of AI, I'm predicting a sharp economic downturn at the end of the year the likes of which we haven't seen in a century.

I mean the Housing Bubble popping and the Dot Bomb were bad, but the US national debt was so much lower then. Income inequality was lower. Student loan debt was lower. Healthcare was more affordable. Homes were more affordable. Food was more affordable. We had (some) faith in our electoral process.

When the cheap capital runs out, when the value of the dollar collapses due to unforced error, when the overseas investment dries up, when billionaires panic and yank their investment in AI (leaving us with a duopoly like always), when the employment rate peaks never to return, when companies stop hiring for the foreseeable future, when people stop visiting websites or buying software, when we abandon liberal arts for the trades in Service Economy 2.0, when hospitals and universities close, when farms go bankrupt, when interest on the US national debt consumes its social safety net, when we sell our public lands for pennies on the dollar, when nobody is held accountable...

That's when we the people will remember who we are. Somehow, like every other time before, we'll pull ourselves up by our bootstraps from nothing. Without time, money or resources, we'll come together and find a way to rebuild. We won't even tax the rich or incite violence against them, we'll simply manifest the abundant reality that's been denied to us by them for so long.

That looks like organizing. Unions. Cooperatives. Mutual aid networks. Renewable energy. Permaculture. Voluntary employment and clock-in. Credit unions and crowdfunding. Automation. Distributed means of production. Fair trade. Class action lawsuits. Boycotts. Voting against incumbents. Solarpunk.

We'll transcend competition and see the matrix for the bill of goods that it is. Rather than trying to get the money and power back in futility, we'll make them irrelevant.

It's time to start thinking about selling those stocks. Divesting from the blood money of unearned income that comes from exploitation, suffering and war (even though they don't tell us that). Steering clear of prediction markets. Dropping the crypto.

We know they won't. But that's why they'll stay insulated from knowing what stuff they're made of, holding out as long as possible, lonely and alone. And the fun part is, they'll get to find out anyway when the music stops.


Mine was downvoted because unionizing is the last thing people on a startup forum want to hear.


This is great!

I remember having this debate back in the late 1990s when I was in college for my electrical and computer engineering (ECE) degree. At the time, as students, we didn't really know about nuances like delta cycles, so preferring Verilog or VHDL came down to a matter of personal taste.

Knowing what I know now, I'm glad that they taught us VHDL. Also that's one of the reasons that it's worth trying to get into the best college that you can, because as long as you're learning stuff, you might as well learn the most rigorous way of doing it.

---

It's these sorts of nuances that make me skeptical of casual languages like Ruby and even PHP (my favorite despite its countless warts). I wish that we had this level of insight back during the PHP 4 to 5 transition, because so many easily avoidable mistakes were made in a design-by-committee fashion.

For example, PHP classes don't use copy-on-write like arrays do, so we missed out on avoiding a whole host of footguns, as well as being able to use [] or -> interchangeably like in JavaScript. While we're at it, the "." operator for string concatenation was a tragic choice (they should have used & or .. IMHO), because then we could have used "." for the object operator instead of -> (borrowed from C++), but I digress.

I often dream of writing a new language someday at the intersection of all of these lessons learned, so that we could write imperative-looking code that runs in a functional runtime. It would mostly encourage using higher-order methods strung together, but have a smart enough optimizer that it can handle loops and conditional logic by converting them to higher-order methods internally (since pure code has no side effects). Basically the intermediate code (i-code) would be a tree representation in the same form as Lisp or a spreadsheet, that could be transpiled to all of these other languages. But with special treatment of mutability (monadic behavior). The code would be pure-functional but suspend to read/write outside state in order to enforce the functional core, imperative shell pattern.
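As a toy sketch of that intermediate representation, here is a Lisp-like i-code tree in tuple form, where a pure loop becomes a 'map' node that the runtime is free to parallelize or synthesize in hardware. The node names and the tiny evaluator are invented purely for illustration, not a real compiler:

```python
# Toy i-code: nested tuples in the same shape as a Lisp program.
# A pure loop like "for x in xs: out.append(2*x)" lowers to ('map', ...)

def evaluate(node, env):
    if not isinstance(node, tuple):                  # literal or variable
        return env.get(node, node)
    op, *args = node
    if op == 'map':                                  # ('map', fn, seq)
        fn, seq = args
        return [evaluate((fn, x), env) for x in evaluate(seq, env)]
    if op == '+':                                    # ('+', a, b, ...)
        return sum(evaluate(a, env) for a in args)
    if op == 'double':                               # a user-defined pure fn
        return 2 * evaluate(args[0], env)
    raise ValueError(f'unknown op {op!r}')

# What the compiler would emit for the imperative loop above
tree = ('map', 'double', 'xs')
print(evaluate(tree, {'xs': [1, 2, 3]}))             # [2, 4, 6]
```

Since every node is pure, the same tree can be interpreted, transpiled, parallelized across cores, or laid out as hardware without changing its meaning.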

A language like that might let us write business logic that's automatically parallelized and could be synthesized in hardware unmodified. It would tend to execute many thousands of times faster than anything today on modifiable hardware like an FPGA. I'd actually prefer to run it on a transputer, but those fell out of fashion decades ago after monopoly forces took over.


maybe clash [1] would be interesting to you then?

[1] https://clash-lang.org/

