Mice do a hell of a lot more socialization than lizards, and mammalian socialization is more complex per individual (more competition, feinting, theory-of-mind-like strategies) than the eusocial insect strategies of "my body is the swarm, I just happen to be the limb I have direct control over".
The bet is that they perfect a new kind of neural network that is roughly as good at "training" as the human mind, measured in amount learned (or experience gained) per bit of information taken in.
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).
That way, instead of training millions of TPUs on petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) far exceed the depth of judgement, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the space of possible capability levels is so large that the human band is a vanishingly narrow target. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of the average adult human; even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete directly against.
Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grandmasters. It was decades of falling short, followed by quite suddenly leaving humans completely in the dust with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or, failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup-and-ball games with accountability, etc.).
GP said:
> there was a dramatic increase in regulations for banks (of all kinds)
and if I can deposit funds, make "investments" (gambling or not; to me most stock investments are gambling anyway, since they pay out dividends based on quarterly profits) and withdraw money, that is at its core a bank.
However, in these and many other cases they are now effectively unregulated, partly because the executive branch responsible for said regulations has its hands in the pie.
>and if I can deposit funds, make "investments", and withdraw money, that is at its core a bank.
Not by the definitions used in pretty much any country. When a depositor makes investments, that is called a brokerage, not a bank. And these gambling websites are not making investments either.
Also, with the transition to digital currency, banks no longer really serve the purpose of storing and guaranteeing people's access to their money: money is now entries in a digital ledger, and there is no risk of losing it (outside of political risks where someone edits the database). They have been obviated, but the system persists.
>most stock investments are gambling anyway since they pay out in dividends based on quarterly profits
This is not true: publicly listed companies' boards regularly choose to vary dividend amounts based on longer-term cash flow, and you will be sorely disappointed if you expect dividends to land in your account as a simple function of quarterly profits.
And how would you expect a business to work if it did not vary dividends? No business knows the future, and dividends are simply the owners paying themselves the profit. If owners do not know how much profit there will be in the future, how can dividends be a "known" quantity? They have to be based on profits; they literally are the owners distributing the profit amongst themselves.
Any venture in life is gambling, just like people exploring the oceans and traversing continents without knowing what was ahead of them was gambling. But that is a different type of gambling than risking money with no effect on the outcome such as on the aforementioned gambling websites and casinos.
Fair point/question. For many of my HN responses, I first ask ChatGPT for a bit of information about the topic. For the case of GE Cap's wrecking of parent GE with excessive financialisation, I could only loosely remember the details from the 2000s. It was a long time ago! That prompt that I shared gave a reply that was hundreds of words long. Too much for copy/pasta, and too hard for me to summarise briefly. Instead, I decided to share the prompt. It is not my intention to dodge sources. Plus, the newest versions of ChatGPT are pretty good about sharing sources. (Of course, the quality of sources can be debatable.) In short, it was not my intention to be snarky by sharing my ChatGPT prompt.
EDIT
----
Also, the OP was so brief about GE Cap, I realised that most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (ChatGPT prompt) for anyone wishing to learn more.
----
What bothered me most about the original post was the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).
Well, at the very least, one thing I would caution against with "prompt sharing as a way to lead people to information" is that a chatbot is far less deterministic than even a traditional web search (let alone a link to a static source). Any other user putting in the same prompt won't get the explanation you got, and thanks to hallucination they may get a wildly different answer, or a case where this time around the bot misunderstands what was asked.
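To make the non-determinism point concrete, here is a toy sketch (not any real chatbot API; the token distribution is made up for illustration) of why sampled generation diverges: even with an identical prompt and an identical next-token distribution, temperature sampling can pick different continuations on every run.

```python
import random

# Hypothetical next-token distribution for one fixed prompt
# (invented numbers, not from any real model): token -> probability.
next_token_probs = {"bank": 0.5, "brokerage": 0.3, "casino": 0.2}

def sample_token(probs, temperature=1.0, rng=random):
    """Sample one token; temperature reshapes the distribution.
    As temperature -> 0 this approaches always picking the argmax."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(list(probs.keys()), weights=weights)[0]

# Same "prompt", same distribution, two runs: the outputs can differ.
run1 = sample_token(next_token_probs)
run2 = sample_token(next_token_probs)

# A near-zero temperature makes the choice effectively deterministic,
# which is roughly what "sharing a link" gives you that "sharing a
# prompt" does not.
greedy = sample_token(next_token_probs, temperature=0.01)
```

Real chat services run with a nonzero temperature (plus model updates, system prompts, and per-user context), so two people pasting the same prompt are sampling from the high-variance path, not the greedy one.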
Yes, because Google has been giving crap results since long before ChatGPT was a thing, and it has only gotten worse. Before AI it was "let me Google that on Reddit for you".
Very tangentially related comment, but I remember seeing a post on a local Facebook clone with a prompt to throw at Claude to "make a custom YouTube downloader for MacOS", so the general "Here is a prompt to feed an LLM" is somewhat real for some, apparently
It's a good use case, really: it'll tailor the explanation according to what it knows about your background, whereas if you 'just Google it' you'll get the same maybe-appropriate results as anyone else.
Google search has gone way downhill since they nerfed it and then did nothing to stem the flood of AI-slop SEO websites. So unfortunately, instead of sharing links, everyone now gets sent to the inefficient text generator that hallucinates nonsense and colors its summary of a topic by whoever trained it and by your most recent chat history.
I haven't run a Google search in two years. Your comment just made me realize that. Doing a Google search is like trying to watch cable after being on YouTube for years.
I use different search engines than Google. They have similar issues, but some are better at ignoring the slop.
I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.
Funny thing about the cable analogy. Ever since all the streaming providers started cranking up prices while still forcing users to watch hundreds of ads, my family has been buying second-hand DVDs. So we have regressed from streaming to right after cable. I know one family that went back to cable; they still watch YouTube here and there, but they got sick of it.
You've been able to get Intel X520 NICs [0], with transceivers included for ~40USD on Newegg for a long time. This is a little more than double the price of Newegg's cheapest single-port 10/100/1000 copper card, but even the cheapest available such card is three times your "chicken and egg"-solving price point.
I suspect the combination of the absence of cheap-o all-in-one AP/router combo boxes with any SFP+ cages and fiber cabling's reputation for being extremely fragile has much more to do with its scarcity at the extremely low end of networking gear than anything else.
But +$15 and an extra wall outlet per endpoint is still an inconvenience, and if a two-port device with its own power supply can be made for $15, then where is the PCIe/USB-to-fibre adapter for <$10?
Yep. Good NICs last for approximately forever, life's way too short to deal with maybe-flaky NICs, and the price difference between the Amazon Special and something that's going to be reliable is -what- two big boxes of Cheerios? Two dozen eggs? Not. Worth it.
> But it's not competing with those, it's competing with the copper port which is already built into most devices.
Correct! That's part of why I was so very surprised to see you suggesting that extremely cheap PCI Express and USB adapters would "solve the chicken and egg problem".
> The point being you need some cheap way to plug in existing copper devices if you run fibre to the endpoints.
That's called a multi-port switch. Netgear sells five-port gigabit ones for like 20 USD. Switches that have two SFP+ cages and eight copper gigabit ports [0] are six times the price of a cheap-o Netgear switch, but are something that's going to last at least a decade. It's also pretty uncommon to find SOHO switches that have SFP+ cages and don't have at least one fixed copper port.
> This plus $5 for a transceiver is pretty close at $15:
If you're connecting a single device, why the hell would you use that when you could slap a copper SFP or SFP+ module in the switch's cage and run a cable? If you're connecting multiple devices, then either install multiple copper modules and run multiple cables, run multiple copper cables from fixed copper ports on the switch, or put a switch where the existing copper devices are.
> If you're connecting a single device, why the hell would you use that when you could slap a copper SFP or SFP+ module in the switch's cage and run a cable?
The problem to be solved is that you want to be able to put fibre inside the walls of the building instead of copper. Running a new cable to the switch closet is the thing to be prevented.
But if the wall jacks are fibre then you need some economical way of hooking them up to every printer and single-purpose device with a network port. If you have to buy another $100+ switch just to get from fibre to copper even when there is only one device near that jack, people aren't going to go for that.
> The problem to be solved is that you want to be able to put fibre inside the walls of the building instead of copper. Running a new cable to the switch closet is the thing to be prevented.
...why would you ever not run copper alongside fiber for new construction? If nothing else, PoE is extremely useful, and nothing says that you actually have to connect all of that copper cable to your switch... you can connect it as-needed. I also can't imagine that most refits only have room for exactly one cable in their conduit. [0]
I'd expect to hear the sort of plan you propose from a PHB or Highly Paid Consultant, not someone who actually has had to use that sort of configuration.
Regardless, the scenario you're now proposing is one where no one other than a PHB would use that Amazon Special that you linked for media conversion.
[0] If there's no conduit and cables are all flopping around in the wall, then there's even more room for cabling.
The original problem was that everyone runs copper instead of fibre because there are too many existing devices that only have copper. Running both everywhere would require you to buy and terminate twice as much cable as you expect to use, which leads people to running only copper again.
If you chose PCs that come with fibre ethernet to begin with, or put quality cards in the ones that matter, then you could make fibre the default instead of copper. That works until you have a number of devices like printers or VoIP phones or Raspberry Pis that have no need for 10Gbps or even 1Gbps connectivity; they just need a way to be plugged in at all. If you need to add $100+ in conversion expense to each of those devices, you're back to using copper by default.
> Running both everywhere would require you to buy and terminate twice as much cable as you expect to use...
Ah. Let's play with that logic a bit:
"Running Ethernet cabling everywhere would require you to buy and terminate far more than twice as much cable as you expect to use. Just run power cables and wire up one extra outlet for a HomePlug in each room."
Yeah, that checks out. "Powerline Ethernet" devices are actually pretty good these days, and are right around your magic price range... Amazon has them at ~13 USD per unit. [0] Why would anyone bother running a second cable to each room? Thirteen bucks per room has to be way less than the materials and labor cost for the cable run. Doing anything else is, like, really stupid. Don't you agree?
Anyway. You expect to use the cabling that you plan to install... plus some extra for screwups, man.
> You can run composite cable if you desire copper, fiber and power.
Oooh. Cool.
By "power" do you mean 120/240VAC, or do you mean much lower voltage DC? I've found some Belden cabling that I think provides mains power and Ethernet, and I've found fiber cabling that I guess carries lower voltage DC, but am having a tough time finding a cable that combines fiber and copper data with mains power. Do you have an example of such a cable handy?
(Full disclosure: I'm refusing to spend more than like five minutes on the search... so I might have been able to dig up examples of such a cable.)
Which is why people run only copper: it costs less than running multiple types of cable everywhere when most drops serve only one device, and then they pull fibre through using the existing copper cable in the rare instances where they find a need for 40Gbps or more.
But then the copper gets used for 10Gbps connections instead of fibre because it's what's already in the building.
No, JavaScript caught on because at the time it was the only game in town for writing web front-ends, and then people wanted it to run on the server side so that they could share code and training between front end and back end.
It's not enough to just be first. It would have been replaced by now if it wasn't fit for purpose. Otherwise we might as well not bother to critique anything.
I don't think the kind of exponential you are looking for (and especially not "the singularity") can manifest until the product (AI) is at a point where it can meaningfully take over the task of improving itself directly.
I would say we have certainly seen a bottleneck in the ability of LLMs to handle any kind of broad abstraction or master the architecture of a codebase. That is the crux of why "vibe coding" is as trashy an approach as it is: the LLM can't cut the mustard on actual software design.
So they have nothing close to the deep understanding required to improve their own substrate.
They can be exceptionally good at understanding what humans mean when they say things, far better than poking keywords into a Google search, for example, especially when said keywords are noisy and overloaded. And they can be a very good encyclopedic store of concepts (the more general the idea, the less likely they are to hallucinate it, while the details and citations are far more frequently made up on the spot). But they suck at volition and at state representation (thanks to those limited context windows), which cuts them off at the knees whenever they have to tenaciously search for anything, including creative problem solving.
We do have AI models which can get somewhere on theorem proving or protein folding or high level competitive game playing, but those only sometimes even glancingly involve LLMs, and are primarily custom-built amalgams of different kinds of neural networks each trained on specific tasks in their fields.
None of that can directly move the needle on actual AI research yet.