Why the singularity may never arrive (maximise.dk)
29 points by mixmax on Nov 26, 2008 | hide | past | favorite | 49 comments


Kurzweil deals with this "S-curve" phenomenon extensively in his books. He shows that each individual computing technology follows an S curve, but the exponential price-performance of computation continues through a smooth transition to the next method. For example, as vacuum tube computing saturated, transistor computing picked up. This is not a coincidence; as one technology reaches saturation, resources flow to the bottleneck, fostering the next technique in the chain.
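The cascade-of-S-curves idea is easy to sketch numerically. Below is a toy model, not Kurzweil's actual data: the number of technologies, their midpoints, and the 10x ceilings are made-up assumptions, chosen only to show that a sum of saturating curves can look exponential from the outside.

```python
import math

def logistic(t, midpoint, ceiling):
    """One technology's S-curve: slow start, rapid rise, saturation."""
    return ceiling / (1.0 + math.exp(-(t - midpoint)))

def price_performance(t):
    # Hypothetical cascade: five successive technologies, each saturating
    # at ~10x its predecessor's ceiling, with evenly spaced midpoints.
    return sum(logistic(t, midpoint=10 * k, ceiling=10 ** k)
               for k in range(1, 6))

# Sampled every 10 time units, the envelope grows by roughly a constant
# factor per interval - i.e. it looks exponential even though every
# individual curve flattens out.
samples = [price_performance(t) for t in (10, 20, 30, 40)]
```

The point of the sketch is the shape, not the numbers: each `logistic` term saturates, yet the successive `samples` keep multiplying by about the same factor until the last curve in the cascade runs out.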

Intel hit just such a wall a few years ago with the size of its gates. Many had predicted, based on this bottleneck, that Moore's Law might end some time around 2003-2005. Intel's CTO Justin Rattner explained at this year's Singularity Summit that there are smart people at Intel, and they found a way around the problem. In fact, the transition was so smooth that few people even noticed.

And so it goes. The computational capacity of matter is effectively infinite on the scale of current technology, so there's no reason to expect computation to saturate for a long time. Barring a global cataclysm, it's hard to see anything standing in the way of a technological singularity some time in the next 50-100 years.


Actually, Intel did not really "find a way around the problem". CPU speeds have barely improved since then; instead, it's all about multi-core.

If massively parallel computing were all it took, we could have AI today, but we can't. And it doesn't look like CPUs are going to get any faster.

The curve did exactly what the linked article said: it turned into an S curve.

Moore's law still works because that's a measure of cost, but if you plot a single core, you get an S curve - which means no more singularity.

Plus, even if you had a super fast computer, you'd still not get a singularity since no one knows how to program AI - even if you gave them the fastest computer in the world.


I don't want to jump to conclusions, but the way you throw around the term "CPU speeds" (how do you even measure that?), you sound to me like a programmer for whom computation is a magical process composed of functions, arguments, processes and files.

Moore's Law is only indirectly related to CPU speed. Instead, it predicts the MOST ECONOMICALLY PROFITABLE minimum feature size of a semiconductor manufacturing process.

To a lot of people, those two things are one and the same, but in reality, computing power tends to grow because of innovations in processor architecture. In other words, material and device engineers will wring lots of improvements out of a given process, giving the architecture guys more transistors to implement bigger caches, longer pipelines, branch prediction, and the like.

So while "Moore's Law" continues apace, "CPU speeds" (however you measure those) have stalled a bit. This is because the current slate of architectural improvements has been exhausted, and there's a lot of uncertainty surrounding how to implement the Next Big Thing (core-level parallelism). This shouldn't be terribly worrisome to us, as it's happened before.

From the 1970s to the early 1990s, CPU manufacturers focused on "bit-level" parallelism, basically throwing in bigger registers and more instructions to burn through growing hardware budgets. When it became obvious that this approach wasn't improving performance any more, we got the RISC processors that enabled pipelining, upclocking, and caching.

If you didn't already know all of this -- and a lot more background besides -- your opinions about "programming an AI" are worse than useless. You're contributing zero information, and adding a little more noise (in the form of unsubstantiated certainty) to a field that's already debated too hotly.


Use of the term "the singularity", particularly your use here, sets my teeth on edge. It's not about technology; it is about our models.

The North Pole is a singularity - coordinates converge there to become a single point (singular). They only do this because we chose a coordinate system that makes this happen. It is silly to talk about how you could go there and then not be able to go any further north, as if that means it is the edge of the world.

Likewise "the singularity" is not a magical threshold of AI and immortality or transhumanism, it's where a given model of the future predicts nonsense because you have pushed it too far. Pick a different predictive model, get different results.

To claim, then, that the singularity may never happen is to claim either that change with feedback will stop, or that we have a perfect model.

Better to claim that AI or genetically enhanced people, or neural prostheses, or smart matter, or superconductivity, or antigerones, or quantum computing, or virtual reality or whatever game changing technology won't happen - if that's what you mean.

(I for one think good PDA keyboards will never happen. :( hurry up Gunilla Alsio and senseboard, and that new t9 thing)


...since no one knows how to program AI

Do you think no one will ever figure this out? There's a bunch of stuff that no one used to know how to do but is common now. Why would AI be any different?


I don't know what any general intelligence algorithm would look like, but the only one we have currently available, the human brain, appears to be intrinsically parallel.


CPU speeds have not changed much in about a decade, but that does not mean that the current GHz barrier will never be passed. There are indications that THz transistors are possible at room temperature; see http://www.tgdaily.com/content/view/36946/113/ and http://www.technologyreview.com/read_article.aspx?id=17368&#...


I wish I could find where, but I remember Don Knuth saying that he didn't think multicore was really the only answer - only that it's almost laziness or hubris that has stopped innovation in other areas (he used nicer wording than that). Perhaps as a result of there being really no competition in architectures anymore?

Or maybe I dreamed it all up, and multicore it is !


Right now not many people have enough computing power available to really experiment much with AI. I'm only working with game AI, where the bots run around in a simplified model world with a simplified set of available actions. But even with those constraints, by far the biggest challenge is the available processing time. You still have to pre-calculate a lot of stuff that people can work out in a second.

The current state of AI feels to me similar to how 3D graphics felt in the Amiga era. You can already do a few nice effects in real time, but for the really cool stuff you still have to run the computer for 3 days. The basics are already mostly there, even if they might be used in new compositions in 20 years.


Not only does Kurzweil deal with the "S-curve", but unlike every article I've seen so far where he was criticized, he actually explains and defends his numbers. The main reason I currently think that Kurzweil's numbers are probably the best estimate of the singularity is that I still haven't seen anyone else seriously trying to do the numbers themselves.


Kurzweil is just replacing points on a graph with s-curves on a graph.

This doesn't change the fundamentals of the argument in any way.


I don't think the hole in the singularity argument is necessarily that technological progress, measured by CPU speed or accessible memory or bandwidth or whatever, will slow.

I think that the hole is probably "We have virtually inexhaustible amounts of X and THIS MEANS MAGIC HAPPENS."

There are many resources for humans which were, in certain times and places, very, very finite. Let me give you a trivial example: drinking water that wouldn't kill you. The history of humanity up until quite recently was the history of drinking water.

However, most areas of major Western nations hit, effectively, the Drinking Water Singularity a long time ago: you can get what are (relative to historical needs and prices) infinite amounts for nothing. (Seriously: think of how much human labor it costs to draw one bucket of water from the well located several miles from your village. Mentally got an idea for how much? OK, now how much labor does it take a McDonalds employee to afford one bucket worth of tap water? What, maybe a quarter of a second, if that?)

The change from cholera outbreaks to no cholera outbreaks, which happened way the heck back on that curve, had PROFOUND consequences for civilization. The change from "I could fill up a swimming pool for a trivial amount of money" to "I could fill up TEN swimming pools for a trivial amount of money" had negligible consequences for civilization. All that water wealth, nothing of major consequence to spend it on.

(I know, I'm overlooking the fact that certain areas like the American Southwest are actually facing water crunchiness again, and large portions of the human population still fail to have their basic needs met. Ignore that for the purpose of simplification -- and incidentally, "You can have all the water you can drink but you can't water your lawn in the daytime" is still singularity-esque relative to "Send a woman to walk a mile to bring back a bucket of water".)

That is how I see the technology singularity coming about: what if I gave you all the CPU cycles you could want for nothing and you found, after a certain point, you had nothing of major consequence to spend them on? What are you going to do, calculate a new Mersenne Prime every second for eternity and call it the singularity?

[Edit: I originally said typhus, not cholera. Typhus is not caused by water quality issues. Sorry, it has been a long time since I played Oregon Trail.]


There are currently problems that are pushing the envelope of computational power, e.g., MD simulation (see D. E. Shaw Research) and engineering simulation of jet engines (see Pratt & Whitney) - even state-of-the-art InfiniBand clusters and custom-designed chips cannot simulate more than 10 seconds of molecular interactions, or the complete fluid dynamics of a jet engine, in a reasonable time frame.

You are however right in the sense that AI can't be achieved by throwing more CPU power at it. Almost all of the media hype around Deep Blue's chess playing or hedge fund black boxes is about narrow AI, that is, programs that look "smart" because human beings have painstakingly poured so many techniques into a specific and narrow problem space. We have yet to come up with an artificial general intelligence (AGI). For those who are interested in AGI, check out: http://www.agi-09.org/ and http://journal.agi-network.org/ .



This defines the phrase "a good argument until you think about it."

I mean, you really can't think up a good use for enough computing power to compute a Mersenne prime every few seconds?

Let me give it a shot:

Discretize a space about the size of a big protein plus a ribosome plus a few hundred nucleotides. Run the time-dependent Schrödinger Equation. It might be necessary to approximate, say with ion cores and valence electrons. The fact that, on some level, biochemistry "works" suggests that the problem is tractable. Enumerate space of final states, pruning the boring or repetitious cases as needed. Congratulations, you've just solved Protein Folding. Have fun scanning your new Database of All Possible Proteins for drug applications, or as the prerequisite for simulating a full-sized human body down to the protein level.
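To give a concrete (if cartoonishly small) sense of what "run the time-dependent Schrödinger equation" means computationally, here is a toy 1D solver using the Crank-Nicolson scheme, with hbar = m = 1 and grid parameters picked arbitrarily for illustration. A protein-plus-ribosome simulation would be this idea scaled up by dozens of orders of magnitude in dimensions and grid points, which is exactly why it eats CPU cycles:

```python
import cmath
import math

def crank_nicolson_step(psi, dx, dt, potential):
    """Advance a 1D wavefunction one step of i dpsi/dt = H psi
    (hbar = m = 1). Crank-Nicolson is unitary, so the norm is
    conserved - a basic sanity check for any quantum simulation."""
    n = len(psi)
    a = -0.5 / dx ** 2                 # off-diagonal element of H
    z = 0.5j * dt
    # Right-hand side: (I - i*dt/2 * H) psi
    rhs = []
    for j in range(n):
        h = (-2 * a + potential[j]) * psi[j]
        if j > 0:
            h += a * psi[j - 1]
        if j < n - 1:
            h += a * psi[j + 1]
        rhs.append(psi[j] - z * h)
    # Solve the tridiagonal system (I + i*dt/2 * H) psi_new = rhs
    # with the Thomas algorithm.
    diag = [1 + z * (-2 * a + potential[j]) for j in range(n)]
    off = z * a
    c = [0j] * n
    d = [0j] * n
    c[0] = off / diag[0]
    d[0] = rhs[0] / diag[0]
    for j in range(1, n):
        denom = diag[j] - off * c[j - 1]
        c[j] = off / denom
        d[j] = (rhs[j] - off * d[j - 1]) / denom
    out = [0j] * n
    out[-1] = d[-1]
    for j in range(n - 2, -1, -1):
        out[j] = d[j] - c[j] * out[j + 1]
    return out

def norm(psi, dx):
    return math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)

# Gaussian wave packet in a box, evolved for 50 steps.
n, dx, dt = 200, 0.1, 0.01
potential = [0.0] * n
psi = [cmath.exp(-0.5 * ((j - n // 2) * dx) ** 2 + 2j * j * dx)
       for j in range(n)]
scale = norm(psi, dx)
psi = [p / scale for p in psi]
for _ in range(50):
    psi = crank_nicolson_step(psi, dx, dt, potential)
```

Even this single-particle, one-dimensional toy does O(n) complex arithmetic per step; a many-electron 3D system scales exponentially in the number of particles unless aggressively approximated, which is the commenter's point about needing ion cores and valence electrons.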

And let's not even start talking about building abstract neural networks the size of human brains...

I do agree that a lot of talk about the singuarity is irrational hype -- rapture for the geeks, and all that -- but unless I've missed your point somewhere, this strikes me as only slightly less irrational hype-aversion.


This article is overly simplistic to the point of uselessness. Population growth peaked in 1963 not because of food production limits but because of cultural and social changes related to growing wealth. That destroys the whole Malthusian argument.

Similar demolitions can be performed on the rest of the article's key points:

The same thing will happen with technology - eventually we will run into insurmountable barriers to growth and progress will stabilise at this level. It's just a question of what the barriers are - the food for progress so to say.

The food for progress is human intelligence. Singularity models are built precisely on an exponential increase of intelligence that can be used to produce more intelligence. The Malthusian model breaks down there again.

To presume that Kurzweil doesn't know about Malthus is pretty cocky and very likely wrong.


I'm sure Kurzweil knows about Malthus. The point is that nowhere in nature or sociology is there such a thing as unlimited exponential growth - it is always stopped by lack of resources.

Why should computation be any different?


The Malthusian curve occurs because progress impedes progress. The more you grow, the fewer resources you have to grow with.

Technology doesn't work that way. Technology feeds technology. Therefore, the curve will not look like the Malthus curve.

Kurzweil doesn't claim it'll actually go forever. That's a strawman. He simply observes that technology feeding technology is a different shape.

Technology feeding technology would probably look more like an exponential curve that hits a very sudden wall at the limit point, whatever it may be, rather than a gradual falloff.


The computationally enormous brain contained in a compact space is proof enough that technology won't graze any limits for a long time.


Interesting argument...


It's one of the cornerstones of the singularity possibility. If you didn't get that argument, you didn't really get the whole singularity movement...


Nothing in nature has the power to actively manage its resources. Bacteria can't build a bigger petri dish; if they could, perhaps they wouldn't stop growing.

Humans could grow their food sources to match massive population growth (up to some maximum amount determined by the Earth's natural resources); our country did it. Many impoverished nations did not, but that doesn't mean it's impossible.

If computers can just become intelligent enough to spread beyond Earth given the energy they have available now (which clearly they can, since we did), they would theoretically be limited only by the amount of energy in the universe; but the curve would stay vertical for so long as to be effectively infinite.


The difference is that we are probably nowhere near the Malthus limit in technology. The equivalent of the Malthus limit for technology would be when the universe was saturated with intelligent matter - intelligent matter that would probably make our current CPUs look like vacuum tubes. Even the most optimistic singularitarian wouldn't predict this will happen any time soon.


From the article:

"Thomas Malthus put forth his theory of limits to human growth... What happened was that population growth declined and has halved since its peak in 1963."

Of course, on the other hand, something else happened 3 years before that peak: http://en.wikipedia.org/wiki/Birth_control_pill


Interesting suggestion that resources for computing will eventually run out in some fashion, thereby invalidating the Singularity prediction. But what resources?

I wonder where energy costs fit into this. Already, Internet companies are seeing energy as a significant part of the costs for running their business. How much energy would be consumed by a $1000 computer with the same computing capacity as the entire human race? How much heat would it produce?


Of course, Kurzweil took this into account. The first link for a "kurzweil malthus" Google search is directly to a page in The Singularity is Near, via Google books. He seems to think the energy requirements are not significant.

Here's the URL, but might just work better to do the search yourself.

http://books.google.com/books?id=88U6hdUi6D0C&pg=PA427&#...


Energy production is also increasing exponentially.

http://en.wikipedia.org/wiki/Kardashev_scale


According to the chart on that page, at a much slower rate than Moore's law. So what is the implication of that for the Singularity? Will the increased demand for energy to power computation cheaply force the Singularity to wait until the energy growth curve catches up?

It is also relevant to note, I think, the current volatility in energy prices. We are having a difficult time already supplying all of humanity's energy demands.

Of course, as I note in another post Kurzweil has considered this and does not think it will be a significant barrier. I'm just curious about how the numbers work out.


Perhaps this cost will be the overhead of managing the increasing amount of information, with its storage, indexing, and relationships.


Perhaps you should be able to make the overhead logarithmic with clever coding.


Aaaand once again, author does not know what a "Singularity" is.

http://www.singinst.org/blog/2007/09/30/three-major-singular...


The question of the singularity isn't whether it will happen, it's whether it will happen as the big overhaul of civilization people predict. The best example I can think of for the 'singularity phenomenon' comes from Charles Stross' book Singularity Sky, in which a super-advanced race comes along and basically asks everyone in an entire civilization "what do you dream of? We'll do it for you." Revolutionaries ask for self-replicating weapons and use them; the government's policy is to pretend the super-advanced race isn't there, and it lets the planet get destroyed.

Essentially the above is as predicted. However, what is likely to happen is that fundamentally no human being will notice a difference between now and post-singularity. It's like video games: people criticize today's graphics for being too fake or not realistic enough, when if you look back even a couple of years the improvement is immense. If you look back to the NES, the improvement is fucking unbelievable. Looking back from post-singularity will be fucking unbelievable too, but when you're in it you're still going to complain that computers aren't fast enough, that something in a video game doesn't seem realistic enough, etc.

We might have humanoid robots in 20 years' time, but it isn't going to be "holy crap, they magically appeared". It'll be like in the movie I, Robot: each new model is better than the last, just like with PCs, but it still took something like 40 years from the founding of the company before the supercomputer that ran them decided to take over things.


I always thought that Kurzweil was too simplistic in the way he comes to his conclusions and this points to just a couple of the reasons.

I think if we ever hit the singularity, it will be because of some black-swan spikes and leaps, not because of a gradual curve.


This article is even more simplistic.

"This chart looks like this chart - therefore they are the same... The reason is because they are limited by some kind of resource, but I have no idea what that is"


The people actually working on general-purpose AI, exactly what the heck are they doing?

It seems to me we:

* Don't have a clear idea what consciousness is

* Have made only very pathetic attempts at creating it so far.

But machine learning has achieved some seriously impressive stuff. Although I wouldn't call that intelligent. I'm into machine learning because it works and it is of great help to humanity.

If someone told me to design a general-purpose AI, I wouldn't really know what to do.

So what are these singularitarians doing? Only philosophy, or something real?

They strike me more as the alchemists of our time rather than serious scientists.


Note that Isaac Newton was seriously into alchemy, and that chemistry evolved out of alchemy...


And it came to pass that the light from the self-sustaining fusion reaction danced upon the surface of the orb that would be known as Terra, and this dance would cause the essence of the orb's surface to dance in unison, in ways that would sustain itself and yet change its cadence and routine slightly with each generation...

This in turn would feed another improbable series of creations that would form their own yet again....and these would reflect...only to say that their own ability to shape the dance is impossible...


I would like to point out to the author that many natural phenomena, like rabbit populations, are tightly constrained by environmental factors, mainly available living space and food. It's these environmental factors that cause the S-curves. These same constraints do not apply to technology.


This is not exactly on-topic, but here's a question that I still haven't gotten a good answer to:

People who are excited about a potential singularity: why? Doesn't the idea of being obsolete scare you?


That's why Eliezer Yudkowsky is working on friendly AI.

http://yudkowsky.net/

Though, that still doesn't rule out people misusing friendly AI against each other. Say some megalomaniac thinks it'd be much more convenient to have a bunch of friendly AIs around than a human population prone to revolt. In that case, friendly AI is not so great since it disrupts the power balance.


Just upgrade. If the singularity gets here, cybernetic grafts should be easy to figure out.


We are not on Moore's law for batteries.


A $1000 computer with the equivalent knowledge of the whole human race won't happen because it is physically impossible.

There are 10^11 neurons or so in the brain. There are 10^10 (say) people in the world (more like 0.5 x 10^10, but anyway). So loosely there are on the order of 10^20 neurons in humans on earth total. Avogadro's number gives approx. 6 x 10^23 atoms in 12 grams of carbon, so, basically, you'd have to have 1 neuron simulation per atom to fit the entire human intelligence on, say, your handheld mobile device - close to a 1:1 simulated-neuron-to-atom ratio. Transistors are cool, but they ain't that cool, boys and girls. Won't happen.

Now, you could fit a 10^10 neuron simulation in a 10 meter square server room 1 meter deep, that might be possible. So replicate this set-up 10 billion times and then you'd get the population of the human race on a computer. Slightly more than $1,000 tho. :-)


You're thinking with modern technology, not future technology. What if $1000 (minus change to account for resource usage) bought me a computer seed that grew into a massive supercomputer in some relatively-short timeframe?

If self-replicating technology is possible, then $1000 for that much computer power would actually be a ripoff.

(I'm not claiming that such a thing is possible, per se. But then again, we are such seeds ourselves, or rather, seeds that grow one human's worth of computing power, so it's hard to argue that this is physically impossible. People who doubt this is possible need to explain why we're going to run out of progress before we figure out enough about biology to trick some existing organism into doing something like this for us. We don't even have to build the things ourselves...)

Arguing about whether the singularity is possible is a waste of time. Truth is that according to the original definition, it's pretty much inevitable, according to the physics we already know and the engineering that we can almost already do. The real argument people are trying to have is over the final limits of technology, and an awful lot of people argue for limits that are far, far too low, with very feasible paths for passing the limits plainly in view even today.


You fail for assuming future $1000 computer will be transistor based.

You're also being disingenuous in assuming that every neuron contributes to a human's knowledge, that there is no overlap of knowledge between humans, and that creating a computer with equivalent human knowledge requires a 1-to-1 mapping of neurons to thingers (say transistors, but they could be quantum-gate whatcha-ma-callits).


You could have a computer that can simulate one brain (or whatever "equivalent" means), but runs 6 billion times as fast as a human's -- or any of those factors in between (1000 brains, 6 million times speed).


Given 6 x 10^20 neurons in the human race, just 12 grams of carbon gives us 10^3 atoms per neuron - a little bit more than 1 neuron simulation per atom. I'm not a neuron-simulation or atomic-computing expert, but even a few orders of magnitude above that (e.g. 1.2 kg of carbon) would give us sufficient computing power per neuron, AFAIK.

Also, 10^11 neurons fit in roughly 0.001 cubic meters (i.e. a cube of 10 cm); a thousand brains would fit in a cubic meter, so your one-brain simulation server room has space for 100,000 actual human brains. Even if the simulation hardware occupies the same space as the human brain, we would use only 0.01 cubic kilometers for the entire human-race simulation. I have to agree that this will cost more than $1k, but it is much smaller than your design.
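The thread's arithmetic is easy to sanity-check. A quick back-of-the-envelope sketch; every constant here is a round order-of-magnitude assumption taken from the comments above, not a measurement:

```python
NEURONS_PER_BRAIN = 1e11          # rough figure used throughout the thread
PEOPLE = 6e9                      # roughly the 2008 world population
AVOGADRO = 6.022e23               # atoms in 12 g of carbon-12
BRAIN_VOLUME_M3 = 0.001           # ~1 liter, a 10 cm cube

total_neurons = NEURONS_PER_BRAIN * PEOPLE        # ~6e20 neurons
atoms_per_neuron = AVOGADRO / total_neurons       # ~1e3, as claimed above

# Hardware at brain density: all of humanity's brains fit in a
# surprisingly small volume.
total_volume_m3 = PEOPLE * BRAIN_VOLUME_M3        # ~6e6 m^3
total_volume_km3 = total_volume_m3 / 1e9          # well under 0.01 km^3
```

So a single mole of carbon supplies roughly a thousand atoms per human neuron, and the volume argument comes out at a fraction of a cubic kilometer, consistent with the figures above.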


Why would the 10^10 neuron simulation take up more space than an actual human brain? We would at least get to that level of technological advancement; we do, after all, exist, so the technology can't be that far-fetched ;)


Oh, one other comment regarding population explosions in general. I just got a tip from an inside source at the government that is top-secret classified information:

"SOYLENT GREEN IS PEOPLE! SOYLENT GREEN IS PEOPLE!"



