[nfb-talk] Fw: [nesfa-open] Top futurist, Ray Kurzweil, predicts how technology will change humanity by 2020

Ed Meskys edmeskys at roadrunner.com
Mon Dec 14 18:20:39 UTC 2009


----- Original Message ----- 
t at nasw.org>
To: "NESFA Open Mailing List" <nesfa-open at lists.nesfa.org>
Sent: Monday, December 14, 2009 8:32 AM
Subject: Re: [nesfa-open] Top futurist, Ray Kurzweil, predicts how technology 
will change humanity by 2020


> At 10:16 PM -0500 12/13/09, Mark L. Olson wrote:
>>
>>Most people are confident that we have another 10 years of Moore's Law
>>(about a 30x improvement in price/performance) because we can see the
>>technology that will get us there, and a lot more than that is certainly
>>possible.  (It's also true that people have been saying that Moore's Law
>>has only a decade left for the past 40 years.)
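The quoted 30x figure follows from simple doubling arithmetic (a sketch of my own, assuming price/performance doubles every ~24 months, which is one common reading of Moore's law; the exact cadence is debated):

```python
# Rough check of "10 more years of Moore's Law ~ 30x":
# assume price/performance doubles every 24 months (assumption).
years = 10
doubling_period = 2                   # years per doubling (assumed)
doublings = years / doubling_period   # 5 doublings in a decade
improvement = 2 ** doublings          # 2^5 = 32
print(improvement)  # → 32.0, close to the quoted 30x
```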
>
> The original formulation of Moore's law was the number of transistors per 
> unit area, and that improvement _has_ been slowing.
>
> The real sticky issue in computer-chip performance has become signal 
> transmission on and off the chip. Raw speed per processing node has pretty 
> well stalled out at several gigahertz, and chip developers have instead 
> put multiple cores on the chips. However, that doesn't translate into raw 
> speed because effective use of multiple processors requires parallel 
> processing -- massively parallel as the number of cores increases -- and 
> that requires developing new software, as you noted.
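The point that more cores don't translate into raw speed is the classic Amdahl's-law argument: whatever fraction of a program stays serial caps the overall speedup, no matter how many cores are added. A minimal sketch (my own illustration, not from the thread):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a
    program can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelized, speedup saturates:
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# 2 → 1.82, 8 → 4.71, 64 → 8.77, 1024 → 9.91 (limit is 10x)
```

With a 10% serial fraction, even 1024 cores deliver less than a 10x speedup, which is why shrinking the serial portion (new software, as noted above) matters more than adding cores.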
>
>>
>>My guess is that about 1000x improvement is possible without radically new
>>technology, but that will come only with great difficulty.  The biggest
>>problem is that it will certainly require massive parallelism, and it's
>>really hard to program so as to use massive parallelism.  OTOH, there's a
>>limit to how fast we need Excel to be, and there are other things which we
>>do poorly now (e.g., speech recognition and animation) which would benefit
>>hugely from it.
>>
>
> I think we're at a point where microprocessor technology is going to split 
> into at least two distinct classes -- one for simple single-stream 
> processing (e.g., for word processing in PCs), the other for operations 
> that can be performed in massively parallel ways. The improvements in 
> single-stream processing are going to come from cutting the fat from 
> bloatware and shifting parallel processing out of the single stream. 
> There is some impressive and interesting ongoing work in massively 
> parallel supercomputing (e.g., the Roadrunner supercomputer at Los 
> Alamos), but that's aimed at a limited range of applications. The 
> interesting software challenges are in finding ways to use that 
> parallelism for things like speech recognition and animation.
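Workloads like speech recognition and animation suit many cores because much of the work is per-item and independent. A toy data-parallel sketch (my own, using Python's standard library; `transform` is a hypothetical stand-in for real per-item work such as scoring one audio frame):

```python
from concurrent.futures import ProcessPoolExecutor

def transform(x):
    # Stand-in for per-item work; each item is independent,
    # so the map spreads cleanly across cores.
    return x * x

if __name__ == "__main__":
    data = range(8)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(transform, data))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The single-stream case in the paragraph above has no such independent items to distribute, which is why it gains little from this hardware class.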
>
> -- 
> Jeff Hecht, science & technology writer
> _______________________________________________