Is there a practical limit to how fast computing can become?
Is there a practical limit to how fast computing can become? Or is the speed of light the only limit?
We put this question to Mike Muller and Professor Andy Hopper...
Mike - I think individual computers are slowing down; they're not getting faster and faster. The real challenge is how you make things go in parallel: how do you divide a problem up and have multiple computers working on it at once? That's probably how you push performance in the long run.
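Mike's point about dividing a problem up and having several workers attack it at once can be sketched in a few lines. This is a hypothetical illustration, not anything from the programme; the function names (`partial_sum`, `parallel_sum`) are invented for the example, and it uses Python's standard `concurrent.futures` pool.

```python
# Minimal sketch of splitting one problem across several workers.
# Hypothetical example: the names partial_sum / parallel_sum are
# invented here, not part of any real library.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Each worker sums its own slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the problem into roughly equal chunks...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...and have multiple workers process chunks at once.
    # (For CPU-bound work in CPython, real speed-ups need separate
    # processes or machines, e.g. a ProcessPoolExecutor, because
    # threads share one interpreter lock.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # same result as sum(range(1_000_000))
```

The hard part Mike alludes to is not the splitting itself but problems whose pieces depend on each other, where the chunks cannot be computed independently like this.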
Andy - We are approaching what people have described as the silicon endpoint. Mind you, the silicon endpoint seems to have a half-life of about 5 years! That is the point at which we really don't make any more substantive progress, and so the speed of individual chips will asymptote and will be limited. Now the parallel point is important, but that's an old chestnut, and how to make things [process] in parallel is very difficult...