required to do the computation. We can get into the discussion of reversibility
and what the options are there. But no matter what your material, if you're
computing a bit of information, even if it's an analog computation, you have to
put in energy to encode your information. And if you're doing a computation,
even an analog computation that has a certain amount of information in and
a certain amount of information out, you're still faced with fundamental limits.
So how you encode the information is less important than how you process the
information (whether it's analog or digital). When you're doing computation,
you have to worry about how you're doing the computation and how you're
using the energy to do that computation.
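[The fundamental limit alluded to here is plausibly the Landauer bound, which sets the minimum energy dissipated per irreversible bit operation at k_B·T·ln 2. A minimal sketch, assuming room temperature (T = 300 K) purely for illustration:]

```python
import math

# Landauer bound: the minimum energy dissipated to erase one bit of
# information at temperature T is k_B * T * ln(2). The Boltzmann
# constant below is the exact SI defining value; T = 300 K is an
# assumed room-temperature figure for illustration only.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum dissipation per irreversible bit erasure, in joules."""
    return K_B * temperature_kelvin * math.log(2)

e_min = landauer_bound(300.0)
print(f"Landauer bound at 300 K: {e_min:.3e} J per bit")
```

At room temperature this works out to roughly 3e-21 J per bit, many orders of magnitude below what present-day logic dissipates per operation.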
Lent: I don't understand much about computer architecture, but it does seem to
me there's a kind of rough, blurry convergence there. The most impressive QCA
architectures that I have seen simulated [were by] Sarah Frost-Murphy, that had
these little regions that stored some memory and did some computation. There
were a lot of them. And she had this thing called bouncing threads, so threads of
information move from one region to another, bouncing around, and they would
do a computation which would alter the thread, and that would go elsewhere to
an available little computational cluster. They were just integer operations.
Think about QCA as processing in memory, holding lots of state and process-
ing while it's holding the state so it's breaking the von Neumann separation.
That seems to me to be a promising way to think about it. It's not so different,
except in scale, from multicore, so the problem you're running into if you have
thousands of cores is a similar kind of problem: that there's a big hit for com-
munication. (The scale is just bigger for what counts as local and what counts
as remote.) But you can do lots of stuff locally. So how do you map important
problems onto that architecture?
You have a lot of finite state machines and they're connected by more costly
communication. But it's costly only in the sense of latency, so once things get
there - I know that this doesn't sell well with lots of computer architects, the
idea that you have to walk over there - but once there, you get a different bit
every cycle. It does seem to me that if you think about that from the processing-
in-memory point of view, then that's kind of promising.
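[The latency-versus-throughput point above can be made concrete with a toy model: a link between two compute/memory clusters charges a large one-time latency for the first bit, but once the pipeline is full it delivers a new bit every cycle. The model and the numbers are illustrative assumptions, not taken from any particular QCA architecture.]

```python
# Toy cost model for inter-cluster communication in a
# processing-in-memory layout: "walking over there" is expensive,
# but the cost is latency, not throughput.

def pipelined_cycles(n_bits: int, latency: int) -> int:
    """Cycles to stream n_bits over a fully pipelined link:
    the first bit pays the full latency, then one bit arrives
    per cycle."""
    return latency + n_bits - 1

def unpipelined_cycles(n_bits: int, latency: int) -> int:
    """Cycles if every bit pays the full link latency separately."""
    return latency * n_bits

latency = 50   # assumed one-way link latency, in cycles
n_bits = 1024  # length of the streamed message, in bits

print(pipelined_cycles(n_bits, latency))    # 1073 cycles
print(unpipelined_cycles(n_bits, latency))  # 51200 cycles
```

The pipelined cost grows as latency + n, not latency × n, which is why long local bursts of computation punctuated by streamed transfers can amortize even a painful-looking latency.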
Bhanja: There is a lot of work in that direction, with magnetic memories, a lot
of work in non-volatile computing.
Wolkow: Another thing that comes to mind is that there's a lot of new effort in
what you might call probabilistic computing, non-fully deterministic computing
where people want to do, say, pattern recognition or image analysis but not
seeking some absolutely precise numerical representation. They just want to
sort of know “did that thing move?” in the simplest and just-sufficient way and
no more.
No matter how good a normal von Neumann binary computer you ever make,
it's never going to be enough to handle all the data people want to handle.
And so there need to be totally different approaches. So maybe we could, as a
community, look at that more and think of radically different things we should
be pursuing.