
'I know something that you don't!'

DQW Bureau

At least, that is what one particular computer seems to be saying! Today, we program our computers, laborious-line-of-code by laborious-line-of-code. Then we test the program, and sometimes swear, "It cannot do that!" But subsequent investigation, perhaps using a fresh pair of eyes, always does lead to an understanding of how and why a given program did what it did (assuming the hardware is stable!). Now though, in at least one case, it seems that an unusual type of computer has solved a problem in a way that its designer still does not understand!

Brought to our attention by Frosty Cummings, Nathan Price and Darrin Resner, the April 9, 2001 NewsObserver.com article ‘Computers that improve themselves’ tells the tale of how the University of Sussex's Adrian Thompson has spent the last four years developing computing elements that actually mutate themselves.

They are based on Field Programmable Gate Arrays (FPGAs), which we can think of as a huge collection of primitive logic circuits that can be interconnected with each other. But those interconnections are not static; they can very quickly be reconfigured, time and time again, under program control. Essentially, this chip can reconfigure itself as it sees fit to best solve a problem! (If you are thinking that this sounds disturbingly like something we might have learned about in biology class, well, I would not argue...)
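To make the idea concrete, here is a loose software caricature, not Thompson's actual tool chain, and every name and number in it is invented for illustration: each cell applies a primitive logic function to earlier signals, and both the function and the wiring are just data that a program can overwrite at any moment.

    import random

    # A toy stand-in for an FPGA: each cell applies a primitive two-input logic
    # function to two earlier signals, and both the function and the wiring are
    # plain data that can be overwritten at any time.
    PRIMITIVES = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "XOR":  lambda a, b: a ^ b,
        "NAND": lambda a, b: 1 - (a & b),
    }

    def random_config(n_cells, n_inputs):
        # One 'configuration': for every cell, a primitive plus the indices of
        # its two sources (external inputs first, then earlier cell outputs).
        config = []
        for i in range(n_cells):
            sources = list(range(n_inputs + i))
            config.append((random.choice(list(PRIMITIVES)),
                           random.choice(sources),
                           random.choice(sources)))
        return config

    def evaluate(config, inputs):
        # Feed 0/1 inputs through the configured array; the last cell is the output.
        signals = list(inputs)
        for func, a, b in config:
            signals.append(PRIMITIVES[func](signals[a], signals[b]))
        return signals[-1]

    config = random_config(n_cells=10, n_inputs=2)
    print(evaluate(config, (1, 0)))
    config[3] = ("XOR", 0, 1)        # 'reconfigure' one cell on the fly
    print(evaluate(config, (1, 0)))

On a real FPGA the data takes the form of a configuration bitstream loaded into the chip, but the principle is the same: rewriting it rewires the hardware.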


Now, instant self-reconfiguration is pretty neat; indeed, it is the concept behind a rather special computer produced by StarBridge Systems that claims to offer a thousand times the power of a traditional PC (for certain very specialized tasks) in a box about the same size! NASA must be convinced (or at least intrigued), because their Langley Research Center is reportedly buying one.

But even more interesting to me is the story of how Thompson developed an FPGA ‘circuit’ that could distinguish between two audio tones. He programmed in the very basics of how to recognize tones, and the computer then took itself through 4,000 generations of circuit configurations to end up with the circuit that worked best. But it worked too well!
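The article does not spell out Thompson's actual fitness measure or mutation scheme, but the general shape of such an evolutionary search is easy to sketch. In the toy version below, every name and number except the 4,000 generations is invented, and the scoring is a mere stand-in; in the real experiment the score presumably came from how well each candidate configuration, running on the actual chip, told the two tones apart.

    import random

    GENOME_BITS = 100        # stand-in for the bits configuring 100 logic cells
    GENERATIONS = 4000       # the figure mentioned in the article
    POP_SIZE = 30
    MUTATION_RATE = 0.02

    # Placeholder scoring: reward matching an arbitrary target bit pattern.
    # In the real experiment the score would come from loading the configuration
    # onto the chip and measuring how well it separated the two tones.
    TARGET = [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]     # keep the fitter half
        children = [mutate(random.choice(parents))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print(f"best score after {GENERATIONS} generations: {fitness(best)}/{GENOME_BITS}")

And if the scoring really does happen on live silicon rather than in a simulator, evolution is free to exploit any physical quirk that improves the score, including effects the designer never modelled, which fits what the quoted passage below describes.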

"Out of 100 logic cells he

had assigned to the task, only a third seemed to be critical to the circuit's

work. In other words, the circuit was more efficient, by a huge order of

magnitude, than a similar circuit designed by humans using known principles.


"And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed! Evidently, the chip had evolved a way to use the electromagnetic properties of a signal in a nearby cell. But the fact is that Thompson does not know how it works!"

Which brings up some rather sensitive questions. Such as: if we used such techniques to develop a wonderfully effective circuit for, say, controlling a nuclear power plant, or driving a locomotive, or moving air traffic, but we did not really understand just why it worked so well, would it be prudent for us to use it?

I mean, I would really hate for our machines to begin to consider us redundant…


Jeffrey Harrow
