Five years ago a little mathematical game triggered a curiosity that has persisted ever since. It is called "The Game of Life," first made famous in the October 1970 issue of Scientific American. The brainchild of mathematician John Conway, The Game of Life gave rise to a field of study now called cellular automata.
Let me start by explaining the game. (If you're the impatient type, visit the link below and then come back.) The game consists of a grid of cells. The cells can be black or white. A black cell is alive, a white cell is dead. We create a starting screen in which some of the cells are alive and some are dead. These are our initial conditions. Now we evolve the grid step by step. One step involves recalculating and redisplaying the status of every cell in the grid. During each step, some cells will die and some will be born. A set of very simple rules is used to determine whether a cell lives or dies, based on its eight neighbors (horizontal, vertical and diagonal):

- A live cell with fewer than two live neighbors dies (loneliness).
- A live cell with two or three live neighbors survives to the next step.
- A live cell with more than three live neighbors dies (overcrowding).
- A dead cell with exactly three live neighbors comes to life (birth).
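If you'd like to see how little machinery the rules require, here is a minimal Python sketch (the function name `step` and the set-of-coordinates representation are my own choices, not anything from the original game's description):

```python
from collections import Counter

def step(live):
    """One generation: 'live' is the set of (row, col) cells that are alive (black)."""
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors (birth),
    # or has exactly 2 live neighbors and is already alive (survival).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" -- three cells in a row -- flips between horizontal and vertical.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))           # [(0, 1), (1, 1), (2, 1)]
print(step(step(blinker)) == blinker)  # True
```

Note that only cells near a live cell can change state, so counting neighbors of live cells is all the work there is; everything else stays dead.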
Now, what do you think will happen when we quickly evolve the grid step-by-step using a computer program? No doubt, many of you have seen the game and already know that "life" emerges from this simple scenario. Okay, not life as we know it, Jim, but something surprising.
At this point, you have to see it in action.
Here is a link where you can see what I'm talking about. Click the "Enjoy Life" button to run a Java applet. You can experiment by creating your own initial conditions with your mouse, or click the Open button to select some common initial conditions with cool behaviors. (If this link has disappeared because you are reading this article many moons after it was written, just search on 'the game of life'. It's all over the web.)
If you examined the standard initial conditions, you saw how they can give rise to particularly interesting behaviors. Perhaps the most common creature to appear is called the glider, which hobbles its way across the screen and would go on forever if it had enough screen and nothing got in its way.
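The glider's drift is easy to check for yourself. Here is a small self-contained sketch (the one-step update from the rules above is reproduced so the snippet runs on its own): after four generations the same five-cell shape reappears exactly one cell down and one cell to the right.

```python
from collections import Counter

def step(live):
    """Conway's rules: birth on exactly 3 live neighbors, survival on 2 or 3."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider:  .X.
#                      ..X
#                      XXX
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After four generations the glider reappears shifted one cell down and right.
print(g == {(r + 1, c + 1) for (r, c) in glider})  # True
```

So on an unbounded grid the glider really would travel diagonally forever, one cell every four generations.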
What's interesting about the game of life is that we have no way of predicting from our basic set of rules above what kind of complex, self-organizing behaviors we could expect when we run the program. All of the observed patterns have been discovered rather than predicted, in spite of the fact that this is a math problem for which we know all the rules. Even though the process is completely deterministic, we have no theory for it. The best we can do is run a simulation and see what happens.
Real life, the life that is reading this page right now, is similar. You and I are composed of trillions upon trillions of tiny particles--quarks and electrons--all interacting according to a few basic rules, rules which, for the most part, we can explain with math and physics. However, perhaps due to the limitations of our own minds, math and physics have tended to deal with only a few particles at a time; beyond that, the mathematics becomes prohibitively complex.
Thanks to the power of computers and chips like Ageia's physics processor, we now have machines that can handle those complex calculations for systems much larger than three or four elements. But we still have no theory that can help us predict what should emerge from a certain initial state and a set of basic rules.
Alright, alright, so where am I going with this? Fundamentally, I believe that truly intelligent silicon systems of the future will be emergent (bottom-up) rather than architected (top-down). But in order for the bottom-up approach to be most successful, we need a theory of how large nodal systems evolve given a certain set of basic rules and some initial conditions. Short of that, our primary recourse is simulation--raw number crunching, which can be fun and effective, like doing surgery with a sledgehammer--but it isn't exactly elegant.
It wouldn't surprise me if we're out of luck when it comes to the theory of evolving nodal systems. In the early 1900s it was disconcerting to many physicists, including Einstein, that quantum mechanics is probabilistic--meaning you can never know both the precise position and momentum of a subatomic particle, only the probabilities of certain positions and momenta (hence the need for the Heisenberg compensators in the Star Trek transporter). So too, it may turn out that we can never find a theory for the evolution of a simple rule-based cellular system. We may be stuck growing the computer brains of the future, starting with a bunch of chips, a few rules and a couple of years to evolve them. Maybe there is no theory of life. Maybe the only way to achieve intelligent life is in its living.