56K Lounge

Getting acquainted with Arduino
— 10:54 AM on October 21, 2011

I tend to think of myself as a well-rounded geek. Excluding the keyboard-cowboy day job, my attention is divided between hardware tinkering and putting that hardware to good use via software. Over the years, I've invested most of my energy into commanding data via PHP, with the occasional deviation into Python and C# territory, depending on the project. While it's been an interesting ride, I've reached the point where simply manipulating data is feeling somewhat redundant and, frankly, a tad boring. I need action. I want to write functions that return explosions instead of Booleans. I want my code to reach outside of the box and manipulate physical objects in the cluttered room around me. I want... Arduino?

I've been hearing about this Arduino thing for a while now. I read up on the more intriguing projects that propagate amongst various technology blogs from time to time, but I've never worked up the courage to go hands on. My recent bouts with coding monotony have finally pushed me past the point of no return and into the arms of this mysterious little circuit board.

The Arduino team describes its creation as an "open-source electronics prototyping platform." In plain English, that means they've strapped an inexpensive microcontroller onto a small PCB and added convenient headers that enable easy access to the chip's pins. In addition, they've provided an integrated development environment (IDE) for programming the controller chip and support for an enthusiastic community of hobbyists who know the device inside and out. That's oversimplified, of course, but simple is the name of the game.

Even though the onboard controller chip performs various calculations, the point of Arduino isn't to be another tiny computer board. Instead, it's intended to choreograph the actions of LEDs, sensors, servos, and whatever else you see fit to jam into its input/output headers. To quell any doubts, the Atmel ATmega328P chip on my Arduino Uno board is an 8-bit RISC-based device, running at a breakneck 16MHz and packing a whopping 32K of flash and 2K of SRAM. Sandy Bridge? I think not.

Because Arduino is mostly concerned with monitoring simple inputs, like sensor values and voltages, and supplying an appropriate output in response, it doesn't need a boatload of compute power. This underlying simplicity was one of the major attractions for me. For instance, when you slot a video card into your PC, it communicates with more than a few of the thousand-or-so pins on your CPU. There is so much going on in there that most enthusiasts give up trying to understand exactly how it works—and just care that it does work. With Arduino, things are simple enough that wires are essentially attached directly to individual pins on the microcontroller. The visceral thrill of combining this direct connection with a datasheet that describes what each pin does is hard to explain. It makes an electronics newbie like me feel as if I'm hanging out with the hardware hackers of the 1970s, cobbling together an expansion card for the Altair 8800.

From here on, I promise that I won't waste any more time reveling in undeserved delusions of hardware-hacking grandeur. Instead I want to discuss the basics of Arduino and share some pointers with others interested in learning the ropes. While it's a little off the beaten track for us here at TR, the DIY possibilities of this platform are virtually endless, constrained only by imagination, and on occasion by the 8-bit microcontroller.

Because Arduino is open-source hardware, there are many versions of the platform available. You can even buy the board as just a pile of discrete components and solder it together yourself if you really want to. For me, starting out at ground zero with no cache of miscellaneous electronics components at my disposal, a complete Arduino starter kit seemed like the best route to go. The kit that came home with me was the Sparkfun Inventor's Kit for Arduino. This particular kit comes in a small tackle box that includes a pre-assembled Arduino Uno board, a bunch of LEDs, a breadboard, patch wires, resistors, an assortment of various sensors, a USB cable, some other miscellaneous odds and ends, and a nice user's guide with 14 learner's sample projects. These projects focus on the parts included with the kit, and they allow new users to get up to speed building circuits quickly. If you have never worked with the Arduino platform before, I would strongly recommend looking into such a starter kit.

Shortly after opening the box and inspecting its contents, I began assembling the first training project. The inaugural circuit consisted of a single LED and resistor plugged into the breadboard, which was in turn connected to the Arduino board and programmed to blink at a constant interval using the Arduino IDE. The project was extremely simple and straightforward, but it illustrated the basic concepts of Arduino very well. By the end of it, I was overly excited and found myself skipping ahead to later lessons to see how transferring data over a serial link worked and how to use sensors and potentiometers. In hindsight, I would recommend following the projects in order for the most part.
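
For the curious, those later serial lessons boil down to something like the minimal sketch below. It's my own illustration rather than the kit's actual code, and it assumes a potentiometer's wiper is wired to analog pin A0.

// Read a potentiometer and report its value over the serial link.
// Assumes the pot's wiper is connected to analog pin A0.

void setup() {
  Serial.begin(9600);            // open the serial port at 9600 baud
}

void loop() {
  int reading = analogRead(A0);  // 10-bit result: 0-1023
  Serial.println(reading);       // one reading per line
  delay(250);                    // four readings per second
}

Open the IDE's serial monitor, twist the knob, and the numbers scroll by in real time.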

The second major ingredient in Arduino's secret sauce is not hardware related, but lives on your PC in the form of its integrated development environment. Arduino uses something akin to C++, but it's been tweaked to include many built-in functions for reading and writing to the Arduino's I/O pins. Coming from a PHP background, the coding process felt familiar enough, and the learning curve to accomplish some basic tasks was minimal. The IDE provides a quick and simple method of compiling, debugging, and uploading your code to the Arduino's Flash memory. Because the program (called a "sketch" in the Arduino vernacular) is saved to non-volatile memory in the controller chip itself, your projects can run untethered from the computer so long as there is a power source available. When connected to a computer, the Arduino gorges itself on the 5V power supplied by the USB port, but battery packs and wall-wart adapters are also viable options.

Within the IDE is a decent library of example sketches that can be directly uploaded to the Arduino. For example, this simple sketch is used to blink an LED:

/* Blink
   Turns an LED on for one second, then off for one second, repeatedly.
   This example code is in the public domain. */

void setup() {
  // Initialize the digital pin as an output.
  // Pin 13 has an LED connected on most Arduino boards:
  pinMode(13, OUTPUT);
}

void loop() {
  digitalWrite(13, HIGH);   // set the LED on
  delay(1000);              // wait for a second
  digitalWrite(13, LOW);    // set the LED off
  delay(1000);              // wait for a second
}

In the example above, the setup() function configures pin 13, which is connected to the LED's positive lead, as an output. Next comes the main loop(), which, as its name implies, repeats indefinitely.

In this example, there are only four lines of code in the loop, and they repeat over and over until either the power is removed or the program is overwritten with something else. The first line, "digitalWrite(13, HIGH);", tells the Arduino to drive pin 13 high (5V). Because the LED is connected to this pin, it illuminates for the duration of the delay() call right below it; "delay(1000);" pauses the sketch for one second (1,000 ms). "digitalWrite(13, LOW);" does just the opposite of its predecessor, driving pin 13 low (0V) and turning the LED off for the duration of the delay() below it. Since there is no more code after that, the program jumps back to the first line of the loop and starts over.

The LED is now blinking on and off at a one-second cadence. By adding more high/low/delay logic and tweaking the delay times, you could make the LED blink at any rate you wish, from a playful Morse Code "SOS" to a seizure-inducing strobe.
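
A rough sketch along those lines might look like this. It's a hypothetical variation on Blink rather than anything from the kit's guide, and the dot and dash timings are arbitrary choices.

// Flash S-O-S on pin 13 using nothing but digitalWrite() and delay().

const int LED_PIN = 13;

void flash(int duration) {
  digitalWrite(LED_PIN, HIGH);
  delay(duration);
  digitalWrite(LED_PIN, LOW);
  delay(200);                               // gap between flashes
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  for (int i = 0; i < 3; i++) flash(200);   // S: three short flashes
  delay(400);
  for (int i = 0; i < 3; i++) flash(600);   // O: three long flashes
  delay(400);
  for (int i = 0; i < 3; i++) flash(200);   // S: three short flashes
  delay(2000);                              // pause before repeating
}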

The items included in the starter kit are just the beginning. At my local Micro Center, Arduino-related goods are relegated to a small, dark corner near the book section. Even so, you can still find expansion components like GPS modules, barometer modules, humidity sensors, SD card readers, Bluetooth adapters, small LCD screens, and much more. I opted to pick up a small, 2-line-by-16-character LCD display during my last visit. Using the LCD libraries already included with the Arduino IDE, it is relatively easy to display the obligatory "hello world!" or any other message. Not all add-on hardware is supported out of the box, but there is a good amount of support for the most common components.
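
To give a sense of how little code the LCD requires, here's a minimal sketch built on the bundled LiquidCrystal library, patterned after its stock example. The pin numbers are only an assumption about the wiring; they need to match however the display is actually connected to the board.

#include <LiquidCrystal.h>

// Example wiring only: RS, E, D4, D5, D6, D7 on these digital pins.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);            // 16 columns, 2 rows
  lcd.print("hello world!");   // top line
}

void loop() {
  lcd.setCursor(0, 1);         // column 0, second row
  lcd.print(millis() / 1000);  // seconds since the sketch started
}

Since lcd.print() accepts both strings and numbers, simple sensor readouts are only a few lines further away.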

If you have the time, money, and electronics know-how, Arduino can be put to any number of clever uses. I've seen RC cars, quadcopters, kegerators, gaming devices, and many other projects built using this platform. With Ethernet connectivity enabled, things get even more interesting: an Arduino can be triggered by anything from incoming tweets to web-based commands sent from a cellphone. PC builders could, perhaps, set up a contraption to monitor the temperature of a certain zone in a computer case and activate a fan or warning indicator if it rises above a defined threshold. Arduino can also be used in conjunction with servo motors to push physical buttons or to create intricate robots. The sky is the limit.
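
That case-temperature idea could be sketched out along these lines. This is purely hypothetical: it assumes a TMP36 analog temperature sensor on pin A0, a fan or warning LED switched through a transistor on pin 8, and an arbitrary 40°C threshold.

// Hypothetical case-temperature watchdog.

const int SENSOR_PIN = A0;
const int FAN_PIN = 8;
const float THRESHOLD_C = 40.0;     // arbitrary example threshold

void setup() {
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);         // 0-1023
  float volts = raw * 5.0 / 1023.0;         // convert reading to volts
  float tempC = (volts - 0.5) * 100.0;      // TMP36: 500mV offset, 10mV per degree C
  digitalWrite(FAN_PIN, tempC > THRESHOLD_C ? HIGH : LOW);
  delay(1000);                              // check once per second
}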

This is admittedly an extremely high-level overview of the Arduino platform. I still have much to learn about it, but I am certainly happy with my investment so far. If there is enough interest in this topic within our community, I'd like to hear some of your thoughts on potential project ideas. If something really good (and viable) gets thrown out there, it could make for an interesting feature article some day.

28 comments — Last by dashbarron at 9:47 AM on 10/26/11

Are sound cards still relevant?
— 10:58 PM on September 29, 2011

Recently, I supplemented my desktop's arsenal of hardware with a stand-alone Asus Xonar DX sound card. This upgrade was something of a shot in the dark, fueled primarily by the sudden availability of the card, in conjunction with the excellent reviews and accolades bestowed upon the Xonar series. Heck, our own Geoff Gasior gave the thing an elusive Editor's Choice award a few years ago. For me, this upgrade would come after many consecutive years using integrated motherboard audio exclusively.

In my early days, you would have found me firmly entrenched in the Creative camp, having owned a battery of Sound Blaster cards that included the Vibra16, AWE32, Live! 5.1, Audigy, and Audigy2 ZS. After the Audigy series, I dropped off the sound-card grid entirely. Instead, I opted for the simplicity, front-panel connectivity, and "good enough" sound quality of the SoundMAX AD1988 codec found on my Asus M2N32-SLI Deluxe motherboard. I actually had a hard time perceiving any significant quality difference between the Audigy2 and the integrated audio when I made the switch.

Since then, I've swapped my speakers for a pair of Audioengine A5s, which represent the single biggest jump in audio quality I've experienced since upgrading from the AWE32 to the Live! 5.1. These speakers sound fantastic. In fact, they even prompted an upgrade to my MP3 collection, which was previously made up of tracks encoded at 128kbps. My old speaker/amp setup muddied the waters enough that one MP3 bitrate pretty much sounded like the next, but the A5s let me hear just how much is sacrificed by lower-bitrate encoding. I had to go back and re-encode or re-download high-quality VBR or 320kbps versions of many songs. I even took my Pandora addiction to the next level by subscribing to the Pandora One service for the sole purpose of exploiting its 192kbps audio streams.

With the Xonar in hand and a vacant PCIe x1 slot staring up at me, I decided to see if I could squeeze any more range and clarity out of the Audioengine speakers. To be honest, I wasn't expecting a whole lot. If I couldn't really tell a difference between the 95dB SNR claimed by the motherboard's SoundMAX AD1988 codec and the 108dB boasted by the Audigy2, I doubted the Xonar DX's purported 116dB SNR would blow my mind. However, when I was comparing the Audigy2 to motherboard audio, my speakers weren't as good as the A5s.

After popping the Xonar into my system, disabling the motherboard's integrated audio in the BIOS, and installing the appropriate drivers, the moment of truth was at hand. I fired up Winamp and played the last song I had listened to before installing the new sound card: Pray, from Jay-Z's American Gangster album, encoded as a 320kbps MP3. I doubt it's part of any professional audio testing suite, but I find that this particular song has a great range of sound that makes it easy to pick out differences in audio quality subjectively.

With Pray, I could hear a definite, albeit slight, difference between the integrated audio and the Xonar. The track sounded a little bit crisper and cleaner than before. The difference wasn't so great that I was wowed by the experience, though. Hoping for more, I loaded up a FLAC copy of Girl Talk's All Day album to see if lossless audio would bring that wow factor to the table. Again, the sound quality was good, but nothing Steve Jobs would drag on stage and tout as "insanely great."

In gaming, the story is very much the same. The Xonar software comes with a whole slew of effects, equalization sliders, positional audio settings, and surround-sound emulation options. With only two speakers, I find that kind of post-processing detrimental to the overall quality of the audio. Except for some minor equalizer tweaks, I ended up leaving the other effects disabled.

One of the major problems with my sound cards of yore was the lack of a front-panel audio header. Creative offered the Live! Drive as a solution, but you had to pay extra and dedicate an entire 5.25" drive bay to the cause. Even cards like the Audigy2 ZS lacked the AC'97/HD Audio front-panel header that motherboards and cases have treated as standard equipment for some time. I use the front-panel headphone jack constantly, making the absence of a compatible header a deal-breaker for me.

Sound card manufacturers are finally implementing front-panel headers on their products, and one can be found on the Xonar DX. Just when I thought I was safe, however, Asus decided to test my patience. The Xonar DX is unable to detect when headphones are plugged in and mute the rear speaker outputs automatically. This is something that the cheapest of motherboards have been able to handle with aplomb for over a decade. Frankly, I don't understand this seemingly obvious omission. I now have to use Asus' Xonar control panel to select manually whether I want sound routed to the front-panel headphone out or to the rear speaker jacks.

At the end of the day, the Xonar DX's audio quality is just enough of an improvement over the old integrated solution to make me keep the sound card around. I can live with manually selecting the headphone jack, but I really shouldn't have to considering that an entire motherboard supporting this feature can be purchased for the cost of the $82 Xonar.

Getting back to the integrated-versus-dedicated debate, I think integrated audio really is sufficient for most purposes. My motherboard is going on five years old now, and the audio is still subjectively good enough that I wouldn't be heartbroken if the Xonar died tomorrow. After listening to both solutions consecutively, the thought of reverting back to integrated sound doesn't make me cringe at all. Those with higher fidelity ambitions than my own are welcome to disagree, but I think the cost of a discrete sound card outweighs the benefits for the casual listener. Invest that money in a faster CPU, GPU, SSD, RAM, or concert tickets for you and a special someone.

If improved audio quality is your goal, I'm convinced money is better spent on your speakers and amplifier first. I can't speak highly enough of the Audioengine speakers connected to my PC; their impact on sound quality was phenomenal even with my motherboard's integrated audio. If you already have decent speakers that outclass the output of your motherboard's audio jacks, then by all means grab a nice sound card. The jump in quality may not be huge, but you will notice the difference (or at least think you do).

Editor's note — Since we regularly recommend discrete sound cards, we can't let this one pass without voicing some dissent. Our audio coverage has included blind listening tests for quite some time, and our subjects have consistently preferred the sound of discrete cards to integrated solutions. Some of those listeners have clearly had better ears than others when it comes to detecting subtle differences in playback quality, though. Motherboard audio has also improved a great deal over the years—just as quality sound cards have become cheaper than the Xonar DX. Our current budget favorite, the Xonar DG, costs only $30 yet scored better in our listening tests than not only Realtek integrated audio, but also a much pricier Xonar Xense.

157 comments — Last by Kaleid at 6:53 AM on 10/08/11

Life after Moore's Law
— 9:20 AM on September 9, 2011

Recently, I've been turning the pages of Michio Kaku's new book, Physics of the Future. While much of its content echoes what was said in his earlier work, Physics of the Impossible, one section in particular grabbed my attention: Kaku's discussion regarding the end of Moore's Law.

This topic has received no shortage of attention. Pundits have predicted the end of Moore's Law ever since its inception some 46 years ago. Even so, few people today seem to agree on a precise day of reckoning. Kaku believes that by 2020, or shortly thereafter, transistors will run up against their atomic size limits and Moore's Law will break down. Years ago, Intel predicted this event would occur at a 16-nm process node with 5-nm gates, yet it has plans on the table for 15-nm and 11-nm process nodes going forward. When wires and gates get too small (about five atoms thick), electrons begin to stray from their dictated paths and short circuit the chip. This issue makes shrinking the transistors further a futile endeavor.

When transistors can no longer be made smaller, the only way to continue doubling the transistor count every two years is to build upward or outward. Stacking dies poses challenging heat dissipation and interconnect problems, while making larger dies and linking them together in a single package is only sustainable up to a point. Indeed, with 22-nm production ramping up, it seems that the zenith of silicon-based IC design is finally a legitimate point on the horizon.

The paranoid among us may see the recent netbook mantra of "good enough computing" as a ploy by CPU manufacturers attempting to acclimate users to an impending period of diminishing performance returns. More interesting to me than when we break the law, though, is what the consequences will be as advancements in cheap computational power begin to level off. Will research and advancement in other areas, such as genomics, decline at the same rate as commodity computing power? Will longer silicon refresh cycles pour salt in the wound of an already ailing world economy? Will people even notice or care?

I believe those of us in the enthusiast realm will indeed notice when transistor counts begin to level off, but for the vast majority of everyday users, it probably won't matter. As we near the end of the decade, the bulk of our daily computing burden will likely be lifted from our desks and placed on the backs of "cloud" companies that handle the heavy lifting. This arrangement will provide a comfortable layer of insulation between the masses and the semiconductor manufacturers, as scaling issues will be dealt with quietly in the background, giving users an end product that "just works." You'll still be able to find laptop-toting hipsters at your local coffee spot, and locally installed software will still be commonplace. But by the end of the decade, the size of your Internet pipe may be more important than the speed of your processor.

As transistor counts stagnate, a combination of clever parallel programming techniques and engineering tricks at the silicon level will become even more important than they are today. These tweaks will be required to keep the industry moving forward until a post-silicon computing era can take root. There are several prospects on the radar to replace traditional silicon chips, including graphene, light-based logic circuits, and quantum processors. The next big thing beyond silicon is still anybody's guess, though.

The biggest questions in my mind, however, revolve around the world economy and its reaction to silicon scaling issues of the future. Will the mantra of "good enough," coupled with incremental improvements over time, be sufficient to stave off a meltdown in the tech sector? Will device upgrade cycles lengthen, or will users continue to purchase new toys at the same rate, even though they aren't much faster than their predecessors? Will software begin to outpace hardware, creating enough computational scarcity that market forces drive efforts to advance computing to the next level?

There are a lot of unknown variables at play here, and the capital required to research and develop silicon's successor is staggering. Further damage to the financial sector could potentially slow down progress toward the post-silicon era if R&D funding dries up. Similarly, the fallout from various government debt crises could limit future investments in technology, despite immense interest in quantum computing for cryptography.

Before this starts sounding too much like a doom-and-gloom, fear-mongering editorial, it should be understood that the end of Moore's Law does not spell out the end of all advancement. It merely suggests that the gains we're used to seeing come online every two years will be on a slower schedule. Progress will march on, and there will be ample time to develop new technologies to pick up where silicon leaves off. Of all the items on the post-silicon wish list, the technology that makes me feel the most warm and fuzzy inside is superconductors—specifically, the potential discovery of a room-temperature superconductor.

A superconductor is a material that loses all electrical resistance when cooled to a certain temperature. In theory, assuming an absence of outside forces, electrons could zip around a superconducting ring forever with no loss of energy. Materials have already been discovered that lose all resistance at temperatures easily attainable using cheap liquid nitrogen. However, finding a material that does so at room temperature would represent the holy grail for scientists and electrical engineers. Imagine a processor whose interconnects and transistors were crafted from a superconducting material. Such a beast would be able to operate with next to no electrical leakage and minimal heat generation despite running at insane clock speeds. There are many other novel and mind-boggling uses for superconductors, particularly in the transportation sector, but such a discovery would be a huge boon to the technology world. There is nothing out there that proves room-temperature superconductors actually exist, but the end of Moore's Law could serve as incentive to ramp up the search efforts.

Looking ahead to a time when chip makers are no longer able to significantly shrink transistors, how do you think the world will cope? Personally, I predict things will be business as usual in a post-Moore world. Worst-case scenario, compute power will expand to meet demand inside large cloud-based server farms, where performance can grow by adding more CPUs to the mix and use is metered and billed like a utility. Best-case scenario, we'll all eventually be playing holographic Crysis on personal desktop quantum computers or shattering clock-speed records with cool-running, superconducting CPUs. For those interested in computing and the physics behind it, the next couple of decades should provide quite a show. Who else wants some popcorn?

100 comments — Last by Antimatter at 8:59 AM on 09/23/11

Keep your G6; I'm in it for the P6
— 5:20 PM on August 18, 2011

Honestly, I tried to fight the urge to reminisce today, but August 18th marks the day the youngest member of the Pentium Pro family turns 14 years old. Resistance is futile. The Pentium Pro 200/1M was the swan song for the original, MMX-less P6 core. The chip valiantly forged ahead, even as its Pentium II and Xeon successors washed over the computer landscape, and it happens to be my favorite CPU of all time.

While the P6 introduced many architectural enhancements that persist in CPUs today, my fascination with the Pentium Pro is not founded on technical merit alone. When something commits itself to your brain's favorites table, more than a list of numbers and features is typically required. For instance, I have a friend whose favorite-CPU award goes to the Applebred Duron, and I'm guessing it wasn't chosen based on raw performance figures. Indeed, if you ask an audience about their favorite chip, you'll receive an eclectic and exhaustive list in return, with accompanying stories to boot. My own attachment to the Pentium Pro largely boils down to where I was and what I was doing at that point in my life.

Even though the first commercial Pentium Pros wandered off the production line at the tail end of 1995, it would take almost four years for one to land in my lap. As a junior in high school, I scored my teenage dream job working for a local computer merchant. The vendor purchased used corporate equipment as it came off lease and then resold it through a then-novel website called eBay. It was also around this time that the first few batches of corporate workstations and servers sporting Pentium Pro processors were up for retirement.

After wrapping up a couple months of tedious work wiping hard drives and cataloging a huge backlog of IBM 486 PS/2 boxes, a new batch of computers was backed up to the loading dock; hidden among them lurked a Dell Optiplex GX Pro workstation sporting the Pentium Pro 180MHz/256K processor. This workstation had the misfortune of a bum motherboard, which meant it was my duty to part out any usable or resalable components. As I dismantled it, the heatsink fell by the wayside, and I knew it was love.

Up to this point, my personal frankenputers had been based on Socket 3 and Socket 7 chips. The Socket 8 CPU staring up at me was a behemoth in comparison. Removing the chip from its socket only dug the hook in deeper. The physical heft of the thing, the gold heat spreader, and the unique pin layout on the bottom made an immediate impression. It felt as though I was holding something powerful, capable, and professional in my hand—a dangerous combination of attributes for an impressionable teenager.

Other working systems, replete with Pentium Pro chips, eventually made their way into the warehouse. Because literally everything was for sale, I had to devise a novel ploy in order to selfishly utilize one as my work system. This plan involved scrounging a 2GB hard drive, loading it with my OS and files, and then moving the drive between computers as they were sold out from under me. The scenario was far from ideal, but it allowed me to get plenty of hands-on time with my new hero. Our logistics manager eventually adopted a similar strategy.

In 2001, after the Pentium 4 had been on the market for several months, older Pentium Pro-based systems weren't exactly flying off the shelves. It was at this time that I bit the bullet and finally purchased one to call my own. While doing my thing at the warehouse one day, I happened upon a Compaq Proliant 850R server that was missing its hard drives but was otherwise complete. Fifty dollars changed hands, and a bona-fide server was mine to play with.

The Proliant was special in two ways. It was not only my first Pentium Pro-powered computer, but also my first dual-processor system. Initially, it housed a single 200MHz/256K processor and 64MB of ECC EDO RAM. Over the next few months, I managed to rummage up a second, matching CPU with an accompanying VRM module, a couple 4.3GB SCSI hard drives, and an impressive 512MB of RAM, which was the most the system could handle.

The video card in the server was an oddity. It pulled double duty as a SCSI controller, and as video performance is not a top priority for servers, the display output was limited to 256 colors with a maximum resolution of 1024x768. I wasn't about to give up the Riva TNT2 in my home desktop just for the novelty of having dual processors, so the Proliant was relegated to *gasp* server duty.

About a year after I had my server tricked out and happily pulling double duty as a file and DHCP server, I bumped into the final piece of the puzzle. I was in college at the time, but I was still working in the computer warehouse during the summer. A faulty Proliant 6500 came through the door one day, complete with four Pentium Pro 200MHz/1M processors. I had played with Pentium Pros featuring 256K and 512K of L2 cache throughout my illustrious career, but the 1M versions had skated under my radar. As the heatsink was removed from the first 1M chip, revealing a black aluminum heat spreader, I was in love all over again. Naturally, a pair of them came home with me that night.

The 1M version of the Pentium Pro differed from the others because the die was packaged using an organic substrate material rather than a ceramic one. It was also the only version of the lot to get a black aluminum heat spreader. There is a very sensible reason for this cosmetic makeover, but it requires a little history lesson.

Prior to the Pentium Pro, Intel CPUs utilized an L2 cache that was located off-package, typically soldered or snapped onto the motherboard. For instance, the Pentium P5 and P54C chips only had a paltry 16K of on-die L1 cache. When the CPU needed something that was not stored in the L1 cache, it had to reach out across the motherboard and query the L2 cache for the desired instruction. In this scenario, the cache was physically distant from the CPU and ran at a fraction of its core clock speed, which introduced some serious latency into the computation.

Even more important than where the L2 cache was located is why it was put there. The short answer: cache is expensive. In the mid-1990s, Intel was fabricating chips on silicon wafers 150 mm and 200 mm in diameter—not the larger 300-mm wafers commonly used today. Because the process technology of the time wasn't as advanced as it is now, the resulting CPUs were comparable in size to many of the chips in production today despite having much lower transistor counts. Fewer dies were etched onto each wafer, making mistakes even more costly. Adding an integrated L2 cache would have dramatically increased the overall die size of each chip, increased the likelihood of defects within it, and reduced the number of chips that could fit onto an already cramped wafer. That wasn't exactly the best-tasting recipe for success.

Back then, fabricating cache modules separately was simply a function of operational prudence. By detaching the cache from the core, you wouldn't have to waste a perfectly good core because of an error in its cache, or vice-versa. Absolute performance took a back-seat to yield considerations until production lines with smaller lithographic processes and larger wafers came online.

The Pentium Pro was the first Intel processor to put the L2 cache in the same package as the CPU die. It is important to note that the cache and the CPU were still physically separate dies, but they were both mounted inside the same package. The close proximity of the CPU to its L2 cache eliminated the long trip across the motherboard; a dedicated back-side bus let Intel run the cache at the same clock speed as the host CPU, providing a significant performance boost for cached data.

The 256K and 512K models consisted of two chips, one CPU core and one cache module, which could fit comfortably under the original gold heat spreader. In order to shoehorn 1M of cache, Intel had to add a second cache die, bringing the total up to three separate dies on a single package. The resulting row of silicon exceeded the boundaries of the gold heat spreader, necessitating a new design. Thus, the black aluminum heat spreader was born.

/End tangent.

I spent a lot of time with my pair of Pentium Pro 200/1M processors. The Proliant 850R required a little initial tweaking to work with the 1M chips, but once up and running, it stayed in service for many years. This was the server I used to experiment with Apache, MySQL, and PHP for the first time. Before my ISP wised up and blocked port 80, I used the machine to host several small websites for myself and some friends. The rig could easily handle more requests than my upstream bandwidth could carry at the time. Many memorable nights were spent writing code and benchmarking the system against newer machines to see how well the old girl stacked up. I had great fun with it and felt like I was accomplishing something important in the process, which is why the Pentium Pro 200/1M remains my favorite processor. Happy birthday!

Dare I ask what your favorite CPU is and why?

101 comments — Last by jackbomb at 1:26 PM on 09/04/11

Un-bustin' (monitor) caps with a soldering gat
— 11:19 PM on July 28, 2011

There comes a point in every computer guy's day when the phone rings, and a distraught acquaintance begins reciting error messages or describing some symptom of failure plaguing their Solitaire machine. Not too long ago, these calls were a welcome distraction. Perhaps they provided some validation of the countless hours spent peering into cathode ray tubes and liquid crystal matrices. Perhaps they were merely a common thread, facilitating rare interaction with other human beings. Whatever the reason, these tech support calls used to be a source of entertainment and pride.

Fast-forward a few years, and I've begun washing my hands of these support requests with canned responses like, "Google it," "take it to a 'Genius,'" or "that sucks, man." Somewhere along the way, dialog boxes proclaiming missing DLL files, blue screens of death, and endless pop-up ads selling virus scans just got a bit boring. Once in a great while, however, these phone calls involve something a little more obscure: something hardware-related warranting a road trip to acquire new soldering gear. I was fortunate enough to receive one of these fun calls about a Samsung 216BW LCD monitor the other day. The problem? "It don't work."

After coaxing slightly-more-precise information out of the owner, some quality time on Google revealed a recurring theme of leaky and bursting capacitors on that particular model—a relatively easy fix. Cue the A-Team theme music. Soldering equipment, screwdrivers, safety glasses, Bawls, a multimeter, and replacement capacitors were all tossed on the work bench. The A-Team theme faded into Gonna Make You Sweat at a neighbor-enraging volume, the soldering iron was plugged in and heating up, and I was good to go.

Often, the most time-consuming part of such a project is disassembling and reassembling the device without breaking or losing parts. While a monitor isn't as intricate or as full of tiny screws as, say, a laptop, it's still worth jotting down on some scratch paper where each screw came from. This practice is even more important when the screws are of different sizes and threading. During disassembly, I like to use a digital camera or cell phone to snap some quick pictures before removing each component. This creates a digital breadcrumb trail that can be retraced when it comes time to put Humpty Dumpty back together again.

Once the monitor's guts have been torn asunder, the misbehaving capacitors can be dealt with appropriately. Spotting a dead or dying electrolytic capacitor is usually pretty easy: they tend to bulge at the top, leak, or in extreme cases, explode. Modern electronic devices are starting to incorporate solid capacitors into their designs; those have a much longer lifespan and shouldn't leak or bulge. Catastrophic explosion and fire are still in the cards, though, which makes my inner pyro quite happy.

When messing with capacitors—especially the larger ones found in CRT monitors—you need to be extremely careful to ensure that they have been fully discharged. Even if the device has been unplugged for days, these tiny tin cans might still pack enough punch to make the day pretty exciting after first contact with your tender digits. A quick check with a multimeter can determine whether a capacitor is hiding some leftover voltage behind its back. Should there be some contraband juice stashed away, capacitors can be discharged either by shorting the leads with something conductive like a screwdriver (not recommended) or by attaching insulated wires to a bleeder resistor or a standard 60-100W light bulb until the voltage drops to a safe level. Non-conductive gloves would be a handy companion during this process.

Some claim that removing the power cord and pushing the power button several times will also discharge the capacitors. This is not always true. If it makes you feel better about things, then keep on pushing that button by all means, but check the caps with a multimeter before going hands-on. These days, many electronic devices that use capacitors also wire up a small bleeder resistor in parallel. This configuration is designed to gradually drain residual stored energy over time, but unless you know with certainty that the capacitors you'll be handling have this feature, whip out the multimeter just to be safe.

Once the sparks have flown, all capacitors have been drained, and the pets have come out of hiding, it's safe to remove the dead weight. The quick and dirty method is to simply yank the bad caps off with pliers, clip any remaining wire, and punch out the solder points with a sharp solder pick. If you're in a more professional mood, lay some desoldering wick on top of each joint and run a hot soldering iron over it. The molten solder should be absorbed into the wick, allowing the capacitor to fall out cleanly once both leads have been freed from their tin restraints.

Before hastily slapping the new caps in place, take a second to ensure proper positive/negative orientation. The long lead is generally positive, and a quick glance at some neighboring capacitors will reveal any visual cues on the PCB that indicate which lead goes through which hole. Generally, the PCB will have a silk-screened circle marking indicating where the capacitor should sit. Half of that circle will be filled in or hashed, while the other half will be blank. The filled-in portion typically indicates the location of the negative lead hole.

Thread the legs through the solder-point holes and flare the ends out to hold the capacitor in place while solder is being applied. If some rogue solder is preventing a lead from poking through, carefully punch out the hole using a sharp pick or needle.

Just like playing a musical instrument or quickly solving a Rubik's Cube, mastering the art of soldering requires practice, practice, practice. I don't consider myself an old soldering pro by any means, but enough electronic repair opportunities have popped up over the years that I've picked up a few handy tips along the way. The trick to a solid solder joint is preheating the receiving area a bit and applying just the right amount of solder in one smooth motion. Apply too little, and it might not stay put. Too much makes the joint brittle and prone to cracking. Overheating and remelting solder further weakens it, so getting it right on the first dab is critical. If you're not comfortable wielding a soldering gat yet, bust off a couple rounds on some scrap wire or aluminum foil before dropping molten metal on your gear.

Once you're ready to seal the deal, use the heat-and-dab technique on both capacitor leads. There may be other solder joints in the vicinity, and you need to be careful not to accidentally cross-connect them in the process. Should something go awry, this is where the desoldering wick steps in to save the day. Simply heat up the misplaced solder, apply the wick, and soak up your mistake.

With the solder applied and the capacitors held firmly in place, snip off the excess lead wires as close to the solder joints as possible, then reassemble. Tada! Good as new... hopefully.

When money is tight and the only thing holding back your hardware is a bum capacitor, a DIY replacement can be an extremely economical way to go. Over the years, I've successfully saved several motherboards, video cards, and monitors from the scrap pile with simple cap replacements. The total cost of this repair, including new capacitors, a new soldering iron, and desoldering wick, was about $15. A new monitor could easily run 10 times that.

Some replacement parts can be sourced from local hobbyist electronics stores like Radio Shack, but more often than not, they won't have the proper capacitor in stock. My personal favorite destination for capacitor shopping is Digi-Key. They stock just about every conceivable electronic component under the sun, and their prices are reasonable. You'll want to break out a ruler, since capacitors come in a wide range of heights and diameters. Also make note of the voltage and capacitance (µF) ratings printed on the capacitor(s) being replaced.

I'm sure this procedure is old-hat to many readers out there, but for those still earning their spurs, resurrecting a piece of kit like this is a sure-fire way to up your geek-cred. Just mind the obligatory warnings about eye protection, hot things burning other things, and electrical safety.

58 comments — Last by indeego at 2:16 PM on 11/04/11

Ubuntu ushers me out of the Windows XP era
— 11:02 PM on July 7, 2011

Well, it finally happened. Windows XP is no longer the primary OS on any of my day-to-day machines. The last holdout was my laptop, a gray-haired but solid HP nc8230 rocking a single-core 1.86GHz Pentium M, 2GB of RAM, and a Mobility Radeon X600 graphics chip. The low-brow specs ruled out my modern OS of choice, Windows 7 Professional. To be perfectly honest, I wasn’t thrilled about the prospect of dropping $130 for another license. A few months back, I moved my home file and web development server from a 32-bit Windows XP environment to 64-bit Ubuntu 10.10. Still pumped about the success of that project, I decided to give Ubuntu 11.04 "Natty Narwhal" a chance to win my heart as the sole proprietor of the HP's hard drive.

First things first: I need to clear the air. I would classify myself as an advanced Linux n00b. I'm fairly comfortable tossing around terminal commands and editing the odd config file (in gedit), but my neck beard has not yet matured to the point where terminal text editors like vi, vim, or Emacs seem like a good idea. My laptop is used for standard productivity tasks, Internet surfing, and some light scripting with PHP and Python. Simplicity, versatility, and usability are valued over geek-cred, compile-from-source, time-consuming complexity. This post contains some personal observations I've made after stepping outside my Windows comfort zone, but it shouldn't be viewed as a full-on Linux distro review. Consider yourself warned.

I've been casually following the Ubuntu lineage since about the time 5.10 "Breezy Badger" dropped in 2005. It wasn't until the 10.x releases came along that I felt Ubuntu was finally refined and usable enough to replace Windows on one of my primary PCs. With previous releases, I'd lose hours fiddling with configuration files, fighting graphics drivers, and attempting to install software that wasn't in a default repository. The OS would inevitably blow up and refuse to boot. Before long, Windows would wash back over the hard drive platters, and life would go on. In an ironic twist, Windows XP decided to irreparably blow up this time, giving me the perfect chance to see if Linux could hack the full-time gig.

Installing Natty Narwhal was a cakewalk. Ubuntu has put a lot of work into its installer, which is attractive and about as simple as they come. This has always been one of the easier distributions to set up and get running, and the extra spit-shine shows that the Ubuntu gang is making a concerted effort to woo a more mainstream audience.

After the installer finished doing its thing, there was little more to be done. Since my laptop's hardware is old, bordering on retirement-home age, all the drivers were locked and loaded except for the proprietary fglrx AMD graphics driver. If I had to pick one Linux nemesis, it would be graphics drivers—without question. I've lost more battles trying to get proprietary drivers from Nvidia and AMD working properly than I care to admit. To this day, the thought of modifying an xorg.conf file makes me want to curl up in the fetal position under a desk. My record of failure remained unscathed as I installed the fglrx drivers through the Synaptic Package Manager and promptly lost all 3D acceleration and Compiz effects. Some cursory Googling of the issue seemed to indicate that Mobility Radeon X600 owners were a subhuman species, unworthy of functioning proprietary drivers, so I uninstalled them and went about my business using the default X.Org AMD drivers (which seem to work quite well with this hardware).

Once the drivers were sorted, customizing the rest of my Linux experience was a relatively pain-free affair. My first order of business was to ditch the new Unity theme that is now activated by default in Ubuntu. This theme would be great if I were on a netbook or some other device that views Internet communication as its sole purpose in life. However, for a system oriented toward general productivity (and with a display resolution greater than 1024x600), a dumbed-down interface laden with ginormous icons makes me see red. I promptly restored the standard Ubuntu-skinned Gnome desktop by logging out and selecting "Ubuntu Classic" from the drop-down options at the bottom of the screen. The process was simple enough, but Ubuntu should really ask users for their preference during the installation process rather than dumping them into giant-icon land by default.

The standard Gnome interface is a beautiful thing when pimped out with Ubuntu's custom skin and wallpaper. The UI has more of an OS X vibe than a Windows feel, but it smartly combines elements from both in such a way that users from either side of the aisle should feel comfortable. As a bonus, if something like the positioning of the window close/maximize/minimize button array bothers you, it can easily be changed in the Appearance control panel. If you find yourself longing for an OS X-style dock, any number of free options are available for download in the Software Center. My personal favorite dock app is aptly called Docky. Even for hardened Windows users, Docky provides an unobtrusive, stylish, and useful way to organize and launch one's favorite locations and applications.

While we're on the subject of software, one thing I've always enjoyed about Ubuntu (and other Linux distros) is the ability to use the Software Center and Synaptic Package Manager to search quickly for applications and install them with the click of a button. These services behave much like Steam or the jillion other cookie-cutter app stores coming out these days, except the software listed within is generally free. Popular open-source applications like FileZilla, Firefox, Chromium, Wireshark, and GIMP can be installed easily without having to touch the command line. Of course, "sudo apt-get install" still works if you want to fire up the terminal and impress your friends.

The major side effect of giving up Windows is the unfortunate forfeiture of native access to most games and popular software like Microsoft Office (particularly OneNote) and Adobe's Creative Suite. Most Windows apps can be run inside a virtual machine or a compatibility layer like Wine, but losing native support is detrimental to Linux adoption overall. As much as I respect the incredible effort put into the GIMP project, it's simply not on the same level as Photoshop. The situation is much the same for office suites, as Microsoft's Office 2010 maintains a sizable lead over the free alternatives in terms of both looks and functionality. Fortunately, online offerings like Google Docs and Microsoft Office 365 are beginning to bridge the platform divide. I'm still anxiously awaiting a decent open-source alternative to OneNote, though. An application called Evernote has gained popularity on Windows, Mac OS X, iOS, and Android, but there is no love for the Linux crowd yet. Google Notebook works in a pinch, but it requires an Internet connection and doesn't come close to OneNote's level of functionality.

When it comes to basic personal communication, Ubuntu has done an admirable job of integrating chat and e-mail features into the OS. The chat client can be configured to work with common services like Google Talk, AIM, ICQ, MSN, and Facebook Chat. Contacts are unified into a single list, and only one notification icon appears at the top of the screen. This is essentially the same functionality that multi-protocol instant messaging clients like Pidgin provide, except that it comes bundled with the OS. The only chat service I use regularly that isn't supported is Skype. A Linux version of the Skype app is available through the Ubuntu Software Center, but it looks like an afterthought compared to the Windows and Mac flavors.

Although I'm still in the early stages of living with Linux, barring a few minor teething issues, things have gone extremely well. Mainstream distributions like Ubuntu appear to be just about ready for prime time, provided the user isn't dependent on proprietary applications or games that only run on other systems. Native Linux applications and web-based software can be somewhat lacking in looks and features, but it's astonishing what $0 will get you these days. That said, I'm not ready to give up Windows totally just yet. I simply have too much invested in games and other software that won't run on Linux without jumping through hoops, and truth be told, I really like Windows 7. Linux is not for everyone, but it's hard to argue with the value and quality modern distros offer. In fact, I've decided to let Ubuntu take root and make a home for itself on my laptop. /end corny Linux puns.

67 comments — Last by dashbarron at 6:12 PM on 07/19/11

Hacking the planet
— 1:26 PM on May 20, 2011

I've got a confession to make. Books and I aren't as close as we used to be. Online articles catering to a slightly more attention-deficient audience have been the staple of my reading habits for an unacceptably long time. Even when those articles are presented in nice, bite-size portions, I am often guilty of skipping to the conclusions page. As my undergraduate college transcript will attest, books that don't pique my interest after the first chapter get tossed aside or skimmed at best. In spite of that, I'm here today to gush about a book by Steven Levy that grabbed me with the first paragraph and didn't let go until the afterword: Hackers: Heroes of the Computer Revolution.

Despite keeping a casual ear to the ground, listening for books and movies based on computer lore, Hackers stayed under my radar until a recent late-night, academically required Amazon shopping spree. While browsing for cheap books to push me over the $25 total required to get free shipping, Hackers popped up in a recommendation panel, and my mouse cursor was drawn to it like a neodymium magnet to a refrigerator door. A few clicks later, the book began its journey to my mail box.

Despite sharing a similar title, Hackers does not chronicle the exploits of such pseudonym-laden teenagers as Acid Burn, Cereal Killer, and Crash Override. Instead, Levy introduces us to the progenitors of personal computing as we know it today, starring characters like Ricky Greenblatt, Ed Roberts, Bill Gates, and Steve Wozniak. The book ties together dozens of stories about the hardware and software hackers who helped build and shape the modern computer industry.

As I am a byproduct of 1983, this book serves in a Cliff's Notes capacity, filling in the blanks surrounding various technological advancements and computing heroes from the late 1950s through 1982. If that seems like an odd place to stop recounting history, it should be noted that the first edition of the book was published 26 years ago, in 1985. The 25th-anniversary edition contains afterwords for both the 10-year and 25-year anniversaries. It follows up on several of the original hackers and briefly highlights more contemporary visionaries like Mark Zuckerberg, who has only recently entered the pantheon of tech superstars.

The stories begin in 1958 at a quaint little Cambridge school called the Massachusetts Institute of Technology (you might have heard of it before), and they follow several members of the Tech Model Railroad Club (TMRC). This is the scenario that hooked me: why on earth would Levy be talking about model trains? This is supposed to be a computer book! Without giving too much away, Levy eloquently describes how the underpinnings of the MIT model train set consisted of various components, wired in meticulous ways, that approximated the logic functions used in computing. This experience, in turn, drove certain members of the club to venture into forbidden rooms containing behemoth machines like the IBM 704, Lincoln Labs' TX-0, and the DEC PDP-1. Many of the exploits and accomplishments of this first generation of hackers are brilliantly captured throughout the first part of the book.

Beyond Cambridge, Levy takes us to the West Coast, where we tag along with a group of hardware hackers sharing a grandiose vision of computers for the masses. The focal point is a relatively small cluster of enthusiasts who formed the Homebrew Computer Club, a place where people could gather to share their hardware and software hacks, generally performed on an Altair 8800. This is the group that spawned "The Woz" and his original Apple computer. We also meet characters like John Draper, a.k.a. "Captain Crunch," who got his dubious hacking start by building blue boxes—devices used to place free long-distance calls around the world. If you've ever seen the TNT movie Pirates of Silicon Valley, a lot of the material covered here will seem familiar.

Building on the successes of the hardware hackers of the '70s, a third generation of hackers began to emerge. This time, their attention was directed toward developing the games and software that would make the hardware hackers' creations appealing to the layman. Companies like Sierra On-Line and Brøderbund took center stage, but the overarching theme of this generation was the almighty dollar. Levy describes how some began to realize the monetary value of their innovations, and how capitalism quickly infected the formerly altruistic hacker ethos.

The magic of this book lies in Levy's ability to take otherwise unsexy, awkward protagonists and make you believe that they possess almost superhero-like powers. You can imagine them exercising absolute control over every bit and transistor at their disposal as you turn the pages. I find this particularly impressive; glorifying marathon sessions of Chinese take-out, key punching, hygiene forbearance, and beard-scratching is no easy task.

The tone set throughout the book venerates the open atmosphere and lack of bureaucracy enjoyed by the early hackers. At times, it can seem like a 477-page advertisement for the open source movement, championing the virtues of quality work and sharing over dollars and cents. Regardless of your stance on the topics of patents and licensing agreements, this book is worth a look, if only to help one understand why many people today choose to donate countless personal hours providing us with open source (and hackable) options. This book attempts to elevate the word "hacker" above the negative connotation it too often carries, giving the term back to those who see added potential in something and go hands-on to unlock it.

As much as I enjoyed the read, I am unsure how well the material will jell with audiences much younger than myself. I've been around just long enough that my first computing experiences took place on a combination of Apple ][s and IBM PCs running DOS. Much of the pre-Macintosh era discussion in the book brings back fond memories of being prompted to "Insert Disk 2" while playing Oregon Trail, and of annoying family members with the "beep" command on our IBM PC-AT at home. Those who grew up in a world without command prompts and can't discern a PEEK from a POKE may find it harder to connect as deeply with the book as I did. Ultimately, however, Levy presents his stories in a way that any nostalgic computer geek should be able to appreciate.

In the technology industry, we are sometimes fixated on the future—searching for the next big thing or drooling over some next-generation gizmo. Occasionally, we need to take a deep breath and look back at our roots to see just how far we've come. That is what this book is to me: a glimpse into the past, an important perspective that shows where we've been, what we've achieved, and what remains to be conquered with our digital tools. It makes me truly appreciative of what we have today and of those who created the tools, which in turn created more tools, which we use today to craft even more tools for the future.

If you've already discovered this book (you've had 26 years for goodness' sake), I'd love to hear about your favorite parts or experiences using some of the technology discussed along the way.

13 comments — Last by jpostel at 10:48 AM on 06/03/11