Live blog: IDF 2011 Justin Rattner keynote
Time for an update on Intel R&D
Time for one last live blog from the Intel Developer Forum in San Francisco. Today Justin Rattner gives an update on the future: Intel's R&D efforts.
We're starting with a cheesy video, in which the military has discovered a 48-core chip and is deeply concerned.
Ladies and gentlemen, please welcome Justin Rattner.
And he's wearing a beret! Nice.
He offers us his Mooly impersonation. Which is.. short.
Five years ago today, I stood on this stage to give the opening keynote. It's usually the CEO thing, and I'm not vying for the CEO thing, so have no fear. But I was onstage to introduce the Core architecture. At that time, we talked about slowing down some cores to speed up others. We've come a very long way in five years.
We now have heterogeneous processors, with GPUs onboard. Soon, Intel will introduce Knights Corner, Intel's first many-core general-purpose processor.
Programming the cores is easy. Familiar memory model, familiar instruction set.
We also launched our terascale effort, Intel's many-core research program. We've built a few experimental processors in Intel Labs. The 80-core, the 48-core single-chip cloud computer that tests out what a future server processor might look like.
And we've been busy creating better tools for programming many-core architectures.
We haven't limited our scalability test for many-core to HPC apps. We're testing lots of different types. Across this very large range of applications, we're seeing excellent scalability. Don't think there's anything on this chart less than 30X speedup. Given us a lot of confidence people are going to be able to put this architecture to work and take performance to higher levels.
Andrzej Nowak, a particle physicist from CERN openlab, is onstage to talk with us. He works on optimizing performance for many-core.
Let's talk about the large hadron collider. It operates at a temp cooler than outer space, at 1.8 Kelvin. And... we're going to see a video about it.
Currently yearly data production from LHC is over 15 petabytes. Real challenge is processing the data later. We've built a grid with 250K Intel processing cores. Is spread across the world.
Processing takes place in four major domains. We simulate the physics and see if the behavior in the collider matches with what we know.
We can use the same toolset we use for Xeon with the MIC, which is nice. Here's a look at some code that Intel helped us visualize.
We use a single MIC core here and can visualize the program running. Now, we've engaged all 32 cores of the MIC on this other machine. What would take minutes on a single core takes seconds on the MIC.
We're looking forward to further versions of the MIC, and we'll take as many cores as we can get.
Rattner: Make me one promise. Don't make any little black holes that suck us all in.
Openlab dude: Ok, if you promise me more cores.
And the Openlab guy is finished.
Rattner: Do you have to be a ninja programmer to write many-core applications?
Loud gong sound echoes through the hall. To no laughter whatsoever.
Don't worry. We're not going to bring ninjas back onstage.
(Phew. Can we banish additional ninja jokes, too? Parallelism isn't always good.)
Billy is here to talk about improving the access of large-scale content in the cloud.
Billy says what folks have done with legacy databases for the cloud is just moving the entire database into RAM.
Best transaction rates today are about 560K transactions/second. With MIC, we can go over 800K queries/sec with lower latency.
"I had 10 days in May 1995 to make it." Has grown into a mature, widely used language.
She says using this should be "quite easy" because it's just JavaScript extended to add parallelism in an easy way. Is available to developers on github.com.
Brendan says he's going to promote this at standards bodies.
Moving on, we're asking whether an LTE base station can be built out of a multi-core PC. Had an idea, along with our friends at China Mobile, that it might be possible to turn a standard PC into a base station. Entered into an agreement with them two years ago in order to do this. Architecture is interesting. Key idea is that the cell tower is just the RF front end. Radio signals are digitized, moved over fiber network to a data center. It's kind of like base stations in the cloud.
Dude from China Mobile is onstage to demo. He's armed with a quad-core Sandy Bridge desktop and a pretty thick accent. Says they're using AVX instructions to do signal processing. With lots of optimization, can handle real-time requirements. And the workload isn't even using all of the computing power.
Rattner: Tell me how you dealt with the real-time issues.
Software is real-time patched Linux. In about 3ms, is able to respond. Can stream video over it. Next year, will begin field trials with China Mobile.
Rattner: We're not just looking at base stations, but high-volume network equipment and switches using standard components from computers.
Now, Dave is going to tell us how we can use the power of multi-core for security.
Folks have perked up, sitting on the edge of their seats.
Dave says "I love this demo, because you can actually see the cryptography." There are.. pictures of people onscreen. Not sure I see the crypto.
Oh, some pictures look like static.
They use a webcam as a biometric security gate to determine which pictures the user can access. Changes depending on who's on camera, using facial recognition.
Dave's finished, and Justin reminds us you don't need to be ninja programmer for MIC programming.
What lies beyond multi-core computing? Extreme scale computing.
Our 10-year goal is to achieve a 300X improvement in energy efficiency for computing. Equal to 20 picojoules per FLOP at the system level.
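As a back-of-envelope sanity check on that target: the exascale machine size below is my own assumption (Intel only stated the 20 pJ figure), but it shows why that efficiency number matters.

```python
# Rough check on Intel's extreme-scale efficiency goal.
# The exaFLOPS machine size is an illustrative assumption, not from the talk.
EXAFLOPS = 1e18        # operations per second for a hypothetical exascale system
PJ_PER_FLOP = 20e-12   # joules per operation (Intel's stated 10-year goal)

power_watts = EXAFLOPS * PJ_PER_FLOP
print(power_watts / 1e6)   # system power in megawatts
```

At 20 pJ per operation, an exaFLOPS machine would draw about 20 MW, which is the kind of budget that makes exascale plausible.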
Extreme scale guru whose name I missed is here to talk about... extreme scale computing.
Today, we operate a transistor at several times its threshold voltage. One thing we can do is reduce the supply voltage and bring it closer to threshold.
Claremont: a Pentium-class processor running near threshold voltage. This is the one from the solar-powered system demo on Tuesday. We're operating within a couple of hundred millivolts of threshold voltage.
This is a 5X improvement in power efficiency, but could have gotten ~10X with a newer core.
It's so old, we went on eBay looking for a Pentium motherboard for it.
How do we turn this into a higher performance system? Scales to over 10X the frequency when running at nominal supply voltage.
It's running Quake! Slowly. Heh.
So could see future ultra-low-power devices with wide dynamic operating range.
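The physics behind that ~5X claim can be sketched with the standard CMOS relation that dynamic switching energy scales with the square of supply voltage. The capacitance and voltage values here are illustrative assumptions, not Intel's numbers.

```python
# Why near-threshold operation helps: the energy to charge and discharge
# a gate's capacitance scales as C * V^2. Values below are made up for
# illustration; only the quadratic relationship is the point.
def switching_energy(c_farads, v_supply):
    """Dynamic energy per switching event, E = C * V^2."""
    return c_farads * v_supply ** 2

C = 1e-15                                  # 1 fF of switched capacitance (assumed)
nominal = switching_energy(C, 1.0)         # ~1.0 V nominal supply (assumed)
near_vt = switching_energy(C, 0.45)        # a couple hundred mV above threshold

print(nominal / near_vt)                   # energy saving per operation
```

With these assumed voltages the ratio comes out near 5X, in line with the Claremont result above; the tradeoff is the lower clock frequency, which is why the chip also scales back up to nominal voltage for performance.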
New prototype: hybrid DRAM stack. About 4-8 can be stacked. These are 4-high. Stacked mem is very high efficiency. A terabit per second demo, supposedly very energy efficient, but I don't see any info on voltage or power draw. Hrm.
And we're finished with the cool power guru.
Rattner: What we've been talking about today is the future. We have something called the Tomorrow Project. Brian David Johnson, our futurist, is gonna talk about it... on video.
Voiceover with lots of graphics that look like Tron. Although no light cycles. :(
We're talking to dignitaries, thinkers, sci-fi authors. Want to invite you all to join the conversation by visiting the website for it.
"If you can dream it, we can invent it together. Thank you, and see you next year."
Annnnnd, that's that. We'll take one of those near-threshold-voltage computers for review, please. Thanks.

Live blog: IDF 2011 Mooly Eden keynote
Join us as we offer a live account of Mooly Eden's day-opening keynote from the Intel Developer Forum 2011 in San Francisco. Reload to refresh, in keeping with our incredibly high-tech real-time operating procedures.
Johan is back, sounding for all the world like Arnold Schwarzenegger, opening the festivities.
And here's Mooly!
...after a video, I guess. There are babies and stuff. Soft music. So hopeful. Geoff wipes a tear from the corner of his eye.
"PCs will continue to inspire all of us to make something wonderful."
LOUD music, and the beret is onstage!
Mooly's talking about how amazingly huge the PC market is.
"Emerging markets are on fire." China has surpassed US as a consumer of PCs, and Brazil is #3.
"Let me remind you, the personal computer has been the most adaptable device." "Its form and function is constantly evolving."
1995, Pentium MMX, was transition from enterprise to a consumer device.
Eight years later was another transformation, with the Centrino. Mobility. Interestingly enough, eight years later, our customers still want us to excel on this mobility vector.
And eight years later, we are transforming things with the Ultrabook. It will be a consumption device, but also a creation device.
Today, people use their PCs in several ways. There is debate: which is more important? The CPU? The GPU? Media? I think the best thing is to map applications and usage. In some cases, it's CPU. In some cases, it's GPU. Actually, your experience is not defined by the best component in the mix, but the worst one. The magic is to deliver a balanced system. That's what we tried to do with Sandy Bridge.
Annnnnd... demo time. Content creation. Picasa 3 with Task Manager showing eight threads. Going to compare, I think, an Ultrabook to a three-year-old Core 2 Duo system. Combining three images into a single HDR one. Showing the before and after images. Wow, HDR is so.... HDR-ish!
Now we're demoing a CyberLink video editing tool. And now the Ray-Ban website with a virtual tool that lets you see different glasses types on a representation of your face. Uncanny valley, meet high style.
Mooly: "All right, Ivy Bridge." Ivy Bridge has 1.48 billion transistors. Remember the number. "Those of you who are trying to take pictures of this beautiful die, I played with this. It's not the real one." Hmm.. looks like a quad core. But will there be a quad Ivy Bridge? Intel is kinda being cagey here.
Mooly says Ivy Bridge is pin-compatible with Sandy.
Now he's talking about interrupt handling. 2.5K interrupts per second from a Gigabit NIC. 3K from USB. With Ivy, rather than waking a sleeping core to handle interrupts, the interrupts can be routed to the active core in order to save power, extend battery life.
DX11 is going to be available on all our PCs. We improved geometry throughput, shader array, sampling throughput. Those of you who have been surprised by Sandy Bridge graphics will be delighted by Ivy's.
Ivy Bridge demo time!
Display driver has stopped responding and recovered. Doh!!
Swapped to another demo. And now we're running HAWX 2 with tessellation on Ivy. Looks nice and fairly smooth.
We've been focusing on user experience. Actually talking with anthropologists, psychologists, which is weird. Asking: "What do people want out of their computing?"
Bringing David, a marketing manager, onstage to talk about this one.
David says we want to satisfy our left-brain side and our right-brain side. Left brain: We want to be productive and get things done. Learn and advance ourselves. Be in control, safe, and secure. Right brain: We want to create, to connect and share, and to lose ourselves in seamless, immersive experiences. Is there one device that can satisfy all of those things?
Hmm, perhaps something based on an Intel chip?!
David is telling us how an Ultrabook might meet each of his criteria. And he's finished.
Mooly: This brings us to the Ultrabook.
The Ultrabook is the device that you hold in your hand, the device that you like to show, the device that we put so much effort into what David was talking about.
Need a combination of responsiveness, security, good costs, style, form factor, battery life.
One of the things that we heard with the CULV is that it was still not enough performance. Ultrabook performance is better than that.
To ensure responsiveness, we extended Turbo. With 17W and 35W parts, base clock is different, but peak frequency is nearly the same. So responsiveness is pretty much identical.
Mooly is joined by a young woman who is doing a demo of hibernation, talking about how hibernate is too slow. Acer Ultrabook comes out of hibernate in ~5 seconds. (They counted to four, but very slowly.)
Toshiba laptop has been in sleep mode, but it woke up periodically to get updates from the 'net. Data is fresh when the user calls the laptop out of sleep.
Now we're talking security. To really discuss it, let me invite onstage one of the cyber-warriors, Todd Gebhart, Co-President of McAfee. Message: You should worry. And give us money.
McAfee, Intel working on an anti-theft technology. User can remote-wipe or lock a stolen laptop. Will be shipping in 2H of next year. And Todd's out.
And there's a ninja onstage. Him: "No, I'm a hacker." "This is very comfortable. Maybe not as much as a muscle T and Kangol hat, but very comfortable." Ooh, Mooly pwned by the ninja.
Serious dude on the other side of the stage says the ninja/hacker is failing to take over his secure data transfer.
Mooly says you can give your PC a suicide pill, and the PC commits suicide. Will be a deterrent to theft, since the PC won't be useful.
Kinda neat: an onscreen PIN pad won't show up when the ninja remote monitors the display via a hacked display driver. Server's display of those PIN pads is somehow secured.
Ninja's out, and Mooly's talking about thinness. Need a smaller, thinner hard drive. Different batteries. Was a huge effort. We had conferences in Taiwan, China, invested more than $300M in order to accelerate economy of scale for Ultrabooks.
Rolling a video about these conferences.
Wow, a room full of people at an Intel presentation is sitting here watching a video of a room full of people at an Intel presentation. How deep does it go?
But.... there was a lot of discussion lately about Windows 8. Intel is working with Microsoft. Welcome Bret Carpenter from Microsoft, who flew in from the BUILD conference to give us a demo.
Win8 tablet running on a 32-nm Atom SoC.
And moving over to the Ultrabook. Acer Aspire S3. 13mm profile, 13" display. Resumes very quickly.
A picture of Mooly onscreen.. without a beret. Mooly is scandalized!
Showing a Windows Metro UI start screen. "I am able to use a keyboard and mouse with this." Even though it was designed for touch.
Tiles represent all of your content, and you'll notice they're live. You'll notice there's no chrome. We give developers access to every pixel onscreen, so they have control over the look of their application.
Popping over to the Windows desktop mode... Visual Studio Express. Looks like Windows.
Mixed mode, a weather widget in Metro style split screen with traditional desktop mode. Nifty.
Now Mooly's going down a line of demo systems: Toshiba, Lenovo, Asus, Acer.. And they're really, really thin.
A surprise: "All of these Ultrabooks are featuring Ivy Bridge." Pause. "I repeat, all of these Ultrabooks are featuring Ivy Bridge." Applause.
Now Lily here is to talk about screen power savings.
System on left is a traditional LVDS panel, while on right, an eDP panel is refreshing while the CPU is asleep. Image is stored statically when nothing is happening. Savings of 500 mW, or 45-60 minutes of battery life in an Ultrabook.
Slide show: screen stays in self-refresh whenever the image doesn't change. CPU only wakes up when things change. She pulls out the display cable to prove it's working. Without the cable, the display continues to refresh itself and show an image.
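The claimed "500 mW saved equals 45-60 extra minutes" can be cross-checked with simple battery arithmetic. The battery capacity and average system draw below are my own assumptions (neither was stated onstage); the point is that the claim implies a total system draw in the ballpark of 5-6 W.

```python
# Sanity-checking the panel self-refresh claim: 500 mW saved -> 45-60 min.
# Battery capacity and baseline draw are assumed, not Intel's figures.
BATTERY_WH = 42.0     # assumed ultrabook battery capacity, watt-hours
BASE_DRAW_W = 5.5     # assumed average system draw with panel always refreshing
SAVINGS_W = 0.5       # the 500 mW self-refresh saving from the demo

before_hours = BATTERY_WH / BASE_DRAW_W
after_hours = BATTERY_WH / (BASE_DRAW_W - SAVINGS_W)
extra_minutes = (after_hours - before_hours) * 60
print(extra_minutes)  # additional runtime, in minutes
```

With these assumed numbers the gain lands right around 46 minutes, inside the 45-60 minute range quoted in the demo.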
Mark is here to show off Thunderbolt on Windows. Streaming four uncompressed HD videos. Over 700 MB/s. Acer and Asus will be delivering platforms with Thunderbolt technology on them next year.
Haswell time! Will deliver more than 20X reduction in standby power. Mooly holds aloft a Haswell chip. And now here it is in a working system, up, running, and ready. Several windows doing different things... briefly.
So, to summarize: PC market is growing and continuing to grow. Ivy Bridge is a "tick-plus," lots of new functionality. Ultrabooks are nifty. And Haswell will complete the Ultrabook revolution.
Mooly has one more thing! "Roll the video!"
Oh, it's inspirational, talking about how we are more than media consumers, but creators. Music cranked up to 11. The audience is.... deaf.
Mooly: "Ladies and gentlemen, let's go and build wonderful things together!"
And that's it. Thanks for reading!

Live blog: IDF 2011 Paul Otellini keynote
Ok, we're going to attempt a live blog from the opening keynote of the Fall 2011 Intel Developer Forum. Keep reloading this page as we go in order to see the latest updates, which will be at the bottom of the page. Yes, highly sophisticated.
And we're started.....
A Schwarzenegger clone named Johan is giving a slick overview of what's coming this week, before Intel CEO Paul Otellini takes the stage.
Ooh, the voiceover is talking about the history of Moore's Law. Yep, it's IDF.
Ladies and gents, please welcome Paul Otellini!
"My theme today is about fundamental changes." Transformations in computing have released wave after wave of productivity improvements, but I would submit we're at the very early stages in the history of computing. Two years ago, I introduced a shift from the personal computer to personal computing.
Want to start by talking about how we got to where we are today. Faster processors, more capable computing, cloud services have changed our lives. Rattling off stats about the size of YouTube, Twitter, Facebook. Amount of data generated each year exceeds some 900B gigabytes. Creating unprecedented demand for transistors.
Talking about total transistor use worldwide in terms of quintillions. We'll soon move past the sextillion mark.
Moore's Law is not a scientific principle, but an observation of the pace of human innovation. Since it was first made, people have talked about how it was destined to end. But we've moved through multiple barriers and continue working. We already have line-of-sight for our 14nm technology, beginning to tool our factories for it.
Talking about "Intel Architecture" (x86) and its role, pervasiveness in the industry.
Computing has to adapt in the future. Must be engaging, consistent and secure.
Ultrabooks! First ones are now shipping from our partners.
Expect next year to ship Ivy Bridge, and it will accelerate Ultrabook development.
Wanted to go one generation beyond that and talk about Haswell. Next-gen processor's design is already completed. 30% reduction in standby power vs. prior generation, but also architecting a system-level power management framework that has the potential to enable reductions of more than 20 times our current designs. All-day use on a single charge from the power grid, with no compromise on performance.
Demo of a future computer that's teeny, solar-powered, running an animation. Cuts off light to solar cell, and the animation stops.
Otellini: Was just a technology demo, no plans for products, but shows what we can do with our transistor technology.
Now, to talk servers. Another demo: real-time sharing of event data for visualization. Funky, but short. Moving on...
Intel and Cisco business communication device demo, running Android. It's a phone! With a screen! It can browse! It runs apps! Cisco apps! Lots of apps! Thousands of apps! Ah, and the screen pops off and is a small tablet. A "reinvention of the office phone." Hrm.
We have developed a framework for development in the Intel computing "continuum." Craig is gonna show us how that looks.
Craig takes a picture of Paul. Has an Android phone. Intel's pair and share allows him to pair phone to PC. And there is the picture onscreen. Can also do it with an iPhone. He's getting notifications, calls that come up in a window on the screen of the PC via Intel Teleport Extender software.
Craig has just one more thing!
A family wall. A digital bulletin board that shows up on a big screen, can be fed from multiple devices, tablets. Medfield tablet running Android Honeycomb. Also using a Toshiba Ultrabook to update the wall.
Craig's finished, and it's time for a video montage of people talking about the Intel computing continuum. Lenovo, Toshiba reps...
Otellini: that brings me to connected computing and security. Security is important. Every device is vulnerable. Smart phones and tablets are not immune. This led to our "deep partnership with McAfee." And a nice lady from McAfee joins Paul onstage. She has a CSI-style graphical map showing world malware infections. Talking about the difficulty of dealing with rootkits.
Paul wants to know if there's a way to detect unknown rootkits before they "occur." McAfee DeepSafe technology. Uses VT in Intel processors, hardware + software combo to detect rootkits.
Demo of software stopping an unknown rootkit in real time. Which is about as exciting to portray onstage as you might think.
Now, a video about making movies with Intel stuff. Jeff Katzenberg from DreamWorks has nice things to say about Intel products. He's clearly reading a script. "Key enabler of a complete transformation of our business." At DreamWorks, we animate movies. Intel animates the world.
Paul has one more thing!
Happy to say we're making real progress on goal of getting into smart phones. Demo phone shown earlier was a reference design running Android. Want to see Intel phones in market in first half of 2012.
Andy Rubin, Sr. VP at Google, is here to announce a development partnership with Intel for smart phones.
Paul talks, awkward pause, and Andy then says "Oh, yes. That was my cue." Heh.
Andy: Let's talk about the future. We have a tight-knit family of developers. Here to announce continuation of strategic alliance. Going forward, all future versions will be optimized from kernel level all the way up to multimedia, 3D graphics. Very excited to work with Intel. Paul is also eager. Thanks, Andy!
And Andy's gone.
Paul, thank you and I hope you enjoy the rest of IDF.

PC hardware enthusiasm.. miniaturized?
I've been into building PCs for many years now, but before that, I was never a particularly handy or mechanically inclined type of guy. That didn't stop me from developing some proficiency in PC repairs. Heck, I even worked one job in college where I frequently had to open up malfunctioning CRT displays, desolder a failed capacitor, and replace it with a new one. And I never once got shocked to death.
In fact, working with computer hardware has improved my general skill set and willingness to attempt household repairs. One day you're screwing together a PC, and the next thing you know, you've done a full-system repair on a malfunctioning refrigerator defrost system. Who knew?
There's one area of PC hardware repair where I've long feared to tread, though: laptops and other small, integrated, mobile devices. My first real experience there involved opening up a Sony Vaio, pulling up the front edge of the case, and promptly cracking one of those fragile amber ribbon cables in two, resulting in a dead keyboard.
However, my considerable struggles with my Samsung NC20 last year eventually ended in success, and when all was said and done, I'd disassembled and reassembled that thing multiple times. Poking around inside of it and looking at all of the insanely miniaturized components was fascinating and kind of fun, too. My confidence was boosted a bit by that experience, as was my willingness to tinker.
Enter my nine-year-old daughter, who somehow—don't ask me how, because it's baffling—managed to crack and destroy the top screen on her Nintendo DS Lite a while ago. After she'd been unable to use the thing for a while, I finally realized I might as well attempt a repair on it. It couldn't be much more useless than it was. So I ordered up a replacement LCD screen for $13 from Amazon. The screen comes with a tri-wing screwdriver necessary to get the DS Lite open.
Armed with a YouTube video showing how to do the replacement, I carefully took apart the DS Lite, extracting a bunch of little screws, gently prying apart the case and disconnecting a couple of those fragile ribbon connectors. Without, erm, having watched the YouTube video all the way through beforehand. I was surprised when the video ended with, "Now, de-solder the speaker connections from the old screen and solder them onto the new one." I thought tiny screwdrivers would be the extent of it.
I was able to handle the soldering work just fine, it turns out, but the fact that the video ended before providing any instructions at all about how to reassemble the device was daunting. Reassembly is definitely the hardest step, especially because you have to roll up a ribbon connector for the top screen, push it through the round hinge opening, and then thread two wires (for the antenna and microphone) through the middle of the roll. Once that part is done and the rest of the top screen is back together, the ribbon connector has to make it through the corresponding hinge opening for the bottom case and then into the backside of the motherboard, into a tiny connector with a miniature clip on it.
I did find another video showing how to reassemble the DS Lite, but not before I'd made the mistake of using a small pair of needlenose pliers to pull the ribbon cable through the lower hinge opening. When I tried to turn on the DS to test it, the top screen stayed blank and the system promptly shut back down. Further inspection revealed the problem: I'd ruptured one of the traces on the ribbon with the pliers, and there was a black line running up that portion of the trace. Zzzap. The new screen was ruined.
But... I learned a lot about reassembling the DS in the process. I was confident this display's sacrifice would not be in vain.
Another 13 bucks and a week later, I tackled the repair again this past Sunday afternoon. This time, I watched the instructional video much more carefully, was patient with each step of the process, and generally felt the Zen of miniature microelectronics repair. Doing more soldering wasn't fun, and I had to backtrack several steps and make adjustments several times. In the end, though, after a lot of tedium and time, I had the DS booting up and working properly with the new screen installed.
I learned some more lessons in this outing about how to deal with electronics components that are so small, you can barely manipulate them into position with your fingertips. Don't use needlenose pliers on a ribbon connector, for example. Also, look carefully, because these devices are designed to be assembled in a certain way. There's a slit on the DS Lite next to the lower case's hinge opening, expressly intended for the ribbon connector. And make sure you maintain the careful, patient mindset you'll need to make it through such a repair.
But mainly I learned that working on a computer the size of a pack of cigarettes isn't actually, you know, fun. Took forever, made me nervous, and gave me a bit of a headache. Like renewing your driver's license. I'm glad the thing is fixed, but I'm not sure I'd do it again. Big computers are much more fun for tinkering.

A cautionary tale
Boy, Tuesday was a rough day. Walked into my office in the morning expecting to get some testing done, only to discover that my long-tenured file server was producing lots of fan noise. I pulled it open and found what I thought was the culprit: loads of dust and cobwebs accumulated over time. I shut down the system and spent the next 15-20 minutes cleaning it up, creating an awful mess as my compressed air can propelled dust into the atmosphere around me. Man, it was nasty.
Once I was finished, I reapplied thermal paste to the CPU and went to boot up the system, but it wasn't meant to be: the box came on, but wouldn't POST.
It was so unresponsive, even holding in the power button wouldn't cause it to turn off.
In fact, I think the mobo had been on the way out all along, which is why the fan speed control had gone haywire in the first place. Quick CPU and PSU swaps confirmed that the mobo was the likely culprit.
So my day of testing became something else entirely: a day of IT support work, swapping in a new motherboard and recovering all of the data and config files on this system. I tried installing a new mobo with a similar, newer AMD south bridge, hoping the WinXP installation on the system would still boot properly. (The storage picture was complicated by the fact that this box had dual 250GB RAID 1 mirrors in it.) I took extreme care to get all of the individual pins of the front panel and USB headers connected properly from this older case to the new motherboard, so the installation involved quite a bit of time and tedium.
At the end of it, though, WinXP wouldn't boot off of the new south bridge. I had to reinstall the OS on a fresh RAID mirror and then recover the data from the original boot array. Then came the work of restoring as much of the original software config as possible, including the FTP server that accepts nightly backups from TR's web servers.
At about 3:30PM, I had finished most of the hard parts and wanted to make one final tweak to the hardware build: the installation of a fan speed controller on the CPU cooler, which was a little louder than I liked.
Now, I work on PC hardware constantly, mostly on an open test bench, and doing so has caused me to develop some bad habits over time. I didn't even think twice about leaving the system running while installing the fan controller between the mobo and the CPU cooler. I just popped the three-pin connector off of the header while the thing was powered on. But then I realized I'd removed the system fan connector and went to reinstall it.
It was at this point that my bad habit caught up with me, as I missed the alignment by a pin and pushed the connector down over two of the three pins improperly.
Immediately, the mobo buzzed, there was a quick "pop," and an acrid smell filled the air. I had released the magic smoke. On subsequent boots, the mobo would POST, but not a single fan on any header would spin. I'd fried it.
Right about then, I had to leave the office for a prior commitment, and my work day was supposed to be done. Instead, I canceled my plans for later in the evening and came back to the task after dinner, swapping in another new mobo—whatever I could hunt down in the back room, though it was less ideal than the prior one—and reinstalling the OS yet again. At about 10PM, I realized I'd reached the same point in the process that I'd reached at 3:30PM with my first attempt.
My reward at the end of it all: a new system that pretty much plays the same role in the same way as the old system had for years before that, although this one may be a little louder, since the mobo's power draw is higher. I really like building my own PCs, but sometimes, the support of systems on which one relies for business is just, well, overhead. That overhead is increased if you decide to plug in fan headers while the box is running, folks.

The great upgrade: Tales from the ancient past
Our current contest challenges readers to relate their first PC upgrade experiences for a chance to win a copy of Just Cause 2. We already have quite a few good entries, and skimming through them caused me to think back to one of my first PC upgrades—and wouldn't you know, I wrote an article about it and posted it on the web. With performance graphs! The year was 1997. I think. Pentiums were all the rage, 3D acceleration was dawning, and I somehow had good things to say about the awful Quantum Bigfoot hard drive I'd picked up in an online auction. Just like 3dfx, I never quite finished what I'd promised, but it's a fun jog down memory lane, regardless.
MAY 19: THE FIRST STEPS
Now that my computer is alive again, I can tell you about what I've been doing and why I haven't been able to update this page. Having finished my big papers for the term, I took some time and upgraded my computer system. I started with a Gateway 2000 system with a 100MHz Pentium and an STB Powergraph 64 video card. This was a decent setup, with 40MB of RAM and plenty of nice cards, drives, etc., but it was a tad on the slow side. For one thing, around the time I bought my PC, Gateway decided that the advent of EDO memory meant they didn't need to include a level 2 cache in their systems. They have since repented, but my motherboard had no cache and no place to add one. Anyhow, for a number of good but very techno-nerd intensive reasons, I decided to see how much I could improve this system's performance. I snagged an STB Lightspeed 128 graphics card (with 2MB of MDRAM) in an online auction for 38 bucks—no kidding. The thing delivers blisteringly fast video. Then I ordered a new motherboard, following the advice I found at Tom's Hardware Guide. The motherboard is an Abit IT5H, which has a 512K L2 cache, a Pentium-style processor socket and a so-very-nifty "soft menu" BIOS—in other words, it's a jumperless motherboard that can be tweaked to no end via software.
I replaced the video card and the motherboard, using the memory and processor from my previous motherboard to populate the new one. Then I strapped a heat sink with a fan on top of my processor's current heat sink for a cooling double-whammy. Now I've got it all up and running, with my system bus running at an atmospheric 75MHz—faster than a new Pentium II system's bus—and my processor overclocked to 112.5MHz. Preliminary benchmark results are available here:
  Benchmark                                  Before    After    Improvement
  Dhrystone MIPS (integer)                      175      209         19.43%
  Whetstone MFLOPS (floating point)              53       63         18.87%
  Video speed MP/sec                             13       31        138.46%
  Draw filled objects (lower is better)        3.00     1.11         63.00%
  RAM read average MB/sec                       157      234         49.04%
  RAM write average MB/sec                       84       95         13.10%
  RAM copy average MB/sec                        49       70         42.86%
  demo1 320x200 frames/sec                     23.1     33.7         45.89%
  demo2 320x200 frames/sec                     24.6     34.9         41.87%
  demo1 640x480 frames/sec                      9.4     13.8         46.81%
  demo2 640x480 frames/sec                     10.8     15.6         44.44%
  Average performance increase                                       49.48%
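For what it's worth, the improvement column is simple percentage-delta arithmetic. Here's a quick sketch of how those figures fall out—for a lower-is-better metric like the draw-time row, the saving is taken against the original value:

```python
# Percentage improvement for higher-is-better metrics (MIPS, fps, MB/sec).
def improvement(before, after):
    return (after - before) / before * 100

# For lower-is-better metrics (e.g. draw time), the reduction relative
# to the original value.
def reduction(before, after):
    return (before - after) / before * 100

print(f"{improvement(175, 209):.2f}%")  # Dhrystone row: 19.43%
print(f"{reduction(3.00, 1.11):.2f}%")  # Draw filled objects row: 63.00%
```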
The testing was by no means scientific, but the results do give some indication of the upgrade's effectiveness. The long and the short of it is that I spent under $200, including shipping and handling, and now my system runs about 45% faster in real-world conditions. Also, in the future, when I want to go even faster, I can buy a Pentium, Pentium MMX, AMD K6, or some other processor and replace my tired ol' P100. Adding a faster processor could conceivably increase performance by over 100%, and it would be fairly cheap. This motherboard's system bus can run at 83MHz, as well, so the potential performance gain with even a 166MHz chip is pretty formidable.
The moral of the story? Well, for one thing, I'm much more of a tech freak than you probably thought. Beyond that, the important thing to know is that you don't have to spend a zillion bucks on a new PC every few years to keep up with the rest of the world. These things are modular and a lot of the parts can be reused. Finally, after this experience, I'll probably build a new PC before I buy one again. If you want something done right, do it yourself.
MAY 28: ADD DEPTH AND COLOR
I haven't dropped off the face of the planet, honest. I've been busy with Real Life and playing with my new toy, a Diamond Monster 3D card.
As you know from my last update, I've recently upgraded my computer. Fiddling with it has kept me from spending time on the web page here. After some twiddling with memory timing settings, I think I've got a very stable setup. Initially, overclocking my Pentium 100 to 112.5MHz and running the system bus at 75MHz (rather than the usual 60 or 66MHz speeds) caused me some odd cold-boot problems. Programs would crash occasionally for about the first 3 minutes after I turned on the computer—not every time, but often enough to kinda scare me. I seem to have banished those problems, however, without having to clock my processor back to 100MHz or the bus back to 66, by adjusting some BIOS settings. Everything's peachy now, and very fast.
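The overclock arithmetic here is simple: the core clock is just the bus speed times the CPU multiplier. A quick sketch—the 1.5x multiplier is the Pentium 100's stock setting, which is how a 75MHz bus lands at 112.5MHz:

```python
# Core clock = front-side bus speed x CPU multiplier.
# The Pentium 100 runs a 1.5x multiplier on a ~66MHz bus; raising the
# bus to 75MHz while keeping the multiplier yields 112.5MHz.
def core_clock(bus_mhz, multiplier):
    return bus_mhz * multiplier

stock = core_clock(66.6, 1.5)       # ~100MHz
overclocked = core_clock(75, 1.5)   # 112.5MHz
gain = (overclocked / 100 - 1) * 100
print(f"{overclocked}MHz core, {gain:.1f}% over stock")
```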
The last item I want to tell you about is my new Diamond Monster 3D card, the crowning achievement of my little upgrade scheme. Based on the 3Dfx Voodoo chipset, this thing really is a monster. This card is intended solely for 3D acceleration, so it works in conjunction with a standard video card and just takes over when it's asked to generate 3D graphics.
PC video cards with 3D acceleration have become all the rage of late in the tech world, but few of the current cards provide a compelling 3D experience. They're just too slow to deliver the kind of fluid motion one would like. I'm here to tell you the 3Dfx-based cards are a glaring exception to that rule. These pups render around 2 million polygons per second. By contrast, a Sony PlayStation, until recently the best of the home game consoles, reportedly renders only 300,000 polygons per second. But I could quote statistics to you all day long and you wouldn't begin to understand what this thing will do. You simply have to see it to believe it.
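To put those throughput figures in per-frame terms, here's a rough back-of-the-envelope conversion, assuming a 30fps target for smooth motion:

```python
# Rough polygon budget per frame at a given frame rate.
def polys_per_frame(polys_per_sec, fps=30):
    return polys_per_sec / fps

voodoo = polys_per_frame(2_000_000)  # ~66,667 polygons per frame
psx = polys_per_frame(300_000)       # 10,000 polygons per frame
print(f"Voodoo: {voodoo:.0f}/frame vs. PlayStation: {psx:.0f}/frame")
```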
Quake, the biggest and baddest 3D game around, runs at a fluid 30-frames-per-second pace on the Monster 3D, even with my sad old P100 processor feeding it data. That is, Quake runs in fluid motion at high resolutions (up to 640x480) in over 65,000 colors, not just 256 colors like the software-only version of the game. The effect is stunning, almost cinematic, as one glides from room to room in an utterly convincing three-dimensional environment. Any one of the individual frames of animation from this game could easily pass for a scene like the ones I used to render in a 3D ray-tracing program on my Amiga 3000 a few years back. Rendering a single complex scene (i.e., one frame of animation) could take around three hours on that computer. Things change.
3D is the next big step for desktop computers, and it's here now. A number of games, including MechWarrior 2 and Tomb Raider, support this card directly. Pod, an oh-so-hip racing game, makes my PC look like a Daytona USA arcade machine, only better. (I love racing games.) Other games use Microsoft's Direct3D to access the 3Dfx Voodoo hardware, and that works fine, too. Word has it even Bill Gates has been giving Direct3D demos with a Voodoo-equipped system.
You can get a 3Dfx Voodoo-based card with a full 4MB of memory for about $150 now. Check out the excellent Operation 3.D.F.X. web site for all the newest developments and info on where to buy one of these things. If you don't want to buy one, you still owe it to yourself to find someplace where you can take a look at one running some 3D-accelerated applications, just so you can get a sense of what's possible now.
JULY 18: COMPARE AND CONTRAST
It's time for a late-night, insomnia-inspired update. I've been meaning to update the page for a while with comments on a whole load of tech-related things, but I've been too busy doing tech-related stuff to stop and write about it. Now that I can't sleep for thinking about it, I'll try to run through the list—or at least the highlights—of the interesting toys I've been testing.
First up on the list is a report on my ongoing system upgrade. A few weeks ago, I snagged a massive 4.3 gig Quantum hard drive in yet another online auction. My original 1 gig drive was getting terrifyingly crowded, even with compression running. The new Quantum has a killer peak transfer rate (16.6MB/sec) and supports direct-memory-access transfers at full speed. It's also a big drive—as in physical size, like Bill Clinton's gut—and Quantum claims, as a result, the thing will produce some very high sequential transfer rates. In other words, it reads big, one-shot files very quickly. Pulling streaming video off the drive, for instance, should be a breeze. On a more down-to-earth level, Netscape seems to load more quickly.
Overall, I really like the new drive. Two things about my experience with it stand out as pleasant surprises. First, this (enhanced IDE) drive works surprisingly well with my older (also enhanced IDE) Western Digital drive. No matter how I've configured them or what nasty, beta drivers I've installed, these two drives have always talked virtually flawlessly. That's a nice surprise, because I've heard horror stories from folks installing enhanced IDE drives from different manufacturers. The second unexpected little joy has been the fact that this new Quantum drive is almost whisper silent. My original hard drive grunches and t-t-t-t-t-t-talks to me every time it's accessed. Using the Quantum, my system now feels faster just because it doesn't sound like it's putting out so much effort to get something done. Much better.
Adding a new hard drive on my system allowed me to play with the next toy I want to say a bit about, Windows NT. Win NT version 4 now has Win95's clean, decent user interface, and NT is reportedly a much more advanced operating system under the surface. Assuming Microsoft gets NT right, I'd have every reason to switch to this snazzy new operating system. However, my verdict is still out on this one. I've installed both NT and 95 on my computer, and I still boot into Win95 by default. I'd probably be more gung-ho about seriously trying to migrate to a new OS if I hadn't set up something like 10 new system and software configurations in the past month or so. I'm over-teched. For a while, at least, Win95 will remain my main OS.
Part of my tech exhaustion comes from having set up a new motherboard, two new graphics cards, and a new hard drive for my dad's PC over the July 4th weekend. That's right—I gave my dad's computer my patented Total System Upgrade. He's moved from a 90MHz Pentium based on an ancient Intel chipset to a 100MHz Pentium (just a bit overclocked) on an Abit motherboard like mine, a 128-bit graphics card, a Quantum 4.3GB hard drive, and a 3Dfx-based 3D graphics card. This upgrade turned out to be a rather big job for one weekend, all things considered, but his system is much improved.
To give you some idea how much improved it is, I can offer you a subjective comparison. His system is now just about identical to mine, with which I'm quite familiar, of course. In comparison to my home system, I offer my impressions of my brand-spanking new PC at work. This bad-boy Micron is a Pentium MMX machine with all the goodies (512K cache, Intel's latest TX chipset, and 32 megs SDRAM, for you fellow geeks). (Its arrival is another source of my tech exhaustion.) The first thing I did to it—before ever turning it on—was overclock the 166MHz processor to 200MHz. It worked like a charm, and I've never looked back.
The startling thing is not just how little subjective performance difference there is between my 100MHz non-MMX Pentium machine at home and my 200MHz MMX Pentium machine at work. The startling thing is that, for many day-to-day tasks, my 100MHz home machine seems to have the better end of that difference. Why? My guess is that, for one thing, the graphics card I have here at home is quite a bit faster than the one on my PC at work. (My home graphics card is an STB Lightspeed 128 with special MDRAM memory. It's dangerously quick.) Also, Intel's older HX chipset is probably a bit faster than the new TX chipset in certain key ways. Finally, both the 100MHz and 200MHz systems run at a bus speed of 66MHz. There are real limits to how much performance one can squeeze out of a computer by turning up only the main processor's clock speed.
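That last intuition is basically Amdahl's law: if only part of the workload scales with the core clock, the rest (bus, memory, graphics) caps the overall gain. A hypothetical sketch—the 40% bus-bound share below is an illustrative assumption, not a measured figure:

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# scales with the CPU clock, and that part gets s times faster.
def overall_speedup(p, s):
    return 1 / ((1 - p) + p / s)

# Hypothetical split: 60% of the time is CPU-bound, 40% waits on the
# 66MHz bus and memory. Doubling the core clock (100MHz -> 200MHz)
# then yields only ~1.43x overall, not 2x.
print(f"{overall_speedup(0.6, 2):.2f}x")
```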
For a final insult to the shiny (well, face it, it's flat beige), new 200MHz MMX machine, I installed a couple of 3D games on both machines. The work PC has an S3 Virge/DX-based 2D/3D combo card; my home PC has a 3Dfx-based Monster 3D card. One of those games was Pod, a racer I've enjoyed quite a bit on my home PC. The demo version of it that came with my work PC supposedly uses both MMX and the Virge 3D card to enhance speed and visual quality. The version I run at home is tuned for the 3Dfx card. At home, Pod rivals anything I've seen in the arcades. The visuals are stunning and frame rates are high and smooth. At work, Pod just stinks. It's ugly and slow enough to be virtually unplayable. I had a similar experience with Psygnosis' Wipeout demo, which accesses both 3D cards via Microsoft's Direct3D.
Ted Whatshisname, CEO of Gateway 2000, recently verbally slapped Intel's marketing types by saying something tantamount to sacrilege in the computer industry: "Speed is not a feature." In a way, obviously, he was very right. The specific kinds of speed I get from my 100MHz home PC matter much more to me than what I get from my 200MHz MMX PC at work.
AUGUST ??: COMPLETE AND EVALUATE
A note on GeForce GTX 480 noise levels
Now that GeForce GTX 400-series graphics cards are out in the wild, although in limited numbers, I should say something quickly about the main issues for which these cards, and especially the GTX 480, have gained a reputation: power, noise, and heat. I talked about this some on the latest podcast, but I don't think I communicated it all that well in the context of our GeForce GTX 480 and 470 review.
I feel like the GTX 480 is getting a bit of a bad rap.
Yes, the GF100 cards' performance isn't all that it should be, and that's almost certainly due to the fact that the GPUs wouldn't reach Nvidia's projected clock speeds with all of the units onboard enabled. Typically in such cases, and again almost surely in this one, the established power and thermal envelopes are a constraining factor. High clock speeds might be possible by giving the chip additional voltage, but doing so would push the GPU's power draw, heat, and cooling demands into unreasonable territory. The GF100 is a large chip, and such problems can be compounded for a number of reasons by a large die area and lots of transistors.
Dealing with these issues is a balancing act, one that every chip company has faced to one degree or another in turning out a product. Competitive issues aside, I think the balance Nvidia has struck with the GeForce GTX 480 is largely a reasonable one. You can look at the numbers we measured in our review, but the basics are pretty clear. For power draw and GPU temperatures, the GTX 480 stays within the generally established boundaries for the industry. That's not to say that the Radeon HD 5870 doesn't look a darn sight better in terms of power draw, but a test system equipped with a single GTX 480 draws 60W less than the same system with dual Radeon HD 5870s in CrossFire. We're not talking about a paragon of efficiency here, but the GTX 480 isn't out on the bleeding edge, either. No new PSU standards were created with the introduction of this product.
Similarly, Nvidia has obviously biased the fan profiles on the GTX 480 toward lower noise levels rather than lower GPU temperatures, but the GPU temperature readings we got for the card were only a few degrees higher than what we saw from the Radeon HD 5870.
More notably, the GTX 480's cooler is an impressive bit of engineering that attempts to mitigate the effects of the GPU's relatively high power consumption—and thus heat production. Have a look at the noise levels we measured while running a real game, Left 4 Dead, that generally produces higher power draw numbers than most:
Once more, the Radeon HD 5870 is quieter—but only the 1GB version with the stock cooler. The slightly overclocked Asus Matrix card with 2GB of RAM was louder than the GTX 480. I don't want to overstate it, but heck, another example might be considered a victory of sorts: the GTX 480 is quieter than the GeForce GTX 295, even though the GTX 480 draws about the same amount of power under load.
For those of you who think that doesn't count for much, you're forgetting the bad old days of the GeForce FX 5800 Ultra, when Nvidia had some similar problems with a new GPU and attempted to make up for it by reducing image quality in multiple ways—skimping on texture filtering and dropping down to lower-fidelity texture formats, mostly—and strapping a cooler to the side of the card that we derisively dubbed the Dustbuster. If Nvidia is compromising on image quality with the GTX 400 series, we sure haven't detected it yet. The texture filtering algorithm looks to be the same as the other recent GeForces, which is to say excellent. And a single GTX 480 is nowhere near as loud as ye olde Dustbuster. I'm hesitant to compare across such vast differences in time and equipment, but have a look at these numbers:
A single FX 5800 Ultra was nearly 9 dB louder than a GTX 480. Both objectively and subjectively, the difference between the two is huge.
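For a sense of scale, decibels are logarithmic, so a 9 dB gap works out to nearly an eightfold difference in sound power. A rough sketch of the conversion (this ignores perceived-loudness weighting, which compresses the subjective difference somewhat):

```python
# Convert a decibel difference into a ratio of sound power.
# dB is a log scale: ratio = 10 ** (delta_db / 10).
def power_ratio(delta_db):
    return 10 ** (delta_db / 10)

print(f"{power_ratio(9):.1f}x")  # ~7.9x the sound power
```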
To take these iffy cross-review comparisons even further, we measured the Radeon HD 4890 at 50.6 dB under load not long ago, slightly higher than the 49.9 dB at which we measured the GTX 480. Our SLI noise level results prove the GTX 480's cooler is capable of spinning up to higher speeds, but a single card just didn't go there during the course of our testing. We tested on an open test bench, and your results may vary in either direction in an enclosure, depending on cooling and venting. Still, that bit about the GTX 480 fitting well within the established boundaries of the market applies.
Remember, when you hear an incensed fanboy spouting off about how awful the GTX 480's noise and heat levels are, that his point only applies in a very limited sense, relative to some slightly better competition. I wouldn't hesitate to put a GeForce GTX 480 into a gaming rig of my own on that basis. Yes, I would prefer the Radeon HD 5870's overall combination of attributes, especially in terms of price and performance. But let's be clear: in an undeniably tough situation, Nvidia has avoided the temptation to reach the highest possible performance levels at the cost of reasonable power draw and acoustics. Folks seem to be missing that fact, which has caused Nvidia to send out its viral drones to spread the message. Such silliness shouldn't be necessary. This is one lesson we're happy they've learned, and I'd hate to fail to acknowledge it.