Bits and Bytes

The pool wants you!
— 9:52 AM on November 25, 2009

As Chicago said back in 1970, does anybody really know what time it is? The message of the song is that we really shouldn't care. But when it comes to our computer systems and other gadgets, having an accurate clock matters.

Widely used security protocols like Kerberos (which Windows uses to authenticate access to folder shares) depend on the clocks of different computers being synchronized—if the clocks differ by more than a few minutes, Kerberos-based authentication attempts will fail. If you're a software developer, you've probably relied on incremental build tools like make, which depend on the time stamps of source and object code files being accurate regardless of which system created or last modified each file. File synchronization tools like rsync can also rely on file time-stamp information, depending on how they are used. These are but a few examples; in our increasingly wired world, many of our devices really do depend on knowing the correct time—and on the other devices they talk to knowing it, too.
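
To make the make example a bit more concrete, here's a tiny Python sketch of the kind of timestamp comparison an incremental build tool performs (the file names are just hypothetical placeholders). If the machine that last wrote the object file has a fast clock, the object can look newer than a source file you edited moments ago, and the rebuild silently gets skipped.

    import os

    def needs_rebuild(source, target):
        """Rebuild if the target is missing or older than the source --
        the same mtime comparison make uses for its dependency checks."""
        if not os.path.exists(target):
            return True
        return os.path.getmtime(source) > os.path.getmtime(target)

    # Hypothetical example: clock skew between machines can make main.o look
    # "newer" than a main.c that was just edited, so nothing gets rebuilt.
    if needs_rebuild("main.c", "main.o"):
        print("recompiling main.c")
    else:
        print("main.o is up to date (or the clocks just say it is)")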

You may be thinking, "But wait, my computer's CMOS clock keeps track of the time, so why do I need to care?" The problem is, your computer's CMOS clock typically has worse accuracy than a cheap wristwatch. As computer motherboards became commodities, manufacturers started cutting corners on things like accurate CMOS clocks. Over a period of just a few weeks, that clock can drift way off. If that weren't bad enough, when the computer is actually running, the OS keeps track of the time itself instead of relying on the CMOS clock. However, the clock source used by the OS (typically derived from the CPU clock) can be even more inaccurate than the CMOS clock—a clock that drifts by a minute or more in a single day isn't uncommon.

Given that nearly every computer is now connected to the Internet, the solution seems obvious: synchronize the clock to a known accurate source online. All modern OSes provide a means to configure an Internet time service to keep the clock synchronized. But who provides the accurate time source to which you're synchronizing? That's a very good question... and a lead-in to a brief history lesson and the point of this whole blog post.

A brief history of (network) time
(With apologies to Stephen Hawking.)

The need for accurate time synchronization between computer systems was recognized early in the Internet's history, and it led to the development of the Network Time Protocol (NTP). Back then, the number of systems connected to the Internet was small, and usage was largely limited to government, military, and educational institutions. Those same institutions provided a small number of central time servers that were used to synchronize the clocks of systems across the Internet.

Then the dot-com boom happened, and everybody got online. The number of systems connected to the Internet rose exponentially, and even embedded devices started using NTP to keep their clocks accurate. The existing time servers could barely keep up with the load; and as if that weren't bad enough, several manufacturers of consumer networking equipment inadvertently launched denial-of-service attacks against public NTP servers by hard-coding the IP addresses of specific time servers into the firmware of thousands of devices. D'oh!

Clearly, something had to be done. Enter the NTP Pool Project. Many of you are already familiar with distributed computing in the form of Internet-based computing efforts like Folding@home. The NTP Pool is essentially the same concept applied to time servers, i.e. distributed time serving.

How the pool works
The concept behind the NTP pool is fairly simple: use a large number of systems distributed across the Internet to serve accurate time to everyone. The servers in the pool ultimately get their time from accurate time servers provided by governments, universities, ISPs, and anyone else who operates a public Stratum 1 or Stratum 2 time server. (Stratum 1 servers get their time directly from a known accurate reference like a GPS receiver or the WWV radio time signal, while Stratum 2 servers get their time directly from Stratum 1 servers. Everyone else is Stratum 3 or higher.)

The NTP Pool Project operates the DNS servers for the pool.ntp.org domain. When you configure your system or device to use an NTP pool server like us.pool.ntp.org, you're actually asking the pool's DNS servers to randomly assign you a set of time servers that are (hopefully) geographically close to you. The default time servers configured by all of the major Linux distros are drawn from the NTP pool (Windows' Internet time feature defaults to Microsoft's own time.windows.com instead). So, you may already be using the NTP pool without even knowing it!
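
If you're curious what's actually happening under the hood, here's a rough Python sketch of the kind of SNTP query your OS performs against a pool server. It resolves us.pool.ntp.org (so repeated runs may land on different pool members) and pulls the 64-bit transmit timestamp out of the reply. This is a bare-bones illustration, not a replacement for a real NTP client, which continuously disciplines the clock rather than just reading the time once; the function name and constants are my own.

    import socket
    import struct
    import time

    NTP_SERVER = "us.pool.ntp.org"   # the pool DNS rotation picks the actual server
    NTP_EPOCH_OFFSET = 2208988800    # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

    def query_sntp(server=NTP_SERVER):
        # 48-byte client request: LI=0, VN=3, Mode=3 (client) packed into the first byte
        packet = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            reply, _ = s.recvfrom(48)
        # Transmit timestamp: 32-bit seconds + 32-bit fraction, starting at byte 40
        seconds, fraction = struct.unpack("!II", reply[40:48])
        return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

    server_time = query_sntp()
    print("pool server says:", time.ctime(server_time))
    print("local clock says:", time.ctime())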

Joining the pool
If you have an always-on broadband connection, you can help the NTP Pool Project. By adding your system to the pool, you will improve the accuracy of Internet time service for everyone by helping share the load. You need a static IP address and the ability to unblock/forward UDP port 123 on your router or firewall. You also need to run the reference NTP server implementation. That's a no-brainer for UNIX/Linux users, since the server is probably already included on your system. For Windows users, you will need to install the Windows port of the NTP reference server, since Windows doesn't include one out of the box.

When you join the pool, you tell the NTP Pool the speed of your broadband connection. The NTP Pool will include you in the server rotation at a rate that depends on your broadband speed, consuming only a small fraction of your available bandwidth. If you're still concerned about bandwidth usage, you can configure a value lower than your actual connection speed to throttle the usage back even further. When joining the pool, you'll also want to pick a few Stratum 2 servers near you from this list and configure your own server to synchronize with them. Unless you're willing to do some extra legwork to get permission from the administrator(s) of the upstream server(s), select only servers that have an open access policy and no notification requirement (as noted in the list). If your ISP operates an NTP server for its customers, you can use that as one of your upstream servers as well.

The NTP Pool Project monitors the accuracy, network latency, and availability of your time server by polling it twice an hour, and assigns it a score based on this monitoring. Periods of high network latency or unavailability, or instances when the time your server reports is off by more than 100 milliseconds, result in deductions from your score. Conversely, periods of low latency and high accuracy cause the score to rise. Your system is only included in the pool rotation when your score is above +5 (scores range from -10 to +20).

You can access the monitoring data for your server on the NTP Pool web site. The screenshot below shows some of the monitoring history for my server. The dips and peaks in the "offset" graph (and corresponding drops in score) correspond to periods of high Internet activity (i.e. downloads) on my DSL connection, which trashed my network latency.

NTP Server Stats

So, dive in! The water's fine... and the next time someone asks you if you really know what time it is, you can give them a definitive answer.


Clip surgery!
— 4:28 PM on May 4, 2009

No, I'm not talking about getting your dog or cat fixed. I'm talking about the Sansa Clip MP3 player, and my recent foray into the guts of one of them.

A few days ago, my daughter informed me that she'd dropped her Sansa Clip player, and that it would no longer power up. I determined that it would still power up when plugged into the USB port of a PC, leading me to believe that there could be a problem with the battery. A quick Google search didn't turn up anything particularly noteworthy on the subject of DIY Sansa Clip repairs, but I did find one site that had a number of pictures of a disassembled Clip, which seemed to indicate that popping the case open wasn't that big of a deal. I also found a number of forum posts from people experiencing the exact same symptoms. Hmm...

Since the Clip was out of warranty anyway, I figured I had nothing to lose by trying to crack it open.

So, armed with my trusty Swiss Army knife, I gently pried at the seam between the halves of the casing until the back cover popped off. I actually managed to get the Clip open without damaging it, other than a few small nicks in the plastic from the knife. And there it was: the guts of a Sansa Clip in all their glory:

Sansa Clip guts

The strange silvery object covering most of the circuit board is the internal lithium-ion battery.  Closer examination revealed that one of the battery wires had broken loose from the circuit board. The wires are rather thin, and given that a lot of other people seem to be reporting similar symptoms, I think this may represent the weakest point in the Clip's design.

Picking up my Swiss Army knife again, I stripped about 1/16" of the insulation from the end of the detached wire:

Battery Wire

A few seconds with a soldering iron, and the wire was reattached to its proper location on the circuit board:

Clip Wire Fixed

After I snapped the casing back together, the Clip worked as good as new. I also flashed the player with the latest firmware from SanDisk's web site. Ogg Vorbis support FTW—and effectively a free capacity upgrade, since Vorbis delivers fidelity equivalent to MP3 at lower bitrates. Kudos to SanDisk for providing a free firmware upgrade to support this format.

So this story has a happy ending. My daughter's Sansa Clip actually works better than before, and I'm not out the cost of a new one!


Fun with raytracing
— 11:10 PM on April 20, 2009

Traditional GPUs render a scene by taking the scene geometry (expressed as 3D meshes of triangles) and "pasting" 2D bitmap images (textures) onto those meshes. Pixel and vertex shader programs running on the GPU may also be used to apply additional visual effects to the scene. The advantage of this approach—rasterization—is that it is easy to build highly parallel hardware that can execute texturing operations and shader programs very quickly. All current consumer GPUs rely on it.

3D images can also be rendered via ray tracing, which attempts to model how light actually behaves in the real world. Properly applied, it can yield very realistic images. Ray tracing is particularly effective at rendering reflective and refractive objects. Unfortunately, it is also computationally very intensive. Affordable hardware that can do ray tracing in real time at a level of detail comparable to that of rasterizing GPUs doesn't exist, and is probably still a few years off. Since frame rates measured in seconds per frame tend to have a negative effect on playability, I don't think we are going to see mainstream ray-traced games for a while yet.

Interestingly, most ray tracing algorithms actually work in reverse—instead of tracing rays from the light sources, reflecting them off the objects in the scene, and then into the camera, they "shoot" rays out from each pixel of the image into the scene, tracing the paths of the rays backwards until they hit a light source. The optical properties of the surfaces each ray reflects off (or passes through), as well as the color of the light source, determine the color of the corresponding pixel of the image. The reason for doing things backwards like this is efficiency—very few of the rays of light leaving a light source ever reach the camera, so tracing backwards reduces the amount of computation required by several orders of magnitude.
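
To make the "backwards" idea concrete, here's a bare-bones Python sketch (nothing to do with POV-Ray's actual implementation) that shoots one ray per pixel from the camera into a scene containing a single sphere and shades each hit point by its angle to a point light, dumping the result as ASCII art. The scene values are arbitrary, and reflection, refraction, and shadows are ignored entirely—the point is just the ray-per-pixel structure.

    import math

    WIDTH, HEIGHT = 40, 20                      # tiny "image", rendered as ASCII
    SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
    LIGHT = (2.0, 2.0, 0.0)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def hit_sphere(origin, direction):
        # Solve |origin + t*direction - center|^2 = r^2 for the nearest t > 0
        # (direction is assumed to be unit length)
        oc = sub(origin, SPHERE_CENTER)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - SPHERE_RADIUS ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    for j in range(HEIGHT):
        row = ""
        for i in range(WIDTH):
            # One ray per pixel, shot from the camera at the origin into the scene
            x = (i - WIDTH / 2) / (WIDTH / 2)
            y = (HEIGHT / 2 - j) / (HEIGHT / 2)
            direction = normalize((x, y, 1.0))
            t = hit_sphere((0.0, 0.0, 0.0), direction)
            if t is None:
                row += " "                      # ray escaped the scene; background
            else:
                point = tuple(t * d for d in direction)
                normal = normalize(sub(point, SPHERE_CENTER))
                to_light = normalize(sub(LIGHT, point))
                brightness = max(dot(normal, to_light), 0.0)
                row += ".:-=+*#%@"[int(brightness * 8.999)]
        print(row)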

Although real-time ray tracing for the masses is still a pipe dream, you can play around with ray tracing algorithms on your PC today (you just have to wait a while for each frame to render). The POV-Ray ray tracer is a popular ray tracing package; it has been around for years, and is available as a free download for Windows, Mac, and Linux. On Windows or Mac, download and run the appropriate installer from the POV-Ray site. For fans of the mighty penguin, your best bet is to install POV-Ray directly from your Linux distro's repository. On Ubuntu, the packages you want to install are povray and povray-includes; you probably also want to install povray-doc (a local copy of the POV-Ray manuals) and povray-examples (sample POV-Ray scenes). Complete documentation for POV-Ray is also available on their web site, here.

POV-Ray images are created using POV-Ray SDL (Scene Description Language), a programming language that has some superficial similarities to C. In POV-Ray SDL, complex objects are built up out of simpler ones, much as complex data structures in C are built out of the basic data types. Objects are constructed according to the principles of CSG (Constructive Solid Geometry), which allows new objects to be created as the 3-dimensional union, difference, or intersection of simpler objects. The properties of the material each object is composed of are also defined, so that the ray tracer knows how the light rays are affected when they reflect off or pass through the object.
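
POV-Ray expresses all of this in its own SDL syntax, of course, but the CSG idea itself is easy to sketch in a few lines of Python if you think of each object as an "is this point inside?" test: union, intersection, and difference are then just boolean combinations of those tests. This is purely illustrative (the shapes and values are made up) and says nothing about how POV-Ray actually implements CSG.

    # Each "object" is a predicate: does it contain the point (x, y, z)?
    def sphere(cx, cy, cz, r):
        return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r**2

    def box(x0, y0, z0, x1, y1, z1):
        return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

    # CSG operations combine simpler objects into more complex ones
    def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
    def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
    def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)

    # A unit cube with a spherical bite taken out of one corner
    bitten_cube = difference(box(0, 0, 0, 1, 1, 1), sphere(1, 1, 1, 0.5))

    print(bitten_cube(0.1, 0.1, 0.1))   # True: deep inside the cube, far from the bite
    print(bitten_cube(0.9, 0.9, 0.9))   # False: this corner is inside the spherical bite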

Many options are available for defining the properties of a surface or material. Simple surface pigmentation (as a red/green/blue color triple) can be specified, as well as how reflective or rough the surface is. Complex surface patterns can also be created using procedural textures. POV-Ray comes with a number of pre-defined textures for metal, wood, and stone surfaces; these can be used as-is, or modified to suit your whims. Translucent or transparent materials can also be defined, including a material's refractive index (which determines how the light rays bend as they pass through the object).

Simple bitmap images can also be applied to a surface of a CSG object, much like the texture mapping of a rasterizing GPU.

I've created a sample POV-Ray scene that illustrates many of the principles of CSG and demonstrates the use of POV-Ray's predefined stone, wood, and metal textures. This scene also shows how POV-Ray handles transparent refractive objects. You can download the POV-Ray file for the sample scene here.

The first four images were generated by the sample POV-Ray program linked above. They are all of the same scene; the only thing that has been changed is the camera location. Click each picture for a higher resolution version.


Still life with chessboard and LCD monitor


Closeup of the red glass pawn (nifty refractive effects!)


Another view, from off to the left and a little lower down than the first view


Closeup of the brass rook (you can see multiple images of the monitor reflected in the rook, and in the pawn off to the right)

All things considered, this is still a fairly primitive POV-Ray scene. The lighting and object models are simplistic (I chose to use only pawns and rooks because they are relatively easy to model in CSG), and I haven't enabled any of the more sophisticated effects (focal blur, radiosity, etc.), which would result in more photo-realistic images (along with much longer rendering times). But even without these effects, it is possible to produce some interesting images.

POV-Ray isn't limited to creating images that mimic everyday objects—its capabilities are only limited by your imagination and willingness to experiment. You can download another sample SDL file here that's actually quite a bit simpler than the first one. It creates a 3D grid of reflective metallic spheres and places the camera and a single light source at the same point inside the grid. The resulting image (consisting solely of repeated reflections of the light source between the spheres in the grid) is surreal:

I hope you'll decide to download a copy of POV-Ray and play around with it. Once you grasp the basic concepts behind CSG and get the hang of working in SDL, ray-tracing can be a lot of fun... and rather addictive!


Portable audio—then and now
— 3:58 PM on April 15, 2009

I recently acquired a Sansa Clip 4GB MP3 player (which, I might add, I'm quite happy with). After using it for a few weeks, I was struck by how much we take for granted today when it comes to portable tech, and how far we've come in the past quarter century. As chance would have it, I also ran across my old Sony WM-2 while digging around in the crawlspace recently. (Alas, it no longer seems to be functional; pressing "play" results in some whirring noises, but the tape does not move.)

Sony essentially invented the high-fidelity portable audio market when it introduced the original cassette Walkman in 1979. If you wanted portable audio prior to that, you either had to put up with tiny, tinny-sounding transistor radios or bulky boom-boxes. The Walkman was a fundamental game changer—it was portable, had very good fidelity for the time, and let you play your own mix tapes, freeing you from the whims of people who made playlists for local radio stations.

Like many people back then, I had a sizable collection of cassette tapes for my Walkman. Sure, they were prone to wear, tear, and other physical abuse; the fidelity went slowly downhill the more you played them; and if you left one sitting in the car on a hot day... well, just forget it. But they still sounded better than the alternatives, and hey... blank tapes were cheap, and you could always dub a fresh copy from your vinyl LPs.

Let's compare specs, shall we?

                                     Sony Walkman WM-2 (cassette)    Sansa Clip
Year introduced                      1981                            2007
Size                                 4-1/4" x 3-1/8" x 1-1/8"        2-1/8" x 1-3/8" x 7/16"
Weight                               8 oz (w/o batteries or media)   < 1 oz
Battery run time                     5 hours                         15 hours
Play time (without changing media)   90 minutes (C-90 cassette tape) ~50 hours (4 GB at 160 kbps)

We sure have come a long way!


Google + Python = world domination?
— 1:08 PM on April 7, 2009

Since its initial release in 1991, the Python programming language has steadily grown in popularity. Designed by Dutch programmer Guido van Rossum (a.k.a. Python's Benevolent Dictator for Life), the language has gained many converts thanks to its conciseness, power, flexibility, portability, and large ecosystem of readily available libraries. Today it is used in a wide variety of environments—Python is used heavily at Google, and it powers the popular Zope web development framework. On UNIX/Linux it is also viewed by many as a modern replacement for Perl, which has historically been the "heavy lifting" scripting tool on those platforms. Where I work, we use Python for a wide variety of intraweb, scripting, and automation tasks, including a large regression-testing framework we use to test our software.

Python is not without its issues, however. The most widely deployed implementation is interpreted (i.e., it doesn't compile all the way down to native machine code), which has significant performance implications. It also does not handle multithreaded code well: the interpreter's global interpreter lock (GIL) allows only one thread at a time to execute Python bytecode, a limitation that has become increasingly problematic as multi-core CPUs have become the norm. Developers have typically worked around these limitations by coding performance-critical portions of the application in C or C++; the compiled C/C++ code is then called from the Python script. While certainly workable (and well supported by Python), this approach increases development time, complicates debugging, and hurts the portability of high-performance Python applications.
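
A quick way to see the threading limitation for yourself is to throw a CPU-bound function at both the threading and multiprocessing modules and compare wall-clock times. On a multi-core machine, the threaded version typically runs no faster than doing the work serially (the threads take turns holding the interpreter lock), while the process-based version scales; the function and workload below are arbitrary, and the exact numbers will obviously vary from system to system.

    import time
    from multiprocessing import Pool
    from threading import Thread

    def spin(n):
        # A deliberately CPU-bound loop; threads fight over the interpreter lock here
        total = 0
        for i in range(n):
            total += i * i
        return total

    WORK = [5000000] * 4

    if __name__ == "__main__":
        start = time.time()
        threads = [Thread(target=spin, args=(n,)) for n in WORK]
        for t in threads: t.start()
        for t in threads: t.join()
        print("4 threads:   %.2f s" % (time.time() - start))

        start = time.time()
        with Pool(4) as pool:
            pool.map(spin, WORK)
        print("4 processes: %.2f s" % (time.time() - start))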

Fast-forward to 2009. Guido is now a Google employee, and a team at Google has decided to take on the ambitious task of replacing much of the underpinnings of the Python language, with the goals of removing the interpreter performance bottlenecks and making true multithreaded Python applications possible. All without hurting backward compatibility with existing Python applications... a rather tall order! The project is code-named Unladen Swallow, and the team already has a detailed plan outlining its intended approach, which will occur in stages.

By mid-year, they intend to replace the existing Python interpreter with a more efficient one based on LLVM. This has some immediate benefits—replacing the relatively inefficient stack-based architecture of the current Python interpreter with LLVM's register-based architecture should yield significant performance gains. Longer term, it paves the way for compilation to optimized native machine code, which has the potential to make Python performance comparable to that of other languages that compile to native code (like C and C++). Once the transition to LLVM is complete, the team plans to tackle other optimizations and enhancements, including the multithreading issue.

Speaking as a long-time (quarter of a century!) C/C++ developer and relatively recent convert to Python, I find the Unladen Swallow project a very exciting development. If successful, I believe it will position Python to compete directly with C/C++ in the systems, applications, and gaming markets. It could also accelerate Python's displacement of other scripting languages like Perl and PHP in the server infrastructure and Web application markets. Furthermore, the portability of Python (and its supporting libraries) has the potential to make it much easier for developers to port applications between platforms: moving apps between Windows, Mac OS, and Linux could literally become as simple as recompiling a set of Python modules. If Unladen Swallow pans out, I think it is entirely plausible that, in the not-too-distant future, we could even see a commercial cross-platform FPS game implemented entirely in Python—something that would be unthinkable with the current Python implementation.

Kudos to Google for its willingness to fund R&D efforts to improve a technology that has the potential to benefit everyone!


Why should we care about basic research?
— 12:05 PM on May 29, 2008

I was surprised to learn today that my former employer Fermilab (I worked there until about 12 years ago) was given a temporary reprieve this week from rolling furloughs and mandatory layoffs when an anonymous donor gave the lab $5 million. While I applaud private-sector support of basic research, I find it sad that one of our premier research institutions has in effect been reduced to living hand to mouth, off the kindness of strangers.

Cutting-edge basic research is a big factor in attracting bright young minds to science, and it pays huge dividends even when there are no obvious direct applications of the principles being studied. Fermilab (and the field of high-energy physics in general) has made significant contributions to many areas of science and technology. The construction of the Tevatron required breakthroughs in cryogenics, superconducting magnets, electronics, and other related fields. There is a nuclear-medicine research facility on site, which conducts research into cancer treatments based on high-energy neutron beams. The group I worked with did leading-edge work in massively parallel computing and clustering, which have replaced traditional (and far more expensive) supercomputers in many applications. Fermilab even supports research on the restoration of native prairie habitat at its Batavia, IL site. And lest we forget, the very World Wide Web that you are using to read this article is a product of the high-energy physics community, having been invented by Tim Berners-Lee at CERN in 1990! (And that's just the stuff I could think of off the top of my head...)

By cutting back on basic research and not putting a greater emphasis on scientific literacy in our elementary and secondary schools, we are selling our future short. We are already at significant risk of ceding technological superiority to our foreign competitors. Most of our manufacturing has already been relocated to Asia; twenty years from now, will most of our R&D be outsourced as well? I am starting to think so. Even Bell Laboratories—for decades a symbol of technological prowess, and the birthplace of the transistor and the C and C++ programming languages, among other things—has fallen on hard times; the smoldering hulk of what's left of Bell Labs has been acquired by French telecommunications conglomerate Alcatel, and it continues to shed employees at a rapid pace.

Allowing Fermilab—once one of the crown jewels of the US national laboratory system—to deteriorate to the point where it is practically on life support is, quite frankly, a disgrace. This isn't just about some scientists working on esoteric research projects losing their jobs (which, while certainly tough for those being laid off, is no worse for them than what goes on in the corporate world every day). It is also a barometer of our collective attitude toward science and the pursuit of knowledge, and I believe it has far-reaching implications for the future of Western society. Are we losing interest in asking the big questions, and (as a result) also losing the skills required to answer them? I believe our competitiveness in an increasingly global market, and ultimately our very way of life, are at stake.


Ubuntu 8.04 field notes
— 1:33 PM on May 15, 2008

As I mentioned in one of the comments I posted in last Friday's FNT, I was planning to take a laptop running Ubuntu 8.04 on a business trip this week. Well, here I am in Tucson, AZ... and this is Day 3 on the road with Linux!

Ubuntu pre-installs a reasonable set of desktop applications. Firefox, OpenOffice, Evolution (an Outlook-like e-mail client), Rhythmbox (music player), etc. are all there by default. Before leaving, I installed and/or configured the following additional applications:

  • Thunderbird. I prefer Thunderbird to the default Evolution e-mail client, so I installed it from the Ubuntu repository. Installation and setup were uneventful, and Thunderbird worked just like I am accustomed to on Windows.
  • Amarok. IMO much nicer than the default Rhythmbox music player. This installed without incident from the Ubuntu repository. (I should point out that Ubuntu also detected the sound hardware on the laptop and installed the correct drivers without any input from me.)
  • VMware Server 1.0.5. I needed this application for the trip. VMware Server was not available from the Ubuntu repository (although it is free, it is proprietary commercial software), so I downloaded and attempted to install the Linux version from VMware's web site. This one was very nearly a train wreck. I did get it working eventually, but it took a lot of Googling and the installation of a third-party hack to make it compatible with Ubuntu 8.04. And the system clock in the virtual machine goes nuts whenever SpeedStep kicks in; this doesn't just affect the time-of-day clock—it causes the whole guest OS to behave oddly (so I have to disable SpeedStep whenever VMware is in use). To be fair, these problems are probably as much VMware's as Ubuntu's...
  • Remote Desktop (actually called Terminal Server Client on Ubuntu). I needed this to be able to remotely access my Windows XP desktop at the office in case I needed any files that were not on the laptop. This is installed by default, so I just needed to test it. I had one minor glitch—full-screen mode does not play nice with the compositing window manager (compiz) now used by Ubuntu. After some Googling, I came up with a fix (disable "Legacy Full-Screen Mode Support" in compiz).
  • VNC viewer, to view remote Linux desktops. Installed from the Ubuntu repository without incident.

During my initial setup and checkout of the laptop, I noticed something really annoying. It seems that in its default configuration, Ubuntu 8.04 has serious problems dealing with the lid of the Compaq nc6220 laptop being closed and reopened. CPU usage zooms up, the system becomes very sluggish, and stays that way until the next reboot. A bit of Google-fu eventually revealed that some other laptop models are affected as well, and that there is a workaround (a tweak of the video driver that tells Ubuntu to allow the BIOS to manage the LCD backlight instead of trying to do it itself). Although I was able to work around the issue, glaring problems like this can give people a negative first impression; I hope Ubuntu fixes this soon with a patch.

Once here in Tucson, I was pleasantly surprised to discover that Ubuntu detected and connected to the hotel Wi-Fi access point seamlessly. The wireless chip in the laptop had been detected automatically and the proper drivers installed during system setup, with absolutely no intervention on my part. Installation and setup of Skype was also uneventful (though the download of the package from Skype's site was painfully slow over the hotel Wi-Fi connection).

Although I had forgotten to install Skype before leaving, I did have the foresight to drag along a snapshot of the entire official Ubuntu 8.04 repository (all 50+ GB of it, on an external 2.5" hard drive) to make installation of any additional software from the repository faster. I figured the less I had to depend on the hotel Wi-Fi connection for downloading packages, the better. After arriving, I also installed the following, all without incident:

  • Bluefish. I'm still trying to decide what text editor I prefer to use on Linux, and figured I'd give this one a spin. Looks pretty reasonable so far.
  • Mediawiki (and MySQL). I need to do some work for the office Intranet site, and decided I'd be better off doing it offline and uploading the content to the server when I arrive back at the office than trying to do it all remotely. (This decision was partially due to the lack of connectivity while on-site, more on this in a moment.)

We initially had no Internet access while on-site; so Monday afternoon we picked up a Verizon 3G wireless modem. With the Verizon modem connected to a Windows XP laptop running ICS, and my Ubuntu laptop wired to the XP system via their Ethernet ports, both systems were able to access the 'net. (I have not yet attempted to get the wireless modem working on Ubuntu directly; that's an experiment for another day.)

So overall, I'd say Ubuntu 8.04 has worked reasonably well. The issue with the laptop lid switch was troublesome until I figured out how to work around it, and the VMware installation issues were annoying (but are likely at least partly VMware's fault). Everything else I've tried to do has pretty much just worked.

So is Linux finally ready for the desktop? Based on the past few days, I would have to answer with a qualified "yes." Depending on your needs, Linux can definitely be a viable alternative to Windows. I also plan to install Ubuntu 8.04 on a desktop at home within the next week or two. For years, Linux desktop proponents have been like long-suffering Chicago Cubs fans who have been waiting 100 years (and counting...) for a championship. The mantra of a true Chicago Cubs fan is "Just wait until next year!". For desktop Linux, I think "next year" may have finally arrived.

 
