Intel announces silicon photonics breakthrough

The future of I/O connectivity is bright—literally. Intel has developed what it says is the “world’s first silicon-based optical data connection with integrated lasers using Hybrid Silicon Laser technology.” Using an integrated transmitter chip, an integrated receiver, and beams of light going through a fiber-optic cable, the prototype can transfer data at a whopping 50Gbps. More exciting still, Intel expects to be able to manufacture the components cheaply enough to make them a staple of both consumer and server I/O by the middle of the decade.

The video below does a decent job of introducing the technology:

(If the trendy music is too much for you, this clip provides a more geek-friendly introduction with extra nitty-gritty details.)

To achieve the phenomenal 50Gbps data rate, the integrated transmitter chip combines four lasers that each output light at a different wavelength. The chip’s optical modulators encode data into each laser beam at a rate of 12.5Gbps, and the beams are then multiplexed and pushed through a single optical fiber. The integrated receiver chip on the other end splits the beams and pushes them into its integrated photodetectors, which turn the light into bits and bytes.
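
To make that arithmetic concrete, here is a minimal Python sketch of the mux/demux idea described above; the round-robin striping scheme and all names are illustrative assumptions, not Intel's actual implementation:

```python
# Illustrative sketch: stripe one byte stream across four "wavelengths"
# and recombine it on the receive side. Constants are from the article.
LANES = 4             # four lasers, each at a different wavelength
LANE_RATE_GBPS = 12.5

def mux(data: bytes) -> list[bytes]:
    """Stripe a byte stream round-robin across the wavelengths."""
    return [data[i::LANES] for i in range(LANES)]

def demux(streams: list[bytes]) -> bytes:
    """Interleave the per-wavelength streams back into one stream."""
    out = bytearray()
    for chunk in zip(*streams):  # assumes equal-length streams
        out.extend(chunk)
    return bytes(out)

payload = b"12345678" * 4
assert demux(mux(payload)) == payload
print(LANES * LANE_RATE_GBPS, "Gbps aggregate")  # 50.0 Gbps aggregate
```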

The prototype operates at room temperature, and Intel claims it can run for 27 consecutive hours with zero errors. Yet the transmitter and receiver chips are both made using “low-cost manufacturing techniques familiar to the semiconductor industry.”

We’re still talking about a concept vehicle, of course; Intel still has some work to do before it can mass-produce the design. By then, we might be looking at even higher data rates—up to a terabit per second. That speed could be reached by increasing both the number of lasers (to 25) and the line rate (to 40Gbps per laser). The slides in the gallery below should illustrate that concept.
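
As a back-of-the-envelope check on that terabit figure (a sketch of the article's numbers, not Intel's roadmap):

```python
# Two scaling knobs: number of wavelengths and per-laser line rate.
prototype_gbps = 4 * 12.5  # today's demo: 4 lasers x 12.5 Gbps = 50 Gbps
target_gbps = 25 * 40      # projected: 25 lasers x 40 Gbps = 1000 Gbps
print(prototype_gbps, target_gbps / 1000)  # 50.0 Gbps and 1.0 Tbps
```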

How does this silicon photonics breakthrough fit with Intel’s Light Peak technology? Intel made it clear during today’s conference call that Light Peak will bring 10Gbps optical connectivity next year, while the design announced today won’t be ready for mass-production for another two to three years. You might see a future version of Light Peak based on it, but Intel isn’t formally tying the two together yet.

Comments closed
    • Krogoth
    • 9 years ago

    F#$@ING PHOTONICS, HOW DO THEY WORK?

    (sorry, I couldn’t resist)

    • NeelyCam
    • 9 years ago

    Fastest ones at the moment are PCI Express Gen2 links at 5Gbps (for two wires).

      • NeelyCam
      • 9 years ago

      Reply fail – this was for #3

    • RtFusion
    • 9 years ago

    Not new. IBM has been working on this stuff for quite some time now. Not sure if they did any demo videos with that kind of music, though.

    IBM video from 2007: http://www.youtube.com/watch?v=LU8BsfKxV2k
    IBM video from this year: http://www.youtube.com/watch?v=RWhcwVxI2sQ

    Some articles I found:
    http://www.economist.com/node/16103910?story_id=16103910
    http://www.computerweekly.com/Articles/2008/02/29/229654/ibm-unveils-optical-computing-system-for-moving-huge-data.htm
    http://www-03.ibm.com/press/us/en/pressrelease/20815.wss

      • Firestarter
      • 9 years ago

      A press release here and there is not the same as having a working prototype. If IBM has one, I’d love to see it!

    • liquidsquid
    • 9 years ago

    Remove the PCI bus in favor of simply power and a single optical port per device. Graphics cards could then truly become an external module… A PC becomes a series of light-based interconnected modules through a super-speed data hub/bridge, including your monitor, maybe even your memory. And no silly-expensive signal-integrity critical connectors to do it.

    This brings PC assembly to the masses if true.

    Another benefit: Resistance to ESD damage and hot-swapping of major components while the computer is active.

    Further advantage: If lasers/detectors are truly on-die, it could mean MUCH larger RAM capacity on a single memory bus. The limitation now is the transport mechanism for data from memory to CPU and back: losses in PCB materials, capacitance, etc. With light, it simply comes down to how sensitive the detectors are vs. how much light is available, which could be significantly more.

      • indeego
      • 9 years ago

      It would mean, once and for all, the end of the desktop. Your portable device could easily add powerful GPU/physics/output to large displays, etc., and it would allow producers to package it all in a pretty design, something I’m sure they’d love to do (they hate to pay for support).

      It will probably be amongst the last physical ports for I/O, too, although saying “last” anything in IT is a surefire way to eat your words 2 years later.

      • shank15217
      • 9 years ago

      Graphics cards have bandwidths of over 64Gbps today…

        • Majiir Paktu
        • 9 years ago

        Seeing as the maximum theoretical bandwidth of 16 PCI-Express 2.0 lanes is 64 Gbps, this is hardly true. You’ll always see inefficiency and overhead along that link, and modern cards aren’t pushing up against the limits of the PCI-Express link anyway. Consider that PCI-Express uses 8b/10b encoding, and that an optical link as described would use far less overhead for error correction, and it becomes perfectly reasonable to use exactly this prototype optical link for external video cards.

        Besides, it’s just that: a prototype. I think we can expect links capable of upwards of 100 Gbps in no time.
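
        (For reference, a quick sketch of where that 64 Gbps ceiling comes from, using the published PCIe 2.0 parameters:)

        ```python
        # PCIe 2.0: 5 GT/s per lane on the wire, 8b/10b encoding
        # (10 raw bits transmitted per 8 data bits), 16 lanes.
        raw_gt_per_lane = 5.0  # gigatransfers/s per lane
        encoding = 8 / 10      # 8b/10b efficiency
        lanes = 16

        print(raw_gt_per_lane * encoding * lanes)  # 64.0 Gbps per direction
        ```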

          • designerfx
          • 9 years ago

          The point is, as I originally asked elsewhere: there are already connections faster than what Intel is claiming here.

            • esgreat
            • 9 years ago

            I think the focus is not the speed but the fact that it can be implemented in silicon (meaning you don’t need to use a lot of costly rare elements). PCIe probably needs several “lanes” to get the high BW, while you need fewer optical links for equivalent BW.

            Even so, expect this technology to take 5-10 years to mature… I take this introduction as a glimpse of our future computers.

        • djgandy
        • 9 years ago

        They use memory parallelism.

      • NeelyCam
      • 9 years ago

      l[

        • esgreat
        • 9 years ago

        Go and read about “Signal Integrity”. With optics you fix almost all of it. It’s actually one of the major bottlenecks for I/O performance. Much cost has been associated with it in engineering solutions for it. And also you save lots of board routing as you can practically reduce 100 wires into 2 to 3 (i.e. DDR).

          • NeelyCam
          • 9 years ago

          Unless you can build your TX/RX circuitry in silicon, cost is prohibitive. If you use current silicon, power is a serious issue if trying to operate too fast; it tends to be more efficient to run multiple lower-speed electrical links than one higher-speed optical link.

          And yeah, DDR spec is a power hog, but mainly because DDR is made with cheap, crappy silicon. A new spec and slightly better silicon would go a long way to improve electrical memory links… Meanwhile, I don’t even wanna guess how expensive an optical-link-enabled memory module would be.

          Finally, the super-low loss of optical fiber doesn’t help much unless you’re running far enough to make electrical channel losses too horrible (or cables too expensive). Short distances are what we’re talking about here, and electrical channel losses aren’t that bad at short distances (and like you said, you can pack hundreds of them on a motherboard… cheaply).

          Before switching to optical (and paying the overhead for optical sources/modulators, etc.), my guess is that the money will be put into developing well-behaved channels (i.e., with fewer discontinuities) that can be equalized easily and cheaply – this will improve “signal integrity” enough to render local optical links pointless.

            • esgreat
            • 9 years ago

            TX/RX integration into silicon seems to be the goal, based on this article. I’d expect its success to bring the prohibitive optical costs down.

            As for distances, they’re not really short enough; it’s just that routing congestion + mechanical space won’t allow for even shorter distances. But I suppose something like soldered-down RAM would help reduce this bottleneck in the near future.

            I agree with most of what you wrote. My perspective when I wrote it was about how these bottlenecks are going to impact computers in 10 years’ time (or even longer). I thought about how silicon size and power keep shrinking, and I can’t help but think how close we are to the limits of copper already. Cost will come down as they innovate and mass-produce.

            But for now, yes, your arguments are correct.

    • lilbuddhaman
    • 9 years ago

    So what kind of end-of-the-line FPS increase are we talking here?

    • shank15217
    • 9 years ago

    Heh.. this is nothing compared to quantum teleportation research being done.

      • RtFusion
      • 9 years ago

      IIRC, optical computing is intertwined with quantum computing. You (or others) can correct me on this, but photons do exhibit quantum entanglement, which makes two photons somehow linked to each other. If one spins to the right, the other one does as well, instantaneously.

      Quantum entanglement, as I understand it, will play a role in quantum computing.

      Again, I could be wrong on that.

      • Majiir Paktu
      • 9 years ago

      Heh.. this is nothing compared to faster-than-light research being done.

      Seriously, since when does the presence of a more impressive /[

    • dreamer77dd
    • 9 years ago

    I just think of how this could be applied to other things in computers. I would like a motherboard that works off of light instead of copper. I am sure this could help with my modem, and if it helps with server transfers I am all for that. I am sure content-creating companies with that much headroom could make some amazing stuff, and I am sure CERN could use some cheap, updated connection speed for their research. It’s just like lasers: back in the 1960s we wondered what the hell we were going to use them for, and now they’re everywhere, from Walmart barcode scanners to printers and other cheap but advanced technologies. One question… is AMD getting into this, or are they focused on something else? Like brain-wave communications? lol

      • OneArmedScissor
      • 9 years ago

      Getting away from copper will probably be the real benefit of all of this new fiber optic hooey.

    • Buzzard44
    • 9 years ago

    This sounds really similar to fiber optic connections – especially with the multiplexing of the different wavelengths of light and such. Even the bandwidth appears comparable.

    Forgive my ignorance, but what exactly makes this a breakthrough different from current optical communications?

      • cygnus1
      • 9 years ago

      It is exactly what you described. The breakthrough is that it’s being done cheaply in silicon. Integrating the lasers onto a cheaply manufactured chip is a huge way to cut costs out of fiber-optic communications.

    • DrDillyBar
    • 9 years ago

    Pretty cool. Now all it needs is Tx/Rx in the same interface and a twisted pair cable and we can all upgrade to 50GB interweb.

      • cygnus1
      • 9 years ago

      No need to twist a pair of fibers… might break them 😉

      Twisting copper pairs is done to cut down on crosstalk and other EMI.

        • DrDillyBar
        • 9 years ago

        Indeed. It was there as a kind of metaphor.

    • eitje
    • 9 years ago

    This is sexy. I can imagine a time where all of the traces we find on motherboards today are replaced with a single layer of glass.

      • NeelyCam
      • 9 years ago

      Would break in an instant… better to use a flexible cable.

        • Majiir Paktu
        • 9 years ago

        Right, because optical fiber shatters the minute you bend it or coil it up.

          • NeelyCam
          • 9 years ago

          “Single layer of glass” just might.

            • Waco
            • 9 years ago

            It’s not impossible to make flexible glass plates. IIRC I read something about them on Slashdot last week.

      • wira020
      • 9 years ago

      We’re still bound by the electrical signals used in the CPU/GPU/memory, so I don’t think there’s much benefit to having a lot of light receiver/transmitter modules (that convert light to electricity or the other way around) on a motherboard… unless all components can use the light signal. Is it even possible for a CPU to use only light? I’m not totally sure either; this is based on what I read here and at Guru3D.

    • ShadowTiger
    • 9 years ago

    Who cares about having light-based computing in a desktop? The real benefit comes from the Internet hubs and switches that ISPs use. Once all calculations are done using light instead of electricity, we will have an Internet that approaches the speed of light, and stuff like OnLive will actually work pretty well (it takes light about 0.134 seconds to travel a distance equal to the equator, though that will obviously be slower in fiber). Also, at that point, bandwidth should jump up to very high numbers for the same price of deployment.
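
    (A quick back-of-the-envelope check on that latency figure; the circumference and fiber index below are standard reference values, not from the comment:)

    ```python
    # Light travel time for one trip around the equator.
    C_KM_S = 299_792.458  # speed of light in vacuum, km/s
    EQUATOR_KM = 40_075   # Earth's equatorial circumference, km
    FIBER_INDEX = 1.47    # typical refractive index of silica fiber

    print(EQUATOR_KM / C_KM_S)                  # ~0.134 s in vacuum
    print(EQUATOR_KM / (C_KM_S / FIBER_INDEX))  # ~0.196 s in fiber
    ```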

      • Kurotetsu
      • 9 years ago

      l[

        • cygnus1
        • 9 years ago

        Read the last paragraph, this is after Light Peak. Putting the laser on the chip is huge for all fiber optics.

          • Kurotetsu
          • 9 years ago

          Ah hah, missed that. Thanks.

        • OneArmedScissor
        • 9 years ago

        The “successor” to USB is, well, USB 3.0, which Intel seems to be pulling out all the stops to avoid. What in the world do we need even the quoted 10Gbps transfers of Light Peak for?

        At first, I was excited about Light Peak because I thought it would put an end to the stupid USB connectors. Instead, they just made it compatible with USB connectors. Great fail. Give us USB 3.0 already.

          • NeelyCam
          • 9 years ago

          You seem to have a USB axe to grind?

      • isaacg
      • 9 years ago

      The Internet is more than just a series of tubes: those routers/switches actually have to analyze/prioritize all that data and decide where it goes, when it goes, etc., not just shoot it to the next fiber branch. I wouldn’t be surprised if a large amount of Internet traffic delay is caused by the routers’ internal processing, not the interconnects between them. I think we’re a LONG way from processing data in light instead of silicon; all this does is send and receive light. And most of the big pipes on the net are already fiber, from my understanding; it’s just a matter of getting companies to spend the money to replace the copper in the last few miles.

    • OneArmedScissor
    • 9 years ago

    Intel: “USB 3.0? Uh…err…look, what’s that over there?!?”

      • NeelyCam
      • 9 years ago

      Funny.

    • Richie_G
    • 9 years ago

    This is interesting.

    • designerfx
    • 9 years ago

    How much is the bandwidth of a normal motherboard? Isn’t it in a similar range?

      • khands
      • 9 years ago

      Not even close for most stuff on board.

    • bdwilcox
    • 9 years ago

    *[

      • sweatshopking
      • 9 years ago

      duh. after 27 hours of LASERS BLASTING THE F OUT OF YOUR SYSTEM, OBVIOUSLY, YOUR COMPUTER TURNS INTO A BLACKHOLE AND ABSORBS YOUR FAT ASS. this could be a super breakthrough!

        • OneArmedScissor
        • 9 years ago

        But don’t worry, it will just be a 16nm black hole by then. Just don’t feed it any huge Nvidia chips, and you’ll be safe.

          • NeelyCam
          • 9 years ago

          Or AMD CPUs

      • cygnus1
      • 9 years ago

      That’s without a single bit error. With proper CRC and encoding, errors get caught and retransmitted, no big deal.
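
      (A minimal sketch of that detect-and-retransmit idea, using Python's zlib CRC-32 purely for illustration; a real link layer would use its own framing and codes:)

      ```python
      import zlib

      # Sender computes a CRC-32 over the frame and sends it along.
      frame = b"50 gigabits of very important data"
      checksum = zlib.crc32(frame)

      # A single flipped bit in transit...
      corrupted = bytearray(frame)
      corrupted[3] ^= 0x01

      # ...fails the check on the receive side, so the frame is resent.
      print(zlib.crc32(bytes(corrupted)) == checksum)  # False -> retransmit
      print(zlib.crc32(frame) == checksum)             # True  -> accept
      ```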

      • ColdMist
      • 9 years ago

      For a tech preview, they are saying, “it didn’t burn out or have massive errors in 5 minutes of use.”

      That’s all. It shows it has long-term stability, even this early on in their design/testing phases.

      You haven’t worked much in industry, have you.


    • sweatshopking
    • 9 years ago

    Crazy, but I need my cables to power stuff too. I would rather have the lower USB speeds and charge or power stuff than ultra-fast with no electricity. They could put another cable adjacent, which could carry power, in which case I’m good. But I need that power option.

      • AlvinTheNerd
      • 9 years ago

      You might want to look into HDBaseT.

        • geekl33tgamer
        • 9 years ago

        USB 3 would render that useless to most people, right? Unless you need the potential 100W of power it can carry, or HDMI video data up to 100 m from the source (why?)…

        There’s enough power and bandwidth in USB 3 to do almost all the things HDBaseT was envisaged to do.

      • NeelyCam
      • 9 years ago

      I agree, for regular end-user applications.

      This optical stuff is probably aimed at links between supercomputers (or between CPU and memory) – both ends tend to have their own power sources.
