Thursday Shortbread

Eight is Enough

  1. WSJ reports Microsoft, others work on Yahoo! bid
  2. Intel: hp must keep its PC business unit – X-bit labs
  3. Wired reports hard disk space can be increased with a pinch of salt
  4. X-bit labs: Intel begins volume production of 22nm microprocessors
  5. VR-Zone reports AMD Radeon HD 7000 GPUs listed in leaked driver
  6. AnandTech on ARM’s Cortex A7: Bringing cheaper dual-core and more power efficient high-end devices

  7. VR-Zone: Gigabyte teases near-final G1.Assassin 2 X79 motherboard and MSI X79A-GD65 8D motherboard pictured

  8. Red Orchestra 2: Rising Storm – debut trailer


  1. Business Insider: Ballmer thinks you have to be a computer scientist to use Android
  2. The Official Google Blog: Making search more secure
  3. VR-Zone: Fresco Logic does four ports of USB 3.0 goodness
  4. TC Magazine: PowerColor offers easy, 1080p HD / 3D wireless streaming with Slingit
  5. NCIX’s wicked sale event
  6. Newegg’s 72-hour sale
  7. Dealzon’s deals: $694 off 17.3” Dell XPS 17 i7-2670QM / 1080p / 3GB GeForce GT 555M, $100 off 15.5” Sony Vaio VPCEH25FM/B i3-2330M, $150 coupon for Dell XPS 8300 i7-2600 / 16GB RAM / GeForce GTX 560 Ti, and $180 off 32” JVC JLC32BC3000 1080p LCD TV


  1. VR-Zone reports Samsung and Google unveil Galaxy Nexus, first Android Ice Cream Sandwich smartphone

  2. Android Ice Cream Sandwich: What will it look like on a tablet? (video) – Engadget
  3. Fudzilla: TI confirms OMAP 4460 is in Nexus Galaxy and Google confirms Android 4 will hit tablets

  4. TC Magazine: Asus chairman shows off Tegra 3-powered Transformer Prime tablet
  5. Microsoft’s Andy Lees: Talking to your phone isn’t super useful, NFC coming soon to Windows Phone

  6. WPCentral shares rumor: Windows Phone Tango and Apollo to finally offer new screen resolutions

  7. TechReviews on Apple iPhone 4S
  8. AnandTech’s Apple iOS 5 review
  9. Android 4.0 SDK
  10. AnandTech’s Amazon Kindle (4th gen) review

Software and gaming

  1. Building Windows 8: Designing search for the Start screen
  2. Fudzilla reports new form of Stuxnet spotted
  3. TC Magazine: New Trillian 5.1 beta build available, 5.2 version draws closer
  4. New multiplayer trailer for Battlefield 3
  5. GameSpy previews Red Orchestra 2: Rising Storm
  6. Bethesda Blog: New screenshots and previews of Skyrim
  7. [H]ard|OCP on Rage gameplay performance and image quality

Systems, storage, and networking

  1. ThinkComputers on Zotac Zbox Nano AD10 mini PC
  2. Hardware Heaven reviews Dell Inspiron 14z laptop
  3. Phoronix tests AMD FX-4100 Bulldozer on Linux
  4. iXBT Labs on Core i7 processors for LGA1156, LGA1155, and LGA1366
  5. ocaholic on UEFI – only graphical BIOS or more?
  6. Björn3D reviews Zotac Z68 ITX WiFi Supreme
  7. TWL on Zotac A75-ITX WiFi motherboard
  8. Legit Reviews on 1TB OCZ RevoDrive Hybrid PCI-E SSD
  9. Benchmark Reviews on Rosewill wireless-N Wi-Fi USB adapter


Multimedia

  1. X-bit labs review EVGA GTX 580 Classified and MSI R6950 Twin Frozr III
  2. Hardware Canucks review 24″ Dell UltraSharp U2412M IPS monitor
  3. Everything USB reviews Logitech C910 HD Pro webcam
  4. OCC reviews Cooler Master Storm Sirius 5.1 gaming headset
  5. t-break reviews Roccat Isku keyboard

Power, cases, and cooling

  1. Real World Labs on 1200W Thermaltake Toughpower PSU
  2. Hardware Heaven and techPowerUp! review 750W OCZ ZT series PSU
  3. Guru3D’s CM Storm Trooper case review
  4. Kitguru reviews Thermaltake Armor A30 case
  5. X-bit labs review Scythe Mugen 3 cooler
  6. Hardware Secrets on Cooler Master Hyper 212 EVO CPU cooler
  7. FrostyTech reviews NZXT Havik 140 heatsink
Comments closed
    • Jambe
    • 8 years ago

    First sub-kilowatt PSU review in the ‘bread in what seems like eons is… a 750W unit.


    • Xenolith
    • 8 years ago

    That leaked Catalyst driver pretty much confirms that it will be low/mid/mobile parts being shipped first, with high-end coming later.

    • deathBOB
    • 8 years ago

    “Intel: hp must keep its PC business unit – X-bit labs”

    Read: please don’t allow a company to form that can negotiate as an equal.

    • NeelyCam
    • 8 years ago

    “AnandTech on ARM’s Cortex A7: Bringing cheaper dual-core and more power efficient high-end devices”

    But, but... didn’t NVidia patent the fifth “companion” core? Or were their attorneys so dumb that they didn’t add claims to using a companion core with dual- or tri-core chips?

    All kidding aside, I noticed on the second page that in “Rich web services” the DualA15/DualA7 combo gives 10% in energy savings compared to a DualA9.. sounds great, right? That is, until you read the fine print: this is comparing 28/32nm DualA15/DualA7 to a 40nm DualA9. The silicon shrink alone should have provided a massive energy efficiency boost (weren’t they quoting something like 30-50% improvement a while back?). If the new architecture at a new process node is only 10% more efficient than the old architecture at an old process node, it doesn’t sound that good...

    For a long time I’ve been suggesting that ARM will have trouble scaling the performance up (to get to x86 levels) while holding on to power efficiency. This plot seems to show that this really is the case. Note also the reverse implication: x86 power efficiency could go up when performance is sacrificed. The low-power race between ARM and x86 is heating up (no pun intended), and x86 is looking better every day.

      • deathBOB
      • 8 years ago

      “Note also the reverse implication: x86 power efficiency could go up when performance is sacrificed.”

      I don’t think this conclusion necessarily follows. X86 has to satisfy legacy requirements that ARM doesn’t right?

        • NeelyCam
        • 8 years ago

        The legacy requirements probably add some overhead, but I’ve seen comments from those who understand architectures much better than I do implying that this overhead is actually very small.

        Even if it’s noticeable, I was mainly talking about relative changes.. e.g., A15 is going to out-of-order execution to boost performance (seemingly at the expense of efficiency), Atom went in-order to improve efficiency at the expense of performance. There are probably many more similar but smaller tradeoffs that can be done.

        EDIT: My bad – A9 is already out-of-order. Nonetheless, the point was that extra features to boost performance tend to reduce efficiency..

      • OneArmedScissor
      • 8 years ago

      “I noticed on the second page that in ‘Rich web services’ the DualA15/DualA7 combo gives 10% in energy savings compared to a DualA9..”

      I noticed that you are singling out and taking out of context one of five typical scenarios that show how several aspects of an entire new chip should function for a normal person, not just one part of it. The result is not just a net positive, but a positive in all cases.

      This is like saying Sandy Bridge is worse than Nehalem because it does a little worse clock to clock in a few benchmarks. Never seen that with a new architecture and new trade offs before! The end result is still a net positive, and not a very big one... which also took a simultaneous shrink and new architecture.

      In this case, they’re actually showing how they avoided making trade offs. That’s nothing to sneeze at. “Rich web services” likely means GPU accelerated Flash sites, anyways.

      ARM may not balloon up forever, but you’re off your rocker if you think cutting power about in half is a sign of diminishing returns. x86 CPUs stopped eeking out much more than a few percent at a time long ago. Straight shrinks don’t even do anything close to that.

        • NeelyCam
        • 8 years ago

        I think you misunderstood my point; let me try to elaborate. Overall, the “low-power companion chip” approach is a great idea, IMO. Everyone should do this if they can figure out how to get the OS to cooperate. I could totally see Intel put down an Atom core on an IvyBridge laptop chip to take care of the ‘easy stuff’..

        Anyways, I was focusing on A15 – this is supposed to be the architecture that gets ARM closer to competing with x86 on performance. A7 is clearly optimized for very light work loads, and it does a fantastic job there.. The “Internet Radio” and “OS/UI activity” efficiency numbers are impressive – some of it is coming from the shrink, I’m sure (more about this later), but even then A7 is clearly superior to A9 for these loads. But that just supports the first half of my overall point: going down in performance improves efficiency.

        On A15, I ignored the “HD gaming” benchmark on the notion that it is likely to be more GPU dominated than any other benchmark; one could rather easily boost the energy efficiency by sizing up the GPU and picking a convenient benchmark. Although your point is valid that the “Rich web services” benchmark is likely to be GPU-accelerated, I think it’s still the better of the two for evaluating the CPU efficiency. I didn’t mean to “single out” the worst-case benchmark – I was trying to find the most indicative one. Unfortunately there were only two A15 benchmarks available, so my choices were limited…

        What to me was somewhat shocking is that the energy efficiency improved only by about 10%, considering that A15 was not only a new architecture, but also at a better silicon node. Somehow you seem to think that the efficiency gains with architecture or silicon node upgrades are rather insignificant:

        “x86 CPUs stopped eeking out much more than a few percent at a time long ago. Straight shrinks don’t even do anything close to that.”

        I have to disagree; the numbers are pretty clear in TechReport reviews... First, the “straight shrink” – the best example I can find is the 45nm Core2Duo E8600 (3.3GHz, 6MB) vs. the 32nm Clarkdale i5-661 (3.33GHz, 4MB): the i5-661 has 23% lower Cinebench rendering task energy.

        Then, the architecture upgrade... this is trickier. The best I could come up with is comparing the 32nm Nehalem i7-980X (6c, 3.33GHz, 12MB) to the 32nm SandyBridge i7-2600K (4c, 3.4GHz, 8MB): the i7-2600K has 14% lower task energy, despite the 33% lower core count or cache size.

        If we just ignore the advantages of the E8600 and 980X, the combined architecture/shrink energy reduction should still be about 33%, with 23% reduction from the shrink alone. Now, the question is: why the hell didn’t A15 gain at least the 23% energy reduction through the shrink alone (which should apply to GPU and CPU alike)? Is the architectural performance boost at the expense of inefficiency to blame? Please explain why the energy reduction in “rich” web surfing was only 10% when ARM’s A15 has both architecture and process improvements behind it.

        The only explanation I have is that they sacrificed efficiency for more performance (which, I might add, is absolutely the right thing to do, if they are ever going to seriously compete with x86 in the laptop space). And, think about what that means to x86’s potential to ‘scale down’ to lower-performance cellphone chip markets...
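A quick sketch of the compounding arithmetic behind that “about 33%” figure (the 23% and 14% numbers are the ones quoted above; treating the two gains as independent multipliers is an assumption):

```python
# Compound the two task-energy reductions quoted in the comment above.
shrink_reduction = 0.23   # 45nm E8600 -> 32nm i5-661 (Cinebench task energy)
arch_reduction = 0.14     # Nehalem i7-980X -> Sandy Bridge i7-2600K

# If the gains are independent, remaining energy multiplies.
remaining = (1 - shrink_reduction) * (1 - arch_reduction)
combined_reduction = 1 - remaining
print(f"combined reduction: {combined_reduction:.1%}")  # ~33.8%
```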

    • bittermann
    • 8 years ago

    “Intel begins volume production of 22nm microprocessors”

    AMD better hope piledriver can fix BD’s shortcomings and that they can release it earlier than planned.

      • Yeats
      • 8 years ago

      It can’t and they won’t, sadly. By the time PD comes out, AMD will have lost even more ground to Intel in the mid- to high- end.

        • can-a-tuna
        • 8 years ago

        I can’t give thumbs up for that.

      • khands
      • 8 years ago

      Trinity is going to use Piledriver cores and I really think that’s where their volume will come from. That being said, I don’t think they’ll be able to fix everything before they have to face Ivy Bridge, in fact I hope they just manage to improve each shortcoming.

    • skitzo_zac
    • 8 years ago

    Multimedia V

    “t-break reviews Razer Isku keyboard”

    It’s actually Roccat Isku, not Razer.

    • sweatshopking
    • 8 years ago

    I think that Ballmer’s statement was hyperbole, but the reality is that it IS much less stable, and crashes like 1000x more than iOS or WP7. It’s not the same quality, and there really isn’t much argument on that. Do you need to be a scientist? No, but you can expect crashes, and because of the variety of choice (personally, as a nerd, I like it), learning a new launcher with a new phone, etc., makes it much more work to learn, as you have to relearn with new phones.

      • JohnC
      • 8 years ago

      Not sure about stability (never had such issues myself), but… I used to own tablets with iOS, webOS and Android (now I only have iPad 2 and Touchpad left) and to be honest the Android-powered tablet was the least “user-friendly” one. Of course you don’t have to be any kind of “scientist” to learn it and you can easily learn every aspect of it over time so it will become a “second nature” to you, but for some reason I didn’t feel like I should be wasting more time with that.

      • ish718
      • 8 years ago

      You shouldn’t experience many crashes on Android, except for when you’re using poorly coded third party apps…

      • PenGun
      • 8 years ago

      I can get 2.3.7 to crash if I beat on it, cough Plants vs Zombies, but then it is a nightly Cyanogenmod install. I like it a lot. Doom runs flawlessly.

      The Samsung Touchwiz 2.3.4 the G S2 came with was rock solid. Never went down fer nothin’.

      Iris, coded overnight, is shaping up to be as dumb as Siri. I don’t understand how supposed geeks don’t just love this open and very fine OS.

      I think some of you might not actually be geeks. I remember the MS Certified guys making hacking extremely easy back in the day and … zzzzzzzz … shut up old man. 😉

    • DancinJack
    • 8 years ago

    “Business Insider: Ballmer thinks you have to be a computer scientist to use Android”

    “Microsoft’s Andy Lees: Talking to your phone isn’t super useful...”

    Funny

      • vbdasc
      • 8 years ago

      Hmmm, I have read somewhere that once upon a time, Bill Gates was personally making sure that all the employees hired at MS did meet a certain level of intelligence. Sadly, this practice is obviously no longer in use.

      • destroy.all.monsters
      • 8 years ago

      These really are some stupid talking points. You’d think Microsoft couldn’t afford a good p.r. team.

      The funny thing is that elsewhere Ballmer says that he’s glad that the yahoo deal didn’t go through – and makes great pains to make it sound like it was just because of the timing – so he does know when not to make an ass out of himself at times.

      That said I think the jury is still out as to how useful Siri and its competitors are.

      • turkeysam
      • 8 years ago

      You need a degree in computer science to use Windows.

        • willmore
        • 8 years ago

        If you have a degree in computer science, you wouldn’t *touch* windows.

      • ModernPrimitive
      • 8 years ago

      Balmer is like Joe Biden on steroids. Has his foot in his mouth most of the time. I’m interesting in owning a WP7 device or maybe even the next iPhone. Jobs and his RDF was hard to stomach but Balmer is one that makes me a bit angry when i even see a picture of him… lol.

      As far as the attack on Android. Idiotic. I’ve used WinMo 6.0 / 6.1 / 6.5 and Android and you have to be worse off than retarded to not be able to use one from my perspective but maybe I’m giving myself too much credit there….

        • Arclight
        • 8 years ago

        “As far as the attack on Android. Idiotic. I’ve used WinMo 6.0 / 6.1 / 6.5 and Android and you have to be worse off than retarded to not be able to use one from my perspective but maybe I’m giving myself too much credit there....”

        Nonesense. You give Ballmar too much credit!

        • Yeats
        • 8 years ago

        “Balmer is like Joe Biden on steroids…”

        Made me chuckle. +1 to you.

    • Arclight
    • 8 years ago

    “Business Insider: Ballmer thinks you have to be a computer scientist to use Android”

    Ballmar is such a tool; somebody better tell him to retire or step down. He’s doing more harm than good to M$.

    “Researchers in Singapore have discovered a way to bump up hard disk storage capacity to six times current figures, and all it takes is a pinch of sodium chloride -- also known as chemical grade table salt.”

    Wow, that’s major news for storage freaks, but the issue of performance remains. I still drool for a 120 GB or 256 GB SSD and wouldn’t give a f@#k about multi-TB drives. But as said before, there are people who actually need more space (for pr0n, nah i keed, i keed)

      • BobbinThreadbare
      • 8 years ago

      If density is getting increased 6 fold, won’t transfer rates get the same increase (assuming spindle speeds stay the same)?

      Briefly looking through TR’s latest review, the Caviar Black is about 1/4 the speed of the fastest SSD. Maybe there is some life in spinning drives yet.

        • Arclight
        • 8 years ago

        “If density is getting increased 6 fold, won’t transfer rates get the same increase (assuming spindle speeds stay the same)?”

        I have no idea. Never knew density influences performance for mechanical drives. If that is true though it sounds awesome.

          • Elsoze
          • 8 years ago

          It does. Given a constant rotation rate there is more data to pick up in one area in a denser (bigger) drive, hence your faster transfer rates. On the reverse though, you do tend to get slightly higher access times, which often is bad.

          The biggest issue is still going to be that’s a lot of data to lose on one drive. I wouldn’t mind having huge drives if they could do something like internal mirroring (for instance two platters mirror the other two platters) and not have the drives themselves be any bigger. There’s just too damn big of a chance of errors when you hit greater than 2 TB. Plus raid rebuild times are *horribly* long.

            • Arclight
            • 8 years ago

            Actually the biggest beef I have with mechanical drives is the vibration/seek noise they create. If I could have afforded a big-capacity SSD, I would have happily bought one just to get rid of the noise (I have a Caviar Black; choosing it was my biggest mistake).

            • LaChupacabra
            • 8 years ago

            With these kind of densities it would be pretty sweet to replace the 3.5″ drives in most desktops with smaller 2.5 or even 1.8. The storage needs for most people aren’t in the multiple terabytes (currently), they would be cheaper to make and would allow for more compact form factors. Maybe instead of the death of spinning disk these larger densities will bring the death (for most people) of 3.5″ drives.

        • MadManOriginal
        • 8 years ago

        Transfer rates yes, random access times no.

          • UberGerbil
          • 8 years ago

          And that’s the key. The reason SSDs seem *so* much faster than spinning hard drives is because their access latencies are near zero, which matters much more than sustained transfer rate for most daily tasks (things like video transcoding aside). The transfer rates for a given SSD might be 6x a typical hard drive, but the access times are (12ms / 0.06ms) = 200x lower (or more). When you’re trying to read 4K pages scattered here and there, that means the SSD generally will have fetched a hundred before the HD even gets around to the first. And if the program you’re running, or the general smoothness of Windows, is dependent on loading those pages, that makes a huge difference.
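A minimal sketch of that latency arithmetic, using the typical figures quoted in the comment (they are illustrative round numbers, not measurements from any specific drive):

```python
# Latency ratio between a typical mechanical drive and a typical SSD.
hdd_access_ms = 12.0   # spinning drive random access (seek + rotation)
ssd_access_ms = 0.06   # SSD random access

print(hdd_access_ms / ssd_access_ms)  # 200.0 -> ~200 scattered reads per HD read

# Time to fetch 100 scattered 4K pages, ignoring transfer time entirely:
print(100 * hdd_access_ms, "ms on the HD vs", 100 * ssd_access_ms, "ms on the SSD")
```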

        • willmore
        • 8 years ago

        For a 6x areal density increase, you’ll get a sqrt(6) increase in linear density, which will be roughly proportional to your sequential read/write speeds. Possibly slower if more ECC is needed, and possibly a little faster as the track pitch will be tighter and track-to-track seeks may be faster. Or the weaker head signal and higher ECC may make track-to-track seeks slower. It’s very hard to put real numbers on it.

        And this nano-patterning isn’t anything new. I remember wandering the hallways at UW-Madison while my wife was there and reading posters in the ChemE department about different approaches to it. Then again, that may mean that someone finally got a working system.
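The sqrt scaling above can be sketched in a couple of lines (a rough estimate only; it ignores the ECC and head-signal caveats the comment mentions):

```python
import math

# Sequential throughput scales with linear density, which goes as the
# square root of areal density (bits per track x tracks per inch).
areal_gain = 6.0
linear_gain = math.sqrt(areal_gain)
print(f"~{linear_gain:.2f}x sequential transfer rate")  # ~2.45x, before ECC overhead
```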

          • BobbinThreadbare
          • 8 years ago

          Ok, thanks for the numbers. So double the transfer speed is probably a decent bet? That would be nice, but it’s not going to compete with SSDs.
