Christmas Eve Shortbread

7 Up

  1. The x86 power myth busted at AnandTech
  2. X-bit labs: Sharp demonstrates 6″ screen with 2560×1600 resolution, opens doors to extreme HD displays

  3. The NY Times: No sales pop for Windows 8
  4. InfoWorld: Microsoft re-issues botched Patch Tuesday patch
  5. Ars Technica: Where OS X security stands after a volatile 2012
  6. Fudzilla: Tegra 4 to kick off USB 3.0 tablet revolution and Archos churns out Android 4.1 iPad lookalike

  7. Engadget: Huawei’s 6.1″ 1080p Ascend Mate flaunted by exec


Christmas Eve

  1. ZDNet: China is largest single Android market, but U.S. is catching up, says analyst
  2. The Verge: Nokia’s Windows tablet to take on Surface with battery-equipped keyboard cover

  3. Podcasts from BCCHardware and NinjaLane
  4. Network World: How removing 386 support in Linux will destroy the world
  5. Newegg’s deals
  6. Dealzon’s deals: $400 coupon for 15.6″ 1080p Lenovo Y580 i7-3630QM / GeForce GTX 660M, $27 coupon for 15.6″ HP Envy dv6z A6-4400M, and $119 off 15.4″ MacBook Pro i7

Mobile

  1. Tbreak’s Huawei Ascend D1 XL review
  2. HT4U reviews Google Nexus 7 (in German)
  3. Madshrimps review OtterBox LiveStrong iPhone 4 / 4S case

Software and gaming

  1. InfoWorld: Chrome 25 will disable ‘silently installed’ extensions
  2. LanOC Reviews on Assassin’s Creed 3

Hardware

  1. TechReviewSource on HP Envy 23-d060qd TouchSmart
  2. Tech ARP’s BIOS option of the week – DRAM PreChrg to Act CMD
  3. Big Bruin reviews 750GB Seagate Momentus XT hybrid drive
  4. NikKTech reviews 240GB Intel 335 series SSD
  5. BCCHardware reviews Trendnet TPL-401E and TPL-405E
  6. HotHardware reviews Rosewill 9100BR keyboard
  7. VR-Zone’s 1200W PC Power & Cooling Silencer MKIII PSU review
  8. TweakTown reviews 800W In Win Commander III PSU
  9. Hardware Heaven reviews Fractal Design Node 605 case
  10. Technic3D reviews DeepCool FrostWin CPU cooler (in German)

Comments closed
    • ronch
    • 7 years ago

    From ‘The x86 power myth busted at AnandTech’

    Not conclusive. It’s not really an apples-to-apples comparison. What if Intel did its own ARM design, employing exactly the same microarchitecture as Atom but using a different ISA? The complex CISC-to-RISC op decoders would probably go away, as well as the backend that converts RISC-like ops back into x86 form, among other things. These changes should allow ARM to show some benefits over x86.

    And then there’s Intel’s manufacturing prowess. We all know how tuning the fab process for its chips allows Intel to squeeze every bit of power efficiency out of their silicon.

    Of course, all this is purely theoretical, as nobody in their right mind would do those projects just to settle the ARM vs. x86 debate. In real-world terms Intel does manage to make Atom very energy efficient, and in the end, the final products are what end up on the shelves for folks to choose from.

    • l33t-g4m3r
    • 7 years ago

    [quote<]The NY Times: No sales pop for Windows 8[/quote<] Why am I not surprised?

    • kureshii
    • 7 years ago

    [quote<]Network World: How removing 386 support in Linux will destroy the world[/quote<] Most useless article ever? Why was this even worth mentioning in a shortbread?

    • sirsoffrito
    • 7 years ago

    Is anyone else getting Chrome’s “Malware Ahead!” warning when trying to visit the Fudzilla links?

      • MadManOriginal
      • 7 years ago

      Nope, just one that says “Malinformation Ahead!”

      • Theolendras
      • 7 years ago

      I do

      • yogibbear
      • 7 years ago

      I do too

      • NeelyCam
      • 7 years ago

      I got something similar for Semiaccurate. Googled something, got a link to S|A. When I clicked the link, Chrome said “Don’t do it!!”

      [url<]http://www.google.com/safebrowsing/diagnostic?site=http://semiaccurate.com/[/url<] BTW, Fudzilla seems to be working for me in Chrome

    • sweatshopking
    • 7 years ago

    1600p on a 6-inch screen seems useless to me. You have to trade battery life and brightness for pixels you stopped being able to see a LONG time ago. I get that it’s neat to do, and it might have some limited application, but 6″ 720p (MAYBE 1080p) is pretty damn good. I wouldn’t pay more for a higher res.

    edit: please enlighten me on the advantage of a 1600p 6-inch screen. With -4, there must be some.

      • dragosmp
      • 7 years ago

      I imagine it’s something like no aliasing and very clear font rendering… sure, it can be overkill, but better to have the extra pixels than not.

        • sweatshopking
        • 7 years ago

        At the expense of battery life and brightness? No aliasing and clear fonts are achieved, and then some, by 1080p at 6″. At 1600p it’s overkill at the expense of things we need more. It’s a bad trade.

    • Meadows
    • 7 years ago

    YOUR GRAMMAR SUCKS #45: CHRISTMAS SPECIAL

    [url<]http://www.youtube.com/watch?v=PLfhizLZi7c[/url<]

      • tbone8ty
      • 7 years ago

      that was actually pretty funny

      • dpaus
      • 7 years ago

      Wow, what quality. Their production values are growing exponentially. I wonder if TR could hire them to do the next “Build Your Own PC” guide in the same style, but using fanboi language?

        • Meadows
        • 7 years ago

        They also collaborated with Tay Zonday to sing about the meaning of Christmas.
        [url<]http://www.youtube.com/watch?v=sqL9TzRGu_o[/url<] You're welcome, you will not get this tune out of your head for weeks.

      • derFunkenstein
      • 7 years ago

      My wife got me into YGS about a month ago and it’s been making me giggle myself silly.

        • Meadows
        • 7 years ago

        I like his “sophisticated man” character the best.

          • sweatshopking
          • 7 years ago

          you would.

          • derFunkenstein
          • 7 years ago

          Yeah, he’s pretty great. “I’m in collage…atutatatatata”

          But the best is Chad Broseph Huntington.

    • Game_boy
    • 7 years ago

    “If people used to buy PCs every four years and are now buying them every five years, that could lower PC sales by 20 percent over time. That’s substantial.” – NY Times one

    25 percent. And he calls himself a market analyst.

      • lilbuddhaman
      • 7 years ago

      100years / 4yr = 25sales
      100years / 5yr = 20sales
      20s now/25s then = 0.8
      or a 20% decrease in sales.

      Right?

        • NeelyCam
        • 7 years ago

        Well… yes. Your units are a bit broken (and I wouldn’t take this approach), but overall the end result is correct.
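
        (For anyone checking lilbuddhaman’s figure, here’s a quick back-of-the-envelope script; the 100-year horizon is just his illustrative assumption and cancels out anyway:)

        [code<]
# Buyers replacing a PC every 4 years vs. every 5 years,
# over an illustrative 100-year horizon (any horizon works).
horizon = 100
sales_every_4 = horizon / 4   # 25 purchases
sales_every_5 = horizon / 5   # 20 purchases

# The relative drop is measured against the old (4-year) rate.
decrease = (sales_every_4 - sales_every_5) / sales_every_4
print(f"{decrease:.0%}")      # prints "20%", not 25%
        [/code<]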

      • MadManOriginal
      • 7 years ago

      Well, let’s see…the worldwide recession is just entering its fifth year. I’d say people and businesses put off unnecessary computer purchases during such a downturn, so we’ve only a little ways to go to hit this ‘new norm’ pattern if we haven’t already. (There are people who would have bought a computer in the last 4 years already but haven’t.)

        • Hattig
        • 7 years ago

        I suspect a lot of people have pushed out a PC replacement purchase by a year or two for many reasons – a primary one being “it’s still fast enough [I’ll just get more RAM/a new graphics card/an SSD]”, a secondary one being “I bought a tablet”. The downturn hasn’t helped at all, as you write (and it shows that political policies of financial austerity don’t increase consumer spending, and thus industry takes a dive, then jobs take a dive, then unemployment rises, and eventually taxation on the remaining employed has to rise to pay for it. Might as well just spend the money up front and try to ride it out).

      • Arag0n
      • 7 years ago

      Shocking news: because computers last longer, analysts say the PC market is doomed. Do you understand why we have planned obsolescence now? People believe things so stupidly easily…

    • NeelyCam
    • 7 years ago

    [quote<]The x86 power myth busted at AnandTech[/quote<]

    Wow. This is the first time I started to feel that AnandTech might be biased towards Intel (just like I’ve been told is the case so many times…). An extensive, multi-page review comparing a 32nm Atom to a 40nm Tegra 3 with the companion core disabled… what’s the point? And a bombastic article title claiming the x86 power myth has been busted..? Not based on this article.

    Yes, the 32nm Atom might’ve beat the 40nm Tegra-based WinRT tablet in efficiency. Great – this happens when you compare two very unequal process nodes. Graphics efficiency was mentioned even though no performance numbers were given for graphics. Do I really have to go back to older reviews to get the performance delta, and calculate the efficiency myself? I bet if I did that, the efficiency numbers would be [b<]much closer[/b<] than what this article implied, [i<]in spite of[/i<] the process node difference.

    Overall, a pretty uncharacteristically useless article from AnandTech. I would’ve wanted to see this comparison against a 28nm Snapdragon or a 32nm Exynos. Comparing to 40nm Tegra doesn’t make sense – any conclusion about x86 vs. ARM ends up being inconclusive. One could argue that Intel has an advantage in process technology, so this comparison is somewhat fair, but we all know Intel seems to be behind their own process technology curve when it comes to phones/tablets, so until 22nm stuff ACTUALLY hits the market, this 32nm vs. 40nm comparison based on an “idea” of an advantage in process tech is pointless.
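
    (The back-of-the-envelope calculation described above is just performance per watt; a minimal sketch with made-up placeholder numbers – the real scores and wattages would have to come from the older reviews mentioned:)

    [code<]
# Hypothetical placeholder numbers - substitute real review data.
atom_perf, atom_power = 100, 2.0      # benchmark score, average watts
tegra_perf, tegra_power = 80, 2.4

atom_eff = atom_perf / atom_power     # 50 score-points per watt
tegra_eff = tegra_perf / tegra_power  # ~33 score-points per watt
print(f"Atom/Tegra efficiency ratio: {atom_eff / tegra_eff:.2f}x")
    [/code<]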

      • Game_boy
      • 7 years ago

      He has somewhat of a point, because Intel now has a two-year lead on the foundries. But yeah, Tegra was a poor choice.

        • Arag0n
        • 7 years ago

        Not just that: 40nm bulk vs. 32nm HKMG is more than a two-year advantage… they should have compared with the S4 Pro or S4 at 28nm bulk; it would have been much more fair…. It was just a silly review IMHO, and most of the tests were single-core based, which also denied Tegra 3 its multi-core advantage.

          • mesyn191
          • 7 years ago

          It’s not 40nm bulk, it’s the 40nm G and 40nm LPG processes for Tegra 3. It’s mentioned on the last page of the review. I have no clue how good exactly TSMC’s G and LPG processes are vs. Intel’s “SoC 32nm” process, but the difference is probably big.

          Not that it matters.

          Intel having a process advantage is hardly “unfair”; that is the final for-sale iteration of the product, and it will be competing in the marketplace vs. the Tegra 3. It’d be like arguing it’s unfair to compare AMD chips to Intel chips because Intel has a process advantage there too.

          The Intel chip would’ve won the multi-threaded apps too, FYI. It’s another very AMD-esque situation all over again: the per-core performance advantage is too high.

            • NeelyCam
            • 7 years ago

            [quote<]It’s not 40nm bulk, it’s the 40nm G and 40nm LPG processes for Tegra 3.[/quote<] In general, “bulk” means non-SOI. All of these processes are “bulk”. All of the 40nm TSMC ones are also without a high-K/metal-gate transistor structure. Intel’s 32nm is “bulk” (=non-SOI) but uses a high-K gate dielectric and a metal gate.

            • mesyn191
            • 7 years ago

            That isn’t correct.

            [url<]http://www.tsmc.com/english/dedicatedFoundry/technology/40nm.htm[/url<]

            “The 40nm G processes provide more than twice the density at the same leakage and more than a 40 percent speed improvement compared to TSMC’s 65nm process.”

            The 40nm LPG process is some sort of low-power process from TSMC, BTW.

            “Bulk” is an ambiguous term of art, but in the process world it means “a process that emphasizes cost and yield, with performance and power being distant 3rd- or 4th-rank priorities”. As far as anyone knows, the details of TSMC’s 40nm G and LPG processes aren’t public, so we can’t really compare them properly to Intel’s 32nm “SoC” process; however, we still know it isn’t bulk at all.

            But generally Intel has had a significant lead over everyone else, even when compared at the same nm pitch and process type, while still maintaining good or even excellent yields. Pretty much anything that isn’t made by Intel is likely to be at a significant disadvantage when compared to Intel’s products for many years to come because of this. Perhaps when the industry runs out of ways to shrink chips for a while, that may change, as the other foundries might be able to catch up. Intel has some pretty good process R&D guys too, though… who knows, they might be able to get optical or carbon-based chips working far ahead of everyone else too.

            • NeelyCam
            • 7 years ago

            [quote<]That isn’t correct.[/quote<]

            Which part exactly do you think isn’t correct?

            Like you say, “bulk” is somewhat ambiguous, but those I know in industry and academia consider “bulk” to mean the ‘basic’ silicon substrate, as opposed to something more exotic like SOI (which costs more). Yes, there’s potential for misunderstanding, and people may use the word “bulk” for different things. I guess what I think the word means is based on how it was used in European/US conferences and companies… maybe it means something different in Asia?

            Nonetheless, it looks like we agree on everything else but the meaning of “bulk”, so let’s just call it semantics.

            • mesyn191
            • 7 years ago

            Everyone’s process tech is different, so what is “bulk” for one company to produce is “exotic” for another, even though they may both still put out the same number of parts. It all depends on their tools and what they’re “geared” to churn out.

            SOI, for instance, is no longer considered exotic; it just costs more (IIRC about 10% more) than “plain” silicon and offers little to no benefit at today’s smaller lithographies. That is why even AMD/GF is abandoning it soon.

            The exotic stuff these days involves InGaAs or InSb. Stuff like graphene they’re still trying to figure out at this point.

            • NeelyCam
            • 7 years ago

            As I said, it’s semantics. To you, anything cheap means “bulk”. To me, anything that starts off from cheap silicon wafers (=silicon substrate without insulating layers) is “bulk”. I called SOI exotic because most fabs are shunning it (with GloFo and IBM being the major proponents).

        • NeelyCam
        • 7 years ago

        Intel may have the process advantage, but so far they are not using it where it matters the most (cellphones/tablets)… who knows why. My guess is internal screwups – something we’ll never know about, because everything they do is ‘internal’ – but I suppose it could also be intentional segmentation, although I don’t see how that would help the company.

        Nonetheless, the comparison in this AnandTech review is completely unfair and pointless.

          • Arag0n
          • 7 years ago

          Yeah, it’s like: shocking news, 32nm HKMG draws less power at idle than 40nm bulk.

          • eofpi
          • 7 years ago

          The more likely explanation is that margins are better on desktop/laptop parts than on phone/tablet SoCs. Bleeding-edge fabs are horribly expensive, and higher-margin chips pay the fabs off faster.

        • Hattig
        • 7 years ago

        A two-year lead that they are using for leading-edge performance CPUs, not their low-power mobile SoCs.

        And people aren’t using ARM just because it’s low power. They’re using it for the ecosystem, the configurability, the ability to design your own chips, or to select from several dozen decent options between $10 and $30.

        And Tegra 3 with functionality disabled – wow, fair comparison – a year-old design that’s about to be replaced by Tegra 4, compared against the latest and greatest Intel SoC. Hardly a fair comparison, especially since there are 32nm Samsung SoCs out there already, more up-to-date ARM cores than the A9, etc.

        But most damning – even NeelyCam doubts it.

      • MadManOriginal
      • 7 years ago

      It has more to do with the fact that the test equipment and rig were whipped up by Intel than with any bias on Anand’s part; at least that’s my take after reading the conclusion. He says he really wants to try this with the iPad and other devices too. The reason for a system that doesn’t use the companion core is to compare Windows 8 experiences, even if it’s RT vs. x86, so I think that’s fair… would it make any sense to compare Android to Windows x86? That would certainly get a lot of complaints too. It’s Microsoft’s fault the companion core can’t be used, but that doesn’t matter to the end user. It does make Tegra 3 a poor choice for the Surface, though.

      I read it mostly as a cool insight into how Intel tests things, with the specific results nice but secondary. Worrying about the process node comparison not being representative of the market only applies for <1 year, until Silvermont is out (which will bring other massive improvements); after that Intel is tick-tocking Atom, so as a longer-term view it’s useful, and as a general ‘no, x86 power draw doesn’t suck’ (which we already knew) it’s nice too.

      btw I made this thread just for you Neely [url<]https://techreport.com/forums/viewtopic.php?f=13&t=85581[/url<] 😀

        • NeelyCam
        • 7 years ago

        [quote<]btw I made this thread just for you Neely[/quote<] Oh, thanks! I appreciate it 🙂

      • ludi
      • 7 years ago

      At first, I had similar thoughts regarding Anand’s acceptance of WinRT as an equal to Win-x86, when Microsoft has extensive experience with one architecture and not the other. But OTOH, the fact that Intel has at least equivalent power/performance in this tablet product generation is still pretty impressive.

        • Arag0n
        • 7 years ago

        But the bad thing for consumers is that Intel has that because of a manufacturing advantage… why is that bad? Because it means that if Intel were to spin off their foundry business (unlikely), ARM CPUs made in an Intel foundry would be better than Intel’s own hands down.

        I believe TSMC, GF or Samsung will eventually catch up with Intel, and then we may be doomed if Intel got an early win because of an advantage they built with computer/laptop revenue. Do not forget how inexpensive those ARM CPUs are compared to Intel’s… we are talking about at least a 3-to-1 price difference….

          • NeelyCam
          • 7 years ago

          [quote<]Because it means that if Intel were to spin off their foundry business (unlikely), ARM CPUs made in an Intel foundry would be better than Intel’s own hands down.[/quote<]

          Better in what way? Performance? No way – the x86 architecture with all the Intel/AMD-patented performance tricks isn’t available for ARM. Don’t forget – both Intel and AMD have been developing these high-performance cores [i<]much[/i<] longer than ARM.

          Efficiency? The current data pitting 28nm TSMC ARM A9s against somewhat similarly performing 32nm x86 Atom cores shows that x86 is more efficient with the same OS (Anand’s iPhone 5 review has some interesting task-energy data showing how well Snapdragon S4s do against Intel’s 32nm Atom). Testbench details can be argued, of course, but saying that ARM wins “hands down” when comparing ARM vs. Atom on the same process and in the same performance envelope is plain wrong.

          [quote<]Do not forget how inexpensive those ARM CPUs are compared to Intel’s… we are talking about at least a 3-to-1 price difference….[/quote<]

          Price != cost. If Intel wants to compete on price, they can – they can sacrifice both “design” and manufacturing margin to compete on price, and still stay above cost. ARM chip makers like Qualcomm won’t want to give up their >60% gross margins either… and they know that if they wanted a price war, they’d lose.

            • Arag0n
            • 7 years ago

            I know price does not equal cost, but I do not believe Intel will use their top-notch factory capacity to fight a price war with ARM, nor reach similar price points. Even if they do, and Intel wins long-term as they seem close to doing with AMD, the shift from N ARM manufacturers making NxM devices to Intel manufacturing all devices would hurt consumers, and consequently it would make computing resources more expensive for most of us, even if the cost is the same.

            Everything else – well, maybe not hands down, but I still believe ARM as it is now is better suited for mobile than Intel will be anytime soon. Even outside mobile, for military purposes ARM (or MIPS, or whatever other open ISA) is the better option, because in case of a blockade or war, Intel may be based in the homeland of your enemy, while ARM manufacturers and designers exist in as many countries as you like. War is not likely to happen anytime soon, but some countries pay attention to self-reliance, and Intel is a pretty bad choice for that purpose; that’s why the Chinese are investing in their own CPU design based on MIPS.

            You could also argue that the world is safer if America has no way to use IT companies such as Intel, Apple, Google, or MS to block any country from a modern way of life.

            • NeelyCam
            • 7 years ago

            I somewhat agree on the military applications, although militaries wouldn’t care about violating companies’ IP rights. If the Chinese military wanted to design an x86 chip, nobody could stop them. I also like your cyberpunkish “paranoia” about corporate control of the world, and governments using them to their own ends. It could make a good movie.

            And you’re absolutely right – Intel crushing ARM would be a horrible thing for consumers everywhere. I’m not saying I want it to happen – I’m just saying it [i<]will[/i<] happen. At that point I hope regulators are on top of things to limit the damage to consumers… and we’ll have to start having discussions about the format of our “free market” in general, and the balance between corporate freedom and regulation (and the role of external money in government).

            • Arag0n
            • 7 years ago

            Really, decreasing reliance on American IT companies can make the world more even. No one wants to rely on someone else for basic needs such as water or electricity, right? Computing is becoming one of those. Countries may collaborate and create common markets, but any sane country will try to guarantee a minimum supply of those through homegrown production.

            The military won’t care about violating companies’ IP; the point is that you usually need the tech developed PRIOR to any war or military intervention, so they need to design military systems around open ISAs and, if possible, national manufacturers or at least national designers.

      • Beelzebubba9
      • 7 years ago

      Maybe I read the article differently than you did, but Anand seemed pretty clear that he didn’t think it was a particularly fair comparison (and pointed out that there was a good reason Intel didn’t show up with an iPad 4 or a Samsung Nexus 10), but thought that the numbers would give better insight into SoC performance, which is something he’s big into.

      With that in mind, I think it’s hard to claim ‘bias’ – it shouldn’t be a surprise to anyone that Clover Trail is more power efficient than a previous-generation SoC with one of its biggest efficiency features disabled.

      • ronch
      • 7 years ago

      Well, AnandTech has, in my mind, been biased towards Intel for a long time. They do try to give AMD some points, but in the end it’s so obvious how warm and fuzzy they are towards Intel. That’s why I try to read their Intel vs. AMD vs. ARM comparisons with a bit of caution.

        • MadManOriginal
        • 7 years ago

        Let me guess, by ‘for a long time’ you mean since July 2006?

          • Beelzebubba9
          • 7 years ago

          People put far too much weight into the notion of bias, and such discussions are usually a crude cloak for an ad hominem argument. Anand made it pretty clear why he wrote the article, what the conditions that limited its usefulness were, and why he thought it was worth publishing anyway. At no point did I see any real indication of subversion or an attempt to hide an agenda while reading it.

          Since the “leak” of the Core 2 numbers back in 2006, it’s pretty clear that Intel finds a lot of value in maintaining a close relationship with Anand. And Anand clearly finds a lot of value in the massive number of page hits his site gets from having exclusive access to the engineering resources and products of the biggest and most important semiconductor manufacturer in the world. It seems to me that if a reader has a fundamental issue with this relationship, then they shouldn’t be reading AT.

          That said, I’ve never personally found any editorial slant in Anand’s writing that I thought merited additional skepticism. Had Intel’s staged tests shown that Clover Trail was vastly faster and more power efficient than its real competition, with results that didn’t reflect what was already available on the internet, then I believe there’d be cause for real concern. But seeing as that wasn’t the case, I’m having a hard time finding fault with either the tone or the content of the article.

    • MadManOriginal
    • 7 years ago

    Dear various websites,

    Coding your website to make it look like snow is falling, by displaying random small white blocks streaming down the screen, is neither creative nor cool. It just makes me think that my graphics card memory has started dying. Please don’t do this.

    Thanks.

      • just brew it!
      • 7 years ago

      Reminds me of a prank I played on a co-worker many years ago. I wrote a program that ran in the background, took a snapshot of the display frame buffer, and copied it back into the frame buffer upside-down (yeah, security on early Sun workstations sucked). He actually thought his monitor was dying… started whacking it to try to get the image to flip back!
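
      (A minimal sketch of the same trick in modern terms, treating the framebuffer as a NumPy pixel array – the original poked the Sun framebuffer device directly, which isn’t shown here:)

      [code<]
import numpy as np

# Stand-in for a snapshot of a 480x640 RGB framebuffer.
framebuffer = np.zeros((480, 640, 3), dtype=np.uint8)
framebuffer[0, 0] = (255, 0, 0)   # a red pixel, top-left corner

# Take a snapshot, then write it back with the row order reversed.
snapshot = framebuffer.copy()
framebuffer[:] = snapshot[::-1]

# The red pixel is now in the bottom-left corner.
assert (framebuffer[-1, 0] == (255, 0, 0)).all()
      [/code<]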

        • anotherengineer
        • 7 years ago

        You sneaky bastage!!!!

        Nice one though. My roommate in school messed with the autocorrect spelling in Word on my machine, so if I typed “and” it would auto-change to “gay” or something.

        Since I look at my keys more than the screen, half of my words were messed up and I was wondering wth happened, lol. Good times.

    • lilbuddhaman
    • 7 years ago

    Good god man, take a break! (yay for 1am work call-ins btw)
