Samsung working on custom 64-bit processor core

During an analyst event in South Korea, Samsung unveiled plans for some interesting new mobile technologies. The audio and slides for all the event’s presentations are available online, and there’s a lot of analyst-focused information to wade through to get to the juicy tech details. Here are the highlights.

The biggest revelation is that Samsung is working on a custom CPU core. The core will be 64-bit, and it should be ARM-compatible like the custom cores in the latest Qualcomm and Apple SoCs. There’s no timeline for its arrival, though. Samsung will apparently adopt ARM’s existing 64-bit core before rolling out a custom solution of its own.

Although Samsung didn’t discuss other details about its custom core, it did reveal some interesting information on through-silicon via (TSV) technology that allows memory and logic circuitry to be stacked on the same package. Samsung says it has "a real chip" that uses TSV and is running "all the software." That chip offers 14% better memory performance than LPDDR3 with 60% lower power consumption, the company says, and the next generation is supposed to boost the performance advantage to 30%. TSV makes a lot of sense for mobile processors, and I’m curious to see how quickly the tech can be deployed in Samsung SoCs.

After discussing TSV and its custom ARM core, Samsung touted the benefits of its 14-nm FinFET fabrication process. The company didn’t connect the dots between those technologies, but another slide confirms that Samsung will begin producing SoCs using its 14-nm FinFET process starting next year.

In other semiconductor-related news, Samsung revealed that its 3D V-NAND flash memory is coming to client SSDs next year. The vertically stacked NAND is already shipping in an enterprise-class drive, and it’s scheduled to hit mobile devices in 2015.

On the display front, Samsung is promising 560-PPI smartphone screens for next year. It looks like those screens will be a little larger than 5" and boast 2560×1440 display resolutions. 4K smartphone displays are expected in 2015, so PPI records will continue to be shattered.
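
For the curious, the math roughly checks out. Here's a quick sanity check in C, assuming a hypothetical 5.2" diagonal (Samsung didn't give an exact panel size):

    /* Back-of-the-envelope PPI check (hypothetical 5.2" diagonal). */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double diag_px = sqrt(2560.0 * 2560.0 + 1440.0 * 1440.0); /* ~2937 pixels */
        double ppi = diag_px / 5.2;                               /* ~565 PPI */
        printf("%.0f PPI\n", ppi);
        return 0;
    }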

The roadmap above suggests that foldable displays will arrive by 2016. "Technology barriers" appear to be in the way, but there’s apparently nothing stopping Samsung from releasing "bended" displays next year. It’s unclear what differentiates the bent displays from the curved screen of the Galaxy Round.

Comments closed
    • pohzzer
    • 6 years ago

    Total bummer seeing technology reach this incredible threshold of fascinating possibilities even as Fukushima steadily, and apparently unstoppably, deteriorates toward total site meltdown, rendering the northern hemisphere uninhabitable.

      • BlondIndian
      • 6 years ago

      WTF does your post or Fukushima have to do with Samsung?

      • Pwnstar
      • 6 years ago

      TAKE YOUR MEDICATION, PLEASE!

    • pohzzer
    • 6 years ago

    Hope the 4K smartphones come with a jewelers loupe.

    16K smartphones by 2020?

    • Unknown-Error
    • 6 years ago

    So is this bye-bye [b]big.LITTLE[/b] for the big guns? (excluding MediaTek and a few others)
    [b]Qualcomm[/b] - Gone 'Custom' long ago
    [b]Apple[/b] - Gone 'Custom'
    And now [b]Samsung[/b] - about to go 'Custom'?
    Edit: [b]nVidia[/b] will also go custom 64-bit with 'Denver'

      • BlondIndian
      • 6 years ago

      big.LITTLE is different from custom cores. Samsung could theoretically design two 64-bit cores and use big.LITTLE with them. However, I agree that this is unlikely.

      • jihadjoe
      • 6 years ago

      Probably.

      big.LITTLE was a stopgap measure that really only said ARM couldn’t control clocks or power-gate their bigger chips as well as they would have liked.

      The handover process alone introduces a whole bunch of possible problems. For example, the L2 cache is bigger on the A15 than it is on the A7, and the A7 doesn’t have an L3 at all. So what happens to coherency when big.LITTLE chooses to switch cores? Does it somehow know which portions of the cache are relevant to the current execution context and copy accordingly? Or does it have to reload everything from main memory?

    • ronch
    • 6 years ago

    I think it’s about time the big boys started designing big cores using ARM, and not just those cute little cores for use in smartphones and tablets, which are practically just toy computers. We want big ARM cores that feature all the tricks in the book used by Haswell. Come on, Samsung, Nvidia, [s]AMD[/s], Apple... put those billions to good use!

      • Klimax
      • 6 years ago

      What for? No advantage there.

        • ronch
        • 6 years ago

        To chart the future of computing.

        • Pwnstar
        • 6 years ago

        If ARM can do it for a lower price than Intel, that’s an advantage.

    • ronch
    • 6 years ago

    Bulldozer’s Chief Architect, Mike Butler, has gone to Samsung. Is he part of the team designing this core?

    It’s very interesting to see how ARM is increasingly gaining momentum, with many companies either adopting ARM’s off-the-shelf cores or designing their own cores to differentiate themselves from the competition which is getting a bit heated. Of course most of the companies that are able to design their own cores are big companies, considering the resources needed to pull such things off. Intel must be getting worried here. It’s up against an increasingly popular ISA backed by some really big boys in the industry. If they’re not careful x86 may well be left out in the cold many years from now. Hopefully my desktop or laptop 10-15 years from now is powered by something other than x86. I’d much rather have an ISA that’s open to anyone who wants to implement CPU cores using it than one that’s being controlled by a big bully against a far smaller competitor (that would be little ol’ AMD, of course).

      • Pwnstar
      • 6 years ago

      So Exynos 6 is Samsung’s Bulldozer? =P

        • ronch
        • 6 years ago

        Does it have lots of [s]cores[/s] modules, run hot, and have low per-core performance? /sarcazm

          • Pwnstar
          • 6 years ago

          Dunno, why don’t you ask Mike Butler that question?

            • ronch
            • 6 years ago

            How could I? I still haven’t gotten an answer as to whether or not he’s part of the team that’s designing this Samsung core.

            • Pwnstar
            • 6 years ago

            I was being snarky. =P

      • shaurz
      • 6 years ago

      Indeed. Intel’s decades-long stranglehold on x86 has been very damaging to CPU competition and innovation in the PC space. Hope to see some real desktops and laptops with ARMv8 cores. I hope I don’t have to wait 10 years though…

        • ronch
        • 6 years ago

        The only things keeping x86 alive are Windows and the x86 software installed base. When people buy a PC they want it to be able to run Windows as well as their current software. Well, what if an ARM-based PC could also run Windows and all your current PC apps? If performance, price and power efficiency are comparable to x86 microarchitectures, an ARM core would be an easy pick.

        But then again, as time goes on PCs will start (or have started) to look old, and people will want sleeker machines, and if Intel can’t successfully fend ARM off, they’ll find themselves alone in their sandbox.

    • RandomGamer342
    • 6 years ago

    [quote]On the display front, Samsung is promising 560-PPI smartphone screens for next year. It looks like those screens will be a little larger than 5" and boast 2560x1440 display resolutions. 4K smartphone displays are expected in 2015, so PPI records will continue to be shattered.[/quote]

    And once again, we lose any kind of battery gain the SoC upgrades give us

      • Airmantharp
      • 6 years ago

      Depends on the display tech. Stuff like Sharp’s IGZO is designed to increase resolution without decreasing transmissivity, thus keeping backlight power levels the same.

        • suprem1ty
        • 6 years ago

        I wonder about the video card drawing more power though. Doing animations & games etc. on ~3.7 million pixels has got to be pretty hard on it.

        I kinda think the returns from increased resolution on mobile devices diminished a while ago. Once you hit the point where you can’t easily distinguish individual pixels and everything looks fluid and smooth, what’s the point except to look better on paper?

          • Pwnstar
          • 6 years ago

          Higher resolution means you can fit more things on the display, as if it were zoomed out. You can also hook the phone up to a high-resolution TV better, without the picture being squished into a small area on screen (in other words, at the actual resolution).

          • Airmantharp
          • 6 years ago

          I’m actually wondering myself what the point is. In particular, for a number of things you can expect upsampling to occur to keep performance in line. The power draw isn’t going to be a big issue, as frame buffers are extremely small and most animations are not intense, but obviously for certain things there’s going to be a cost.

            • Pwnstar
            • 6 years ago

            Up-sampling doesn’t look as good as the real thing.

      • Pwnstar
      • 6 years ago

      Then increase the size of the battery, please.

    • tipoo
    • 6 years ago

    I don’t understand why they can spend the money on useless 560ppi displays, but can’t be bothered to calibrate a display properly before shipping it.

    Sure, there’s “Because consumers like big numbers and don’t understand color accuracy”, but the review circuit regularly calls them out on it.

    I will, however, be interested to see their CPU chops rather than just licensing ARM cores. And for the love of science, phone chip makers need to stop chasing GHz at the expense of efficiency; Apple really pulled a Core 2 Duo on everyone with its 1.3GHz part beating their 2.3GHz ones.

    • windwalker
    • 6 years ago

    I thought the consensus here was that 64 bit was a silly marketing gimmick.
    Or is it only when Apple beats all the Android “innovators” by many years?

      • tipoo
      • 6 years ago

      I don’t think that was the consensus here at all. 64-bit app builds had a notable, undeniable performance increase. Not as big as the gain from the architecture change alone, but still a sizable amount on top of that.

      64 bit with the same RAM size could be a criticism (Anandtech showed the same apps using 30% more RAM, and even before this they were booted out of memory too often), but it’s not just marketing on its own.

        • maxxcool
        • 6 years ago

        64-bit addressing is only for RAM support, not performance increases. The only performance increase you will see is if the application needs to address more than 3.75 gigs of RAM.

          • tipoo
          • 6 years ago

          Just look at the 32 vs 64 bit performance tests *ON THE SAME CHIP* on the AT 5S review. There are differences in the ARMv8 ISA that increase the performance. You could say it’s not directly attributable to 64 bit, but they are directly tied to each other, and “ARMv8 ISA” isn’t very marketable.

          Even outside of ARM, 64-bit universally brings wider registers, which alone help performance somewhat.

            • chuckula
            • 6 years ago

            64 bits… in isolation… provides a performance benefit for 64-bit math operations and that’s about it from a computation standpoint. It has some benefits for workloads that require large contiguous memory mapped ranges (where the memory is usually sparsely populated). 64-bits most certainly can have performance disadvantages in some cases not the least of which are due to the increased cache pressure from having to deal with 64 bit pointers and increased memory usage that can come with some 64 bit programs.

            The reason that 64 bit has gotten conflated with “big performance!” is that the two major architectures that consumers use, x86 and ARM, have folded extensions to 64-bits in with other major architectural changes that give you the real-world performance boosts. In other architectures like MIPS that jumped to 64-bits without making huge architectural changes that are really orthogonal to 64-bittedness, the performance is almost flat or even negative outside of workloads that were expressly trying to do 64-bit precision math using 32-bit registers.
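
            (A minimal C sketch to illustrate the point above about 64-bit math operations; it is not part of the original comment. On a 32-bit register file, a 64-bit add has to be split into two adds plus carry propagation, which is roughly what a compiler emits, while a 64-bit core does it in a single register-width operation.)

                #include <stdint.h>
                #include <stdio.h>

                /* Roughly what a compiler emits for a 64-bit add on a 32-bit target:
                   two 32-bit adds plus carry propagation. */
                static uint64_t add64_on_32bit(uint32_t a_lo, uint32_t a_hi,
                                               uint32_t b_lo, uint32_t b_hi) {
                    uint32_t lo = a_lo + b_lo;
                    uint32_t carry = (lo < a_lo);      /* low-half overflow */
                    uint32_t hi = a_hi + b_hi + carry;
                    return ((uint64_t)hi << 32) | lo;
                }

                /* On a 64-bit core, the same operation is a single add. */
                static uint64_t add64_native(uint64_t a, uint64_t b) {
                    return a + b;
                }

                int main(void) {
                    uint64_t x = 0x00000001FFFFFFFFULL, y = 1;
                    printf("%llu %llu\n",
                           (unsigned long long)add64_on_32bit((uint32_t)x, (uint32_t)(x >> 32),
                                                              (uint32_t)y, (uint32_t)(y >> 32)),
                           (unsigned long long)add64_native(x, y));
                    return 0;
                }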

            • WillBach
            • 6 years ago

            In ARMv8-A, 64-bit math and 64-bit pointers are separated architecturally. You can have one without having the other. There are many performance features that come with the new ISA, but Apple could have (in theory) taken the larger registers without the larger pointers. But then they would have had to invest extra effort for almost no benefit and compile separate 64-bit and 32-bit iOSes.

            • chuckula
            • 6 years ago

            [quote]In ARMv8-A, 64-bit math and 64-bit pointers are separated architecturally. You can have one without having the other.[/quote]

            That’s nice, but the same thing applies to x86, BTW: https://noggin.intel.com/content/the-x32-abi-a-new-software-convention-for-performance-on-intel%C2%AE-64-processors

            As for separate 32-bit and 64-bit iOSes, well, they are going to have to support both, short of leaving 64-bit functionality deactivated until the very last iPhone 5 and iPad 4 is officially EOL’d and the iPhone 5S and iPad Air are the oldest devices that still receive updates. Apple has been able to support coexistence between new and old architectures in the past through fat binaries & libraries, but in a mobile device with already limited storage, the overhead of the fat binary approach may be harder to justify.
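
            (A hedged aside, not part of the comment: the x32 idea chuckula links to can be tried with GCC’s -mx32 flag on an x32-enabled Linux toolchain; under it, pointers stay 4 bytes while 64-bit integer math still uses the full-width registers. A minimal check:)

                /* Build (assumes x32 multilib support): gcc -mx32 sizes.c -o sizes
                   "sizes.c" is just an illustrative filename. */
                #include <stdio.h>

                int main(void) {
                    printf("sizeof(void *)    = %zu\n", sizeof(void *));    /* 4 under -mx32, 8 under -m64 */
                    printf("sizeof(long long) = %zu\n", sizeof(long long)); /* 8 either way */
                    return 0;
                }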

            • WillBach
            • 6 years ago

            We’re in a pedantry spiral 😮 And you’re [s]wrong[/s] less right 😀

            The x32 ABI only works on x86 CPUs that support the 64-bit extensions. You can’t have an x86 CPU with the wider 64-bit registers that doesn’t have architectural support for the wider 64-bit pointers. You can use the wider registers (in 64-bit mode) while using only 32-bit pointers (x32), but there’s no way for a CPU to conform to the AMD64 spec (and make those wider registers available) without supporting the full 64-bit pointers. (Can you tell that I [b]love[/b] computer architecture? Reading the Tech Report is what got me to change majors all those years ago.)

            Anyway, that’s a good point about Apple needing 32-bit iOSes for the older devices, but if the A7 didn’t support the wider address range, Apple would have to maintain two versions of the ARMv8-A iOS if they added the wider address range later.

            Back to the first point: you’re right when you say that Apple gets the benefits of the ISA change with 64-bit, and that’s where the real benefit is for now. I just wanted to say that, in theory, Apple could have gotten those ISA benefits without the wider address range, even though it would have been a dumb idea 🙂

            • tipoo
            • 6 years ago

            Yes, that was part of my point, ARM at 64 bit is inseparable from the new ARMv8 ISA, which has a whole host of improvements.

            • maxxcool
            • 6 years ago

            Going to 64-bit on the same core yields no gains; the gains are purely from the ISA. Your argument is invalid, as the point I replied to was that "going 64-bit alone makes a huge difference."

            • tipoo
            • 6 years ago

            I’m aware. But you also can’t go to 64 bit alone without adopting ARMv8, on the ARM side of things.

            • Pwnstar
            • 6 years ago

            That’s right, it’s not.

            [quote]it's not directly attributable to 64 bit[/quote]

            • maxxcool
            • 6 years ago

            You do realize it has a different architecture... right? /facepalm/

            • willyolio
            • 6 years ago

            Are you sure this isn’t simply due to a bad 32-bit emulation layer? 32-bit programs run on WOW64 within 64-bit Windows. WOW64 runs wonderfully with no decrease in performance, but are you sure it’s the same for Apple’s iOS?

      • Voldenuit
      • 6 years ago

      [quote]I thought the consensus here was that 64 bit was a silly marketing gimmick. Or is it only when Apple beats all the Android "innovators" by many years?[/quote]

      64-bit is a gimmick when Apple stiffs iPhone 5S/5C and iPad Air users with 1 GB of RAM, despite the 64-bit OS and application pointers using more RAM. On something like, say, a Galaxy Note 3 that is already pushing the 32-bit address space with 3 GB of RAM, 64-bit will be needed soon, so it would not be a gimmick.

      If Apple wants to talk the talk, they need to walk the walk. If they want to tout 64-bit CPUs, they’d better put more goddamn RAM in their devices.

        • LukeCWM
        • 6 years ago

        You summed it up perfectly. =]

        • tipoo
        • 6 years ago

        They could definitely use more RAM, especially as the same 64-bit apps are using 30% more. But that’s not to say 64-bit is completely pointless aside from RAM; there are changes in the ARMv8 ISA that make it much better. Look at AnandTech’s tests with 32- vs 64-bit iOS apps: there is a definite performance improvement ASIDE from just the new architecture. Do I wish they moved it to 2GB of RAM? Yes, absolutely. But it’s not just marketing; there is a performance angle as well.

        • WillBach
        • 6 years ago

        Apple is walking the walk – there are a lot of benefits outside of addressable memory to going with the new 64-bit capable ISA (ARMv8-A). While you could, in theory, make an ARMv8-A processor with only 32-bit addressing, there’d be only minuscule benefit over full 64-bit addressability. And developers won’t ever have to compile separate 32-bit ARMv8 and 64-bit ARMv8 binaries, which is a big win.

          • maxxcool
          • 6 years ago

          So you’d rather force people to code 64-bit when they have been coding 32-bit all along, and raise dev costs? For zero gain in a pure 32- vs 64-bit comparison?

            • WillBach
            • 6 years ago

            What? No! It’s the inverse! Nothing is forced: 32-bit apps run [b]just fine without even a recompile[/b] on the A7. A simple recompile (unless you hard-coded pointer sizes) lets applications run 64-bit with a [b]significant performance gain[/b]. The only devs who are forced to do more work are the ones writing the Objective-C runtime at Apple. Mike Ash has a great explanation (including some benchmark data) at [url=http://www.mikeash.com/pyblog/friday-qa-2013-09-27-arm64-and-you.html]his blog[/url].
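
            (To illustrate the "unless you hard-coded pointer sizes" caveat, here is a minimal, hypothetical C sketch, not from the comment, of the kind of assumption that survives a 32-bit build but truncates pointers after a 64-bit recompile:)

                #include <stdint.h>
                #include <stdio.h>

                int main(void) {
                    void *p = &p;
                    /* Bad: assumes a pointer fits in 32 bits; truncates on a 64-bit build. */
                    uint32_t truncated = (uint32_t)(uintptr_t)p;
                    /* Good: uintptr_t is sized to hold a pointer on either build. */
                    uintptr_t intact = (uintptr_t)p;
                    printf("sizeof(void *) = %zu\n", sizeof(void *));
                    printf("truncated = 0x%08x, intact = 0x%llx\n",
                           truncated, (unsigned long long)intact);
                    return 0;
                }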

      • Laykun
      • 6 years ago

      There were other, more technical reasons for 64-bit processing being a plus on mobile. It’s not just for addressing extra memory; with the way the internals of the processor change, it can deliver performance increases for certain types of problems. I don’t remember what they were, but I remember them being somewhat significant.

        • tipoo
        • 6 years ago

        64-bit brings wider registers, which help with a lot of things. It’s not just for memory addressing; you can move more data over the same-speed bus. The proof is in the pudding: everyone here doubting that 64-bit does anything should look at the AnandTech 5S review, where 64-bit builds increased performance over 32-bit on the same chip.

          • axeman
          • 6 years ago

          edit: removed for saying what you said in another post, the bigger gains on A7 are from improvements in ARMv8A apart from 64bit-ness.

        • Billstevens
        • 6 years ago

        Any problem that would be slowed by having to break computation up into smaller 32-bit chunks could be faster on a 64-bit architecture. Having a 64-bit chip adds new capability for developers, which I consider a plus; as long as the performance penalty for older legacy applications is negated by overall processor improvements, being 64-bit can only be a good thing. Though it’s not necessarily a reason to jump up and down with excitement.

        Wikipedia cites encoders, decoders, and encryption software as benefiting from bigger registers. Phones are doing more and more of what PCs do, so at some point these capability improvements will matter a lot more than they may today.

      • brucethemoose
      • 6 years ago

      64-bit may be a marketing buzzword. But a new ISA is a BIG deal.

      • Pwnstar
      • 6 years ago

      because MOAR REGISTARS

      (misspelling is intentional)

      • trackerben
      • 6 years ago

      Samsung’s position agrees more or less with Apple’s.

      http://news.cnet.com/8301-1035_3-57611431-94/samsung-to-launch-64-bit-phones-in-2014-says-report/?part=rss&subj=news&tag=title

      "Samsung to launch 64-bit phones in 2014, says report": "...Currently, with the exception of Apple, smartphones use 32-bit processors. A 64-bit processor can address memory above 4GB and can deliver higher performance on certain operations -- such as those related to games -- than 32-bit chips."

    • MadManOriginal
    • 6 years ago

    4K smartphone screens… probably still with <24-hour moderate-use battery life. >:(

    • HisDivineOrder
    • 6 years ago

    So 4K phones are coming in 2015, but these companies can’t give me a 120Hz IPS 1600p panel connected to an SoC that can handle it for a decent price? Wtf.

      • internetsandman
      • 6 years ago

      All I want is 1600p@120Hz; I don’t give a damn what I have to connect it to

      • TwoEars
      • 6 years ago

      I hear you.

      I have no reason to upgrade my computer as it is right now since it easily can handle 1920×1200@60Hz.

      I don’t want to go to 1080p@120Hz since I don’t want to lose vertical real estate.

      Give me a 2560×1440 or 2560×1600 @ 120 Hz. Time to get this show on the road.

        • brucethemoose
        • 6 years ago

        2560×1440 at 120Hz is already here. I’d rather have a VA panel, though.

          • Firestarter
          • 6 years ago

          Oh is it? Show me.

            • Airmantharp
            • 6 years ago

            Various Korean exporters have been using the LG panels (that everyone and their brother uses) with semi-custom controllers. A quick search would get you there, but note that such monitors are not without their drawbacks.

            • internetsandman
            • 6 years ago

            I think this should be amended to say:

            We want 1440p/1600p@120Hz stock, out of the box, from a reputable, top-tier manufacturer, with a significant return policy and a warranty that doesn’t have to be voided in order to achieve said 120Hz

            • Firestarter
            • 6 years ago

            well honestly I wouldn’t mind going into unsupported territory, as long as it doesn’t immediately void my warranty

      • Kurotetsu
      • 6 years ago

      Because they can sell a hell of a lot more 4K phones, with much healthier profit, than they can 1600p (with whatever dozen features you want) or 4K monitors.

    • OneArmedScissor
    • 6 years ago

    I would like to know if that 60% reduction in power is referring to the RAM or also the memory controller and whatever is linking them.

    There would be a huge difference between the two as this technology is scaled up. The RAM is going to make a bigger difference in anything running on a battery, but the memory controllers are the power guzzlers in server / workstation CPUs and graphics cards.

      • NeelyCam
      • 6 years ago

      Some of the power savings is definitely coming from a WideIO-like link between the memory controller and the TSV-connected memory chip. Even the LP versions of the DDR standards are completely suboptimal for extremely short links.

      On that note, as far as I can tell, HMC is sort of applying a similar technique inside the memory cube itself – stacked memory dies with TSVs connecting them to the logic buffer chip. Meanwhile, the link between the memory cube’s buffer chip and the CPU/GPU uses a high-speed serial link (instead of a super-wide, DDR-like, power-guzzling link).

        • NeelyCam
        • 6 years ago

        Am I a whiny b*tch..?

          • Pwnstar
          • 6 years ago

          Yes.

        • the
        • 6 years ago

        Agreed. It wouldn’t surprise me to see some SoCs come with a companion eDRAM die. Even with current stacking and package-on-package technology, there is room to drive down both power and latency while increasing memory performance.

    • jdaven
    • 6 years ago

    So who will get a 14nm product to market first: Samsung or Intel? Both have a lot of resources and high-tech fabs. It will be close.

      • chuckula
      • 6 years ago

      Pffft…. Intel ALREADY has 14nm products on the market… at least according to Samsung’s definition of “14nm”.

      In fact, Jdaven… since you like Samsung’s metric so much, YOU MIGHT WIN THE BET! Here’s why: Intel won’t introduce any *new* “14nm” parts in 2014… because according to Samsung’s definitions, Intel will actually be introducing 10nm parts in 2014! JDAVEN IS PROVEN RIGHT!

      • NeelyCam
      • 6 years ago

      My guess is Intel. Samsung will be about 12-18 months behind.

      What will be interesting is how well Samsung will be able to shrink the die area. TSMC has already stated that they won’t shrink the back-end between 20nm and 14nm – i.e., area won’t reduce much. Intel has stated that their area scaling keeps going:

      http://static.cdn-seekingalpha.com/uploads/2013/7/6/1095245-13731690657196722-Ashraf-Eassa_origin.png
      http://seekingalpha.com/article/1536872-intel-could-be-hiding-something-huge

      Samsung hasn't said anything about it.

        • Antimatter
        • 6 years ago

        There’s speculation that Intel’s 22nm process is actually 26nm. At the “14nm” node, Intel and the other foundries will likely have similarly sized transistors.

        http://www.electronicsweekly.com/mannerisms/general/the-intel-nanometre-2013-02/
        https://www.semiwiki.com/forum/content/2640-intel-14nm-delayed.html

          • Pwnstar
          • 6 years ago

          That’s not what the chart he linked says. Are you claiming it is wrong?

          • NeelyCam
          • 6 years ago

          Even if Intel’s “22nm” transistor is actually a “26nm” transistor, its performance is way better than a TSMC “28nm” transistor’s, and likely to be better than a “20nm” TSMC transistor’s (because Intel’s is FinFET and TSMC’s is planar).

          The nanometer numbers lost their correlation with transistor sizes a while ago. These days they are used to loosely represent 1) transistor performance or performance/watt improvements, and 2) area reduction. Soon, as the graph shows, the area reduction will stop going hand-in-hand with the “nanometers”, which will result in significant cost challenges, as NVidia was complaining here:

          http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

      • derFunkenstein
      • 6 years ago

      Even if Broadwell is later than Intel wants, it’ll be out first.

      • tipoo
      • 6 years ago

      Hah. Intel. It won’t be close. Their fabrication process advantage is still huge, and even at the same size (their 22nm vs. anyone else’s 20nm) they have far more advanced processes thanks to FinFETs and all kinds of other things; it’s just not directly comparable by size alone.

      There are also differences in how size is measured; it’s not an exact science. The number usually describes the minimum feature size, but the average may be higher on other processes.

      • brucethemoose
      • 6 years ago

      You’re joking right? Cause I +1’d you for the sarcasm.

      • WillBach
      • 6 years ago

      If you specify SoC, it could be close. Last I heard (from [url=http://www.xbitlabs.com/news/mobile/display/20131017232018_Intel_14nm_Atom_Airmont_Processors_Are_On_Track_for_2014.html]Intel: 14nm Atom “Airmont” Processors Are On-Track for 2014 - X-bit labs[/url]):
      • Intel’s new Merrifield (22nm) system-on-chip is due in late 2013
      • Moorefield SoC (22nm refresh) is due in the first half of 2014
      • Morganfield (the first smartphone processor made on Intel's 14nm process) is due in Q1 2015
      Edit: typo.

        • brucethemoose
        • 6 years ago

        Though it’s not quite there, Broadwell is nearly an SoC.

          • WillBach
          • 6 years ago

          True! I should have said “mobile” SoC 🙂

    • WasabiVengeance
    • 6 years ago

    We don’t need 5″ 2560×1440 displays. We need 20″ 3840×2400. PC monitors are so utterly frustrating these days. At least there are a few decent laptops now with high PPI.

      • Andrew Lauritzen
      • 6 years ago

      Head-mounted displays like the Oculus Rift absolutely need 5" 2560×1440 displays, but I agree with the sentiment of wanting bigger high-DPI ones as well 🙂 Problem is that yield issues are amplified the larger the screen gets.

        • brucethemoose
        • 6 years ago

        2560x1440x2 headsets? My wallet is ready.

      • Thrashdog
      • 6 years ago

      A 5″ 2560×1440 sounds great for HMDs, though.

      • willmore
      • 6 years ago

      It’s not like they’re mutually exclusive, you know.

        • LukeCWM
        • 6 years ago

        The industry seems to imply they are mutually exclusive.

      • windwalker
      • 6 years ago

      And when you’re willing to pay for them, you’ll get them.

        • Pwnstar
        • 6 years ago

        If the industry doesn’t make them, how exactly do you pay for them?

          • windwalker
          • 6 years ago

          How many of those $3000 4K displays have you bought?
          Have you at least badgered your boss to buy some at work?

            • Pwnstar
            • 6 years ago

            Wait, are you saying there are 20” 4k displays? Because that’s the topic you replied to with your drivel.

            • windwalker
            • 6 years ago

            LOL.
            Oh yes, they’re too big.
            That’s why you haven’t bought any 4K displays.

            • Pwnstar
            • 6 years ago

            If they aren’t making what I want, why would I buy something I don’t want? Because you want it? Is that it?

            • windwalker
            • 6 years ago

            [url=http://www.engadget.com/2013/11/07/panasonic-4k-toughpad-tablet-us]Here[/url] you are, 20". How many of these bad boys are you buying?

            • Pwnstar
            • 6 years ago

            Your example has a bunch of stuff I don’t want to pay for in it. About $2,000 in computer parts and a touch screen. I’ll pass.

    • the
    • 6 years ago

    It’ll be interesting to compare the performance from four different 64 bit ARM architectures in the future: ARM’s own designs, Apple’s, Qualcomm’s and now Samsung’s. Competition is a good thing!

    As for when we’ll see Samsung’s own ARM core, I suspect it’ll be sometime in 2015 at the earliest. The announcement that Samsung will utilize ARM’s own designs first points toward the A53 and A57 in 2014.

      • dpaus
      • 6 years ago

      I’d add AMD to that list. If they’re able to offer GCN modules within the ARM infrastructure, they’ll have a unique advantage in many ARM markets – especially when combined with their ‘Freedom Fabric’ solution.

        • the
        • 6 years ago

        With regard to GPUs, AMD’s role is no different from that of other SoC designers that utilize ARM’s own CPU designs. Certainly AMD is going to be a competitor on the GPU side against ARM’s Mali, Qualcomm’s Adreno, Imagination’s PowerVR, and nVidia’s mobile Kepler. If the rumors are true, Apple is developing its own GPUs in-house as well.

        AMD already has a foothold in the server market, which gives them an advantage there for RAS and scalability. I wouldn’t expect AMD’s ARM-based Opterons to aim for the single-threaded performance crown, even within the subset of other ARM-based systems. Any HPC-type workloads will be supplemented with a healthy dose of GCN-based GPUs.

      • blastdoor
      • 6 years ago

      Competition is indeed a good thing.

      It’s too bad (though understandable) that there is such limited info about Apple’s A7. Anand is now speculating that Apple has designed an ILP monster of a core: “As far as I can tell, peak issue width of Cyclone is 6 instructions.” (http://anandtech.com/show/7460/apple-ipad-air-review/2).

      So, correct me if I’m wrong, but isn’t that Itanium-class ILP?

      It’s hard to believe, but given the performance of the A7 while running at a lowly 1.3 GHz with just 2 cores, it almost has to be true that it’s an ILP beast.

      I wonder if Apple has done something interesting with its compiler in order to support this level of ILP….

        • Deanjo
        • 6 years ago

        [quote]I wonder if Apple has done something interesting with its compiler in order to support this level of ILP....[/quote]

        Nothing overly interesting recently. Apple started committing ILP optimizations to LLVM way back in 2006.

      • Narishma
      • 6 years ago

      Isn’t Nvidia making one of its own as well?

        • the
        • 6 years ago

        Yep, I forgot about nVidia’s Project Denver.

        My suspicion here is that nVidia is doing something radically different from all the other ARM CPU designs: they’re essentially going to integrate an ARM core as a command processor into an SMX cluster. Delegation of code is handed to the ALUs in the cluster via ARM’s coprocessing facilities. This path is mainly an efficiency play on the GPU side, as it allows for some register-to-register operations between CPU and GPU resources.

      • NeelyCam
      • 6 years ago

      I’ve predicted that ARM won’t be able to compete with the custom core designers simply because they don’t have enough of a cash flow to support competitive R&D resources. Apple’s and Qualcomm’s cores so far have been simply better optimized than what ARM has produced (just think A15 vs. Apple A6/A7 and Snapdragons).

      In the latest ARM financial results, ARM was cutting R&D funding (while increasing sales/marketing and administration). That will only make it more difficult for them to keep competing with Apple/Qualcomm/Samsung.

        • the
        • 6 years ago

        So far, no other player has been licensing their custom ARM cores, mainly because the custom cores are used as a gateway to sell an entire SoC. Qualcomm is the big example here with their Snapdragon cores. If there is a large enough vendor wanting a particular set of features in an SoC, Qualcomm will custom design it to those specs. nVidia has stated that Project Denver will first show up in a GPU, though Tegra is expected at some point. Samsung has yet to announce any end products using its custom core. Much like Qualcomm’s, nVidia’s and Samsung’s SoCs can be purchased and put into other devices. Apple’s approach is far narrower: its custom cores go only into Apple SoCs destined for Apple products.*

        There is still a need for a vendor that sells just the cores, not tied to a particular SoC. This allows the flexibility of truly custom SoCs instead of being tied to a particular vendor and their implementation. In addition, it enables the custom SoC to be manufactured at a wider range of shops. For example, Altera is using Intel to manufacture an FPGA chip with a quad-core A53 on-die.

        *There is an interesting exception here: Apple still has to manufacture several PA-Semi chips that originated before Apple’s purchase. Those PA-Semi chips use a custom PowerPC core and can be found in some Amiga systems and even military hardware.
