Soft Machines debuts CPUs and SoCs based on VISC architecture

When we last checked in with Soft Machines about a year ago, the company had just announced its VISC CPU architecture, along with some surprising performance claims. Today, the company presented some more details about the VISC architecture, along with a roadmap for VISC CPUs and SoCs, at the 2015 Linley Processor Conference. We spoke with Soft Machines founder and CTO Mohammad Abdallah and the company's VP of marketing and business development, Mark Casey, to learn more about these chips.

The VISC architecture

VISC CPUs are built around the concept of "virtual cores" and "virtual hardware threads." A middleware layer sits between the guest operating system and the underlying VISC hardware. This middleware translates the guest application's ISA into VISC's native instruction set and distributes the resulting work across the CPU's virtual cores.

The most fascinating aspect of VISC is that even in single-threaded workloads, the underlying hardware can divide the work into chunks that Soft Machines calls "threadlets." In turn, a VISC CPU can spread the work of a demanding single thread running on one virtual core across multiple hardware cores. It can also dynamically provision computing resources in mixed workloads where a demanding thread and a lighter-weight task need simultaneous access to CPU resources. Soft Machines claims this flexible resource allocation lets VISC deliver two to three times the instructions per clock of traditional CPUs.
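
To make the threadlet concept concrete, here's a toy sketch of how one sequential instruction stream might be split into independent chunks by tracking register dependencies. This is purely illustrative: the instruction format and the union-find grouping below are our own invention, not Soft Machines' actual algorithm.

```python
# Toy "threadlet" extraction: group instructions from one sequential
# stream into chunks that share no data dependencies, so each chunk
# could (in principle) be dispatched to a different physical core.
# Illustrative only -- real dependency analysis must also handle
# memory operations, control flow, and register reuse.

def split_into_threadlets(instructions):
    """instructions: list of (dest_reg, [src_regs]) in program order.
    Returns groups of instruction indices with no cross-group deps."""
    parent = {}

    def find(r):
        while parent.setdefault(r, r) != r:
            r = parent[r]
        return r

    def union(a, b):
        parent[find(a)] = find(b)

    # Merge each instruction's registers so dependent chains coalesce.
    for dest, srcs in instructions:
        for src in srcs:
            union(dest, src)

    groups = {}
    for i, (dest, _) in enumerate(instructions):
        groups.setdefault(find(dest), []).append(i)
    return list(groups.values())

# Two independent chains interleaved in a single "thread":
code = [
    ("r1", ["r0"]),   # chain A
    ("r5", ["r4"]),   # chain B
    ("r2", ["r1"]),   # chain A, depends on r1
    ("r6", ["r5"]),   # chain B, depends on r5
]
print(split_into_threadlets(code))  # -> [[0, 2], [1, 3]]
```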

The virtual core concept is also key to another one of VISC's claimed aces in the hole: power scaling. Traditionally, CPUs rely in part on increasing clock frequency to improve performance, and increasing frequency requires lots of power. Overclockers will already be familiar with the huge role that voltage increases play in power consumption and heat generation. This power wall is one of the major limits to increases in CPU clock speed—and to some extent, performance—of late.

VISC may perform an end-run around this power wall to some degree, since it can ideally muster unused computing resources from idle hardware cores and bring them to bear on a gnarly task executing on a single virtual core. Since a VISC CPU gets its performance gains from scaling resources rather than frequency, its power scaling is supposed to be more linear than that of a traditional CPU with dynamic voltage and frequency scaling, whose power draw follows the dreaded exponential curve as clock speeds and voltages ramp up. These power savings are claimed to allow VISC CPUs to deliver up to four times the performance per watt of competing chips.
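
To see why resource scaling can beat frequency scaling on power, recall that dynamic CPU power goes roughly as P ≈ C·V²·f, and that higher frequencies generally demand higher voltages. The sketch below uses invented voltage and frequency numbers purely for illustration; none of them come from Soft Machines.

```python
# Back-of-the-envelope dynamic-power comparison. P ~ C * V^2 * f,
# and raising f typically requires raising V as well, which is why
# single-core frequency scaling gets so expensive so quickly.

def dynamic_power(c, v, f_ghz):
    return c * v**2 * f_ghz

C = 1.0  # arbitrary switched-capacitance constant

base  = dynamic_power(C, 0.9, 2.0)      # one core at 2GHz and 0.9V
turbo = dynamic_power(C, 1.3, 4.0)      # same core pushed to 4GHz at 1.3V
dual  = 2 * dynamic_power(C, 0.9, 2.0)  # two cores at the base point

print(f"1 core  @ 2GHz: {base:.2f}")    # 1.62
print(f"1 core  @ 4GHz: {turbo:.2f}")   # 6.76 -> ~4.2x power for 2x clock
print(f"2 cores @ 2GHz: {dual:.2f}")    # 3.24 -> ~2x power for 2x resources
```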

Hardware

Soft Machines won't produce VISC CPUs on its own. Instead, the company hopes to license its intellectual property to partners, much like ARM and Imagination Technologies do. It'll also work with partners to tailor VISC chips to their own applications.

The first commercial VISC CPU design, code-named Shasta, can present one or two virtual cores to the guest operating system on top of two physical cores with 1MB of L2 cache each. Shasta is a 64-bit CPU, and because of the VISC translation layer, it can run applications built for a number of other ISAs. Most critically, Soft Machines says it's been able to scale frequencies from 500MHz in its 28nm prototype CPU to 2GHz with Shasta, thanks in part to targeting a 16nm FinFET process. The chip has a built-in, generic 256-bit interconnect bus that's adaptable to customers' own interconnect specifications. Shasta can also be arranged in symmetric multi-processing (SMP) configurations using Soft Machines' proprietary coherency tech.

Because of its power-scaling characteristics, Soft Machines says the Shasta CPU can deliver "server performance at mobile power." According to its own testing with the SPEC2006 benchmark, the company says a Shasta CPU can deliver higher performance in similar power envelopes when compared to a number of competing CPU cores ranging from mobile to desktop designs.

Soft Machines is also developing an SoC based on the Shasta CPU, which it's calling Mojave. This chip can scale across a number of power targets, from "high-end Internet of Things" devices to servers. Mojave is built around two dual-core Shasta processors to start with, while the rest of the SoC is meant to be easily customizable by design partners.

Some potential IP blocks on Mojave include one to four channels of low-power or standard DDR4 memory running at anywhere from 2400 to 3200MT/s, 1MB to 8MB of system cache, display and imaging blocks with up to three 4K-capable display outputs, and inputs for dual 20MP cameras. The company says it's also working with Imagination Technologies to integrate that firm's next-generation graphics IP with the Mojave SoC, and the two companies are coordinating their roadmaps to hit a mid-2016 tapeout.
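
For reference, those memory options span a wide range of peak bandwidth. Here's a quick back-of-the-envelope calculation, assuming conventional 64-bit (8-byte) DDR4 channels; low-power DDR4 channel widths can differ.

```python
# Peak theoretical DDR4 bandwidth = channels * transfer rate * 8 bytes.

def peak_bandwidth_gbs(channels, mega_transfers_per_s):
    return channels * mega_transfers_per_s * 1e6 * 8 / 1e9

print(peak_bandwidth_gbs(1, 2400))  # 19.2 GB/s  (minimum configuration)
print(peak_bandwidth_gbs(4, 3200))  # 102.4 GB/s (maximum configuration)
```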

According to Soft Machines, the first partner products based on VISC CPUs will be announced in mid-2016. The company's roadmap includes parallel development of VISC CPUs and SoCs. The Shasta+ core IP will feature one to four virtual cores when it arrives in 2017, and it'll target the 10nm process node once it hits production later on. The accompanying Tabernas SoC, much like Mojave, will incorporate Shasta+ cores in an SMP design. The Tahoe CPU will include anywhere from one to eight virtual cores on the same 10nm node when its IP arrives in 2018.

We're fascinated by Soft Machines' technology, and it'll be interesting to see whether shipping hardware can deliver on the company's performance claims. If VISC works as claimed, certain parts of the CPU marketplace could be about to heat up. We'll be keeping an eye out for more details of VISC CPUs as they become available.

Comments closed
    • Mr Bill
    • 4 years ago

    I thought AMD was already using Variable Instruction Set Computing (VISC). Is not the native language of an AMD CPU made up of micro-ops that are split or combined to simulate the various x86 instructions?

      • just brew it!
      • 4 years ago

      Using micro-ops is not the same thing as VISC. AMD (and Intel) essentially decompose the CISC instructions of x86 into smaller, RISC-like units — these are what we call micro-ops. But the mapping of x86 instructions to micro-ops is fixed in the hardware, and does not change.
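
      As a concrete illustration of that fixed mapping (simplified, with made-up micro-op names, and not tied to any particular real decoder), a read-modify-write x86 instruction always cracks into the same little sequence:

      ```python
      # Simplified sketch of a fixed x86 -> micro-op mapping. The
      # micro-op names are invented; the point is that the table is
      # baked into the hardware and never changes at runtime.
      FIXED_DECODE_TABLE = {
          "add [mem], eax": [
              "load  tmp0 <- [mem]",     # read the memory operand
              "add   tmp0 <- tmp0, eax", # ALU op on an internal temp
              "store [mem] <- tmp0",     # write the result back
          ],
          "inc eax": ["add eax <- eax, 1"],  # simple ops map 1:1
      }

      print(FIXED_DECODE_TABLE["add [mem], eax"])
      ```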

      VISC (if I’m understanding it correctly) aims to make the mapping (and the configuration of the underlying execution units) reconfigurable on the fly. Cool concept if they can make it work without taking a huge hit on complexity and/or clock speed, but I’m taking the claims with a grain of salt until I see working hardware.

      In some ways, what they’re doing appears to be an extension of what Transmeta did.

        • the
        • 4 years ago

        The key part of VISC is the software layer that cracks a single instruction stream into multiple threadlets. This process is software-based, though I wouldn’t be surprised if there was an FPGA or hidden core in the front end of the design to help speed up threadlet generation. This is also what they’ll be licensing to other companies, so it is somewhat portable.

        The neat thing here is it doesn’t inherently have to crack instructions into micro-ops or convert from one ISA to another. This can be quite advantageous, as a backend core can run the native instruction stream while a threadlet is being generated. This would (presumably) save transistors in the design while offering a known baseline of performance. A quick interrupt and context switch later, and the threadlets start executing across multiple cores. As a bonus, the caches would be warm from the native execution time.

          • Mr Bill
          • 4 years ago

          Sorta reminds me of LISP.
          Edit:
          Or a similar language suitable for simulating AI.

    • maddyUS
    • 4 years ago

    Completely useless company
    Management is burning money .. Fake presentation ….

    • just brew it!
    • 4 years ago

    I wonder if the company name was intended to be a [url=https://en.wikipedia.org/wiki/The_Soft_Machine]literary[/url] or [url=https://en.wikipedia.org/wiki/Soft_Machine]musical[/url] reference... or neither.

    • just brew it!
    • 4 years ago

    It is indeed cool/interesting tech if they can move from prototype to production at a competitive price point, and the performance (and performance/watt) is decent. Those are pretty big “ifs” though!

    The flexibility to reconfigure the CPU’s internals on the fly necessarily comes at a cost in die area, power consumption, and achievable clock speeds; this is all but inevitable since there’s additional circuitry and additional complexity required to make it all happen. The billion-dollar question is whether these costs are outweighed by the advantages this approach brings to the table. We won’t know the answer until there’s production silicon being benchmarked on real-world applications.

    • Klimax
    • 4 years ago

    Interesting, but I doubt they will see much better success than Intel. Especially with ILP extraction.

    I guess we’ll have to see…

    • Unknown-Error
    • 4 years ago

    To Lt. Gen. Wasson,

    Sir, could you order Tech. Sergeant David Kanter to give us everyday civilians a nice detailed description of what Soft Machines is trying to achieve? I am still confused about how this thing actually works.

    Thanks a million in advance.

      • Klimax
      • 4 years ago

      It’s upgraded HW virtualization. The physical cores are hidden behind a virtualization layer which hosts a virtual computer. The upgrade is that the layer can try to divide sequential code into independent blocks and distribute them over multiple physical cores. Aka extracting instruction-level parallelism (ILP). Mixed workloads sound like a variant of Intel’s Hyper-Threading (IIRC some versions have dynamic scaling of resources too).

      Did I miss anything?

      • maddyUS
      • 4 years ago

      All this company is trying to achieve is get more money and burn it.

    • TopHatKiller
    • 4 years ago

    AMD has already invested in this tech… along with their partner Samsung. Some time ago there were rumours that this is a “front for Zen”: obviously this is nonsense – but what isn’t nonsense is AMD giving this tech company $50m.
    Hum?

      • the
      • 4 years ago

      This would be more of a front for Bulldozer 2.0. Some of the resource sharing of cores for a single virtual core strongly resembles what AMD has already done.

        • Meadows
        • 4 years ago

        I thought this was nothing like Bulldozer. That was more like hyperthreading on steroids, but it doesn’t magically have a way of using all those cores if the application won’t.

          • the
          • 4 years ago

          The VISC hardware they were describing last year had a significant amount of shared resources in the front end much like Bulldozer. Decoder block, scheduler, and caches are shared in both Bulldozer and VISC.

          One key difference is that there were no shared resources on the backend of the VISC design. Each core has its own integer units and FPU, whereas Bulldozer uses a shared FPU cluster.

          The end result of VISC however is very different from Bulldozer. VISC can be described conceptually as ‘reverse hyperthreading’, where the processing power of two cores can be put toward a single logical thread. Bulldozer is about running one thread per core but having cores share resources appropriately to reduce transistor count and die space.

            • chuckula
            • 4 years ago

            [quote]VISC can be described conceptually as ‘reverse hyperthreading’[/quote] Amen to that.

      • Jolly Good
      • 4 years ago

      Do you have any information about AMD investing $50 million? From everything I can find, Soft Machines has existed [url=http://www.eetimes.com/document.asp?doc_id=1324364]since 2008[/url], and, as of last year, had a total investment of $125 million. Their investment from AMD appears to have come [url=http://www.eetimes.com/document.asp?doc_id=1324403&page_number=2]in 2010[/url], when AMD were still flush with money. This article mentions a number of other investors, but it doesn't mention dollar amounts (they imply that Samsung invested much more than any of the others). The AMD investment was in 2010, and their (poorly performing) prototype wasn't shown until 2014. It's now 2015 and they're claiming their real design will be RTL complete in 2016. It is unlikely that AMD had any real knowledge of the current design when making their investment.

        • TopHatKiller
        • 4 years ago

        I’m sorry but I’m buggered if I can remember – errh; not even amd would invest in a company with no idea what they’re investing in.

      • chuckula
      • 4 years ago

      They also bought SeaMicro in 2012 with the promise of a new interconnect: [url]https://techreport.com/news/22561/seamicro-fabric-could-be-the-glue-for-future-amd-chips[/url]

      And then dumped all the technology and took a huge writeoff: [url]https://techreport.com/news/28130/amd-posts-180-million-loss-shutters-seamicro-business[/url]

        • the
        • 4 years ago

        The new interconnect could still happen with Zen next year. We’ll have to wait and see on this point.

        • TopHatKiller
        • 4 years ago

        Usual lazy reporting. amd retained the actual tech & ip from Seamicro – the ‘freedomfabric’ ip remains with amd.
        Also, what moron would name anything ‘freedomfabric’. Possibly my pants [underpants] utilize the unique properties of ‘freedomfabric.’ I’m not sure. I should try and plug’em into my pc and find out.
        Companies get paid for coming up with these names, you know!

        LATER: did try and plug my pants [underpants] into my pc to find out.
        Evidently, my pants, despite the word ‘freedom’ in the label, do not support some as-yet-unreleased amd tech. I’ve just gone and hurt myself. Personal place, too.
        This, like everything else, is amd’s fault.

    • omf
    • 4 years ago

    Ah – this reminds me of good ol’ Transmeta…

    [url]https://en.wikipedia.org/wiki/Transmeta[/url]

      • Meadows
      • 4 years ago

      It’s similar, but VISC promises to actually increase the speed of less-threaded computations, which is why I find it interesting. I wonder about the possible implications to gaming, for example.

        • Anonymous Coward
        • 4 years ago

        Magical parallelism for nothing, and also while saving power, and running any instruction set. How can it possibly go wrong?

          • Meadows
          • 4 years ago

          Every innovation in history started with someone claiming it was impossible. This is not to say I don’t have my doubts.

    • Laykun
    • 4 years ago

    That must be some pretty incredible cache coherency tech.

    • aryehsapir
    • 4 years ago

    From [url]https://techreport.com/news/27259/cpu-startup-claims-to-achieve-3x-ipc-gains-with-visc-architecture[/url], on October 23, 2014: "over $125 million in funding to date" and counts among its investors Samsung Ventures, AMD, and Mubadala (the Abu Dhabi development company that owns GlobalFoundries). What would be the benefit for AMD (if any)?

      • maddyUS
      • 4 years ago

      So many are fooled in 8 years!!!!

    • Meadows
    • 4 years ago

    I wonder, what is the overhead of this “middleware” and what kind of latency implications might it have?

    • maxxcool
    • 4 years ago

    mmmm nooo.. seems to me unless you have gigabytes of tables of already-translated instructions in RAM, you’re going to have to actually decode, recode, and reissue OPS. (IE what amd and intel do in hardware for macroOPS and microOPS.)

    Either you’re going to have huge latency from fetching previously translated instructions from the ramm’ed predefined tables.. or you’re going to spend “software” time converting things into macro/microOPS that the host CPUs ALREADY do, but on a wider scale that takes more actual generalized CISC cores than the highly efficient decoders.

    hmmm .. transmeta 2.0 ?

      • maxxcool
      • 4 years ago

      Seems to me they’re trying to sell a platform-agnostic OS. Some application vendors like kiosk, ATM, and small embedded application vendors *might* like this. But the thing that sticks in my head is possibly having to have in RAM all the calls to a CPU that might be made and the resulting optimized code to ‘reissue’ to the CPU multicore module.

      • NoOne ButMe
      • 4 years ago

        Mistakes were made in my original post. Misunderstood this until I reread it.

        • maxxcool
        • 4 years ago

        I blame Vicks-44 cold meds.. the blue pills ..

      • maxxcool
      • 4 years ago

      Alpha had a learning OS … wonder if this is related.

    • the
    • 4 years ago

    This hardware announcement a year after the VISC technology reveal doesn’t surprise me, as they indicated then that they actually had hardware working in their labs. In fairness, I was surprised a year ago. 🙂

    The one thing that is interesting is that they are disclosing clock speeds. Estimates from a year ago pointed toward something in the sub-1GHz range. That’d make it competitive in mobile but not a threat to anything on the desktop, much less servers. At 2GHz, things start to get interesting if their claims are to be believed.

    The other thing that is interesting is that in their disclosures last year I noticed similarities with AMD’s Bulldozer design. Much of this centered around the idea of sharing resources between two cores. Also of note is that AMD is an investor here.

    • guardianl
    • 4 years ago

    Nvidia already tried this with Denver, and while the performance is competitive, the power usage is higher because dynamically compiling code from one ISA to a highly parallel ISA is computationally expensive.

    Given that we won’t see these VISC CPUs until 2017 (6+ months post-tapeout), even if they are competitive in simulations with today’s CPUs, the competition isn’t standing still.

    $100 says that Intel’s top CPU in 2016 has faster perf. than any shipping VISC CPU in 2017 when, say, compiling the Chrome source with the MSVC compiler, GCC, etc.

      • chuckula
      • 4 years ago

      [quote]$100 says that Intel’s top CPU in 2016 has faster perf. than any shipping VISC CPU in 2017 when, say, compiling the Chrome source with the MSVC compiler, GCC, etc.[/quote] Something tells me that this ain't designed to compete with a full x86 core on general-purpose software, despite what all the hype is saying. I feel a disturbance in the force that tells me that if this works out, it will be put into special-purpose execution units like hardware packet inspectors/filters on firewalls or quasi-DSP applications that aren't really what people think of when they think of desktop computing. It's just a hunch right now, but we'll see.

        • guardianl
        • 4 years ago

        Solid hunch.

        It does look a lot like something between a CPU and GPGPU in terms of workload potential. 2 physical core design says ‘prototype’ tho.

          • the
          • 4 years ago

          All they need is a prototype as long as it works as advertised. The business plan for Soft Machines is mainly based around the idea of licensing this technology to other CPU designers.

      • maddyUS
      • 4 years ago

      They will tape out in the next century, if anyone can fund them that long …..

    • DPete27
    • 4 years ago

    Hard for me to decipher whether VISC is meant as a mobile CPU or an x86 competitor. I read [url=http://www.fudzilla.com/news/processors/38958-could-samsung-the-new-intel]this article[/url] this morning and it really got me thinking. And here we go, (if this is a legit solution) Samsung should snatch this tech up and replace AMD as a competitor to Intel.

      • Pwnstar
      • 4 years ago

      Intel can license it, too. There is no reason to compete.

        • w76
        • 4 years ago

        Not if Samsung bought these guys up and decided to end the licensing scheme. Plenty of reason to compete, unless you’re Intel, in which case you’ve already “won,” though even some people at Intel would likely enjoy some actual competition.

          • maddyUS
          • 4 years ago

          Fake stories are not bought by real guys!!!!

    • odizzido
    • 4 years ago

    nice read, thanks for the post 🙂

    • Duct Tape Dude
    • 4 years ago

    Paging David Kanter… Your opinion and insight on this are requested on the next podcast.

    Is this at all similar to how some x86 CPUs translate x86 instructions into intermediate instruction sets?

    And in terms of performance [and perf/watt], will this at all compare to…
    -Modern x86 CPUs?
    -Modern ARM CPUs?
    -Other novel CPU arches like The Mill? [url]http://millcomputing.com/[/url]

      • UberGerbil
      • 4 years ago

      I’ll second that. I’d love to hear Kanter’s take on this; paired with Damage, and added to all the other news this week, that would be an enjoyable podcast.

      • the
      • 4 years ago

      I’ll third the request for a Kanter summoning.

        • chuckula
        • 4 years ago

        KANTERJUICE
        KANTERJUICE
        KANTERJUICE!!!

          • the
          • 4 years ago

          en-KANTER-ation!

          • Growler
          • 4 years ago

          Kanter Barada Nikto.

    • chuckula
    • 4 years ago

    From last time, we see the same basic idea that they still have to prove: How to turn single-threaded tasks in a “native” architecture into parallelized tasks in the VISC architecture.

    All those cutesy power/performance slides can be implemented right now with boring old hardware IFF you can parallelize the workload. Their magic — if there is any — isn’t in the hardware, it’s in that middleware layer that purportedly distributes instructions from even single-threaded tasks to a large number of cores for concurrent execution.

    Let’s see if it actually works with real software outside of canned demos.

    Oh, and here’s the theme song for the Shasta launch: [url]https://www.youtube.com/watch?v=tDXm_ogkBA8[/url]

      • Waco
      • 4 years ago

      Single threaded tasks are incredibly hard to parallelize without a lot of power waste…so while I’m sure they might be able to make some benchmark look good, I highly doubt that “magic” sauce can translate a normal serial task into a nicely parallel workload without a lot of speculative execution (IE: wasted power).

        • pranav0091
        • 4 years ago

        I’m no CPU guy so the following may be a terrible idea.

        One way I can think of getting “parallel” work from “serial” code is to issue instructions at a rate faster than what can be executed and then spread the load of issued instructions onto multiple cores, but acting on the same register file array.

        This, though, begs for all kinds of problems – sync overheads, ordering guarantees, resource bottlenecks at high load (and resulting low instantaneous transistor utilisation), etc. Even then it’s dubious (highly so) that it’ll improve the performance per watt (or per unit area) over a “regular” core which can already work on its dedicated register set.

        It should be possible to get some level of parallelism from serially coded programs, but I wonder if it’ll even be sufficient to overcome the power penalty from using the middleware in the first place. Probably for some workloads, but not for all. The number of cores would also, in this scenario, give strongly diminishing returns. My guess is that this arch is strongest when it has a few cores (2-4 maybe?) and is progressively worse given more cores.

        The lack of numbers on certain axes in those images sure doesn’t give much confidence.
        Any thoughts from more knowledgeable folks?

        <I work at Nvidia, but my opinions are purely personal>

          • blastdoor
          • 4 years ago

          [quote]I'm no CPU guy so the following may be a terrible idea. One way I can think of getting "parallel" work from "serial" code is to issue instructions at a rate faster than what can be executed and then spread the load of issued instructions onto multiple cores, but acting on the same register file array.[/quote] Kind of sounds like: [url]https://en.wikipedia.org/wiki/Speculative_execution[/url]

            • chuckula
            • 4 years ago

            It’s a radical form of speculative execution.

            The problem is that while it can produce theoretically better results some of the time, the power consumption will shoot way way up and the actual real-world gains will probably not be that large compared to the existing speculative instruction techniques that CPUs already implement.

            Plus, if power consumption has shot up because a whole lot of cores are running on different speculative branches of a single thread in parallel, then each core likely has to run at a lower frequency, so each branch on the speculation tree runs slower. I think the VISC guys are hoping that linear increases in power consumption from more cores are better than the potentially exponential power increases when a single core is ramped to 4+ GHz for maximum performance in a traditional operating mode.

            It’s a question of diminishing returns. Additionally, even with extreme speculative execution, there are still many times when you need to wait for data to load from memory and you’ll still end up stalling without the complex cache/memory architectures we commonly see in complex processors.

            • UberGerbil
            • 4 years ago

            Yeah, my first thought was Sun's [url=https://en.wikipedia.org/wiki/Rock_(processor)#Unconventional_features]Rock[/url], with its "Scout Threads" (see towards the end of [url=http://www.cnet.com/news/sun-has-high-expectations-for-niagara/]this article[/url]). Sun couldn't make that work a decade ago, but maybe these guys have made some progress. A lot of the fundamental barriers of dependencies in serial code are just that: fundamental barriers, and no amount of cleverness gets around the fact that you can't do anything with data that hasn't been computed yet.

            • the
            • 4 years ago

            The VISC papers ([url=http://softmachines.com/wp-content/uploads/2015/01/MPR-11303.pdf]one PDF here[/url]) I read last year did indicate that some speculative execution was used, but this was also in the context of a code-morphing software layer for analysis. It was the combination of these two ideas that kept things sane. The end result is something far more conservative, which helps keep power consumption in check. The other thing worth noting is that performance scaling hit diminishing returns after four cores per virtual core.

            Two things that I felt were unanswered: VISC's ability to dynamically assign cores between virtual cores, and SMT across a global front end (basically overcommitting the number of virtual cores vs. physical cores). If the design could perform these tasks, then we'd be looking at a very efficient architecture in terms of raw resource utilization.

            Memory pressure was not presumed to be an issue due to the expected low clock speeds a year ago. However, at 2GHz this is a very real question for the design.

            • chuckula
            • 4 years ago

            Thanks for the paper. Speculation [pun intended] is fun but an actual description of how it works is better.

            • blastdoor
            • 4 years ago

            It seems like most processor designers regard software as fixed/exogenous, and they are trying to come up with a design that is optimal given that existing/fixed software base. That’s understandable and reasonable…. but stuff like this highlights the limitations of that approach.

            Another approach is to treat hardware and software as endogenous, and optimize both jointly.

          • the
          • 4 years ago

          You are correct that there are diminishing returns. Papers from last year didn’t show much gain beyond 4 cores per virtual core. For reference, gains at 4 cores per virtual core offered roughly 2x performance on average.

          I’m somewhat optimistic about the overhead of the middleware. The overhead is indeed there, but it can be overcome. nVidia’s Project Denver was quite impressive in this regard. They were able to get respectable performance within an ultra-mobile power budget. I thought that combination was impossible due to the translation layer overhead, and that it’d simply consume too much power regardless of end-level performance. I was pleasantly surprised to be wrong.
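
          An Amdahl's-law sketch shows why that per-virtual-core scaling flattens out. The parallel fraction below is chosen only so that four cores land near the roughly-2x figure from the papers; it is not a number Soft Machines disclosed:

          ```python
          # Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel
          # fraction p on n cores. With p ~ 0.67, four cores give ~2x
          # and additional cores add very little.

          def amdahl_speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          for n in (1, 2, 4, 8, 16):
              print(n, round(amdahl_speedup(0.67, n), 2))
          # 1: 1.0, 2: 1.5, 4: 2.01, 8: 2.42, 16: 2.69
          ```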

            • chuckula
            • 4 years ago

            If they can cache the results of the translation layer so they can be reused in an inexpensive manner, which is similar to how Denver operates, then it’s probably not too horrible to implement.

            If this technology turns out to be a winner, there’s no reason it couldn’t be ported to ARM or x86 more directly in a way that requires less of the pre-processing (although some pre-processing is probably always going to help).
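
            Something like this, conceptually. A minimal sketch; every name here is a hypothetical stand-in for whatever the middleware actually does:

            ```python
            # Cache translated code by guest block address so the
            # expensive morphing step runs once per hot code block.
            translation_cache = {}

            def translate_block(guest_code):
                # Stand-in for the costly analysis/morphing pass.
                return f"native({guest_code})"

            def run_native(native_code):
                # Stand-in for dispatching to the hardware cores.
                return f"ran {native_code}"

            def execute(block_addr, guest_code):
                native = translation_cache.get(block_addr)
                if native is None:                          # slow path, once
                    native = translate_block(guest_code)
                    translation_cache[block_addr] = native
                return run_native(native)                   # fast path after

            execute(0x400000, "x86 block")  # pays the translation cost
            execute(0x400000, "x86 block")  # hits the cache, cheap
            ```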

            • the
            • 4 years ago

            Denver technically doesn’t need any pre-processing; it just makes things optimal. There actually is a hardware ARM decoder as part of Denver, which gets bypassed when the core can utilize cached morphed code. This is partially why Denver’s worst-case performance isn’t dire-OMG-why-did-they-make-this-design, just bad like you’d expect in such a scenario.

            If AMD wasn’t looking over the abyss, I could see an x86 implementation from them. AMD is an investor. At one point both AMD and Soft Machines had a few common investors as well. As for ARM adopters, I can see nVidia looking into this considering their existing investment in code morphing. Qualcomm and Samsung are potential IP licensees in my eye. Broadcom would be an interesting licensee considering they’re developing an ARM core with SMT and know that a server chip needs lots of cache (think about holding the code-morph cache). I see ARM Holdings proper and Apple as the only two companies that wouldn’t be interested at all.

      • Laykun
      • 4 years ago

      You just sent me down a long rabbit hole that led to my personal discovery that the blond spiky-haired dude from Starship Troopers is named Jake Busey … and he’s Gary Busey’s son. I’ll never be able to watch that movie the same way again.

    • DrDominodog51
    • 4 years ago

    It would be interesting if someone released a dev board for this and the software could somehow convert i386 instructions to native.

      • Deanjo
      • 4 years ago

      [quote]and the software could somehow convert i386 instructions to native.[/quote] That's basically what Transmeta was doing nearly 20 years ago.

        • DrDominodog51
        • 4 years ago

        I know, but nowadays there are so many extensions to deal with that it would be an achievement.

          • chuckula
          • 4 years ago

          You can translate it OK.

          Whether or not you can translate it and make it run faster on your own architecture than on the native chip’s architecture… that’s where it gets hard.

          That’s also where Transmeta never really succeeded.

            • the
            • 4 years ago

            Transmeta was competitive within their power budget, but they operated during the hyper-Moore’s-Law period of competition between Intel and AMD. They were crushed between these two warring giants and left as a footnote in history.

            However, nVidia was able to get out a decent code-morphing-based design with Denver.

          • Deanjo
          • 4 years ago

          You did say “i386 instructions”. ;D
