The Machine, a new HP computing architecture

Don't see a specific place for your hardware question? This is the forum for you!

Moderators: mac_h8r1, Nemesis

The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 12:04 pm

Hello,

I just wanted to share an interesting video I found of HP announcing their plans for a new type of computer architecture called The Machine. The performance expectations are really wild, and the video warrants a watch for anyone excited about future possibilities (link below).

YouTube video of Martin Fink from HP explaining The Machine
Last edited by Arclight on Tue Jun 17, 2014 2:22 pm, edited 1 time in total.
nVidia video drivers FAIL, click for more info
Disclaimer: All answers and suggestions are provided by an enthusiastic amateur and are therefore without warranty either explicit or implicit. Basically you use my suggestions at your own risk.
Arclight
Gerbil Elite
 
Posts: 706
Joined: Tue Feb 01, 2011 3:50 am

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 1:38 pm

I didn't watch the video, but I've seen this PR release masquerading as journalism all across the internet.

Most prominently, I've repeatedly seen a picture of people proudly displaying what's obviously a hunk of rubber, not a real functional computer. What's the point of that? To convince us, with a cheap-looking plasticky prop, that this isn't vaporware?

New hardware that's so revolutionary, so great, so transcendent of our current computational paradigm that it will have not one OS, but three? You want Linux? It'll have that! You want Android? (wait, isn't that already *nixy to begin with?)---It'll have that too! Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?). We have the buzzwords of the future with the hot buzzwords of now! Buy HP stock! We're relevant!

Wake me up when they actually have, you know, a functional prototype instead of empty marketing.
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7886
Joined: Tue Aug 27, 2002 6:35 pm

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 2:11 pm

On the one hand, this is the kind of groundbreaking, outside-the-box thinking that made organizations like HP and Bell Labs the technology powerhouses they were back in the day.

On the other hand, I think they've bitten off *way* more than they can chew here. It'll be a real stretch to even bring the individual underlying technologies they're talking about to the point of being commercially viable in such a short timeframe, let alone produce practical, cost-effective systems.

There's a ton of risk here. If they're really betting the farm on having this in the next ~4 years... well, so long HP, it was nice knowing you.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 38097
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 2:27 pm

JBI wrote:On the one hand, this is the kind of groundbreaking, outside-the-box thinking that made organizations like HP and Bell Labs the technology powerhouses they were back in the day.


That's clearly the kind of nostalgia they're trying to engender.

To the point where it's laughable. When Martin Fink says he'll get "our academies, all of them" to facilitate creating this new OS because academic OS research has stagnated, yeah, he's delirious. The government pours tons of money into that kind of research; the problem is that it's essentially unusable commercially. Computer Science research, in particular, is riddled with sinecures promoting all sorts of things ("Provably-correct and type-safe!" "Guaranteed no side effects!" "That isn't Half-Life 3, it's my formal system for the functional abstraction of calculation!!!1") that just aren't relevant to business. Or anyone, really (yes, yes, I'm exaggerating for rhetorical effect).

Furthermore, as I've said in the past, those huge R&D organizations existed for governmental/societal reasons. Bell Labs in particular, because before the breakup Bell had a monopoly on the phone system (remember that acoustically-coupled modem in WarGames? Yeah, those ridiculous contraptions existed solely because people weren't legally allowed to physically connect to the phone lines!). Bell needed a huge R&D arm because no one in their right mind is going to develop for a monopsony.

JBI wrote:On the other hand, I think they've bitten off *way* more than they can chew here. It'll be a real stretch to even bring the individual underlying technologies they're talking about to the point of being commercially viable in such a short timeframe, let alone produce practical, cost-effective systems.


Yup. It's just empty marketing. I sincerely doubt they've beefed up their R&D to any substantial degree. Engineering is soooo expensive; better to just spend more on marketing. :roll: And virtually everyone in the technology "press" just extols this silly press release without any actual thought.

Which is fine, I guess, because a lot of their readership just wants to talk about "Science!" and "Cool." Or so my Facebook feed shows me. :-?
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7886
Joined: Tue Aug 27, 2002 6:35 pm

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 2:31 pm

There's one thing HP does that no other PC manufacturer does: take chances. It took a huge chance on Itanium. It took a decently large chance on Palm and webOS. It's taking a chance here.

Those other projects didn't turn out so hot, though, so maybe there's something else to be said here...
I do not understand what I do. For what I want to do, I do not do. But what I hate, I do.
derFunkenstein
Gerbil God
Gold subscriber
 
 
Posts: 21551
Joined: Fri Feb 21, 2003 9:13 pm
Location: WHAT?

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 2:32 pm

Glorious wrote:New hardware that's so revolutionary, so great, so transcendent of our current computational paradigm that it will have not one OS, but three? You want Linux? It'll have that! You want Android? (wait, isn't that already *nixy to begin with?)---It'll have that too! Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?)



It was indeed a pitch; it's obvious they're trying to garner some early adopters. What's really cool, though, if they can actually deliver, is the memristors and the link between the SoC and the memory, which seems to be based on fiber technology (of course they had a mock-up; the real deal might not be ready to leave the lab yet).

The part that seemed the most challenging to me is the SoC, about which not much was said. IIRC, they also suggested the use of optical transistors (again, we have no idea how much progress they've made here or whether they're ready for market).

There's also the big hurdle of creating the OS, establishing the necessary standards, and attracting software companies to port their stuff and optimize their code. I doubt the first versions of the hardware will make their way into our living rooms, but who knows, in time, if it's viewed as the best way forward...

Still, if memristors can be sold stand-alone, I doubt that AMD, Intel, and Nvidia won't try to integrate them into the current x86 architecture. It's a waiting game, as always, and HP will be going up against all the established names in both hardware and software (unless Microsoft decides to team up with them).
nVidia video drivers FAIL, click for more info
Disclaimer: All answers and suggestions are provided by an enthusiastic amateur and are therefore without warranty either explicit or implicit. Basically you use my suggestions at your own risk.
Arclight
Gerbil Elite
 
Posts: 706
Joined: Tue Feb 01, 2011 3:50 am

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 2:55 pm

Arclight wrote:It was indeed a pitch, it's obvious they are trying to garner some early adopters.

They're not even sure what they're building yet; it's still premature (by many years!) to be soliciting early adopters. It would be like IBM saying back in 1970, "We think we might have a revolutionary new concept to replace all these mainframes in about 10 years! We don't have any specs for it yet, but hey... you want in?"

Arclight wrote:What's really cool though, if they really can deliver, would be the memristors and the link between the SOC and the memory which seems to be based on fiber technology (ofc they had a mock-up, the real deal might not be ready to go out of the lab yet).

I agree, memristors would be cool. But DRAM and flash have gotten so dense, cheap, and fast that IMO getting memristors to the point where they can compete in price and performance is going to be a nearly vertical uphill battle. AFAICT memristors have a lot of catching up to do as it is, and the tech for DRAM and flash isn't standing still.

From a bit of Google searching, I get the impression that the underlying individual technologies are still years away from being ready to leave the lab, never mind complete systems with a supporting ecosystem to make them useful.

Arclight wrote:The part, for me, that seemed the most challenging would be the SoC of which not much was said, iirc, they also suggested the use of optical transistors (again we have no idea how much progress they have made here and if they are ready for the market).

I'd bet a ton of money that the answer is "no". You'd be hearing more about them if they were.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 38097
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 3:08 pm

just brew it! wrote:They're not even sure what they're building yet, it is still premature (by many years!) to be soliciting early adopters. It would be like IBM saying back in 1970, "We think we might have a revolutionary new concept to replace all these mainframes in about 10 years! We don't have any specs for it yet, but hey... you want in?"


The graph shown at the end had sampling for memristors planned for 2015. After that, I suppose they can at least sell the memory chips.
nVidia video drivers FAIL, click for more info
Disclaimer: All answers and suggestions are provided by an enthusiastic amateur and are therefore without warranty either explicit or implicit. Basically you use my suggestions at your own risk.
Arclight
Gerbil Elite
 
Posts: 706
Joined: Tue Feb 01, 2011 3:50 am

Re: The Machine, a new HP computing architecture

Posted on Tue Jun 17, 2014 11:41 pm

Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).


VAX and VMS are coming back! :) Seriously, people would pay for new VMS machines, and it is quite formidable.

Then there is HP-UX, which will be homeless when Intel quits producing Itanium chips and the lawsuits quit working.

The common thread is big iron RISC boxes with high margins which were based on Itanium chips. HP makes a mint off of HP-UX boxes, and OpenVMS revenue wasn't shabby, if I remember correctly.

HP pulling off an OS isn't the most unbelievable thing about this. HP has quite the collection of operating systems (https://en.wikipedia.org/wiki/List_of_o ... tt-Packard) with NonStop and VMS being the two more interesting ones.
Flatland_Spider
Gerbil Elite
 
Posts: 875
Joined: Mon Sep 13, 2004 8:33 pm
Location: The 918/539

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 12:20 am

Flatland_Spider wrote:
Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).


VAX and VMS are coming back! :) Seriously, people would pay for new VMS machines, and it is quite formidable.



I wonder how a reborn VAX and VMS would stack up against the growing trend of virtual machines, both in enterprise, and in data centers and hosting facilities.


EDIT - forgot to mention that VMware images can be as minimal as a VAX terminal, using the processing power back in the core of the network, as well as sharing GPU power between multiple stations.


EDIT2 - I also forgot to mention that I *am* excited to see how memristors and the optical interconnects perform. New HP creating a custom OS for it gives me pause, though, and this could end up being super-niche. I will agree that HP-UX was nice, so maybe they can still pull it off. It just depends on how many of the old guard are still there.
Hz so good
Gerbil Elite
 
Posts: 733
Joined: Wed Dec 04, 2013 5:08 pm

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 1:37 am

Hz so good wrote:
Flatland_Spider wrote:
Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).


VAX and VMS are coming back! :) Seriously, people would pay for new VMS machines, and it is quite formidable.



I wonder how a reborn VAX and VMS would stack up against the growing trend of virtual machines, both in enterprise, and in data centers and hosting facilities.


Or you could just virtualize OpenVMS on Itanium hardware today. That is a common scenario for many OpenVMS installations, as many have migrated away from the platform.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, Geforce GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 3930K@4.2 Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 2600K@4.4 Ghz, 16 GB DDR3, Radeon 6970, GA-X68XP-UD4
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 2:21 am

Flatland_Spider wrote:
Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).


VAX and VMS are coming back! :) Seriously, people would pay for new VMS machines, and it is quite formidable.

Then there is HP-UX which is homeless when Intel quits producing Itanium chips and the lawsuits quit working.

The common thread is big iron RISC boxes with high margins which were based on Itanium chips. HP makes a mint off of HP-UX boxes, and OpenVMS revenue wasn't shabby, if I remember correctly.

HP pulling off an OS isn't the most unbelievable thing about this. HP has quite the collection of operating systems (https://en.wikipedia.org/wiki/List_of_o ... tt-Packard) with NonStop and VMS being the two more interesting ones.


Except HP should have learned their lesson with Itanium: legacy software matters. The only reason Itanium has been able to limp along to today is the presence of legacy HP-UX and OpenVMS software. If HP/Intel had launched Itanium as a fresh new platform, it would have been ignored in the data center, as there would have been no reason to purchase it over competing platforms with robust software. Any performance advantage Itanium had was mainly on paper (it was only briefly competitive, when the Itanium 2 first launched). If HP wants to launch a new platform, they had better have a suite of software ready to go that shows off the platform's advantages.

Also, NonStop is hopping off the Itanium train and jumping to x86.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, Geforce GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 3930K@4.2 Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 2600K@4.4 Ghz, 16 GB DDR3, Radeon 6970, GA-X68XP-UD4
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 4:16 am

I've finally sat down and watched this HP presentation, and there is a lot of marketing BS in there. I have a feeling this was done somewhat intentionally, to cloud what HP is actually doing and to generate media hype. The presentation has generated news around four different concepts: photonics, nonvolatile memory (memristors), a new OS, and a new architecture. Applying my executive-to-technical-speak translator, there are some interesting ideas presented.

First is photonics, and while many of the claimed scalability advantages are true, there is actually very little new here. Optical cabling has strong advantages over copper for getting a signal between two points very far apart. This is already used today: core networking within the data center is all fiber, as is the backbone of the internet. That isn't necessarily a fair comparison, as HP is talking about doing this inside a single logical system, but it would only be newsworthy if IBM hadn't shipped systems doing that years ago. The next step, which HP is hyping, is to do the electrical-to-optical conversion on the same chip used for processing: silicon photonics. Not a new idea, as both IBM and Intel have invested heavily in this area, and both companies are expected to ship end products using silicon photonics in the coming years. HP seems to be pushing the idea of using optical links as the go-between for processors and centralized memory. There is certainly no technical reason preventing this, and it simplifies memory management. This differs from the current model, where memory is distributed along with each processor node. The current model has a speed advantage due to data locality: modern processors can access local memory in less than 25 ns, and access times increase when a processor accesses memory on a remote socket. Optical interconnects directly to memory make sense in HP's centralized topology, but they wouldn't be an advantage for current systems; there, optical interconnects between processors would make more sense.

The focus on nonvolatile memory (NVM) is interesting. HP owns the patents behind a completely new type: the memristor. If it lives up to the hype, it will be genuinely groundbreaking. It would enable SSDs to have access times and bandwidth similar to the DRAM used for main memory in computers. Alternatively, it could be seen as bringing the capacity and low cost of NAND to main memory. HP is proposing that the next step is to combine those two ideas by using memristors as both main memory and storage. Imagine that all the files on your hard drive are simultaneously open and there are no load times, because everything is already in memory. Much of this can be emulated today by what's called a RAM disk. There are three downsides to a RAM disk today: capacity is low because it uses DRAM chips, data is not retained across a power cycle, and the OS treats a RAM disk as discrete storage rather than as data in memory. This leads into HP's one big new idea in the presentation.
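For anyone who wants to play with the RAM-disk idea, here is a minimal Python sketch. It assumes a Linux-style tmpfs mounted at /dev/shm and falls back to an ordinary temp directory elsewhere; the file name is invented:

```python
import os
import tempfile

# On most Linux systems /dev/shm is a tmpfs (RAM-backed) mount, so files
# written there never touch the disk. Fall back to a normal temp dir
# elsewhere so the sketch still runs.
ramdisk = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
path = os.path.join(ramdisk, "machine-demo.txt")

with open(path, "w") as f:
    f.write("hello from RAM")

# The third downside from the post: to the OS this is still *storage*.
# We go through open()/read(), not raw pointers, and tmpfs contents
# vanish on a power cycle.
with open(path) as f:
    content = f.read()
print(content)
os.remove(path)
```

The reads and writes are fast because nothing leaves DRAM, but the program still uses the file API, which is exactly the distinction the post draws.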

The idea of a new operating system that does not deal with discrete files, but rather where everything exists in main memory, is revolutionary. If an application wants to read data, a memory pointer to its location is all that's necessary. Data management will still be a problem with everything coexisting on the same logical data tier (i.e., how does a program know where to point for a specific piece of data in this model?). There are also some interesting problems with security, backups, and RAS (ECC, hot swap, expansion) that need to be addressed with further research. Shoehorning existing applications into HP's proposed model may not work well, so starting from scratch isn't unwarranted.
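As a rough illustration, today's mmap() already gives a program the "data via memory pointer" experience for a file's contents, minus the persistence HP promises. A Python sketch (the file name and record layout are invented):

```python
import mmap
import os
import tempfile

# Write a small "dataset" to an ordinary file.
path = os.path.join(tempfile.gettempdir(), "dataset.bin")
with open(path, "wb") as f:
    f.write(b"record-000record-001record-002")

# Map it into the address space: from here on the program sees only
# bytes in memory, with no further read() calls, just pointer-style
# indexing. Roughly the access model HP describes, except a volatile
# DRAM page cache backs it rather than memristors.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        second = mem[10:20].decode()   # slice out the second 10-byte record
print(second)
os.remove(path)
```

The OS still does the paging bookkeeping behind the scenes, which is the point made a few posts down: the file was always an abstraction over memory plus I/O.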

Early on in the presentation, HP talked about a new architecture to reduce power consumption. Reading between the lines, the main aspect is simply the memristor being used as centralized main memory: data wouldn't need to move, thus saving energy. The end processor nodes may not be the radical new design people are imagining, either. Rather, it sounds like they're using FPGAs coupled with an existing processor architecture. With the proposed new OS design, deploying code on an FPGA could be a task the OS abstracts away from the programmer: a developer may not know whether their code is running on the FPGA or a processor core at run time.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, Geforce GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 3930K@4.2 Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 2600K@4.4 Ghz, 16 GB DDR3, Radeon 6970, GA-X68XP-UD4
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 6:14 am

Flatland_spider wrote:VAX and VMS are coming back! :) Seriously, people would pay for new VMS machines, and it is quite formidable.


We have OpenVMS all over the place at work, on emulated VAXes and actual Alphas and Itaniums.

BUT, HP rather quietly declared VMS "end-of-life" about a year ago. Support is slated to continue to ~2020, but "support" has been minimal for years already.

So I know more than a little about VMS; the point is that HP doesn't want to anymore ;)

Flatland_spider wrote:The common thread is big iron RISC boxes with high margins which were based on Itanium chips. HP makes a mint off of HP-UX boxes, and OpenVMS revenue wasn't shabby, if I remember correctly.


Maybe, but evidently it wasn't quite enough.

Flatland_spider wrote:HP pulling off an OS isn't the most unbelievable thing about this. HP has quite the collection of operating systems (https://en.wikipedia.org/wiki/List_of_o ... tt-Packard) with NonStop and VMS being the two more interesting ones.


I'm sure they could; the point is that simultaneously supporting Linux/Android isn't exactly a vote of confidence, and it pretty much undermines the entire idea.

Hz so good wrote:I wonder how a reborn VAX and VMS would stack up against the growing trend of virtual machines, both in enterprise, and in data centers and hosting facilities.


Not well enough, apparently.

the wrote:Or you could just virtualize OpenVMS on Itanium hardware today. That is a common scenario for many OpenVMS installations, as many have migrated away from the platform.


Well, there is a minimalized HP-UX based hypervisor on Itanium (or the Integrity platform, as HP prefers to call it), but I'm not sure how many OpenVMS customers take advantage of that instead of just running OpenVMS natively on the hardware. We certainly don't, particularly since the HP salespeople never seemed all that confident in the hypervisor.

At this point though, it's all moot. It's time to migrate away from the entire thing.

the wrote:Except HP should have learned their lesson with Itanium: legacy software matters. The only reason Itanium has been able to limp along to today is the presence of legacy HP-UX and OpenVMS software. If HP/Intel had launched Itanium as a fresh new platform, it would have been ignored in the data center, as there would have been no reason to purchase it over competing platforms with robust software. Any performance advantage Itanium had was mainly on paper (it was only briefly competitive, when the Itanium 2 first launched). If HP wants to launch a new platform, they had better have a suite of software ready to go that shows off the platform's advantages.


Yup. Which is why an all-new HP OS isn't a good idea if they are simultaneously supporting not one, but two existing OSes on the same hardware.

the wrote:The idea of a new operating system that does not deal with discrete files, but rather where everything exists in main memory, is revolutionary. If an application wants to read data, a memory pointer to its location is all that's necessary. Data management will still be a problem with everything coexisting on the same logical data tier (i.e., how does a program know where to point for a specific piece of data in this model?). There are also some interesting problems with security, backups, and RAS (ECC, hot swap, expansion) that need to be addressed with further research. Shoehorning existing applications into HP's proposed model may not work well, so starting from scratch isn't unwarranted.


But files have always been units of logical organization, not any kind of hardware-enforced concession. From an execution perspective, applications work only with memory. The CPU doesn't know that hard drives exist; that's all background infrastructure and bookkeeping. Admittedly, it's a lot of background infrastructure and bookkeeping, but it's hard to see what's so revolutionary about any of this. Yes, having hundreds of terabytes of universal and non-volatile memory is intriguing, but that's simply an extension of what we already have. Things like memcached already exist, and they're heavily used.
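The memcached point can be sketched as a toy cache-aside pattern; a plain Python dict stands in for memcached here, and the keys and names are invented:

```python
# Toy cache-aside pattern: the application asks memory first and only
# falls back to "slow storage" on a miss, which is the same trick
# memcached performs at data-center scale.
slow_storage = {"user:42": "Arclight"}   # stand-in for a database/disk
cache = {}                               # stand-in for memcached (RAM)

def get(key):
    if key in cache:                     # fast path: already in memory
        return cache[key]
    value = slow_storage[key]            # slow path: fetch from storage
    cache[key] = value                   # ...and keep a copy in memory
    return value

print(get("user:42"))   # first call: fetched from storage
print(get("user:42"))   # second call: served from memory
```

Universal nonvolatile memory would make the fast path the only path, but as the paragraph above argues, that is an extension of this existing pattern rather than a new paradigm.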

In general, modern OSes cache like crazy; it's just the limitation/cost of DRAM, and the bookkeeping required because of its inherent volatility, that are the problems. Yes, getting rid of the fsync() bottleneck would be nice, but SSDs are getting to the point where it's just not the problem it used to be. Alleviating bookkeeping like that is nice, but it's neither sexy nor revolutionary. As JBI said, other technologies aren't going to stand still over the next few years, and we can already see the trend.
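For anyone curious, the fsync() bookkeeping in question looks like this in practice; a minimal Python sketch (the file name is invented):

```python
import os
import tempfile

# The classic durability dance on systems with volatile DRAM.
path = os.path.join(tempfile.gettempdir(), "journal.log")
with open(path, "w") as f:
    f.write("commit 1\n")
    f.flush()              # push Python's buffer into the OS page cache...
    os.fsync(f.fileno())   # ...then block until the OS pushes it to the
                           # device. This wait is the bookkeeping cost that
                           # volatility imposes; with nonvolatile main
                           # memory, the store itself would already be
                           # durable.

with open(path) as f:
    recovered = f.read().strip()
print(recovered)
os.remove(path)
```

On spinning disks that fsync() could cost tens of milliseconds; on modern SSDs it is far cheaper, which is the trend-line point being made.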

It's hard to imagine how developing an entirely new architecture and environment, years from now, are going to be any sort of clear win.

EDIT: I mean, to be clear, dude, how is this even new? The original computers didn't have files either; they only had system memory and I/O. It's not revolutionary at all; it's practically regressive. The "everything is a file" Unix ideal was a reaction against multiple different device APIs handling different types of device I/O (which is evident, to a certain extent, in VMS, which has the concept of different types of devices and different ways to handle them). VMS's approach might be more elegant, but the simplicity, however crude, of the Unix approach clearly won out. The point is that files have always been abstractions, and the Unix idea was actually to try to unify all I/O into a single, ubiquitous namespace. It's always been about organization.

Completely flattening the memory hierarchy doesn't eliminate I/O or the need for logical organization. And while this "Machine" might only really have one or two types of I/O (unlike mainframes/minicomputers with endless peripherals), it's still going to need some way of accessing them. I'm sure HP could come up with an elegantly principled and orthogonal API in this new OS for handling them; the question is why bother if they are equally providing not one, but two *nixes.
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7886
Joined: Tue Aug 27, 2002 6:35 pm

Re: The Machine, a new HP computing architecture

Posted on Wed Jun 18, 2014 8:54 am

Glorious wrote:
the wrote:Or you could just virtualize OpenVMS on Itanium hardware today. That is a common scenario for many OpenVMS installations today as many have migrated away form the platform.


Well, there is a minimalized HP-UX based hypervisor on Itanium (or the Integrity platform, as HP prefers to call it), but I'm not sure how many OpenVMS customers take advantage of that instead of just running OpenVMS natively on the hardware. We certainly don't, particularly since the HP salespeople never seemed all that confident in the hypervisor.

At this point though, it's all moot. It's time to migrate away from the entire thing.


I've heard of it being used to consolidate ancient HP-UX and OpenVMS systems onto one new, expensive Itanium box. It was the path of least resistance for upgrading those systems; the alternatives were more expensive with more hardware, or a painful software migration. The sales rep could have had their own interests in mind by attempting to sell you more hardware.

Glorious wrote:
the wrote:The idea of a new operating system that does not deal with discrete files, but rather where everything exists in main memory, is revolutionary. If an application wants to read data, a memory pointer to its location is all that's necessary. Data management will still be a problem with everything coexisting on the same logical data tier (i.e., how does a program know where to point for a specific piece of data in this model?). There are also some interesting problems with security, backups, and RAS (ECC, hot swap, expansion) that need to be addressed with further research. Shoehorning existing applications into HP's proposed model may not work well, so starting from scratch isn't unwarranted.


But files have always been units of logical organization, not any kind of hardware-enforced concession. From an execution perspective, applications work only with memory. The CPU doesn't know that hard drives exist; that's all background infrastructure and bookkeeping. Admittedly, it's a lot of background infrastructure and bookkeeping, but it's hard to see what's so revolutionary about any of this. Yes, having hundreds of terabytes of universal and non-volatile memory is intriguing, but that's simply an extension of what we already have. Things like memcached already exist, and they're heavily used.


Correct. Today it is the programmer's role to be aware that the data they need isn't all going to be in memory, and they are responsible for loading it there. The revolutionary part is that this developer aspect is streamlined, as the data will already be in memory, immediately accessible. Data access will behave more like a typical low-latency system call. There will still need to be some bookkeeping, but it is far lower-level and very thin. By reducing or eliminating these bookkeeping tasks, latency and throughput will benefit greatly. The OS will also be thinner, as the need for platform-specific storage drivers disappears when primary storage follows one common model.

Glorious wrote:In general, modern OSes cache like crazy; it's just the limitation/cost of DRAM, and the bookkeeping required because of its inherent volatility, that are the problems. Yes, getting rid of the fsync() bottleneck would be nice, but SSDs are getting to the point where it's just not the problem it used to be. Alleviating bookkeeping like that is nice, but it's neither sexy nor revolutionary. As JBI said, other technologies aren't going to stand still over the next few years, and we can already see the trend.

It's hard to imagine how developing an entirely new architecture and environment, years from now, are going to be any sort of clear win.


The problem facing HP with this idea is that there is already a movement to reduce that overhead with technologies like PCIe-based storage using the NVMe spec. The basic advantages will still be there, but their magnitude won't be as impressive for small systems (say, the typical dual-socket server). This is why I see HP citing the 160 PB figure: it highlights the fixed amount of overhead as capacity increases. To get to 160 PB today with NVMe, there would need to be 32,000 Ivy Bridge-E chips (presuming 5 TB hanging off of each processor's PCIe lanes). That's a pretty big cluster, with multiple stages to access data (is it in local memory? No? Then is it in local NVMe storage? No? Gotta hit a network interface to get its location in the cluster, then hit the network again to fetch it). The purpose of the centralized memory pool is that the data-location lookup is a near-constant fixed cost.
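For reference, the back-of-the-envelope math behind that cluster size, assuming (as above) 5 TB of NVMe per processor and decimal units:

```python
# Sockets needed to reach HP's quoted 160 PB using per-socket NVMe,
# assuming 5 TB hangs off each processor's PCIe lanes (1 PB = 1,000 TB).
target_pb = 160
tb_per_socket = 5
sockets = target_pb * 1000 // tb_per_socket
print(sockets)   # 32000
```

Either way the order of magnitude makes the point: reaching that capacity with distributed per-socket storage means tens of thousands of nodes and the multi-hop lookups that come with them.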

Glorious wrote:EDIT: I mean, to be clear, dude, how is this even new? The original computers didn't have files either; they only had system memory and I/O. It's not revolutionary at all; it's practically regressive. The "everything is a file" Unix ideal was a reaction against multiple different device APIs handling different types of device I/O (which is evident, to a certain extent, in VMS, which has the concept of different types of devices and different ways to handle them). VMS's approach might be more elegant, but the simplicity, however crude, of the Unix approach clearly won out. The point is that files have always been abstractions, and the Unix idea was actually to try to unify all I/O into a single, ubiquitous namespace. It's always been about organization.


Completely flattening the memory hierarchy doesn't eliminate I/O or the need for logical organization. And while this "Machine" might only really have one or two types of I/O (unlike mainframes/minicomputers with endless peripherals), it's still going to need some way of accessing them. I'm sure HP could come up with an elegantly principled and orthogonal API in this new OS for handling them; the question is why bother if they are equally providing not one, but two *nixes.

Make no mistake, this idea would be the antithesis of that Unix principle. The concept of a file would disappear into an in-memory location. As you point out, there would still be the need for some abstraction so that data can be located in memory. Storage IO would be eliminated, but general IO for things like GPUs, networking, keyboards, mice, etc. would need to be adapted. The only exception I can see to storage IO would be for backups, though tape/disk backups could also be done with the assistance of another system over the network.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, Geforce GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 3930K@4.2 Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 2600K@4.4 Ghz, 16 GB DDR3, Radeon 6970, GA-X68XP-UD4
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 9:37 am

Glorious wrote:New hardware that's so revolutionary, so great, so transcendent of our current computational paradigm that it will have not one OS, but three? You want Linux? It'll have that! You want Android? (wait, isn't that already *nixy to begin with?)---It'll have that too! Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?). We have the buzzwords of the future with the hot buzzwords of now! Buy HP stock! We're relevant!

Wake me up when they actually have, you know, a functional prototype instead of empty marketing.


Because your customers will not be creating entirely new software. They'll want the option of familiarity or of maximizing the hardware. I mean, why does IBM offer AIX, Linux on Power, and IBM i all on Power hardware? :roll:

I'm fully with your main point, that this is currently vaporware with nothing real to show, but I think you're really grasping at straws beyond "they don't have a real prototype to show yet" with this "oh man, 3 OS choices!" comment.
slowriot
Gerbil First Class
Gold subscriber
 
 
Posts: 165
Joined: Wed Apr 03, 2013 10:57 am

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 10:00 am

I boiled this presentation down to a few main points:
1) Right now, they're only spitting marketing BS/hype about concepts and have shown no physical hardware (to my knowledge). Anybody can do that.
2) Unified memory: Everyone that knows anything about PC tech can/has seen this coming. First we had HDDs, then we had SSDs, now we have M.2 SSDs (which, coincidentally, are almost the same size as a RAM DIMM). Recent developments in 3D NAND and memristors all point toward the implementation of unified memory. It doesn't take a visionary to predict that. In the consumer space and early enterprise adoption, I'm not even sure we'd need the fancy fiber-optic link (speaking of which, Intel demoed that a few years ago).
2b) Of course you have to (and can) redesign/optimize the OS for unified memory.
3) ASICs (application-specific integrated circuits) are more power efficient than a "universal" CPU architecture. That's no secret. Look at cryptocurrency/Bitcoin mining ASICs. Dumb that down to consider an IGP a type of ASIC, and HP hasn't really invented anything new except customizing the on-die ASICs to match their intended "datacenter usage."
4) If you want to oversimplify HP's "The Machine," it's essentially just an evolution of AMD's Heterogeneous System Architecture (HSA).
Last edited by DPete27 on Wed Jun 18, 2014 10:14 am, edited 1 time in total.
Main: i5-3570K, ASRock Z77 Pro4-M, Asus GTX660 TOP, 120 GB Vertex 3 Max IOPS, 2 TB Samsung EcoGreen F4, 8GB 1600MHz G.Skill @1.25V, Silverstone PS07B
HTPC: A8-5600K, MSI FM2-A75IA-E53, 4TB Samsung SSHD, 8GB 1866MHz G.Skill, Hand-Built Wood Case
DPete27
Gerbil Jedi
Silver subscriber
 
 
Posts: 1733
Joined: Wed Jan 26, 2011 12:50 pm
Location: Madison, Wisconsin

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 10:05 am

the wrote:I've heard of it used to consolidate ancient HP-UX and OpenVMS systems onto new, expensive Itanium hardware. It was the path of least resistance to upgrade these systems. The alternatives were more expensive, with more hardware, or a painful software migration. The sales rep could have had their own interests in mind by attempting to sell you more hardware.


Well, our needs are different than a lot of customers'. Still, though, it's still a (partial) software migration. You have to recompile everything, and the move from VAX to Itanium has some substantial gotchas.

Given that OpenVMS is now only nominally supported for another 5 years, I'd suggest just emulating a VAX on x86.

We've used both approaches, but I'd only recommend emulation at this point.

the wrote:Correct. It is the role of the programmer to be aware that the data they need isn't all going to be in memory, and they are responsible for loading it there. The revolutionary part is that this developer aspect is streamlined, as the data will already be in memory, immediately accessible. Data access will behave more like a typical low-latency system call. There will still need to be some bookkeeping, but it is far lower level and very thin. By reducing/eliminating these bookkeeping tasks, latency and throughput will benefit greatly.


But it's already streamlined. Most developers these days are using very high-level languages that are abstractions upon abstractions. Most of those hot web frameworks use ORMs, which means the developer is barely aware of the database abstraction, let alone any actual abstraction from hardware. Even at lower levels, like C, you're using the standard library, which is again an abstraction. Whether you make mmap() either irrelevant or stupendously fast, it doesn't make a whole lot of difference. Most C programmers won't care; they'll just notice that their previously I/O-bound program, well, isn't anymore.

So yes, the "Machine" will make a lot of that stuff faster, but, again, have you been paying attention the last few years? It used to be that if you wanted more than ~100 fsync()s per second, you were talking about a very expensive battery-backed controller with 15k disks. That was the only way you could safely enable write-caching and thus get decent performance from spinning disks.

Now? You can buy a 120 GB consumer SSD for like 70 bucks that gets two orders of magnitude more fsync()s than any magnetic disk ever could and can easily compete with (and beat) those very, very expensive controller setups. And at $70, that's literally the bottom of the barrel. At the enterprise level the differences are staggering. It's already a whole new world.
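
For anyone curious, fsync() throughput is easy to probe; here's a minimal sketch (the file name and the measuring interval are arbitrary choices, and results vary enormously between spinning disks, consumer SSDs, and battery-backed controllers):

```python
# Rough probe of fsync() throughput: count how many small synchronous
# writes the underlying device sustains in a fixed interval.
import os
import time

def fsync_count(path="fsync_probe.tmp", seconds=0.5):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    n = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        os.write(fd, b"x" * 512)   # small write, like a DB journal record
        os.fsync(fd)               # force it to stable media
        n += 1
    os.close(fd)
    os.remove(path)
    return n

if __name__ == "__main__":
    print(fsync_count(), "fsyncs in 0.5 s")
```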

As JBI says, the rest of the world won't remain at a standstill, and we're clearly moving into a new age here already.

the wrote:The OS will also be thinner, as the need for platform-specific storage drivers disappears with primary storage being one common model.


Again, that's mostly abstracted away from anyone but the OS programmers, and even then, you can simplify it immensely by supporting just one specific type of storage hardware. Certain vendors do things like that already. :wink:

the wrote:The problem facing HP with this idea is that there is already a movement to reduce the overhead with technologies like PCIe-based storage using the NVMe spec. The basic advantages will still be there, but their magnitude won't be as impressive for small systems (say, the typical dual-socket server). This is why I see HP citing the 160 PB figure: it highlights the fixed amount of overhead as capacity increases. To get to 160 PB today with NVMe, there would need to be 32,000 Ivy Bridge-E chips (presuming 5 TB hanging off of each processor's PCIe lanes). That's a pretty big cluster with multiple stages to access data (is it in local memory? No? Then is it in local NVMe storage? No? Then hit a network interface to get its location in the cluster, then hit the network interface again to fetch it). The purpose of the centralized memory pool is that the data location lookup is a near-constant fixed cost.


Yes, today.

That's the flaw. They are comparing some hypothetical future over the horizon with the environment today. It's practically a tautology. Yes, a computer 5 years from now will be better than a computer today.

The counter-example you are proposing is silly. Just look at current development trends: many companies are working on ultra-dense clusters with tightly interwoven interconnects. The only uniqueness here is the universal memory concept, but you could easily envision something similar implemented with DRAM and flash. That's a two-level hierarchy with some additional complexity, yes, but they're also proven technologies that not only work today but also haven't yet hit the end of their development cycle.

the wrote:Make no mistake, this idea would be the antithesis of that Unix principle. The concept of a file would disappear into an in-memory location.


Just to start: Then why are they proposing to run Linux and Android on it? :o

Look, did you even read what I wrote? I'll quote myself:

Glorious wrote:Unix idea was actually to try and unify all I/O into a single and ubiquitous name space


You obviously don't understand the principle, because the entire point of the idea was to avoid having to worry about the particularities & peculiarities of any specific device. Instead, all devices were (theoretically) treated as files; hence, the programmer (theoretically) didn't need to worry about how to handle a teletype versus a tape drive, a CRT versus a line plotter. In fact, the very "concept of a file" IS AN "in-memory location," as that's essentially what a file descriptor is.
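
To make the unified-namespace point concrete, here's a tiny sketch showing a kernel device node and an ordinary file answered by the very same syscalls (the device path assumes a Linux system):

```python
# "Everything is a file": the same syscalls read a device and a file.
import os

def first_bytes(path, n=8):
    fd = os.open(path, os.O_RDONLY)   # same open() for either...
    data = os.read(fd, n)             # ...and the same read()
    os.close(fd)
    return data

with open("regular.txt", "wb") as f:
    f.write(b"plain file contents")

print(first_bytes("/dev/urandom"))    # a kernel device node
print(first_bytes("regular.txt"))     # an ordinary file on disk
```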

Files do not even exist at a hardware level. They are solely software constructs built for human convenience. What you are saying is inane and incorrect.

I mean, demonstrably, you can have an entire Linux install on a ramdisk. Today. The only difference is the volatility.

slowriot wrote:Because your customers will not be creating entirely new software. They'll want the option of familiarity or of maximizing the hardware. I mean, why does IBM offer AIX, Linux on Power, and IBM i all on Power hardware?


The better question is why Linux is the newest offering of the three. :wink:

To be clear, they're not developing new OSes and offering them; they're supporting old ones. Outside of niches, no one actually makes new OSes anymore.

slowriot wrote:I'm fully with your main point, that this is currently vaporware with nothing real to show, but I think you're really grasping at straws beyond "they don't have a real prototype to show yet" with this "oh man, 3 OS choices!" comment.


Because saying it'll run both Linux and Android (which is utterly bizarre to begin with, because at the level they're talking about those two OSes aren't actually any different) PLUS some new-fangled OS they're going to create out of thin air in collaboration with academia shows, emphatically, that this is boiler-plate marketing glurge. It's everything to everyone, and it completely undermines the notion that anything about it is revolutionary. If it is going to transcend our paradigm and open up new vistas and blah, blah, blah, why are they so careful to acknowledge it'll fully (and purposelessly) support our current ones? Like I said, the difference at this level (datacenter-replacing clusters) between Android and Linux is non-existent. But yet the checkbox next to "Android" had to be filled, didn't it? :wink:
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7886
Joined: Tue Aug 27, 2002 6:35 pm

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 5:30 pm

Glorious wrote:
the wrote:Correct. It is the role of the programmer to be aware that the data they need isn't all going to be in memory, and they are responsible for loading it there. The revolutionary part is that this developer aspect is streamlined, as the data will already be in memory, immediately accessible. Data access will behave more like a typical low-latency system call. There will still need to be some bookkeeping, but it is far lower level and very thin. By reducing/eliminating these bookkeeping tasks, latency and throughput will benefit greatly.


But it's already streamlined. Most developers these days are using very high-level languages that are abstractions upon abstractions. Most of those hot web frameworks use ORMs, which means the developer is barely aware of the database abstraction, let alone any actual abstraction from hardware. Even at lower levels, like C, you're using the standard library, which is again an abstraction. Whether you make mmap() either irrelevant or stupendously fast, it doesn't make a whole lot of difference. Most C programmers won't care; they'll just notice that their previously I/O-bound program, well, isn't anymore.

So yes, the "Machine" will make a lot of that stuff faster, but, again, have you been paying attention the last few years? It used to be that if you wanted more than ~100 fsync()s per second, you were talking about a very expensive battery-backed controller with 15k disks. That was the only way you could safely enable write-caching and thus get decent performance from spinning disks.

Now? You can buy a 120 GB consumer SSD for like 70 bucks that gets two orders of magnitude more fsync()s than any magnetic disk ever could and can easily compete with (and beat) those very, very expensive controller setups. And at $70, that's literally the bottom of the barrel. At the enterprise level the differences are staggering. It's already a whole new world.


Agreed. That is why I'm comparing HP's design to modern PCIe-based SSDs using NVMe, which are just hitting the market. On a small scale, going in-memory will be faster, but the difference will be purely academic. Both concepts remove issues with being IO-bound in storage, so any performance delta would stem from reductions in overhead. This is why I particularly cited NVMe as being competitive on the small scale, as it removes the overhead associated with AHCI. HP's proposal of a single unified pool of memory will show an advantage when capacity scales upward. Extrapolating other technologies like NVMe into the PB range will require multiple tiers to function at the hardware level, even if they're seen as one logical unit in application software.

Glorious wrote:
the wrote:The OS will also be thinner, as the need for platform-specific storage drivers disappears with primary storage being one common model.


Again, that's mostly abstracted away from anyone but the OS programmers, and even then, you can simplify it immensely by supporting just one specific type of storage hardware. Certain vendors do things like that already. :wink:


True. Though 'support' is the key word there, as everything that is unsupported should in theory be removed.

There is also the embedded market, where one specific type of storage is commonplace and optimized for. This is more of an exception, as embedded systems are not necessarily designed to be open to 3rd-party modifications.

Glorious wrote:
The problem facing HP with this idea is that there is already a movement to reduce the overhead with technologies like PCIe-based storage using the NVMe spec. The basic advantages will still be there, but their magnitude won't be as impressive for small systems (say, the typical dual-socket server). This is why I see HP citing the 160 PB figure: it highlights the fixed amount of overhead as capacity increases. To get to 160 PB today with NVMe, there would need to be 32,000 Ivy Bridge-E chips (presuming 5 TB hanging off of each processor's PCIe lanes). That's a pretty big cluster with multiple stages to access data (is it in local memory? No? Then is it in local NVMe storage? No? Then hit a network interface to get its location in the cluster, then hit the network interface again to fetch it). The purpose of the centralized memory pool is that the data location lookup is a near-constant fixed cost.


Yes, today.

That's the flaw. They are comparing some hypothetical future over the horizon with the environment today. It's practically a tautology. Yes, a computer 5 years from now will be better than a computer today.

The counter-example you are proposing is silly. Just look at current development trends: many companies are working on ultra-dense clusters with tightly interwoven interconnects. The only uniqueness here is the universal memory concept, but you could easily envision something similar implemented with DRAM and flash. That's a two-level hierarchy with some additional complexity, yes, but they're also proven technologies that not only work today but also haven't yet hit the end of their development cycle.


It'd be three tiers minimum today: buffered in local memory, local storage, and remote. The thing that really hurts performance is the remote tier; typical clustering is costly as there is a full network stack involved.

And there are several examples of massive-scale systems that don't use traditional clustering today. I already linked to IBM's POWER7 systems for HPC usage. They have a flat memory space between nodes, though they're not fully cache coherent. SGI also has the Altix UV 2000, which goes up to 256 sockets with 64 TB of memory in a fully cache-coherent manner. (The UV 2000 supports even more memory, but cache coherency stops at the 64 TB mark due to physical addressing limitations.) These two systems don't have a network stack involved in the remote tier, so performance is vastly improved. HP claims to scale far higher while maintaining relatively low latencies thanks to its unified memory. In 5 years it is safe to say that the likes of IBM and Intel will scale to more cores, higher socket counts, and increased memory capacity, but what they'll do to address the latency problems as node count increases is an open question. HP is simply brute-forcing the solution by unifying memory into one location, at a latency of 250 ns. This is rather poor compared to local access on Ivy Bridge at 25 ns, but rather impressive considering the scale HP is aiming at. For reference, SGI claims 100 to 500 ns on a 4-node UV 2000, with maximum latencies growing as more nodes are added.
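
To see why the remote tier's network stack dominates, here's a toy average-latency model. The 25 ns local and 250 ns pooled figures are the ones cited above; the ~2000 ns cost of a remote hop through a conventional cluster's network stack is my own rough assumption for illustration:

```python
# Toy model of average memory access latency. Assumed figures: 25 ns
# local DRAM and 250 ns pooled memory (both cited in the discussion);
# ~2000 ns for a remote hop through a cluster's network stack is a
# rough illustrative guess, not a measured number.
def avg_latency_ns(local_fraction, local_ns, remote_ns):
    """Weighted average latency when only part of the data is local."""
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

cluster = avg_latency_ns(0.6, 25, 2000)  # 60% local hit rate in a cluster
pooled = avg_latency_ns(0.0, 25, 250)    # every access goes to the pool
print(f"cluster: {cluster:.0f} ns, pooled: {pooled:.0f} ns")  # 815 vs 250
```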

Glorious wrote:
the wrote:Make no mistake, this idea would be the antithesis of that Unix principle. The concept of a file would disappear into an in-memory location.


Just to start: Then why are they proposing to run Linux and Android on it? :o


Because the underlying hardware isn't as radically different from current systems as it sounds? A massive amount of nonvolatile memory and the announcement of a new OS exploiting it are big ideas, but they don't need to diverge from existing architectures. So how do you get Linux or Android on it? Just set up the firmware to handle RAM disk creation/retention and install onto that.

Glorious wrote:Look, did you even read what I wrote? I'll quote myself:

Glorious wrote:Unix idea was actually to try and unify all I/O into a single and ubiquitous name space


You obviously don't understand the principle, because the entire point of the idea was to avoid having to worry about the particularities & peculiarities of any specific device. Instead, all devices were (theoretically) treated as files; hence, the programmer (theoretically) didn't need to worry about how to handle a teletype versus a tape drive, a CRT versus a line plotter. In fact, the very "concept of a file" IS AN "in-memory location," as that's essentially what a file descriptor is.

Files do not even exist at a hardware level. They are solely software constructs built for human convenience. What you are saying is inane and incorrect.


The idea of having a construct to easily locate and use memory as data storage is still necessary, and it's an idea I also mentioned previously. I'm arguing that an all in-memory system can use a different construct to locate and access data than current storage models do.

Using the same construct for all IO is indeed a Unix hallmark. There is no denying the advantage of a consistent programming model regardless of IO device. I can see cases where breaking programming consistency would be advantageous for an entirely in-memory model, though. One possibility would be that data is tracked as part of the virtual memory system. An application would request data to use. The new HP OS wouldn't perform a copy from storage to memory like a current OS; rather, it would simply alter the page table allocations so the data exists inside the requesting application's virtual address space. The OS would then return a pointer to the application indicating where the data starts in its virtual memory space. Saving data for storage outside of the application would mean telling the OS to de-allocate the pages from the application but keep them marked in memory for later usage. This is an oversimplified example, as there is a host of issues that need to be addressed to do such operations right (security, coherency between two applications using the same data, application crashes, etc.). Access IO like USB ports, GPUs, and networking wouldn't necessarily follow the same mechanism due to those same security and coherency issues. In fact, performing data copies instead of moving raw allocations around would be preferable in some situations.

In this example, the best descriptor for such a construct, from the application's point of view, would follow how memory is actually allocated: through the virtual memory system and page tables.
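
As a rough present-day analogue of that page-table hand-off, mmap() already lets the kernel map a file's pages straight into a process's address space, so the application works through what amounts to a pointer instead of a copied buffer. A minimal sketch (this only approximates the mechanism described above; it is not HP's API):

```python
# Sketch: access "storage" through the virtual memory system instead of
# read()-style copies. mmap() maps the file's pages into our address
# space; stores through the view go to the same pages the file occupies.
import mmap
import os

with open("data.bin", "wb") as f:
    f.write(b"persistent bytes")

fd = os.open("data.bin", os.O_RDWR)
view = mmap.mmap(fd, 0)       # whole file, mapped shared by default
print(bytes(view[:10]))       # -> b'persistent'  (no explicit read())
view[0:1] = b"P"              # a store through the mapping...
view.flush()                  # ...pushed back to the backing pages
view.close()
os.close(fd)
```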

Glorious wrote:I mean, demonstrably, you can have an entire Linux install on a ramdisk. Today. The only difference is the volatility.


Actually, this highlights the difference between the current system and what is possible with the example I gave above. A current OS will behave the same booting off a RAM disk as off a hard disk, just radically faster with the RAM disk. Using the current model, though, there will be data duplication involved, as data is copied from the RAM disk to memory allocated to the application. The RAM disk will have the same memory allocation overhead as well as the traditional file system overhead. The application will also have to ask the OS for the space to load data into, whereas this step is implicit in the example above.
Last edited by the on Sat Jun 21, 2014 1:25 pm, edited 1 time in total.
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 5:59 pm

the wrote:
Optical cabling has strong advantages over copper for getting a signal between two points very far apart. This is already used today: core networking within the data center is all fiber, as is the backbone of the internet.


I should point out that, yes, optical fiber does allow for great distances. The photons can't escape, so there's no possibility of leakage, nor EMI to induce noise. One of the other major benefits of using optical fiber comes when you multiplex multiple wavelengths. That buys you greater data density (i.e. total available bandwidth) on the same fiber. You also get a speed boost, since electrons only travel near the speed of light, but I have no idea what the exact difference is.

the wrote: HP seems to be pushing the idea of using optical links as the go-between for processors and centralized memory. There is certainly no technical reason that prevents this, and it simplifies memory management. This differs from the current model, where memory is distributed along with each processor node. The current model has a speed advantage here due to data locality. Modern processors can access local memory in less than 25 ns. Access times increase as a modern processor attempts to access memory on a remote socket. Optical interconnects directly to memory make sense in HP's centralized topology, but they wouldn't be an advantage for current systems. Rather, optical interconnects between processors would make more sense.


Yup. That's correct. In fact, with fast enough optical interconnects and incredibly low-latency photon/electron conversion, you could start seeing computers that look like small building blocks: system memory could end up in a small, dedicated module, connected back to a CPU module, a storage module, etc.


the wrote:The focus on nonvolatile memory (NVM) is interesting. HP owns the patents behind a completely new type: the memristor. If it lives up to the hype, it will be genuinely groundbreaking. It would enable SSDs to have access times and bandwidth similar to those of the DRAM used for main memory in computers.

One thing to remember is that DRAM is still comparatively slow. If memristors have much faster access times, THEN I'll start jumping for joy.
Hz so good
Gerbil Elite
 
Posts: 733
Joined: Wed Dec 04, 2013 5:08 pm

Re: The Machine, a new HP computing architecture

Postposted on Wed Jun 18, 2014 6:06 pm

DPete27 wrote:3) ASICs (application specific integrated circuits) are more power efficient than "universal" CPU architecture. That's no secret. Look at Cryptocurrency/Bitcoin mining ASICs. Dumb that down to consider an IGP a type of ASIC, and HP hasn't really invented anything new except customizing the on-die ASICs to match their intended "datacenter usage."


I should point out that, normally, yes, ASICs are blazingly fast (like the L2 switch engines, CAM/TCAM, and L3 route processors), but there is an example of an architecture that operates more like an FPGA yet is still as fast as an ASIC: Juniper Trio.

Oh, and some of the newer high-speed line cards (like 4-port 10GigE cards) used in your local CO actually use bog-standard Xeon processors from Intel. So there are applications where a universal CPU can be just as good as an ASIC.

*Obviously this doesn't include the newest ASICs from Cisco or Juniper, but those are crazy expensive and only used in "core routers" that cost a king's ransom.
Hz so good
Gerbil Elite
 
Posts: 733
Joined: Wed Dec 04, 2013 5:08 pm

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 12:03 am

Hz so good wrote:
the wrote:
Optical cabling has strong advantages over copper for getting a signal between two points very far apart. This is already used today: core networking within the data center is all fiber, as is the backbone of the internet.


I should point out that, yes, optical fiber does allow for great distances. The photons can't escape, so there's no possibility of leakage, nor EMI to induce noise. One of the other major benefits of using optical fiber comes when you multiplex multiple wavelengths. That buys you greater data density (i.e. total available bandwidth) on the same fiber. You also get a speed boost, since electrons only travel near the speed of light, but I have no idea what the exact difference is.


The amusing thing is that light propagation through standard optical fiber is about 0.7*c. Despite the slower propagation speed, it is still preferable for the reasons cited. What good is traveling at near c when you can only reach several meters away?
the
Gerbil First Class
Gold subscriber
 
 
Posts: 139
Joined: Tue Jun 29, 2010 2:26 am

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 8:39 am

Hz so good wrote:You also get a speed boost, since electrons only travel near-speed of light, but I have no idea what the exact difference is.

That wouldn't get you a speed boost per se; it would reduce latency.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 38097
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 9:26 am

the wrote:The amusing thing is that light propagation through standard optical fiber is about 0.7*c. Despite the slower propagation speed, it is still preferable for the reasons cited. What good is traveling at near c when you can only reach several meters away?

just brew it! wrote:That wouldn't get you a speed boost per se; it would reduce latency.

Actually, the is right: the speed through fiber is about 0.66c, which actually is not much faster than in copper! Therefore I have my doubts about it reducing latency (also, @Hz so good: electrons don't travel at the speed of light; they often move much, much slower, since that's electron drift). The arguments about EM interference, cable length, and bandwidth still stand, though.
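
Putting numbers on the 0.66c figure, here's a back-of-envelope one-way propagation delay calculation (the 100 m link length is an arbitrary, rack-row-scale assumption):

```python
# One-way propagation delay over a link: fiber at ~0.66c vs. an ideal
# vacuum link at c. The 100 m distance is an illustrative assumption.
C = 299_792_458                      # speed of light in vacuum, m/s

def delay_ns(meters, fraction_of_c):
    return meters / (fraction_of_c * C) * 1e9

print(f"100 m fiber : {delay_ns(100, 0.66):.0f} ns")   # ~505 ns
print(f"100 m vacuum: {delay_ns(100, 1.00):.0f} ns")   # ~334 ns
```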

The more I read about The Machine, the more I realize the only true godsend is the memristor. HP broke it down into 3 parts, and I just wanted to share my opinions on each:

Processing
If I understand correctly, HP wants to use FPGAs.
I feel like FPGAs are a cheap shot. Yes, we will need specialized processors to take advantage of the new storage latencies, but FPGAs are often too specialized to replace a CPU. However, given that the first customers to take advantage of The Machine will likely be HPC (high-performance computing, aka supercomputing) shops (read: they have to write custom, performant software anyway), this won't be much of an issue for them.
Also, it's another cheap shot for HP to argue that FPGAs are more efficient than the Fujitsu K, a SPARC cluster from 2011 that clearly can't compete with the power efficiency standards of today, let alone those of a few years from now when The Machine is due.

Communications
Fiber is great.
It's small and lets you communicate cleanly over large distances, which is probably why HP chose to emphasize it over copper. To address a lot of memory, you need to be able to move it further away from the CPU (so you can physically fit more of it and still link it to the processor). However, it's still very expensive to implement. I think the choice of fiber is an HPC optimization that won't scale down to small devices as easily or cheaply.

Storage
Memristor-based memory is a great intermediate step between RAM and SSD.
It's probably what SSDs will eventually converge on at some point (SanDisk is currently at 150 µs, i.e. ~10^3x slower, with their ULLtraDIMMs: http://techreport.com/news/25951/sandis ... mory-stick), but I doubt it's feasible with standard NAND. CPU caches are still required, but blurring the line between long-term storage and RAM is a great thing. HP has something potentially great here.


What I honestly think will come out of this:
-Enterprise: HP will cater to the HPC and cloud markets, selling an all-in-one FPGA-memristor supercomputing system with a POSIX-compatible software stack.
-Consumer: General-purpose CPUs are here to stay. Memristors will replace on-board storage and maybe even RAM. Maybe we'll tie it to the PCIe bus or support it over the system bus, but either way we'll need a specialized interface for it, and I'm sure the GPU will access it on the same footing as the CPU.
-Everyone: Every OS/driver/piece of software will take time to take full advantage of memristors.
Duct Tape Dude
Gerbil First Class
Gold subscriber
 
 
Posts: 149
Joined: Thu May 02, 2013 12:37 pm

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 1:30 pm

just brew it! wrote:
Hz so good wrote:You also get a speed boost, since electrons only travel at near the speed of light, but I have no idea what the exact difference is.

That wouldn't get you a speed boost per se; it would reduce latency.




Yup, you are correct. I misspoke on that one.

And like they said, signal speed in fiber is slower than c. I was thinking in general terms, where electrons move at near-c and photons at c, but I forgot to include the limitations imposed by the medium. My bad. :)
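For anyone curious how much slower: signal speed in a medium is c/n, where n is the refractive index. For silica fiber n is around 1.47 (a typical assumed value), which works out like this:

```python
# Speed of light in fiber: v = c / n.
C = 299_792_458   # m/s, vacuum
n_fiber = 1.47    # assumed typical refractive index of silica glass

v_fiber = C / n_fiber
print(f"fiber: {v_fiber / C:.2f}c")  # ~0.68c
```

So light in fiber travels at roughly two-thirds of c--and electrical signals on good copper lines can actually propagate at a comparable or even slightly higher fraction of c. Fiber's win is bandwidth and reach, not raw signal speed.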
Hz so good
Gerbil Elite
 
Posts: 733
Joined: Wed Dec 04, 2013 5:08 pm

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 1:34 pm

Hz so good wrote:And like they said, signal speed in fiber is slower than c. I was thinking in general terms, where electrons move at near-c and photons at c, but I forgot to include the limitations imposed by the medium. My bad. :)

And you certainly don't want to push those electrons to speeds over that of c in copper.
Life is hard; but it's harder if you're stupid. Big Al.
Captain Ned
Global Moderator
Gold subscriber
 
 
Posts: 20641
Joined: Wed Jan 16, 2002 7:00 pm
Location: Vermont, USA

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 1:49 pm

Captain Ned wrote:And you certainly don't want to push those electrons to speeds over that of c in copper.


Yeah, your power usage would be over 1.21 jigawatts!
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7886
Joined: Tue Aug 27, 2002 6:35 pm

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 1:50 pm

Captain Ned wrote:
Hz so good wrote:And like they said, signal speed in fiber is slower than c. I was thinking in general terms, where electrons move at near-c and photons at c, but I forgot to include the limitations imposed by the medium. My bad. :)

And you certainly don't want to push those electrons to speeds over that of c in copper.


Kind of off topic but also on topic
Last edited by Arclight on Thu Jun 19, 2014 2:16 pm, edited 1 time in total.
nVidia video drivers FAIL, click for more info
Disclaimer: All answers and suggestions are provided by an enthusiastic amateur and are therefore without warranty either explicit or implicit. Basically you use my suggestions at your own risk.
Arclight
Gerbil Elite
 
Posts: 706
Joined: Tue Feb 01, 2011 3:50 am

Re: The Machine, a new HP computing architecture

Postposted on Thu Jun 19, 2014 1:51 pm

Captain Ned wrote:And you certainly don't want to push those electrons to speeds over that of c in copper.



Unless you've woken up to find yourself in Star Trek, where a localized subspace/warp field lets the computer cores and interconnects operate at several times c. :P

I always thought that was a much cooler explanation for the insane computation abilities of the central computer, instead of those stupid neural gel-pack things from Voyager.
Hz so good
Gerbil Elite
 
Posts: 733
Joined: Wed Dec 04, 2013 5:08 pm

