Personal computing discussed
Moderators: renee, mac_h8r1, Nemesis
JBI wrote:On the one hand, this is the kind of groundbreaking, outside-the-box thinking that made organizations like HP and Bell Labs the technology powerhouses they were back in the day.
JBI wrote:On the other hand, I think they've bitten off *way* more than they can chew here. It'll be a real stretch to even bring the individual underlying technologies they're talking about to the point of being commercially viable in such a short timeframe, let alone produce practical, cost-effective systems.
Glorious wrote:New hardware that's so revolutionary, so great, so transcendent of our current computational paradigm that it will have not one OS, but three? You want Linux? It'll have that! You want Android? (wait, isn't that already *nixy to begin with?)---It'll have that too! Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?)
Arclight wrote:It was indeed a pitch, it's obvious they are trying to garner some early adopters.
Arclight wrote:What's really cool though, if they really can deliver, would be the memristors and the link between the SoC and the memory, which seems to be based on fiber technology (ofc they had a mock-up; the real deal might not be ready to leave the lab yet).
Arclight wrote:The part that seemed the most challenging to me was the SoC, about which not much was said; iirc, they also suggested the use of optical transistors (again, we have no idea how much progress they have made here or whether they are ready for the market).
just brew it! wrote:They're not even sure what they're building yet, it is still premature (by many years!) to be soliciting early adopters. It would be like IBM saying back in 1970, "We think we might have a revolutionary new concept to replace all these mainframes in about 10 years! We don't have any specs for it yet, but hey... you want in?"
Flatland_Spider wrote:Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).
VAX and VMS are coming back! Seriously, people would pay for new VMS machines, and it is quite formidable.
Hz so good wrote:Flatland_Spider wrote:Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).
VAX and VMS are coming back! Seriously, people would pay for new VMS machines, and it is quite formidable.
I wonder how a reborn VAX and VMS would stack up against the growing trend of virtual machines, both in enterprise, and in data centers and hosting facilities.
Flatland_Spider wrote:Glorious wrote:Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?).
VAX and VMS are coming back! Seriously, people would pay for new VMS machines, and it is quite formidable.
Then there is HP-UX which is homeless when Intel quits producing Itanium chips and the lawsuits quit working.
The common thread is big iron RISC boxes with high margins which were based on Itanium chips. HP makes a mint off of HP-UX boxes, and OpenVMS revenue wasn't shabby, if I remember correctly.
HP pulling off an OS isn't the most unbelievable thing about this. HP has quite the collection of operating systems (https://en.wikipedia.org/wiki/List_of_o ... tt-Packard) with NonStop and VMS being the two more interesting ones.
Flatland_spider wrote:VAX and VMS are coming back! Seriously, people would pay for new VMS machines, and it is quite formidable.
Flatland_spider wrote:The common thread is big iron RISC boxes with high margins which were based on Itanium chips. HP makes a mint off of HP-UX boxes, and OpenVMS revenue wasn't shabby, if I remember correctly.
Flatland_spider wrote:HP pulling off an OS isn't the most unbelievable thing about this. HP has quite the collection of operating systems (https://en.wikipedia.org/wiki/List_of_o ... tt-Packard) with NonStop and VMS being the two more interesting ones.
Hz so good wrote:I wonder how a reborn VAX and VMS would stack up against the growing trend of virtual machines, both in enterprise, and in data centers and hosting facilities.
the wrote:Or you could just virtualize OpenVMS on Itanium hardware today. That is a common scenario for many OpenVMS installations today, as many have migrated away from the platform.
the wrote:Except HP should have learned their lesson with Itanium: legacy software matters. The only reason Itanium has been able to limp along to today is the presence of legacy HP-UX and OpenVMS software. If HP/Intel had launched Itanium as a fresh new platform, it would have been ignored in the data center, as there would have been no reason to purchase it over competing platforms with robust software. Any sort of performance advantage Itanium had was mainly on paper (it was only competitive briefly when the Itanium 2 first launched). If HP wants to launch a new platform, they had better have a suite of software ready to go that shows off the platform's advantages.
the wrote:The idea of a new operating system that does not deal with discrete files but rather has everything exist in main memory is revolutionary. If an application wants to read data, a memory pointer to its location is all that is necessary. Data management will still be a problem with everything coexisting on the same logical data tier (i.e. how does a program know where to point to for a specific piece of data in this model?). There are some interesting problems here with security, backups, and RAS (ECC, hot swap, expansion) that need to be addressed with further research. Shoehorning existing applications into HP's proposed model may not work well, so starting from scratch isn't unwarranted.
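The pointer-into-persistent-storage access pattern described above can be approximated today with memory-mapped files. A minimal Python sketch (the file name and layout are illustrative, not anything HP has specified):

```python
import mmap
import os
import struct

# Create a small backing file to stand in for persistent memory.
path = "demo_store.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it into the address space; reads and writes now go through
# memory offsets rather than explicit read()/write() calls.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    struct.pack_into("<I", mem, 0, 42)        # "store" a value at offset 0
    value = struct.unpack_from("<I", mem, 0)[0]  # "load" it back via a pointer-like offset
    mem.close()

os.remove(path)
print(value)  # 42
```

The difference in HP's proposal is that no backing file or page cache would exist at all: the offset itself would be the durable location.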
Glorious wrote:the wrote:Or you could just virtualize OpenVMS on Itanium hardware today. That is a common scenario for many OpenVMS installations today, as many have migrated away from the platform.
Well, there is a minimal HP-UX-based hypervisor on Itanium (or the Integrity platform, as HP prefers to call it), but I'm not sure how many OpenVMS customers take advantage of that instead of just running OpenVMS natively on the hardware. We certainly don't, particularly since the HP salespeople never seemed all that confident in the hypervisor.
At this point though, it's all moot. It's time to migrate away from the entire thing.
Glorious wrote:the wrote:The idea of a new operating system that does not deal with discrete files but rather has everything exist in main memory is revolutionary. If an application wants to read data, a memory pointer to its location is all that is necessary. Data management will still be a problem with everything coexisting on the same logical data tier (i.e. how does a program know where to point to for a specific piece of data in this model?). There are some interesting problems here with security, backups, and RAS (ECC, hot swap, expansion) that need to be addressed with further research. Shoehorning existing applications into HP's proposed model may not work well, so starting from scratch isn't unwarranted.
But files have always been units of logical organization, not any kind of hardware-enforced concession. From an execution perspective, applications all work with only memory. The CPU doesn't know that hard drives exist; that's all background infrastructure and bookkeeping. Admittedly, it's a lot of background infrastructure and bookkeeping, but it's hard to see what's so revolutionary about any of this. Yes, having hundreds of terabytes of universal and non-volatile memory is intriguing, but that's simply an extension of what we already have. Things like memcached already exist, and they're heavily used.
Glorious wrote:In general, modern OSes cache like crazy; it's just the limitation/cost of DRAM and the bookkeeping required because of the inherent volatility that are the problems. Yes, getting rid of the fsync() bottleneck would be nice, but SSDs are getting to the point where it's just not the problem it used to be. Alleviating bookkeeping like that is nice, but it's neither sexy nor revolutionary. As JBI said, other technologies aren't going to stand still over the next few years, and we can already see the trend.
It's hard to imagine how developing an entirely new architecture and environment, years from now, is going to be any sort of clear win.
Glorious wrote:EDIT: I mean, to be clear dude, how is this even new? The original computers didn't have files either, they only had system memory and I/O. It's not revolutionary at all, it's practically regressive. The "Everything-is-a-file" Unix ideal was a reaction against multiple different device APIs handling different types of device I/O (which is evident, to a certain extent, in VMS which has the concept of different types of devices and different ways to handle them). VMS's approach might be more elegant, but the simplicity, however crude, of the Unix approach clearly won out. The point is that files have always been abstractions, and the Unix idea was actually to try and unify all I/O into a single and ubiquitous name space. It's always been about organization.
Glorious wrote:New hardware that's so revolutionary, so great, so transcendent of our current computational paradigm that it will have not one OS, but three? You want Linux? It'll have that! You want Android? (wait, isn't that already *nixy to begin with?)---It'll have that too! Also, it will have a revolutionary new HP OS that's the best at harnessing this new computing environment (so why would I want the other two...?). We have the buzzwords of the future with the hot buzzwords of now! Buy HP stock! We're relevant!
Wake me up when they actually have, you know, a functional prototype instead of empty marketing.
the wrote:I've heard of it used to consolidate ancient HP-UX and OpenVMS systems onto one new expensive Itanium box. It was the path of least resistance to upgrade these systems. The alternatives were more expensive: more hardware, or a painful software migration. The sales rep could have had their own interests in mind by attempting to sell you more hardware.
the wrote:Correct. It is the role of the programmer to be aware that the data they need isn't all going to be in memory, and they are responsible for loading it there. The revolutionary part is that this developer aspect is streamlined, as the data will already be in memory, immediately accessible. Data access will behave more like a typical low-latency system call. There will still need to be some bookkeeping, but it is far lower level and very thin. By reducing/eliminating these bookkeeping tasks, latency and throughput will benefit greatly.
the wrote:The OS will also be thinner, as the need for platform-specific storage drivers disappears with primary storage being one common model.
The problem facing HP with this idea is that there is already a movement to reduce the overhead with technologies like PCIe-based storage using the NVMe spec. The basic advantages will still be there, but their magnitude won't be as impressive for small systems (say, the typical dual-socket server). This is why I see HP citing the 160 PB figure: it highlights the fixed amount of overhead as capacity increases. To get to 160 PB today with NVMe, there would need to be 32,000 Ivy Bridge-E chips (presuming 5 TB hanging off of each processor's PCIe lanes). That's a pretty big cluster with multiple stages to access data (is it in local memory? No, then is it in local NVMe storage? No, then gotta hit a network interface to get its location in the cluster, then hit the network interface again to get it.) The purpose of the centralized memory pool is that the data location lookup is a near-constant fixed cost.
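As a back-of-envelope check on that cluster size (assuming decimal petabytes and the post's figure of 5 TB of NVMe flash per socket):

```python
# Back-of-envelope: sockets needed to host 160 PB of NVMe-attached flash
# at an assumed 5 TB hanging off each processor's PCIe lanes.
total_pb = 160
tb_per_socket = 5

sockets = (total_pb * 1000) // tb_per_socket  # 1 PB = 1000 TB (decimal units)
print(sockets)  # 32000
```

That is tens of thousands of sockets, which is what makes the "multiple stages to access data" overhead add up.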
the wrote:Make no mistake, this idea would be the antonym of that Unix principle. The concept of a file would disappear into an in-memory location.
Glorious wrote:Unix idea was actually to try and unify all I/O into a single and ubiquitous name space
slowriot wrote:Because your customers will not be creating entirely new software. They'll want the options of familiarity or maximizing the hardware. I mean, why does IBM offer AIX, Linux on Power, and IBM i all on Power hardware?
slowriot wrote:I'm fully with your main point, that this is currently vaporware with nothing real to show, but I think you're really grasping at straws beyond "they don't have a real prototype to show yet" with this "oh man 3 OS choices!" comment.
Glorious wrote:the wrote:Correct. It is the role of the programmer to be aware that the data they need isn't all going to be in memory, and they are responsible for loading it there. The revolutionary part is that this developer aspect is streamlined, as the data will already be in memory, immediately accessible. Data access will behave more like a typical low-latency system call. There will still need to be some bookkeeping, but it is far lower level and very thin. By reducing/eliminating these bookkeeping tasks, latency and throughput will benefit greatly.
But it's already streamlined. Most developers these days are using very high-level languages that are abstractions upon abstractions. Most of those hot web frameworks use ORM, which means the developer is barely aware of the database abstraction, let alone any actual abstraction from hardware. Even at lower levels, like C, you're using the standard library, which is again an abstraction. Whether you make mmap() either irrelevant or stupendously fast, it doesn't make a whole lot of difference. Most C programmers won't care; they'll just notice that their previously I/O-bound program, well, isn't, anymore.
So yes, the "Machine" will make a lot of that stuff faster, but, again, have you been paying attention to the last few years? It used to be that if you wanted more than ~100 fsync()s per second, you were talking about a very expensive battery-backed controller with 15k disks. That was the only way you could safely enable write-caching and thus get decent performance from spinning disks.
Now? You can buy a 120GB consumer SSD for like 70 bucks that gets two orders of magnitude more fsync()s than any magnetic disk ever could, and it can easily compete with (and beat) those very, very expensive controller setups. And at $70 that's literally the bottom of the barrel. At the enterprise level the differences are staggering. It's already a whole new world.
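The fsync() rates being thrown around here are easy to measure yourself. A rough Python micro-benchmark (results vary enormously by drive, filesystem, and OS; the half-second window is arbitrary):

```python
import os
import tempfile
import time

# Measure roughly how many fsync()-ed writes this machine sustains per second.
# A spinning disk without a battery-backed cache is bounded by rotational
# latency (~100/s); an SSD typically manages hundreds to thousands.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

fd = os.open(path, os.O_WRONLY)
count = 0
start = time.monotonic()
while time.monotonic() - start < 0.5:   # run for half a second
    os.write(fd, b"x" * 512)
    os.fsync(fd)                        # force the write to stable storage
    count += 1
os.close(fd)
os.remove(path)

print(f"~{count * 2} fsync()s per second")
```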
Glorious wrote:the wrote:The OS will also be thinner, as the need for platform-specific storage drivers disappears with primary storage being one common model.
Again, that's mostly abstracted away from anyone but the OS programmers, and even then, you can simplify it immensely by supporting just one specific type of storage hardware. Certain vendors do things like that already.
Glorious wrote:the wrote:The problem facing HP with this idea is that there is already a movement to reduce the overhead with technologies like PCIe-based storage using the NVMe spec. The basic advantages will still be there, but their magnitude won't be as impressive for small systems (say, the typical dual-socket server). This is why I see HP citing the 160 PB figure: it highlights the fixed amount of overhead as capacity increases. To get to 160 PB today with NVMe, there would need to be 32,000 Ivy Bridge-E chips (presuming 5 TB hanging off of each processor's PCIe lanes). That's a pretty big cluster with multiple stages to access data (is it in local memory? No, then is it in local NVMe storage? No, then gotta hit a network interface to get its location in the cluster, then hit the network interface again to get it.) The purpose of the centralized memory pool is that the data location lookup is a near-constant fixed cost.
Yes, today.
That's the flaw. They are comparing some hypothetical future over the horizon with the environment today. It's practically a tautology. Yes, a computer 5 years from now will be better than a computer today.
The counter-example you are proposing is silly. Just look at current development trends: many companies are working on ultra-dense clusters with tightly interwoven interconnects. The only uniqueness here is the universal memory concept, but you could easily envision something similar to it implemented with DRAM and flash. That's a two-level hierarchy with some additional complexity, yes, but they're also proven technologies that not only work today but also haven't yet hit the end of their developmental cycle.
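That two-level DRAM/flash hierarchy can be sketched as a toy cache. The class name, capacities, and eviction policy here are purely illustrative, not anyone's actual design:

```python
# Toy two-level hierarchy: a small "DRAM" dict in front of a larger,
# notionally persistent "flash" dict. Illustrative only.
class TwoTierStore:
    def __init__(self, hot_capacity=2):
        self.hot = {}                 # small, fast, volatile tier
        self.cold = {}                # large, slow, persistent tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.cold[key] = value        # everything lands in the big tier

    def get(self, key):
        if key in self.hot:           # hit in the fast tier
            return self.hot[key]
        value = self.cold[key]        # miss: fetch from the slow tier
        if len(self.hot) >= self.hot_capacity:
            self.hot.pop(next(iter(self.hot)))  # crude FIFO-ish eviction
        self.hot[key] = value         # promote into the fast tier
        return value

store = TwoTierStore()
store.put("a", 1)
store.put("b", 2)
print(store.get("a"))  # 1
```

A universal memory collapses the two dicts into one; the argument above is that the two-dict version already works and keeps improving.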
Glorious wrote:the wrote:Make no mistake, this idea would be the antonym of that Unix principle. The concept of a file would disappear into an in-memory location.
Just to start: Then why are they proposing to run Linux and Android on it?
Glorious wrote:Look, did you even read what I wrote? I'll quote myself: Glorious wrote:Unix idea was actually to try and unify all I/O into a single and ubiquitous name space
You obviously don't understand the principle, because the entire point of the idea was to avoid having to worry about the particularities & peculiarities of any specific device. Instead, all devices were (theoretically) treated as files; hence, the programmer (theoretically) didn't need to worry about how to handle a teletype versus a tape drive, a CRT versus a line-plotter. In fact, the very "concept of a file" IS AN "in-memory location," as that's essentially what a file descriptor is.
Files do not even exist at a hardware level. They are solely software constructs built for human convenience. What you are saying is inane and incorrect.
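The uniform-namespace point can be demonstrated directly: the same descriptor-based calls work whether the thing behind the descriptor is a pipe, a device node, or a regular file. A small Python sketch:

```python
import os

# The same write()/read() calls work on a pipe, a device, or a regular
# file: the descriptor hides what actually sits behind it.
r, w = os.pipe()                        # an in-kernel byte stream, no disk involved
os.write(w, b"hello")
from_pipe = os.read(r, 5)
os.close(r)
os.close(w)

fd = os.open(os.devnull, os.O_WRONLY)   # the null device, handled identically
os.write(fd, b"hello")
os.close(fd)

print(from_pipe)  # b'hello'
```

Nothing in the calling code knows or cares which kind of "file" it is talking to; that is the abstraction being argued about.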
Glorious wrote:I mean, demonstrably, you can have an entire Linux install on a ramdisk. Today. The only difference is the volatility.
the wrote:
Optical cabling has strong advantages over copper for getting a signal between two points very far apart. This is already the case today: core networking within the data center is all fiber, as is the backbone of the internet.
the wrote:HP seems to be pushing the idea of using optical links as the go-between for processors and centralized memory. There is certainly no technical reason that prevents this, and it simplifies memory management. This differs from the current model, where memory is distributed along with each processor node. The current model has a speed advantage here due to data locality. Modern processors can access local memory in less than 25 ns. Access times increase as a modern processor attempts to access memory on a remote socket. Optical interconnects directly to memory make sense in HP's centralized topology, but they wouldn't be an advantage for current systems. Rather, optical interconnects between processors would make more sense.
the wrote:The focus on nonvolatile memory (NVM) is interesting. HP owns the patents behind a completely new type: the memristor. If it lives up to the hype, it will be genuinely groundbreaking. It would enable SSDs to have access times and bandwidth similar to those of the DRAM used for main memory in computers. One thing to remember is that DRAM is still comparatively slow; if the memristor has much faster access times, THEN I'll start jumping for joy.
DPete27 wrote:3) ASICs (application specific integrated circuits) are more power efficient than "universal" CPU architecture. That's no secret. Look at Cryptocurrency/Bitcoin mining ASICs. Dumb that down to consider an IGP a type of ASIC, and HP hasn't really invented anything new except customizing the on-die ASICs to match their intended "datacenter usage."
Hz so good wrote:the wrote:
Optical cabling has strong advantages over copper for getting a signal between two points very far apart. This is already the case today: core networking within the data center is all fiber, as is the backbone of the internet.
I should point out that, yes, optical fiber does allow for great distances. The photons can't escape, so there's no possibility of leakage or EMI to induce attenuation. One of the other major benefits of using optical fiber is when you multiplex multiple wavelengths. That buys you greater data density (i.e. total available bandwidth) on the same fiber. You also get a speed boost, since electrons only travel at near the speed of light, but I have no idea what the exact difference is.
Hz so good wrote:You also get a speed boost, since electrons only travel near-speed of light, but I have no idea what the exact difference is.
the wrote:The amusing thing is that light propagation through standard optical fiber is about 0.7*c. Despite the slower propagation speed, it is still preferable for the reasons cited. What good is traveling at near c when it can only reach several meters away?
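The propagation numbers are easy to work out. Assuming the 0.7c figure for standard fiber and a 100 m run (an arbitrary but plausible data-center distance):

```python
# Propagation delay over a 100 m run: light in standard fiber travels at
# roughly 0.7c, so the penalty versus an ideal vacuum path is tens of ns.
C = 299_792_458          # speed of light in vacuum, m/s
distance_m = 100         # assumed data-center cable run

fiber_ns = distance_m / (0.7 * C) * 1e9   # ~476 ns in fiber
vacuum_ns = distance_m / C * 1e9          # ~334 ns at c

print(f"fiber: {fiber_ns:.0f} ns, ideal: {vacuum_ns:.0f} ns")
```

Either way the delay is dwarfed by protocol and switching overhead, which is the point being made: the medium's raw propagation speed is rarely the deciding factor.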
just brew it! wrote:Hz so good wrote:You also get a speed boost, since electrons only travel near-speed of light, but I have no idea what the exact difference is.
That wouldn't get you a speed boost per se; it would reduce latency.
Hz so good wrote:And like the said, the speed on fiber is slower than c. I was thinking about it in general, where electrons move at near-c and photons at c, but I forgot to include the limitations imposed by the medium. My bad.
Captain Ned wrote:Hz so good wrote:And like the said, the speed on fiber is slower than c. I was thinking about in general, where electrons move at near-c and photons at c, but I forgot to include the limitations imposed by the medium there. My bad.
And you certainly don't want to push those electrons to speeds over that of c in copper.