UberGerbil wrote:Depending on what you mean by RAID, RAM already is or doesn't need it. If you mean operating in parallel like RAID-0, that's what dual-channel memory is doing (and has been for what, a decade on x86?). At the higher end, workstation and server chipsets have supported triple- or quad-channel memory; you don't see this at the low end because the performance doesn't justify the expense, particularly for consumer workloads where memory bandwidth just doesn't matter vs all the other things that can gate performance.
If you mean the traditional definition of RAID (the R stands for Redundant), various server architectures have redundant and/or hot-swappable RAM, and all of them support ECC.
TheJack wrote:Thanks guys,
From what I know, dual channel is not the same as RAID 0. Running RAM in dual channel gives a performance boost of around 5%, whereas with RAID 0 you get almost twice the performance. RAID 0 is what SSDs are doing to achieve their speeds: in RAID 0 the controller divides the data across the number of available drives (say, 5) and lets each drive do a fifth of the necessary work. So this is one thing I know. What is ambiguous (to me) is how it compares to dual channel.
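TheJack's description of RAID 0 above can be sketched in a few lines of Python. This is a simplified model rather than how any real controller works; the 64 KiB stripe size and the plain round-robin layout are illustrative assumptions:

```python
# Simplified RAID 0 striping: data is cut into fixed-size stripes and
# dealt out across the drives round-robin, so each of N drives ends up
# handling roughly 1/N of the total work.

STRIPE_SIZE = 64 * 1024  # 64 KiB stripes (illustrative, not a standard)

def stripe(data: bytes, num_drives: int) -> list[list[bytes]]:
    """Distribute `data` across `num_drives` drives, round-robin."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), STRIPE_SIZE):
        drives[(i // STRIPE_SIZE) % num_drives].append(data[i:i + STRIPE_SIZE])
    return drives

data = bytes(5 * STRIPE_SIZE)      # five stripes' worth of dummy data
layout = stripe(data, 5)           # the "say, 5" drives from the post
print([len(d) for d in layout])    # -> [1, 1, 1, 1, 1]: one stripe each
```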
Kougar wrote:TheJack wrote:Thanks guys,
From what I know, dual channel is not the same as RAID 0. Running RAM in dual channel gives a performance boost of around 5%, whereas with RAID 0 you get almost twice the performance. RAID 0 is what SSDs are doing to achieve their speeds: in RAID 0 the controller divides the data across the number of available drives (say, 5) and lets each drive do a fifth of the necessary work. So this is one thing I know. What is ambiguous (to me) is how it compares to dual channel.
You're comparing the wrong metrics; that's the problem. Start by comparing the memory bandwidth, not real-world performance. For example, take the same DDR3-1600 RAM. For dual-channel memory, the bandwidth would be ~21 GB/s, but for quad-channel memory it jumps to ~37 GB/s of available memory bandwidth. So yes, it is like RAID 0 in that respect, given it's an 80% increase in bandwidth.
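For reference, the theoretical peaks behind those figures are easy to compute: DDR3-1600 performs 1600 mega-transfers per second over a 64-bit (8-byte) channel, i.e. 12.8 GB/s per channel. A minimal sketch (the ~21 GB/s and ~37 GB/s figures above read as measured numbers, which always land below these peaks):

```python
# Theoretical peak bandwidth for DDR3-1600 by channel count.
transfers_per_sec = 1600e6   # DDR3-1600 = 1600 mega-transfers/s
bytes_per_transfer = 8       # one 64-bit-wide channel

for channels in (1, 2, 4):
    gbps = transfers_per_sec * bytes_per_transfer * channels / 1e9
    print(f"{channels}-channel: {gbps:.1f} GB/s peak")
# 1-channel: 12.8 GB/s peak
# 2-channel: 25.6 GB/s peak
# 4-channel: 51.2 GB/s peak
```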
The problem is that real-world performance doesn't increase by 80% to match the raw bandwidth, because other parts of the system are the limiting factors, namely the processor. When reading from a disk, all the system needs to do is transfer a massive amount of data, say to show an HD-resolution photo on the screen; minimal processing is required, so it is just mass-moving data. But when a processor is reading from RAM, it's usually doing so to perform computational work on whatever data it's moving. The CPU spends more time doing computational work than waiting on system RAM, and software is intentionally optimized to minimize calls to system RAM in order to maximize the program's performance. So conceptually, "RAIDing" memory as you put it won't remove these limiting factors. It's an apples-to-oranges comparison of what RAM does versus what disks do, even though both handle massive amounts of data. Now, if you take a server or enterprise workload that constantly pushes gigabytes of traffic over the memory bus, then quad channel would start to show appreciable real-world performance benefits. But you won't find that on most consumer desktops.
SSDs don't actually "RAID 0" the NAND flash either; if they did, it would make them dangerous to use for storing data. And while I think all modern SSDs use a form of RAID parity for error checking, it still isn't done for performance.
Come to think of it, Intel's FB-DIMM was sort of like RAIDing RAM... the memory controller talked to an Advanced Memory Buffer that sat between the CPU and RAM, and acted like a RAID controller in that it converted the signal to/from serial and enabled far more addressable memory modules. The problem was that it was very power-inefficient (any master controller needs to be powerful to handle DDR3 or DDR4 speeds with a minimal latency penalty, and hence burns power) and it increased the total latencies involved (which RAID can also do). Not to mention it added to the cost of the system and of the special FB-DIMMs... Even then, it was done just to add memory capacity, and as higher-capacity DRAM was developed it became unneeded.
Flying Fox wrote:Chip stacking is supposed to reduce power consumption. It seems to be more easily doable for RAM.
http://www.extremetech.com/computing/15 ... -ddr-sdram
http://www.extremetech.com/computing/11 ... p-stacking
Our recent discussion of HMC: viewtopic.php?f=2&t=94402
TheJack wrote:I thought dual channel is for increased bandwidth, which is not the same as RAID 0.
TheJack wrote:Here is why I asked: with DDR4, stacking RAM will be possible. But I think heat could become a problem. If you could lower the voltage/frequency, you might be able to overcome the heat issues (to some extent at least) and compensate for the decreased speed by RAID-0ing the RAM layers. ????
just brew it! wrote:TheJack wrote:I thought dual channel is for increased bandwidth, which is not the same as RAID 0.
I think you are misunderstanding what RAID-0 is for. RAID-0 *is* done for increased bandwidth. It does not improve reliability; in fact it makes it worse, since the failure of a single drive causes you to lose the contents of both drives. RAID-0 is essentially a "dual channel" hard drive.
TheJack wrote:Here is why I asked: with DDR4, stacking RAM will be possible. But I think heat could become a problem. If you could lower the voltage/frequency, you might be able to overcome the heat issues (to some extent at least) and compensate for the decreased speed by RAID-0ing the RAM layers. ????
Latency matters too. As you decrease the clock speed, latency goes up, and adding more channels can't compensate for that.
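That trade-off is easy to put in numbers: absolute CAS latency is the CL cycle count divided by the memory clock, and for DDR the memory clock is half the transfer rate. A quick sketch with illustrative CL values (not taken from any specific module):

```python
# Absolute CAS latency = CL cycles / memory clock. For DDR the memory
# clock is half the transfer rate, so halving the transfer rate at the
# same CL doubles the latency in nanoseconds -- and extra channels add
# bandwidth, not lower latency.

def cas_ns(transfers_mts: float, cl_cycles: int) -> float:
    clock_mhz = transfers_mts / 2           # DDR: two transfers per clock
    return cl_cycles / clock_mhz * 1000.0   # cycles / MHz -> nanoseconds

print(cas_ns(1600, 9))  # DDR3-1600 at CL9             -> 11.25 ns
print(cas_ns(800, 9))   # downclocked to 800 MT/s, CL9 -> 22.5 ns
```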
TheJack wrote:Nano holes may help with heat dissipation. Termites have a sophisticated system of ventilation where hot air is removed from their nests automatically through the interaction of hot and cold air. (That would be too difficult, though.)
just brew it! wrote:TheJack wrote:Nano holes may help with heat dissipation. Termites have a sophisticated system of ventilation where hot air is removed from their nests automatically through the interaction of hot and cold air. (That would be too difficult, though.)
Nano holes are not much different from adding a heat sink. Either way, you are increasing the surface area through which heat can be dissipated by convection. The problem with nano holes is that they will quickly become clogged with dust.
Flying Fox wrote:Nano holes with air cooling are still not going to cut it; with chips so confined in a tiny area, you will still have the problem of how to generate airflow to convect the heat out. Outfits like IBM have been researching nanoscale pipes that carry liquid for cooling chips. That hopefully gets us somewhere.
Sorry to burst your bubble, but I am sure there are many crazy scientists with similar (and more) ideas already.
just brew it! wrote:You're not going to get enough natural air convection in a tightly confined space (your nano holes) for it to make much of a difference. And as previously noted, the holes will quickly get plugged by dust.
If you don't want a pump, use a heat pipe; they circulate their coolant without needing a mechanical pump. The temperature gradient itself essentially "pumps" the coolant inside the heat pipe by causing a phase change from liquid to gas.
LASR wrote:They already use chip stacking in mobile SoCs. In this case, they are actually stacking the CPU under the RAM. Cooling is not a problem unless you are working with very small thermal envelopes, such as in tiny cell phones. Even on slightly larger tablet devices, passive cooling is enough to keep them from throttling.
Since RAM has a much lower TDP than an SoC, I am going to assume multi-layer chip stacking will not be a problem with desktop-class cooling solutions.
Having said that, I believe you have some misconceptions about what RAID and multi-channel memory actually mean. I would suggest reading up on what they are first, before trying to "make RAM RAIDable."
just brew it! wrote:Pretty sure the Samsung 3D flash stack is all inside the flash chip package, not multiple separate packages stacked and soldered like the SoC/RAM used in cell phones. You'd probably need to X-ray the chips to see anything interesting.
TheJack wrote:just brew it! wrote:Pretty sure the Samsung 3D flash stack is all inside the flash chip package, not multiple separate packages stacked and soldered like the SoC/RAM used in cell phones. You'd probably need to X-ray the chips to see anything interesting.
In other words, in terms of thickness, you couldn't tell the difference between a normal and a stacked SSD?
Crayon Shin Chan wrote:I think in the context of RAM, RAID 0 striping is called bank interleaving.
Dual channel is basically taking two existing sets of 64 wires and treating them as one large pipe instead of two separate pipes. It sounds similar to RAID 0, but it has nothing to do with splitting up data and everything to do with having an effectively wider path for data to flow across, whereas RAID 0 achieves that by striping.
Flying Fox wrote:Crayon Shin Chan wrote:I think in the context of RAM, RAID 0 striping is called bank interleaving.
Dual channel is basically taking two existing sets of 64 wires and treating them as one large pipe instead of two separate pipes. It sounds similar to RAID 0, but it has nothing to do with splitting up data and everything to do with having an effectively wider path for data to flow across, whereas RAID 0 achieves that by striping.
In a dual-channel configuration, the "bank interleaving" happens at the stick level, AFAIU.
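A toy model of the channel-level interleaving described above, assuming a 64-byte (cache-line) interleave granularity; real memory controllers use more elaborate address hashing, so this shows only the basic idea, not any particular controller's mapping:

```python
# Simplified dual-channel address mapping: consecutive 64-byte cache
# lines alternate between the two channels, so sequential accesses hit
# both channels in parallel -- the RAID-0-like part of dual channel.

LINE = 64  # bytes; interleave granularity (an illustrative assumption)

def channel(addr: int) -> int:
    """Pick the channel for a physical address (bit 6 decides)."""
    return (addr // LINE) % 2

for addr in range(0, 4 * LINE, LINE):
    print(f"addr {addr:#06x} -> channel {channel(addr)}")
# addr 0x0000 -> channel 0
# addr 0x0040 -> channel 1
# addr 0x0080 -> channel 0
# addr 0x00c0 -> channel 1
```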