Ditiris wrote:Our Dell contact had the same answer on the riser card for the R730. He's checking with engineering to make sure the non-compliant 6.5" height of the FPGA card will fit. There should only be the one FPGA expansion card, so overheating had better not be an issue. Space isn't at a premium, so the T630 would also work.
I don't have my R730s in production yet, so I'll go open one up here in a bit and do some measuring. It might be a tight fit, but you may be able to fit the card in the slots closest to the power supplies (on the right side when looking at the back of the server). Otherwise your best bet will be the T630 (which you can order in a rackmount configuration).
Ditiris wrote:Is it possible to operate the two sides of the multi-socket board independently for the purposes of recording the data from the FPGA PCIe card to the RAM disk? That is, could I populate the other socket with another 384GB of RAM and put the second FPGA PCIe card over there? Or is it better to simply use two servers?
My exposure to multi-processor, multi-socket architecture is very limited. I can't find a good explanation of the data path between the sockets and the chipset on a multi-socket board such as the R730. Can anyone point me to a resource? I would assume dedicated QPI or DMI links, but I can't find anything explaining this. This is more for my own education than anything else, but I imagine we'll want to do processing on the board eventually.
I think what you're looking for here is targeting specific NUMA nodes. That lets each CPU access the memory that's closest to it (i.e. the DIMM slots hanging off that CPU's own memory controller). I don't know that you can do that specifically for RAM disks, though.
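One possibility I haven't tried on this hardware myself: on Linux, a tmpfs RAM disk accepts an mpol= mount option that binds its pages to a specific NUMA node, so in principle you could create one RAM disk per socket. A rough sketch (the mount points and the 300g size are just placeholders — size them to your actual DIMM population):

```shell
# tmpfs RAM disk whose pages are bound to NUMA node 0
mkdir -p /mnt/ramdisk0
mount -t tmpfs -o size=300g,mpol=bind:0 tmpfs /mnt/ramdisk0

# a second RAM disk bound to node 1, for the card on the other socket
mkdir -p /mnt/ramdisk1
mount -t tmpfs -o size=300g,mpol=bind:1 tmpfs /mnt/ramdisk1
```

That way a capture process writing to /mnt/ramdisk0 should have its data land in node 0's memory rather than bouncing across the inter-socket links.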
The links between the CPUs are QPI links. If you do some Googling on NUMA QPI you'll find a wealth of information.
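For the capture processes themselves, numactl is the usual tool for keeping a process's CPUs and memory allocations on one node. A sketch — ./capture, its flags, and the device numbering are all hypothetical, just to show the shape of it:

```shell
# Inspect the NUMA topology: which CPUs and how much memory sit on each node
numactl --hardware

# Run one capture process per socket, keeping CPU and memory on the
# same node as that socket's FPGA card
numactl --cpunodebind=0 --membind=0 ./capture --device 0 &
numactl --cpunodebind=1 --membind=1 ./capture --device 1 &
```

Combined with which PCIe slots the cards land in (each slot is wired to one CPU's PCIe lanes), this is about as close as you'll get to running the two sides of the board independently without using two servers.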