erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Networking hardware

Wed Apr 27, 2011 10:18 am

Hi everyone:

This is my first time at the Tech Report forums. I have some networking hardware questions, and since I couldn't find a forum dedicated to network hardware, this seemed like the place to ask.

I'm trying to build a new network for my business, and I want data transfer rates of 10Gbps. My question is whether I can reach that with plain cabling and Gigabit Ethernet cards.
Any pointer in the right direction would be appreciated.

Suggestions for routers, Ethernet cards, and cables, please.

Thanks in advance.

Erick
 
mac_h8r1
Minister of Gerbil Affairs
Posts: 2974
Joined: Tue Sep 24, 2002 6:57 pm
Location: Somewhere in the Cloud
Contact:

Re: Networking hardware

Wed Apr 27, 2011 10:25 am

Welcome to the forum!

I moved this topic to the Networking forum so it gets more traffic (pun intended).

10Gbps is certainly a sexy number, but the question is: do you have an actual *need* for that kind of bandwidth? Could you give us some more details about the size of the network, what you're transferring, etc.? Most servers out there for a typical business can't come close to pushing that kind of data, and 10Gbps is usually reserved for backbone equipment. We'll take it from there!
mac_h8r1.postCount++;
Chaos reigns within. Reflect, repent, and reboot. Order shall return.
Slivovitz owns you.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Networking hardware

Wed Apr 27, 2011 10:38 am

Yup... really need more info about what this network will be used for, and why you need (or think you need) that kind of bandwidth.

Short answer is: no, you can't do this with commodity networking hardware, unless you're talking about 10Gbps aggregate (across the entire network, not to a single node).

Slightly longer answer is: unless you've got some really expensive server gear, there's no way you're going to be able to push that kind of data around anyway.
Nostalgia isn't what it used to be.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Re: Networking hardware

Wed Apr 27, 2011 12:36 pm

I recall seeing a setup that had two servers connected via a pair of quad-port gigE cards in each machine, for a total of 8 x 1000Mbps bandwidth between them (running some kind of network bonding software to actually make use of it). Of course cards like that are server-class and close to $500 each so you're looking at almost $2K in network hardware, not counting whatever the software cost (and whatever headaches it might bring with it).
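One caveat with bonding: typical modes hash each flow onto a single physical link, so one TCP stream still tops out at 1Gbps, and you only approach the aggregate with many parallel flows. A toy Python sketch of that behavior (the hash policy here is a simplification, not the real L3/L4 policies the Linux bonding driver offers):

# Toy model of hash-based link aggregation: each flow is pinned to one
# bonded 1Gbps link, so aggregate bandwidth scales with flow count but a
# single flow never exceeds one link's rate.

LINK_RATE_GBPS = 1.0
NUM_LINKS = 4  # e.g. one quad-port gigE card

def link_for_flow(src_port: int, dst_port: int) -> int:
    # Pick a physical link by hashing the flow tuple (simplified policy).
    return hash((src_port, dst_port)) % NUM_LINKS

def best_case_gbps(flows: list) -> float:
    # Upper bound: count the distinct links carrying at least one flow.
    used = {link_for_flow(s, d) for s, d in flows}
    return len(used) * LINK_RATE_GBPS

print(best_case_gbps([(50000, 80)]))                           # one flow -> 1.0
print(best_case_gbps([(p, 80) for p in range(50000, 50032)]))  # 32 flows -> up to 4.0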
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Wed Apr 27, 2011 1:23 pm

Hi:

So, what kind of data do we transfer? My company makes equipment, kind of like space telescope equipment, and when simulating and testing those machines, all the raw data dumps have to be transferred over the network. We also transfer high, super high resolution images (telescope-sized images) and video/audio over the network, so I do need that bandwidth.
Besides, what I read here says that with 10GBbps I will only get 1Gb/s of transfer rate.
Second: I know that kind of setup can't be built with commodity hardware, and I know it will cost money. I have a couple of grand in my pocket, no worries...
So, any tips?
 
Contingency
Gerbil Jedi
Posts: 1534
Joined: Sat Jun 19, 2004 4:03 pm
Location: al.us
Contact:

Re: Networking hardware

Wed Apr 27, 2011 2:08 pm

How many nodes/what dump size are we talking about? You can do 10Gbps+ cheaply with a Tanenbaum system.
#182 TT: 13/DNVT, Precedence: Flash Override. Switch: Node Center. MSE forever.
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Wed Apr 27, 2011 2:17 pm

Contingency wrote:
How many nodes/what dump size are we talking about? You can do 10Gbps+ cheaply with a Tanenbaum system.


How many nodes? Heavy consumers, maybe 10 or 20.
Dump size, I think, maybe about 500-800 Mb/s.

Contingency wrote:
You can do 10Gbps+ cheaply with a Tanenbaum system.
Really?
 
TheEmrys
Minister of Gerbil Affairs
Posts: 2529
Joined: Wed May 29, 2002 8:22 pm
Location: Northern Colorado
Contact:

Re: Networking hardware

Wed Apr 27, 2011 2:19 pm

erick2red wrote:
....says that with 10GBbps I will only get 1Gb/s of transfer rate.



I don't understand this. Maybe I am reading this wrong, but 10 GBps is 10 gigabytes per second... that isn't usually how things are measured for network traffic. If we are talking about 10 Gbps, we are looking at 10 gigabits per second, which is 1.25 GB (gigabytes) per second. Do you need 1.25 GBps of throughput?
Sony a7II 55/1.8 Minolta 100/2, 17-35D, Tamron 28-75/2.8
 
Scrotos
Graphmaster Gerbil
Posts: 1109
Joined: Tue Oct 02, 2007 12:57 pm
Location: Denver, CO.

Re: Networking hardware

Wed Apr 27, 2011 2:21 pm

He read the wiki thing wrong. It says:

10 Gigabit Ethernet (10GBASE-X) 10 Gbit/s = 1.25 GB/s
 
emorgoch
Gerbil Elite
Posts: 719
Joined: Tue Mar 27, 2007 11:26 am
Location: Toronto, ON

Re: Networking hardware

Wed Apr 27, 2011 2:27 pm

erick2red wrote:
Besides, what I read here says that with 10GBbps I will only get 1Gb/s of transfer rate.
Second: I know that kind of setup can't be built with commodity hardware, and I know it will cost money. I have a couple of grand in my pocket, no worries...
So, any tips?

First off, I'm not quite sure what you were reading on that Wikipedia page, but there is a difference between GBps and Gbps. The former is gigaBYTES, the latter gigaBITS. 1 byte equals 8 bits, and network connections are normally measured in bits per second. As for actual speed, a 1Gbps connection between systems will generally transfer 1GB of actual data in ~10 seconds.
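To put that arithmetic in code, here's a quick back-of-the-envelope sketch in Python (the 5% overhead is a rough assumption for Ethernet/IP/TCP framing, not a measured figure):

BITS_PER_BYTE = 8
OVERHEAD = 0.05  # rough assumption for Ethernet/IP/TCP framing

def transfer_seconds(gigabytes: float, link_gbps: float) -> float:
    # Time to move a payload over a link rated in gigabits per second.
    usable_gbps = link_gbps * (1 - OVERHEAD)
    return gigabytes * BITS_PER_BYTE / usable_gbps

print(transfer_seconds(1.0, 1.0))   # ~8.4 s ideal; real transfers land closer to 10 s
print(transfer_seconds(1.0, 10.0))  # ~0.84 s over 10Gbps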

As for money to burn: a 10Gbit port on a server is about $1000, and a switch port is about $2000. Full 10Gbit switches start around the $15000 mark, I believe. Factor that into your costs.
Intel i7 4790k @ stock, Asus Z97-PRO(Wi-Fi ac), 2x8GB Crucial DDR3 1600MHz, EVGA GTX 1080Ti FTW3
Samsung 950 Pro 512GB + 2TB Western Digital Black
Dell 2408WFP and Dell 2407WFP-HC for dual-24" goodness
Windows 10 64-bit
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Wed Apr 27, 2011 2:30 pm

Scrotos wrote:
He read the wiki thing wrong. It says:

10 Gigabit Ethernet (10GBASE-X) 10 Gbit/s = 1.25 GB/s


No, actually I read it right; I now realize I wrote it wrong. I know 10 gigabits per second is 1.25 gigabytes per second, and that's what I need: 1.25 GB/s, which is the same as 10Gbps.
 
Scrotos
Graphmaster Gerbil
Posts: 1109
Joined: Tue Oct 02, 2007 12:57 pm
Location: Denver, CO.

Re: Networking hardware

Wed Apr 27, 2011 2:51 pm

Well, either way most people knew what you meant and it was just the anal-retentive people who got tripped up on the misuse of B versus b. ;)

(I used to be one of 'em but I took my valium today)
 
Contingency
Gerbil Jedi
Posts: 1534
Joined: Sat Jun 19, 2004 4:03 pm
Location: al.us
Contact:

Re: Networking hardware

Wed Apr 27, 2011 3:07 pm

erick2red wrote:
Really?


Your scenario is not a good fit for a Tanenbaum system. It shines for point-to-point transfers of extremely large quantities of data.

erick2red wrote:
How many nodes? Heavy consumers, maybe 10 or 20.
Dump size, I think, maybe about 500-800 Mb/s.


You gave me a bandwidth estimate when I asked for size. This can be interpreted several ways. I recommend tracing a typical data flow from creation to destination(s), and writing it out.
#182 TT: 13/DNVT, Precedence: Flash Override. Switch: Node Center. MSE forever.
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Wed Apr 27, 2011 3:22 pm

Contingency wrote:
You gave me a bandwidth estimate when I asked for size. This can be interpreted several ways. I recommend tracing a typical data flow from creation to destination(s), and writing it out.

Sorry for asking, but can you give an example? I'm kind of lost on the technical side here.
 
notfred
Maximum Gerbil
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: Networking hardware

Wed Apr 27, 2011 3:53 pm

I really think you will have difficulty getting a server to feed a 10Gig link (you need fast enough storage, and fast enough interconnects between the storage and networking subsystems), and as others have posted, the costs are orders of magnitude over a 1Gig system. I could see maybe putting a 4x1Gig card in the server and getting a switch that supports link bonding for that link, whilst all the clients connect at 1Gig.

If your data dumps are 600-800MB (and not 600-800MB multiple times per second), then I would expect a 1Gig setup to take roughly 6-8 seconds to transfer one to a client. Driving multiple clients, I could see the server needing a little more than 1Gig, but not 10Gig.

If your server isn't packed full of RAM to help cache the storage, and the storage is anything less than RAID arrays of SSDs, then I don't think you'll get anywhere near 10Gig out of the server regardless of what speed network card you put in. Similarly, look at the backplane connectivity: are the RAID controller and the network card both plugged into PCIe x16 slots that connect to the same PCIe switch?
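To make the first point concrete, a rough sanity check in Python using the numbers floated in this thread (20 clients, 800MB dumps; protocol overhead and disk speed are ignored, so treat these as lower bounds):

def server_uplink_gbps(clients: int, dump_mb: float, deadline_s: float) -> float:
    # Server-side bandwidth needed to deliver one dump to every client
    # within the deadline (no overhead, storage assumed to keep up).
    total_bits = clients * dump_mb * 8e6
    return total_bits / deadline_s / 1e9

print(server_uplink_gbps(20, 800, 60))  # ~2.1 Gbps: bonded 4x1Gig covers it
print(server_uplink_gbps(20, 800, 10))  # ~12.8 Gbps: now you're in 10Gig territory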
 
Aphasia
Grand Gerbil Poohbah
Posts: 3710
Joined: Tue Jan 01, 2002 7:00 pm
Location: Solna/Sweden
Contact:

Re: Networking hardware

Wed Apr 27, 2011 5:20 pm

I guess one reason people are sceptical from the start when somebody mentions 10GigE without the why behind it is that they have heard it before. When I worked for a client a few years ago, some marketing guy had written a tech specification for a commercial interactive screen system they wanted to build. Before it went out to the parties bidding for the project, some other people with some sense put the specification in front of a bunch of technical consultants. We were three consultants who tore it to pieces inside of five minutes... but yeah, I guess an interactive "tv" for an infomercial really needed redundant 10GigE interfaces, with no regard to what the distribution layer, never mind the access layer, could actually deliver. The guy had probably heard somewhere that we were building the new network with 10GigE capability and just went with it. That same network also transports surveillance images from a few thousand cameras in real time, though.


In regard to the question:
10GigE is coming down in price, but it's not cheap yet. There are some options where you can skimp on the fiber and use copper 10GigE or SFP+ server cards, and there are several varieties of SFP+ direct-attach copper cables for 10GigE as well. But an SFP+ server card is still at least $600, and depending on the topology you want from the network, you might be fine with a single larger datacenter switch with the right cards, or you might need several switches.

Then of course you will need real-time data generation on the workstation to fill it, without going to disk in between, or you won't get much use out of it. On the server side you will need a SAN or storage server with enough spindles to write the 1.25GB/s of data that you generate.

That said, the cheapest Cisco switches with at least 10GigE uplinks aren't that expensive... they start at $9999 plus interfaces, and depending on your length requirements the interfaces can be $400, though most options are ~$1500 per interface or so. Other brands do have some cheaper options, but not by much. And if you want more than two 10GigE ports, it will get more expensive.
 
Contingency
Gerbil Jedi
Posts: 1534
Joined: Sat Jun 19, 2004 4:03 pm
Location: al.us
Contact:

Re: Networking hardware

Wed Apr 27, 2011 6:03 pm

erick2red wrote:
Contingency wrote:
You gave me a bandwidth estimate when I asked for size. This can be interpreted several ways. I recommend tracing a typical data flow from creation to destination(s), and writing it out.

Sorry for asking, but can you give an example? I'm kind of lost on the technical side here.


You should know about the data that is being generated (a rough worksheet for this is sketched below). Specifically:
Large file vs. collection of files
Average/maximum* file size (*not the largest ever, but a size that holds true for 95% of instances)
Frequency of generation

Data flow itself:
After generation, where is the data located?
What interacts with this data? (include number of systems)
Is the data stored on those systems too? Pushed or pulled there?
Is the data modified on those systems?
What happens to the data after these systems are finished? (deletion/archival/whatever)

Infrastructure:
Hardware capabilities of devices along the data flow
Network infrastructure
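For what it's worth, here is that worksheet sketched in Python; the fields and the formula are just one way to organize the answers above, not any standard method:

from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str
    typical_file_mb: float      # size that holds for ~95% of instances
    files_per_generation: int
    generations_per_hour: float
    consumers: int              # systems that receive each generation

    def sustained_gbps(self) -> float:
        # Hourly average if every consumer pulls each generation in full;
        # bursts on top of this can be much higher.
        bits_per_hour = (self.typical_file_mb * 8e6 * self.files_per_generation
                         * self.generations_per_hour * self.consumers)
        return bits_per_hour / 3600 / 1e9

# Hypothetical numbers loosely based on this thread:
dump = DataFlow("sim dump", typical_file_mb=800, files_per_generation=1,
                generations_per_hour=4, consumers=20)
print(f"{dump.sustained_gbps():.2f} Gbps sustained")  # ~0.14 Gbps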
#182 TT: 13/DNVT, Precedence: Flash Override. Switch: Node Center. MSE forever.
 
SecretSquirrel
Minister of Gerbil Affairs
Posts: 2726
Joined: Tue Jan 01, 2002 7:00 pm
Location: North DFW suburb...
Contact:

Re: Networking hardware

Thu Apr 28, 2011 7:29 am

erick2red wrote:
Hi everyone:

This is my first time at the Tech Report forums. I have some networking hardware questions, and since I couldn't find a forum dedicated to network hardware, this seemed like the place to ask.

I'm trying to build a new network for my business, and I want data transfer rates of 10Gbps. My question is whether I can reach that with plain cabling and Gigabit Ethernet cards.
Any pointer in the right direction would be appreciated.

Suggestions for routers, Ethernet cards, and cables, please.

Thanks in advance.

Erick


My advice -- hire someone to do this right, assuming that you don't have a competent IT organization already. There are a few folks on here who are top-notch networking professionals, and dozens who think they are because they wired their Linux system to a network-attached hard drive. Can you tell the difference? There have been some good follow-up questions asked, but in reality, the answers and information you get here will be worth exactly what you paid for them.

Putting in 10Gb network infrastructure is expensive. I was involved in the rollout of our 10Gb datacenter core at work, and to give you a basic example, a 48-port 1Gb switch with two 10Gb uplinks is going to be in the $10k range. It is also technically complex. We have the storage problem: even high-end storage gear has a tough time sustaining 10Gb speeds once you exceed the size of its cache memory. We have the physical layer problem: are you going to do copper or fiber? Do you know the limitations of each? We have the speed boundary issue: what are you going to do with gear that doesn't or can't support a 10Gb interface?

This is a project that will cost money. If you really need 10Gb gear, it's going to cost a lot of money. If you don't have the expertise in house then it is certainly worth it in terms of project success to engage a professional, either through a vendor or a consultant. Both have their benefits and drawbacks.

--SS
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Thu Apr 28, 2011 8:22 am

SecretSquirrel wrote:
My advice -- hire someone to do this right, assuming that you don't have a competent IT organization already. [...] If you don't have the expertise in house then it is certainly worth it in terms of project success to engage a professional, either through a vendor or a consultant. Both have their benefits and drawbacks.

--SS


Same thoughts here. I do know more than how to simply attach my Linux box to the network (and do the same for the others), but I realize the issue here is much bigger.
Also, I started looking into this more carefully and noticed that maybe there's no real need for that throughput. We don't have the server architecture needed to handle that flow of data. Actually, if we build a 10Gb network, the bottleneck will be storage, and so many other issues I just didn't see at first.

Yeah, my boss refuses to hire someone to do the job, though I know that would be the better choice.
Anyway, thanks to your comments I now have the arguments I need to push back against that position.
 
Aranarth
Graphmaster Gerbil
Posts: 1435
Joined: Tue Jan 17, 2006 6:56 am
Location: Big Rapids, Mich. (Est Time Zone)
Contact:

Re: Networking hardware

Thu Apr 28, 2011 9:47 am

In that case my recommendation would be to use gigabit Ethernet over copper.

The cards are cheap, the wiring is cheap, etc.

Just don't skimp on your switch.
If you can get a gigabit switch with channel bonding, you can enable that later on if you start saturating the port to the server and the server has spare capacity left.

Otherwise you may want to go with a dual network setup, where you have your regular network and a secondary network specifically for your high-bandwidth needs. It would still be a gigabit Ethernet network, but data going over it would not saturate your primary network.

The data capture server would have two network cards, one for each network, so that captured data is still available to everyone else if it is needed. Again, if there is spare capacity left on the server, you could enable channel bonding if needed.

The secondary server could also act as a mirror of the primary server, giving additional redundancy for critical data if the primary server goes down (and it WILL eventually go down).
Main machine: Core I7 -2600K @ 4.0Ghz / 16 gig ram / Radeon RX 580 8gb / 500gb toshiba ssd / 5tb hd
Old machine: Core 2 quad Q6600 @ 3ghz / 8 gig ram / Radeon 7870 / 240 gb PNY ssd / 1tb HD
 
erick2red
Gerbil In Training
Topic Author
Posts: 9
Joined: Tue Jun 16, 2009 9:44 am
Contact:

Re: Networking hardware

Thu Apr 28, 2011 10:28 am

Aranarth wrote:
Otherwise you may want to go with a dual network setup, where you have your regular network and a secondary network specifically for your high-bandwidth needs. It would still be a gigabit Ethernet network, but data going over it would not saturate your primary network.


This seems like the right setup.
I'm planning to split my network in two, between the production/research facilities and the general-usage network.
