VMware Partner News

Tuesday, October 16, 2012

Storage Explained: NFS / FCoE / SAN / FC

Recently, while attending yet another storage course, I came across a reasonably common question...
Something that most storage administrators have struggled with at some stage...

Why would I use NFS on my Storage Array?

Warning: This might get a bit technical, but bear with me.

This seemingly simple question has a lot of factors to consider.
First thing to keep in mind is that NFS is a network protocol, so it adds further overhead to your stack. The only true storage protocol is SCSI; at the end of the day your reads and writes end up as SCSI commands on the array. So, keeping that in mind... The positive side to NFS is the fact that it can run over Ethernet, which by today's standards can go up to 10Gbps, as opposed to the max of 8Gbps that FC provides. But how does an FC packet compare to an NFS packet? The payload (your data) in a block storage packet is carried inside a SCSI command (take this as a given). An FC frame encapsulates that whole SCSI packet in just one extra set of headers: the FC headers. When it comes to NFS it changes quite a bit... Your data doesn't actually travel as a SCSI packet at all; the file operation carrying your data gets wrapped in NFS and RPC headers, that gets encapsulated in TCP (layer 4) headers, the TCP segment gets encapsulated in IP (layer 3) headers, which is what allows routing to take place, and finally the whole lot gets framed in an Ethernet (layer 2) packet. The SCSI part only happens inside the array. So the frame on the wire is Ethernet headers, containing the IP packet, containing the TCP segment, containing the RPC and NFS headers, which finally contain your data. This is a lot of overhead! So in the end your performance gain is probably closer to 0.
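To put some rough numbers to that layering argument, here's a small Python sketch (purely illustrative, not from any vendor tooling) that sums typical per-frame header sizes for a native FC frame versus an NFS operation carried over RPC, TCP, IP and Ethernet. The FC, IP, TCP and Ethernet header sizes are standard values; the combined RPC/NFS figure is an assumption, since it varies per operation.

# Back-of-the-envelope header overhead: native FC vs NFS over TCP/IP/Ethernet.
# Header sizes are typical values; the RPC/NFS figure is a rough assumption
# (it varies per operation), so treat this as an illustration only.

FC_OVERHEAD = {
    "FC SOF/EOF delimiters + CRC": 12,
    "FC frame header": 24,
}

NFS_OVERHEAD = {
    "Ethernet header + FCS": 18,
    "IPv4 header": 20,
    "TCP header": 20,
    "RPC + NFS headers (assumed)": 100,
}

def wire_efficiency(data_bytes, overhead):
    """Fraction of bytes on the wire that are actual payload data."""
    header_bytes = sum(overhead.values())
    return data_bytes / (data_bytes + header_bytes)

# An FC frame carries up to 2112 bytes of data; a standard 1500-byte Ethernet
# frame leaves roughly 1360 bytes of data once the IP, TCP and assumed
# RPC/NFS headers are taken out.
print("FC  frame efficiency: {:.1%}".format(wire_efficiency(2112, FC_OVERHEAD)))
print("NFS frame efficiency: {:.1%}".format(wire_efficiency(1360, NFS_OVERHEAD)))

Header bytes on the wire are only part of the story, though; every one of those layers also has to be processed by the host's network stack, and that CPU cost is where most of the theoretical 10Gbps advantage tends to disappear.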



Let's look at other factors...
SAN switching and HBAs are quite expensive and can cost a bundle, but so does a proper 10Gbps network switch, and proper 10Gbps NICs and SFPs. Also, NFS is usually a licensed feature from your storage vendor. So in the real world these costs would definitely be the deciding factor here.

Which brings us to the alternative... FCoE. I've posted a write-up on this in the past, and there's good reason why I prefer it. Note, this is only my opinion. FCoE takes your payload (contained in the SCSI packet), adds the FC headers to it, and then the Ethernet headers. That's it. So even though there's more to it than a native FC frame, it's still a lot lighter than an NFS packet, since you don't need the layer 3 and layer 4 headers. So here the 10Gbps vs 8Gbps really comes into play. Another factor that makes this even more appealing is that you can run your normal network and your storage traffic over the same cable, thus saving costs. The downside, though, is that you need expensive converged switches for this.
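Extending the same back-of-the-envelope sketch, the snippet below adds an approximate FCoE frame (Ethernet headers plus the FCoE encapsulation around the full FC frame, which needs jumbo-ish frames) and multiplies each stack's wire efficiency by the raw link rate. The header sizes are approximate and link-level encoding is ignored, so take the output as an illustration of the argument rather than a performance claim.

# Rough effective data rates once per-frame header overhead is taken out.
# Header sizes are approximate and encoding overhead is ignored; illustrative only.

def effective_gbps(link_gbps, data_bytes, header_bytes):
    """Raw link rate scaled by the fraction of each frame that is payload."""
    return link_gbps * data_bytes / (data_bytes + header_bytes)

# (link speed in Gbps, payload bytes per frame, header/trailer bytes per frame)
stacks = {
    "FC (8Gbps)":             (8,  2112, 36),   # FC header + SOF/EOF + CRC
    "FCoE (10Gbps)":          (10, 2112, 68),   # Ethernet + FCoE encapsulation + FC header/CRC
    "NFS (10Gbps, 1500 MTU)": (10, 1360, 158),  # Ethernet + IP + TCP + assumed RPC/NFS headers
}

for name, (speed, data, headers) in stacks.items():
    print("{:24s} ~{:.1f} Gbps of payload".format(name, effective_gbps(speed, data, headers)))

On header bytes alone FCoE keeps almost all of the 10Gbps link for payload; the NFS stack gives more of it back in headers, and what remains of the gap over FC tends to get eaten by the TCP/IP processing on the host, which is exactly the trade-off described above.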

All in all, the question really has more to do with what you're planning for your environment than simply weighing up the pros and cons, but keeping all of this in mind and making an informed decision never hurt anyone.