  1. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #1

    Default vSAN: Network Requirements

    Hey guys,

    So I know from reading, since I've never done a vSAN before, that VMware calls for a 10G connection per host connecting to an OOB storage fabric. What I'm curious about is whether, in a home lab, a 10G connection is truly needed, or whether a quad 1G bonded link per host would suffice versus a 10G link to an FC or FCoE infrastructure.

    The only thing I guess I'm concerned about is cache size on the OOB switches, since that's just a huge amount of traffic...

    I could do it with a 10G iSCSI-capable OOB fabric, but if I'm going with 10G in the home lab, at that point I might as well just make it an FCoE or FC fabric. I'm curious, though, whether (4) 1 Gbit connections in an EtherChannel would suffice, because then I could just find a 2960G and bump the switch's memory for a larger cache.

    This is purely for a home lab; in production I would go for a dual 10G FC or FCoE configuration.

    Thoughts?
    Last edited by Deathmage; 02-15-2016 at 04:07 PM.

  3. Senior Member joelsfood
    Join Date
    Sep 2014
    Location
    Chicago, IL
    Posts
    983

    Certifications
    CCIE:DC, CCNP:DC, CCNA:DC, CCDA, VCP:DCV, VCP:NV, JNCIA-JUNOS
    #2
    10G isn't required, it's just recommended. 1G is fine (in theory), though it will depend on the rate of change of your data.

    For the record, FC doesn't come in 10G (it runs at 2/4/8/16/32 Gb), and you don't really upgrade the cache on a switch: in general, data won't leave the ASIC/SoC, so you just have the cache built into that (though different QoS policies can change how the "buckets" are assigned).
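
    To put "rate of change" in perspective, here's a rough back-of-the-envelope Python sketch (my own numbers, not from this thread), assuming the vSAN uplink is the only bottleneck and that roughly 70% of the line rate is usable after overhead:

        # Rough, hypothetical estimate: how long a full resync of a given amount of
        # vSAN data would take if the vmkernel uplink were the only bottleneck.
        # Real resync rates also depend on disks, CPU, and competing traffic.

        def resync_hours(data_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
            usable_gbps = link_gbps * efficiency        # assumed protocol/contention overhead
            seconds = (data_gb * 8) / usable_gbps       # GB -> Gb, then Gb / (Gb/s)
            return seconds / 3600

        for link in (1, 4, 10):                         # 1G, 4x1G best case, 10G
            print(f"{link:>2} Gbps -> {resync_hours(500, link):4.1f} h to move 500 GB")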

  4. Senior Member DPG
    Join Date
    Jan 2008
    Posts
    761
    #3
    vSAN utilizes local storage and Ethernet connections between the hosts. You can't use FC or FCoE.


  5. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #4
    Quote Originally Posted by joelsfood View Post
    10G isn't required, it's just recommended. 1G is fine (in theory), though it will depend on the rate of change of your data.

    For the record, FC doesn't come in 10G (it runs at 2/4/8/16/32 Gb), and you don't really upgrade the cache on a switch: in general, data won't leave the ASIC/SoC, so you just have the cache built into that (though different QoS policies can change how the "buckets" are assigned).
    Thanks for the information. I was aiming for FCoE at 10G, but it's good to know the correct speeds for FC; I sometimes get the terminology confused.

    Quote Originally Posted by DPG View Post
    vSAN utilizes local storage and Ethernet connections between the hosts. You can't use FC or FCoE.
    vSAN requires a minimum of 3 hosts to make up a cluster; wouldn't they need a storage backplane over an OOB fabric to communicate with each other?
    Last edited by Deathmage; 02-15-2016 at 05:42 PM.

  6. Senior Member DPG
    Join Date
    Jan 2008
    Posts
    761
    #5
    Quote Originally Posted by Deathmage View Post
    vSAN requires a minimum of 3 hosts to make up a cluster; wouldn't they need a storage backplane over an OOB fabric to communicate with each other?
    The vSAN network runs over IP.


  7. Senior Member joelsfood
    Join Date
    Sep 2014
    Location
    Chicago, IL
    Posts
    983

    Certifications
    CCIE:DC, CCNP:DC, CCNA:DC, CCDA, VCP:DCV, VCP:NV, JNCIA-JUNOS
    #6
    VSAN uses the IP network, as DPG mentioned, and essentially runs replication over Ethernet. Technically it doesn't even need 3 hosts; it only needs 2 plus a witness appliance.

    If you want to just check it out and get a feel for it, you don't even have to dedicate physical hosts/disks. William Lam has an appliance for download with a prebuilt 3-node VSAN setup; I expect he'll be updating it for VSAN 6.2 when the new version is officially out.

  8. Senior Member DPG
    Join Date
    Jan 2008
    Posts
    761
    #7
    Also, don't bother with link aggregation for the vSAN network. You won't see much of a difference in performance.
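
    The usual reason (worth checking against your own switch's hash settings): a port channel pins each source/destination conversation to a single member link, and vSAN traffic between two hosts is only a handful of flows, so a single flow still tops out at 1G. A toy Python model of that hashing, with made-up IPs:

        # Toy model of LAG member selection: a hash of (src, dst) picks one member
        # link per conversation, so a single vSAN host pair stays on one 1G link.
        # Real switches hash on different fields (MAC/IP/port), but the effect is similar.
        import zlib

        MEMBERS = 4  # 4 x 1G port channel

        def member_link(src: str, dst: str) -> int:
            return zlib.crc32(f"{src}->{dst}".encode()) % MEMBERS

        for src, dst in [("10.0.0.11", "10.0.0.12"),
                         ("10.0.0.11", "10.0.0.13"),
                         ("10.0.0.12", "10.0.0.13")]:
            print(f"{src} -> {dst}: member link {member_link(src, dst)}")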


  9. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #8
    I guess the next thought is: do you see a benefit to using hardware RAID, like (4) 600 GB 10k Raptors in a RAID 10 on each host, or should I just use vSAN's built-in distributed RAID? Curious if you could use them both.

  10. Senior Member DPG
    Join Date
    Jan 2008
    Posts
    761
    #9
    No benefit since vSAN is optimized for single spindles.


  11. Senior Member joelsfood
    Join Date
    Sep 2014
    Location
    Chicago, IL
    Posts
    983

    Certifications
    CCIE:DC, CCNP:DC, CCNA:DC, CCDA, VCP:DCV, VCP:NV, JNCIA-JUNOS
    #10
    Utilizing passthrough controllers (giving VSAN individual access to each disk) is generally just as fast as using the controller in RAID 0 mode, plus it lets VSAN handle hot-plug of failed drives, etc. As DPG mentioned, VSAN is designed around direct access to the disks.

  12. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #11
    Well, I'm seeing that it doesn't even support bonding, so that's kind of a letdown.

    But it does support jumbo frames; with TSO and LRO the benefits would be marginal, but I'll probably still enable it anyway.

    I'm kind of curious about making a TCP/IP stack just for vSAN in ESXi 6.0 (since now you can) and applying it to the vDS vSAN VMkernel port for that vSAN port group. I'm curious about these custom network stacks and want to see if they help with throughput in VMware. I have a Bigfoot 2100 gaming NIC in my PC at home that does the same kind of thing by bypassing the Windows stack, and the throughput is amazing...

    Quote Originally Posted by joelsfood View Post
    Utilizing passthrough controllers (giving VSAN individual access to each disk) is generally just as fast as using the controller in RAID 0 mode, plus it lets VSAN handle hot-plug of failed drives, etc. As DPG mentioned, VSAN is designed around direct access to the disks.
    Quote Originally Posted by DPG View Post
    No benefit since vSAN is optimized for single spindles.
    Ahhh, so essentially, if I did hardware RAID it could actually mess with the vSAN configuration.
    Last edited by Deathmage; 02-15-2016 at 06:42 PM.

  13. VMware Dude! TheProf
    Join Date
    Jun 2010
    Location
    Canada
    Posts
    327

    Certifications
    vExpert | CCA | CCAA | MCSA | MCTS | MCITP:EMA 2010 | VCP5-DCV/DT | VTSP4/5 | VSP 5 | Network +
    #12
    You're not supposed to do any RAID configuration for VSAN. All of the striping and replication occurs through the storage-based policies that you create initially within vCenter.
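
    For a feel of what those policies do to capacity, here's a minimal sketch of the default RAID-1 (mirroring) math, assuming FTT=n keeps n+1 copies of an object and needs 2n+1 hosts; stripe width just splits each copy into more components without adding capacity:

        # Minimal sketch of vSAN's default RAID-1 mirroring policy math:
        # FTT = n -> n + 1 full copies of each object, 2n + 1 hosts required.

        def raw_capacity_gb(vmdk_gb: float, ftt: int = 1) -> float:
            return vmdk_gb * (ftt + 1)

        def min_hosts(ftt: int = 1) -> int:
            return 2 * ftt + 1

        for ftt in (0, 1, 2):
            print(f"FTT={ftt}: a 100 GB VMDK consumes ~{raw_capacity_gb(100, ftt):.0f} GB raw, "
                  f"needs >= {min_hosts(ftt)} hosts")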

  14. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #13
    This is sweet, I don't need to sacrifice space for redundancy.

  14. Member Konflikt
    Join Date
    Nov 2013
    Location
    Hungary, Budapest
    Posts
    40

    Certifications
    ISTQB-CTFL ¤ 3xVCA ¤ VCP5/6 ¤ VCAP5-DCA/DCD ¤ vExpert'14-16 ¤ SolarWinds SCP ¤ BACP ¤ MCP ¤ MCS
    #14
    With an all-flash configuration, 10GbE is required and 1GbE is not supported.
    With hybrid, 1GbE can be used, but 10GbE is recommended.
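
    In other words (a trivial sketch of that rule, as written above):

        # All-flash vSAN needs 10GbE; hybrid is supported on 1GbE (10GbE still recommended).

        def network_supported(all_flash: bool, link_gbps: float) -> bool:
            return link_gbps >= 10 if all_flash else link_gbps >= 1

        print(network_supported(all_flash=False, link_gbps=1))   # True  - hybrid on 1G
        print(network_supported(all_flash=True,  link_gbps=1))   # False - all-flash needs 10G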

  16. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #15
    Quote Originally Posted by Konflikt View Post
    With an all-flash configuration, 10GbE is required and 1GbE is not supported.
    With hybrid, 1GbE can be used, but 10GbE is recommended.

    Well, I have just one 120 GB SSD per host, so I guess 1G will suffice with jumbo frames, TSO, and LRO.

  17. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #16
    For vSAN disk groups, is it at least one SSD per host or one SSD per hard drive?

    Like, if an R610 has 6 open slots, could I do 1 + 5 or 3 + 3?

  18. Senior Member joelsfood
    Join Date
    Sep 2014
    Location
    Chicago, IL
    Posts
    983

    Certifications
    CCIE:DC, CCNP:DC, CCNA:DC, CCDA, VCP:DCV, VCP:NV, JNCIA-JUNOS
    #17
    At least one SSD per host. SSD capacity per host should be at least 10% of the spinning disk capacity, e.g., 2x 600 GB SAS spinning disks to go with your 120 GB SSD.
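
    A quick way to sanity-check your own mix against that ~10% guideline (the numbers below are just the example above):

        # Check the ~10% cache-to-capacity rule of thumb against a proposed disk mix.

        def cache_ratio(ssd_gb: float, capacity_gb: float) -> float:
            return ssd_gb / capacity_gb

        ssd_gb, capacity_gb = 120, 2 * 600           # 120 GB SSD vs 2 x 600 GB SAS
        ratio = cache_ratio(ssd_gb, capacity_gb)
        print(f"{ssd_gb} GB cache / {capacity_gb} GB capacity = {ratio:.0%} "
              f"({'meets' if ratio >= 0.10 else 'under'} the ~10% guideline)")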

  19. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #18
    I'd be interested to know if this Black2 drive is supported...

    http://www.amazon.com/dp/B00GSJ9X4Q

  20. VMware Dude! TheProf
    Join Date
    Jun 2010
    Location
    Canada
    Posts
    327

    Certifications
    vExpert | CCA | CCAA | MCSA | MCTS | MCITP:EMA 2010 | VCP5-DCV/DT | VTSP4/5 | VSP 5 | Network +
    #19
    If you're using an SSD for caching, it's one SSD per disk group, and you can have multiple disk groups in a host, which means, technically speaking, you can have multiple SSDs in one host. The caching tier should be at least 10% of the total storage of your disk group.
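
    Here's a small sketch that checks a proposed per-host layout against those rules: one cache SSD per disk group, up to 7 capacity disks per group, and, per VMware's published maximums at the time (an assumption worth verifying for your release), up to 5 disk groups per host.

        # Validate a per-host vSAN disk-group layout: each tuple is
        # (cache_ssds, capacity_disks) for one disk group.

        def check_layout(disk_groups):
            problems = []
            if len(disk_groups) > 5:                      # assumed per-host maximum
                problems.append("more than 5 disk groups in one host")
            for i, (cache, capacity) in enumerate(disk_groups, 1):
                if cache != 1:
                    problems.append(f"group {i}: needs exactly 1 cache SSD, has {cache}")
                if not 1 <= capacity <= 7:
                    problems.append(f"group {i}: capacity disks must be 1-7, has {capacity}")
            return problems

        # e.g. an R610 with 6 bays: 1 SSD + 5 HDDs, or three 1 SSD + 1 HDD groups
        print(check_layout([(1, 5)]) or "1 + 5 layout looks valid")
        print(check_layout([(1, 1)] * 3) or "3 + 3 (three 1 + 1 groups) looks valid")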

  21. Junior Member
    Join Date
    Aug 2012
    Posts
    11

    Certifications
    MCTS 70-680, MS 74-409, VCA-DCV, VCP-DCV
    #20
    I run the ROBO version of vSAN in my home lab, and it definitely doesn't need more than 1 Gbps. ROBO requires only 2 hosts, but you need a third witness server. Cormac Hogan is your man when it comes to vSAN, so check his website.

    For a non-ROBO setup, if you're planning to use this for production (even home-lab production), you want 4 hosts (even though it will work with only 3). See here for more.

    In a hybrid scenario, a single disk group can have only 1 SSD and up to 7 magnetic disks. The rule of thumb, as mentioned before, is that you should have around 10% of SSD cache backing your spinning-disk storage. So keep it under 1.5 TB in spinning disks and you should be fine; it all depends on how your cache is utilised, really. Use SexiGraf for vSAN monitoring, it's awesome (once the new version of vSAN hits GA you shouldn't need it anymore, as monitoring is improved).

    If your server's controller doesn't do pass-through, you will need to present each disk manually as a single-disk RAID 0 array.

    You can check hardware compatibility here: http://partnerweb.vmware.com/service/vsan/all.json. This is the definitive source for what is or isn't compatible with your hardware platform on the version of vSphere you're using. If something isn't supported, it still might work, but you can hit all sorts of issues with performance, etc.; have a read here to get an idea. The queue depth on your RAID controller needs to be adequately large; 600 is a good start, and the more the better.
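
    Since that all.json is just a big JSON document, here's a quick way to grep it for your controller or disks without caring about its exact schema (the URL may have moved since this was posted, and "PERC H700" below is just a placeholder search term):

        # Hedged sketch: fetch the vSAN HCL JSON linked above and search every string
        # value for a model name, without assuming anything about the file's structure.
        import json
        import urllib.request

        HCL_URL = "http://partnerweb.vmware.com/service/vsan/all.json"

        def find_matches(node, needle, path=""):
            """Recursively yield (path, value) for string values containing needle."""
            if isinstance(node, dict):
                for key, value in node.items():
                    yield from find_matches(value, needle, f"{path}/{key}")
            elif isinstance(node, list):
                for i, value in enumerate(node):
                    yield from find_matches(value, needle, f"{path}[{i}]")
            elif isinstance(node, str) and needle.lower() in node.lower():
                yield path, node

        with urllib.request.urlopen(HCL_URL) as resp:
            hcl = json.load(resp)

        for path, value in find_matches(hcl, "PERC H700"):   # placeholder model string
            print(path, "->", value)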

    With all the recent developments, vSAN is growing to be a great product, so definitely worth checking out!
    Last edited by am3rig0; 02-26-2016 at 09:50 AM.

  22. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #21

    Sweet, thanks for the feedback.

  23. Senior Member
    Join Date
    Apr 2013
    Posts
    2,413
    #22
    Pondering: I've got room for one extra drive in my R610s. Another 300 GB WD Enterprise 10k Raptor, or a 64/120 GB SSD for Flash Cache?

    I already have a Samsung EVO 120 GB SSD for the (3) 300 GB Raptors.

    I'd like to use Flash Cache on top of vSAN if it's supported, but it might be counterintuitive.

  24. Junior Member
    Join Date
    Aug 2012
    Posts
    11

    Certifications
    MCTS 70-680, MS 74-409, VCA-DCV, VCP-DCV
    #23
    I don't think you will be able to enable vFlash on a VM stored on vSAN.

    But if you have a lot of RAM, you could give Pernix Freedom a go; sub-0.1 ms latency is cool, but most likely overkill. I'm not sure if it's supported with vSAN, so you would need to check the documentation or reach out to them.

  25. Senior Member jdancer
    Join Date
    May 2011
    Posts
    430

    Certifications
    CCNA: R&S, Security, CyberOps; CompTIA: A+, N+, S+, L+, P+; MTA, MCP, ITIL Foundation, CIW DDS
    #24
    Quote Originally Posted by TheProf View Post
    You're not supposed to do any RAID configuration for VSAN. All of the striping and replication occurs through the storage-based policies that you create initially within vCenter.
    So, you set up your physical disks with NO configuration at all, correct? Not even JBOD?

  26. Senior Member jdancer
    Join Date
    May 2011
    Posts
    430

    Certifications
    CCNA: R&S, Security, CyberOps; CompTIA: A+, N+, S+, L+, P+; MTA, MCP, ITIL Foundation, CIW DDS
    #25
    Quote Originally Posted by Deathmage View Post
    Pondering: I've got room for one extra drive in my R610s. Another 300 GB WD Enterprise 10k Raptor, or a 64/120 GB SSD for Flash Cache?

    I already have a Samsung EVO 120 GB SSD for the (3) 300 GB Raptors.

    I'd like to use Flash Cache on top of vSAN if it's supported, but it might be counterintuitive.
    I think it's also best practice to have a dedicated disk for the ESXi scratch partition.
