  1. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #51
It's funny, in my original reply I had the multi-threaded portion in there as well but took it out for some reason, probably due to poor editing while I was typing out my thoughts. Either way, always check your applications to ensure they're actually taking advantage of multi-threading. As for NUMA nodes, here's an example. If your VM needs 16 vCPUs (I have one right now that will eat all 16 when I give them to it) and your server is a dual 8-core system, change your VM to 2 sockets, 8 cores. That should take care of vNUMA for you. Also make sure you're not breaking the NUMA boundary when it comes to RAM: if you only have 96GB of RAM in your system, a >48GB VM will break the NUMA boundary.
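To put that in config terms, here's a minimal sketch of the relevant .vmx entries for that 16-vCPU example (the values are purely illustrative; the same topology can be set through the Cores per Socket option in the client):

Code:
numvcpus = "16"
cpuid.coresPerSocket = "8"

16 vCPUs at 8 cores per socket gives the guest 2 virtual sockets, which lines up with the 2 physical NUMA nodes of a dual 8-core box.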

  3. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #52
Oh yes, another reason why you should right-size your VMs! Watch out for both CPU and RAM when it comes to the NUMA config; either one can break the NUMA boundary and performance will suffer. I've seen people just disable NUMA to get rid of NUMA problems, but they should have planned their hosts better to begin with.

And guys, there is a difference in performance between choosing 2 sockets with 8 cores and 8 sockets with 2 cores, depending on your NUMA node config.
    Last edited by Essendon; 03-24-2014 at 05:15 AM.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  4. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #53
    Ok, next one:

Company policy states that all management traffic for the vSphere environment must be separated into its own VLAN. The networking team has provided you with that VLAN, put it on the correct links, and given you the subnet information. While configuring the management VMkernel port, you discover that, by design, the default gateway for your subnet does not respond to ping. Your hosts are in an HA-enabled cluster. What problems could this give you, and how would you go about mitigating them?

  5. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #54
    By default HA needs to ping the gateway to determine whether a host is isolated or not. To fix it either tell your network team to get their act together (accompanied by donuts) or change the advanced settings of HA (by using das.isolationaddress) to use a different IP.
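For example, you could add something like this to the cluster's HA advanced options (the address is just a made-up pingable IP on the management subnet):

Code:
das.isolationaddress0 = 10.10.10.250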

    PS: VMware HA isolation failover would occur if the GW is not pingable.

  6. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #55
    Quote Originally Posted by jibbajabba View Post
    By default HA needs to ping the gateway to determine whether a host is isolated or not. To fix it either tell your network team to get their act together (accompanied by donuts) or change the advanced settings of HA (by using das.isolationaddress) to use a different IP.

    PS: VMware HA isolation failover would occur if the GW is not pingable.
Changing the advanced settings is the right direction, but I'm still missing one more advanced setting.

  7. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #56
    Above is assuming you use an IP in the same range as your management network. If that is not the case then
    Code:
    das.allowNetworkX
    To configure a different network used to check the isolation address.
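For example (the port group name here is purely illustrative):

Code:
das.allowNetwork0 = "Management Network"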

  8. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #57
    Quote Originally Posted by jibbajabba View Post
    Above is assuming you use an IP in the same range as your management network. If that is not the case then
    Code:
    das.allowNetworkX
    To configure a different network used to check the isolation address.
Yes, I'm using an IP in the same subnet, hence the default gateway, which is almost always in the same segment.

Not quite what I'm looking for though. Let me give you a slight hint: the default gateway is already set up on the ESXi host, pointing to the non-pingable gateway the network team provided.

  9. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #58
I think he's after das.usedefaultisolationaddress=false, combined with the other advanced setting.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  10. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #59
    Quote Originally Posted by Essendon View Post
I think he's after das.usedefaultisolationaddress=false, combined with the other advanced setting.

  11. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #60
I thought das.usedefaultisolationaddress=false was implied when using das.isolationaddress

  12. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #61
    Quote Originally Posted by jibbajabba View Post
I thought das.usedefaultisolationaddress=false was implied when using das.isolationaddress
According to a few KB articles and the HA product documentation you have to specify it manually, and you should do so in production. That's what I was after, otherwise good post.
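So to pull the whole fix together, the HA advanced options end up looking something like this (the IP is again just an example address that does respond to ping):

Code:
das.usedefaultisolationaddress = false
das.isolationaddress0 = 10.10.10.250

And if you want HA to check more than one address, you can keep going with das.isolationaddress1 up to das.isolationaddress9.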

  13. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #62
    Quote Originally Posted by tomtom1 View Post
According to a few KB articles and the HA product documentation you have to specify it manually, and you should do so in production. That's what I was after, otherwise good post.
Never assume, eh?

    Fair enough ... Never had to do that in production (that's my excuse anyway )

  14. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #63
    Question 9

Your company has 2 clusters of 10 servers each with DRS and HA enabled. All these servers are HP DL380 G7's and have 2 years left before they are EOL'd. There are ongoing discussions about introducing more servers into the clusters to cater for increased growth. Some company executive (aka smartypants) decides to buy 4 new servers with AMD processors while you are away on holidays. What can you do about these servers? Are you able to add them to the 2 pre-existing clusters? Discuss your options.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  15. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #64
Sure you can add them, but it's just easier to create a separate cluster for the AMD machines outside of the two Intel-based ones. There's really no reason to keep them within the same cluster. vMotion and dvSwitch boundaries are at the datacenter object, so you can still have all the functions you need for the VMs in both clusters. vMotion workloads to the AMDs as you see fit. You can vMotion between Intel and AMD, however the VM must be offline to do so.

  16. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #65
    Quote Originally Posted by Essendon View Post
    Question
Your company has 2 clusters of 10 servers each with DRS and HA enabled. All these servers are HP DL380 G7's and have 2 years left before they are EOL'd. There are ongoing discussions about introducing more servers into the clusters to cater for increased growth. Some company executive (aka smartypants) decides to buy 4 new servers with AMD processors while you are away on holidays. What can you do about these servers? Are you able to add them to the 2 pre-existing clusters? Discuss your options.
Hooking in on this one: assuming (and according to my research) the DL380s have Intel CPUs, vMotion (and therefore DRS) will not be happy, since VMs cannot be live-migrated between hosts with different CPU vendors. You can't just add the new hosts to the clusters, but you do have a few options, each with pros and cons.

1) Create a separate cluster for the AMD-based hosts and enable HA/DRS on this cluster according to company policy.
Pros: Maximum compatibility for hosts and VMs placed in this cluster.
Cons: (Can) create(s) additional management overhead and could have an impact on things like licensing.

2) Another option is to add 2 hosts to each of the existing clusters (assuming EVC is not enabled on the existing clusters) and set the new hosts as dedicated failover hosts. This ensures that VMs will never be vMotioned to these hosts, but they will be able to pick up some of the workload if an HA event occurs.
Pros: Better use of the existing cluster infrastructure, saving on additional (management) overhead.
Cons: The hosts designated as dedicated failover hosts will never be used until an HA event occurs.

Just my 2 cents, but I think I'd go with option 1, which would maximize the usability of the new hosts.

  17. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #66
    Quote Originally Posted by QHalo View Post
    You can vMotion between Intel and AMD, however the VM must be offline to do so.
    At which time it would be a cold migration, which is technically not the same as a vMotion.

  18. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #67
    potato patato

  19. Member Konflikt's Avatar
    Join Date
    Nov 2013
    Location
    Hungary, Budapest
    Posts
    40

    Certifications
    ISTQB-CTFL ¤ 3xVCA ¤ VCP5/6 ¤ VCAP5-DCA/DCD ¤ vExpert'14-16 ¤ SolarWinds SCP ¤ BACP ¤ MCP ¤ MCS
    #68
I would leave the old Intel-based servers in the original cluster and make a new cluster from the new AMD Opteron-based servers. The main reason is that vMotion (and therefore DRS) won't work between Intel and AMD CPUs. Maybe in the future in eEVC mode (extended EVC for inter-vendor vMotion - just kidding).
So it wouldn't be a good idea to mix them. Even if all the servers were based on the same CPU vendor (Intel or AMD), I would still go with 2 clusters: the compute capacity difference per host between the almost-EOL servers and the just-purchased ones is probably huge, so mixing them would not be great for HA.
Drawback: we need more spare resources (depending on the HA policy) for these two clusters, if HA is in scope.

  20. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #69
    Quote Originally Posted by tomtom1 View Post
Cons: (Can) create(s) additional management overhead and could have an impact on things like licensing.
Outside of CPU socket count, there are no other licensing concerns that I'm aware of that you wouldn't also run into if they were Intel-based CPUs.

  21. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #70
Yeah, I haven't come across such a licensing constraint either. Have you, Tom?

Another thing to keep in mind is that most people have EVC-enabled clusters, and such clusters will not allow a different vendor's hosts to be added. So it is best to have a separate cluster for the AMD hosts. Oh, and clip Ms. Smartypants' wings!
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  22. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #71
    Quote Originally Posted by tomtom1 View Post
Another option is to add 2 hosts to each of the existing clusters (assuming EVC is not enabled on the existing clusters) and set the new hosts as dedicated failover hosts.
I had that one covered right here. I once came across an application that only supported Intel processors, so that was a constraint for a scenario like this, which was a really good one by the way. Need to think of a good one for tomorrow.

  23. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #72
Sorry, my bad, I didn't read that quite well enough! Thanks for shedding light on your experience with the application and the strange licensing constraint. The IT world never ceases to surprise, does it?!
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  24. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #73
    Question 10

A company has hired you as their virtualization specialist to get their stretched cluster going. Both the network and the storage are stretched, and there is only 1 vCenter, as is usually the case with a stretched cluster (as opposed to having 2 in an SRM scenario).


    Stretched1.JPG

    - Is there something missing on the hardware side of things? Discuss.

    - Discuss their HA settings. Specifically talk about
• Admission Control. Enable or disable? What policy setting do they need to ensure all VMs start up successfully in case either datacenter (the entire datacenter, that is) fails?
• How many datastores should they use for their heartbeating? What advanced HA setting is needed?
• What will happen to the VMs running on the far-left host if it fails?
    - Discuss their DRS settings. Specifically talk about
    • How do they ensure that the VM workload is balanced across the stretched cluster?
• How do they also ensure that VMs successfully start up when a host (or hosts) fails? Hint: talk about DRS rules.
    - Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
    Last edited by Essendon; 03-28-2014 at 12:56 AM.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  25. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #74
    - Is there something missing on the hardware side of things? Discuss.
Assuming the design leans more towards a physical design than a logical one, I'm missing some redundancy in the pNICs and the FC switches: only 1 NIC is drawn per host, connected to a single Fibre Channel switch.

    - Discuss their HA settings. Specifically talk about
• Admission Control. Enable or disable? What policy setting do they need to ensure all VMs start up successfully in case either datacenter (the entire datacenter, that is) fails?
• I would go with the Power Off isolation response, because the vSphere 5 default of Leave Powered On could create something you want to avoid at all costs: a split-brain scenario. Powering off immediately in a host failure event ensures that the hosts on the other side can start the VMs.
• How many datastores should they use for their heartbeating? What advanced HA setting is needed?
• According to the Metro Cluster Case Study you should use a minimum of 4 datastores, 2 per site. To increase the default of 2 heartbeat datastores, you need to set the HA advanced setting das.heartbeatDsPerHost to 4 (see the snippet at the end of this post).
• What will happen to the VMs running on the far-left host if it fails?
The other host in the local site will run these VMs, if you specify this with DRS rules.


    - Discuss their DRS settings. Specifically talk about
    • How do they ensure that the VM workload is balanced across the stretched cluster?
• Create DRS "should" rules to ensure that part of the workload runs specifically on either the left or the right half of the stretched cluster.
• How do they also ensure that VMs successfully start up when a host (or hosts) fails? Hint: talk about DRS rules.
• By using DRS "should" rules, you can ensure that a host local to the site runs the workload first, unless that host fails too. Because a "should" rule isn't a hard rule, HA and things like maintenance mode will continue to work even after both hosts in a site have failed.
- Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
    I'd say so, assuming the storage is capable of the correct replication.
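To make the heartbeat-datastore setting concrete, the cluster's HA advanced options would contain something like this (a minimal sketch, just matching the 4-datastore recommendation above):

Code:
das.heartbeatDsPerHost = 4

The heartbeat datastores themselves are then picked under the cluster's Datastore Heartbeating settings, where you can tell HA to take your preferred datastores into account.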

  26. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #75
    Anybody else with other ideas?
