  1. Senior Member
    Join Date
    Apr 2013
    Posts
    2,411
    #1

    Default vCPU Design Considerations

    Hey guys,

    Anyone know of a rough design consideration for vCPUs in a datacenter? I like to always start with 2 vCPUs and work my way up, but I'm curious how others come to determinations for vCPU counts based on a VM's demand levels.

  3. Not IT n00b dave330i's Avatar
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,963

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #2
    Depends on how many cores per ESXi host. With modern 12+ core-per-socket CPUs, 2 vCPUs is a good starting point.

  4. Senior Member
    Join Date
    Mar 2013
    Location
    Midwest
    Posts
    512

    Certifications
    MCSA Server 2008, VCP 5 DCV, CompTIA A+, Net+, 70-640, 70-642, 70-620, 70-646
    #3
    Funny, Death, I was just researching this myself. I've always followed the 'physical world' in terms of going with sockets vs. cores. Mostly I've never gone above 2 sockets, because we don't have more than 2 physical sockets on a host in our environment.

    What I've learned this week, though, is that it really doesn't matter - at least that's what the KB articles I've found from VMware say. The sockets-vs.-cores option was introduced as a licensing workaround (that, and older OSes may not recognize more than 4 sockets).

    I'm still not sure which to do - I'm now leaning towards just scaling with sockets and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read this lets the VMware CPU scheduler properly schedule CPU cycles.

  5. Senior Member
    Join Date
    Apr 2013
    Posts
    2,411
    #4
    I only ask because we have an Exchange box having issues, and I think my predecessor didn't grasp the idea of vCPUs. The Exchange box has 16 vCPUs and it's having performance issues...

    I would think maybe 2 vCPUs for 2,000+ users, or at a stretch go to 4 vCPUs - NOT 16!!!!!!!

    I'm pretty sure our CPU Scheduler is ******** a brick.
    Last edited by Deathmage; 04-15-2016 at 09:10 PM.

  6. Self-Described Huguenot blargoe's Avatar
    Join Date
    Nov 2005
    Location
    NC
    Posts
    4,095

    Certifications
    VCAP5-DCA; VCP 3/4/5/6 (DCV); EMCSA:CLARiiON; Linux+; MCSE:M 2000/2003; MCSE:S 2000/2003; MCTS:Exch2007; Security+; A+; CCNA (expired)
    #5
    Quote Originally Posted by Lexluethar View Post
    What I've learned this week, though, is that it really doesn't matter - at least that's what the KB articles I've found from VMware say. The sockets-vs.-cores option was introduced as a licensing workaround (that, and older OSes may not recognize more than 4 sockets).
    I have run into this with some SharePoint VMs that have been in this environment for 5-6 years... the machines needed more vCPUs, but the OS did not support additional sockets. Fortunately, VMware doesn't care how many sockets and cores per socket you use. To the vmkernel, a core is a core.

    Quote Originally Posted by Lexluethar View Post
    I'm still not sure which to do - I'm now leaning towards just scaling with sockets and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read this lets the VMware CPU scheduler properly schedule CPU cycles.
    I do the opposite, because with hot-add you can edit the number of vCPUs on the fly in a running VM, but not the number of cores per socket.

    So I start with one socket and two cores per socket by default. If I think it is a VM that may need to scale up, I might go with 2 sockets/2 cores per socket, or possibly 1 socket/4 cores per socket. Especially for Windows 2008 VMs, where there are more limitations with socket count in the OS.
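    Just to make the arithmetic explicit, here's a rough Python sketch - the numbers are made up, and I'm assuming hot-add leaves cores per socket fixed, so the total can only grow in multiples of it:

    # Rough sketch: total vCPUs = sockets x cores per socket.
    # Assumption: with CPU hot-add enabled, cores per socket stays fixed,
    # so the vCPU total can only grow in multiples of that value.

    def total_vcpus(sockets, cores_per_socket):
        return sockets * cores_per_socket

    start = total_vcpus(sockets=1, cores_per_socket=2)   # my default: 1 x 2 = 2
    scaled = total_vcpus(sockets=2, cores_per_socket=2)  # hot-add another "socket" of 2
    print(start, scaled)                                  # 2 4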
    IT guy since 12/00

    Recent: 10/27/2017 - Passed Microsoft 70-410 (one exam left for MCSA 2012)
    Working on: MCSA 2012 upgrade from 2003 (to heck with 2008!!), MCSA 2016 upgrade, more Linux
    Thinking about: VCP6-CMA, AWS Solution Architect (Associate), Python, VCAP6-DCD (for completing VCIX)

  7. Senior Member
    Join Date
    Apr 2013
    Posts
    2,411
    #6
    Quote Originally Posted by blargoe View Post
    I have run into this with some SharePoint VMs that have been in this environment for 5-6 years... the machines needed more vCPUs, but the OS did not support additional sockets. Fortunately, VMware doesn't care how many sockets and cores per socket you use. To the vmkernel, a core is a core.



    I do the opposite, because with hot-add you can edit the number of vCPUs on the fly in a running VM, but not the number of cores per socket.

    So I start with one socket and two cores per socket by default. If I think it is a VM that may need to scale up, I might go with 2 sockets/2 cores per socket, or possibly 1 socket/4 cores per socket. Especially for Windows 2008 VMs, where there are more limitations with socket count in the OS.
    Would you make an Exchange box a 4/4 for 2008 R2? or 1/4?

    I mean, you just never think about CPU cycles, and this is now making me ponder the design of CPU sockets and virtual cores, lol...

  8. Not IT n00b dave330i's Avatar
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,963

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #7
    Quote Originally Posted by Lexluethar View Post
    Funny, Death, I was just researching this myself. I've always followed the 'physical world' in terms of going with sockets vs. cores. Mostly I've never gone above 2 sockets, because we don't have more than 2 physical sockets on a host in our environment.

    What I've learned this week, though, is that it really doesn't matter - at least that's what the KB articles I've found from VMware say. The sockets-vs.-cores option was introduced as a licensing workaround (that, and older OSes may not recognize more than 4 sockets).

    I'm still not sure which to do - I'm now leaning towards just scaling with sockets and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read this lets the VMware CPU scheduler properly schedule CPU cycles.

    There is a performance difference between socket and core.

    Does corespersocket Affect Performance? - VMware vSphere Blog - VMware Blogs

    Always use sockets unless there's a licensing issue.
    2017 Certification Goals: Fun filled world of AWS
    "Simplify, then add lightness" -Colin Chapman

  9. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #8
    Quote Originally Posted by dave330i View Post
    Always use sockets unless licensing issue.
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPUs dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that once people get wind of the idea of hot-adding CPUs to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware has sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1. Not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design
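
    To put rough numbers on the sockets/cores/licensing point above, here's a quick Python sketch - the license cap, the vCPU requirement and the NUMA node size are all made-up example values, not a recommendation:

    # Rough sketch: pick a sockets / cores-per-socket layout for a VM that needs
    # a given number of vCPUs, under a guest-licensing cap on sockets, and keep
    # cores per socket within the host's NUMA node size. Example values only.

    def pick_layout(vcpus_needed, max_sockets, numa_node_cores):
        for sockets in range(1, max_sockets + 1):
            if vcpus_needed % sockets == 0:
                cores_per_socket = vcpus_needed // sockets
                if cores_per_socket <= numa_node_cores:
                    return sockets, cores_per_socket
        return None  # can't satisfy the constraints cleanly

    # Licensed for 2 sockets, needs 16 vCPUs, 8-core NUMA nodes on the host:
    print(pick_layout(vcpus_needed=16, max_sockets=2, numa_node_cores=8))  # (2, 8)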

  10. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #9
    Being a little pedantic, the thread's title should have been VM design considerations, not vCPU design considerations. You don't do vCPU designs.

  11. DPG
    Senior Member DPG's Avatar
    Join Date
    Jan 2008
    Posts
    760
    #10
    Quote Originally Posted by Deathmage View Post
    I only ask because we have an Exchange box having issues, and I think my predecessor didn't grasp the idea of vCPUs. The Exchange box has 16 vCPUs and it's having performance issues...

    I would think maybe 2 vCPUs for 2,000+ users, or at a stretch go to 4 vCPUs - NOT 16!!!!!!!

    I'm pretty sure our CPU Scheduler is ******** a brick.
    The 16 vCPUs aren't going to impact performance unless there is contention with other VMs on the same host. Which Exchange roles does the VM have running? I run into memory-hog implementations of Exchange much more often than ones that have CPU issues.


  12. kj0
    Apple and VMware kj0's Avatar
    Join Date
    Apr 2012
    Location
    Brisbane, Australia.
    Posts
    742

    Certifications
    vExpert x 4 | Apple Mac OS X Associate | Cert III - IT.
    #11
    Even if you don't have hot-add enabled, oversizing the vCPUs can create overhead as well.
    2017 Goals: VCP6-DCV | VCIX
    Blog: http://readysetvirtual.wordpress.com

  13. Senior Member
    Join Date
    Mar 2013
    Location
    Midwest
    Posts
    512

    Certifications
    MCSA Server 2008, VCP 5 DCV, CompTIA A+, Net+, 70-640, 70-642, 70-620, 70-646
    #12
    As someone said, I would NOT enable hot-add on all of your VMs - only ones where downtime is not an option and an under-performing server is a huge issue. What I've read regarding hot-add is that having it enabled causes a fair amount of overhead. What I understand is that when hot-add is enabled, the hypervisor has to assume that at any point you are going to add all available CPUs to that VM. With that said, VMware takes that into account and has to soft-allocate those resources to the VM. Again, is that practical? I don't know. I've just read really wonky things about earlier versions of hot-add, and the KB articles I've found regarding it in 5.5 (because I've played with this idea) say it causes some overhead, so use it sparingly.

    As for the vCPU thing, I'm still not sure, man. Okay, you said just use sockets to allow the scheduler to do its thing, but the scheduler is doing the EXACT same thing with cores as well. Those threads are handled in the same fashion. The two big differences I've heard of are licensing considerations and NUMA awareness. If you have an application that is NUMA aware, you can use multiple cores and the application will perform better without relying on the CPU scheduler.

  14. VMware Dude! TheProf's Avatar
    Join Date
    Jun 2010
    Location
    Canada
    Posts
    327

    Certifications
    vExpert | CCA | CCAA | MCSA | MCTS | MCITP:EMA 2010 | VCP5-DCV/DT | VTSP4/5 | VSP 5 | Network +
    #13
    Quote Originally Posted by Essendon View Post
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPUs dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that once people get wind of the idea of hot-adding CPUs to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware has sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1. Not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design
    I agree!

    In fact I always start with 1 vCPU and work my way up (assuming we're talking about VDI).

  15. Not IT n00b dave330i's Avatar
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,963

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #14
    Quote Originally Posted by Essendon View Post
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPUs dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that once people get wind of the idea of hot-adding CPUs to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware has sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1. Not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design
    You'll be hard pressed to find a modern app requiring a single CPU.
    2017 Certification Goals: Fun filled world of AWS
    "Simplify, then add lightness" -Colin Chapman

  16. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #15
    @Dave - Plenty floating around in the environment I look after (dozens of vCenters, ~800 hosts, god-knows-how-many VMs).

  17. Cyber Donkey slinuxuzer's Avatar
    Join Date
    Jul 2003
    Location
    East Texas
    Posts
    617

    Certifications
    VCDX:NV - A+ Net+ Sec+ MCSA08 CISSP CCNA B.S. IT/WGU
    #16
    I am for starting with a single socket and a single core. You can actually drive down performance for some applications by assigning a second core. For instance, if you have a single-threaded application that will never be able to use that second core, the second vCPU still has to be scheduled, and that takes overhead. We proved this out in the VMware Optimize and Scale course - fewer operations per minute (OPM).

    Also, over-allocating vCPUs causes a similar problem. Once you start driving your hosts beyond a 4:1 vCPU-to-pCPU consolidation ratio, you will start driving CPU ready times up. At this point it isn't a gigahertz problem; it becomes a problem of how long it takes the vCPUs that have work to do to get scheduled on the underlying resource - they will be in a longer line with vCPUs that don't have work to do.

    The general rule of thumb is to try to keep your vCPU-to-pCPU consolidation ratio at 4:1 or under.

    vCPU hot-add: turning this on actually disables vNUMA.

    Also, I would have to go back and read up on some things, but the general recommendation is to try to make your VM socket layout mirror the underlying host. There is overhead involved with having a VM that has more sockets than the host it is running on; it's basically a conversion that has to take place before hitting the hardware.

    There are also some design factors that come in with monster VMs and sizing them with one or more sockets. Basically each socket is assigned a memory bank, and you could see some improvements memory-wise by allowing the VM access to only one memory bank - avoiding remote memory calls and traversing the QPI link between NUMA nodes.
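
    If you want to put a number on CPU ready, the usual conversion from the ready 'summation' counter (milliseconds accumulated over a sampling interval) to a percentage is roughly this - a sketch with made-up numbers, so double-check against VMware's own docs:

    # Rough sketch: convert a CPU ready summation value (ms of ready time over
    # one sampling interval) to a percentage. Real-time charts sample every
    # 20 seconds; the VM-level counter sums across all vCPUs, so also look at
    # the per-vCPU values. Numbers here are hypothetical.

    def cpu_ready_percent(ready_ms, interval_seconds):
        return ready_ms / (interval_seconds * 1000.0) * 100.0

    # e.g. 1,500 ms of ready time in a 20-second real-time sample:
    print(round(cpu_ready_percent(1500, 20), 1))  # 7.5 - high enough to worry about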

  18. Not IT n00b dave330i's Avatar
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,963

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #17
    Quote Originally Posted by slinuxuzer View Post
    The general rule of thumb is to try to keep your vCPU-to-pCPU consolidation ratio at 4:1 or under.
    The 4:1 is for 1-vCPU VMs. You'll have to lower the ratio for multi-vCPU VMs, or you'll start running into CPU ready issues.

    @Essendon - My experience is 2 or more CPUs lately.
    2017 Certification Goals: Fun filled world of AWS
    "Simplify, then add lightness" -Colin Chapman

  19. Senior Member
    Join Date
    Apr 2013
    Posts
    2,411
    #18
    Well here, maybe you guys can make sense of this then. In the production cluster where the Exchange box is, all the other VMs have 4 vCPUs (2 sockets, 2 cores), but the hosts can't sustain the CPUs. Moreover, all of the VMs only use like 400 MHz at all times, so why have that much processor power needing to be processed by the CPU scheduler?


    M630 pic.jpg

    Prior to me coming onboard they never knew about vROps, and the past week it has been running has been alarming. They score a whopping 6 out of 100.


    This is actually what I was thinking of - see, the Exchange box is using cores, but the way it's configured it's literally hogging up 8 sockets per Xeon E5-2697 v3. On top of that there are, geez, 30 other VMs with 4 vCPUs each. The cluster just can't sustain the load - that poor CPU scheduler.

    Yes, the CPUs aren't the only problem here. They do have storage issues; their SAN only has 2% free space out of 70 TB, with a 40 TB overprovision. They really needed this new VNX SAN - the 2007-era CLARiiON was showing its age.

    Exchange CCR.JPG

    Thanks for the feedback so far, guys. I've got a feeling that if we were all in a room someplace we could talk for hours.


    Quote Originally Posted by slinuxuzer View Post
    I am for starting with a single socket and a single core. You can actually drive down performance for some applications by assigning a second core. For instance, if you have a single-threaded application that will never be able to use that second core, the second vCPU still has to be scheduled, and that takes overhead.
    That's exactly what I've been thinking, and it's probably taxing the vmkernel. The Exchange box, as shown above, doesn't even use the MHz of one of the Xeon's cores - I think the max I saw was 1200 MHz. But even if it only needs one core, since it has 4 sockets and 4 cores per socket they all still have to be scheduled, and the wait time for that kind of resource just seems like a performance hit and a waste of CPU cycles.
    Last edited by Deathmage; 04-18-2016 at 02:46 PM.

  20. Self-Described Huguenot blargoe's Avatar
    Join Date
    Nov 2005
    Location
    NC
    Posts
    4,095

    Certifications
    VCAP5-DCA; VCP 3/4/5/6 (DCV); EMCSA:CLARiiON; Linux+; MCSE:M 2000/2003; MCSE:S 2000/2003; MCTS:Exch2007; Security+; A+; CCNA (expired)
    #19
    I wonder if your storage issues are compounding your vCPU issues; if the kernel is waiting on the storage driver, all 8 vCPUs are going to be waiting, and the guest OS sees high "System" CPU time.
    IT guy since 12/00

    Recent: 10/27/2017 - Passed Microsoft 70-410 (one exam left for MCSA 2012)
    Working on: MCSA 2012 upgrade from 2003 (to heck with 2008!!), MCSA 2016 upgrade, more Linux
    Thinking about: VCP6-CMA, AWS Solution Architect (Associate), Python, VCAP6-DCD (for completing VCIX)

  21. Self-Described Huguenot blargoe's Avatar
    Join Date
    Nov 2005
    Location
    NC
    Posts
    4,095

    Certifications
    VCAP5-DCA; VCP 3/4/5/6 (DCV); EMCSA:CLARiiON; Linux+; MCSE:M 2000/2003; MCSE:S 2000/2003; MCTS:Exch2007; Security+; A+; CCNA (expired)
    #20
    Looks like I have some things to re-think in my environment based on the discussion on this thread. That's why I love this place.

    I am pretty much set up based on the understanding I had of the way things worked 4 years ago, and haven't really changed much of anything other than a couple of version upgrades since then. Looks like I need to ask for some time to do another deep dive again.

    Is it a true statement that vNUMA doesn't kick in until you go past 8 vCPUs? And that in general, if you can fit all of your memory accesses inside a single NUMA node, that would be optimal? I guess I just don't have many VMs that are big enough to cross that threshold.

    I still have quite a bit of Windows 2008 that was deployed with Standard or Enterprise edition, which do have CPU licensing limitations built in. In Windows Server 2012 R2 this limitation doesn't exist in the OS, and when covered with a Datacenter license on the host, I don't see a reason not to follow dave330i's recommendation of only increasing socket count except for an application or virtual appliance licensing requirement (I'm not familiar with the licensing model of RHEL or other Enterprise Linux distributions).

    I wasn't aware of a significant overhead issue with hot-add to be honest. I don't have it turned on everywhere, but I do have it enabled for certain groups of VMs that are prone to application changes/additions that I can predict will need to have memory increased. I haven't seen any documentation/articles suggesting not to turn it on. Looks like I have some research to do.
    Last edited by blargoe; 04-18-2016 at 08:03 PM.
    IT guy since 12/00

    Recent: 10/27/2017 - Passed Microsoft 70-410 (one exam left for MCSA 2012)
    Working on: MCSA 2012 upgrade from 2003 (to heck with 2008!!), MCSA 2016 upgrade, more Linux
    Thinking about: VCP6-CMA, AWS Solution Architect (Associate), Python, VCAP6-DCD (for completing VCIX)

  22. Senior Member
    Join Date
    Apr 2013
    Posts
    2,411
    #21
    Quote Originally Posted by blargoe View Post
    Looks like I have some things to re-think in my environment based on the discussion on this thread. That's why I love this place.

    I am pretty much set up based on the understanding I had of the way things worked 4 years ago, and haven't really changed much of anything other than a couple of version upgrades since then. Looks like I need to ask for some time to do another deep dive again.

    Is it a true statement that vNUMA doesn't kick in until you go past 8 vCPUs? And that in general, if you can fit all of your memory accesses inside a single NUMA node, that would be optimal? I guess I just don't have many VMs that are big enough to cross that threshold.

    I still have quite a bit of Windows 2008 that was deployed with Standard or Enterprise edition, which do have CPU licensing limitations built in. In Windows Server 2012 R2 this limitation doesn't exist in the OS, and when covered with a Datacenter license on the host, I don't see a reason not to follow dave330i's recommendation of only increasing socket count except for an application or virtual appliance licensing requirement (I'm not familiar with the licensing model of RHEL or other Enterprise Linux distributions).

    I wasn't aware of a significant overhead issue with hot-add to be honest. I don't have it turned on everywhere, but I do have it enabled for certain groups of VMs that are prone to application changes/additions that I can predict will need to have memory increased. I haven't seen any documentation/articles suggesting not to turn it on. Looks like I have some research to do.

    Well, these Xeons have 14 cores per socket and have HT, and I just had vROps tell me to increase a VM to 10 vCPUs - we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs; I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...

  23. Not IT n00b dave330i's Avatar
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,963

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #22
    @Blargoe - vNUMA does kick in past 8 vCPUs. It can be adjusted; we had to do it for Exchange 2013 servers.

    Hot-plug does increase overhead. The bigger problem is that the new CPUs added land on node 0 unless you vMotion or power-cycle the VM.

    A lot of the older designs do need to be revisited due to new technologies in hardware & software.
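
    For reference, the knob we adjusted is (as I recall) the per-VM advanced setting numa.vcpu.min - by default vNUMA is only exposed once a VM goes past 8 vCPUs. And here's a back-of-the-envelope check of whether a VM still fits inside one physical NUMA node - the host figures are made up, plug in your own:

    # Rough sketch: does a VM fit inside one physical NUMA node?
    # Host figures below are hypothetical; check yours against esxtop and the
    # host hardware specs. (numa.vcpu.min is the advanced setting that controls
    # when vNUMA is exposed to the guest - default is 9 vCPUs, i.e. past 8.)

    HOST_CORES_PER_NUMA_NODE = 14    # e.g. one 14-core socket
    HOST_MEM_GB_PER_NUMA_NODE = 192  # hypothetical

    def fits_in_one_node(vm_vcpus, vm_mem_gb):
        return (vm_vcpus <= HOST_CORES_PER_NUMA_NODE
                and vm_mem_gb <= HOST_MEM_GB_PER_NUMA_NODE)

    print(fits_in_one_node(10, 64))   # True  - the scheduler can keep it local
    print(fits_in_one_node(16, 256))  # False - expect remote memory access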
    Last edited by dave330i; 04-19-2016 at 12:05 AM.

  24. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #23
    Quote Originally Posted by Deathmage View Post
    Well, these Xeons have 14 cores per socket and have HT, and I just had vROps tell me to increase a VM to 10 vCPUs - we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs; I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
    Be careful about what vROps suggests. It's not so black and white. It uses something called policies, which dictate the nature of the recommendations it'll generate. You must base your policies on how your environment's designed - do you overcommit on RAM, CPU, or neither - have you checked these settings?

  25. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #24
    Quote Originally Posted by Deathmage View Post
    Well, these Xeons have 14 cores per socket and have HT, and I just had vROps tell me to increase a VM to 10 vCPUs - we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs; I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
    Get the terminology correct too. For instance, at 2 cores and 2 sockets, a machine has 4 vCPUs. Run up esxtop, switch to the memory view and see how much memory's being fetched from a remote NUMA node. Curious - what's the NUMA node size on this hardware, 8?

  26. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #25
    Quote Originally Posted by Deathmage View Post
    Well here, maybe you guys can make sense of this then. In the production cluster where the Exchange box is, all the other VMs have 4 vCPUs (2 sockets, 2 cores), but the hosts can't sustain the CPUs. Moreover, all of the VMs only use like 400 MHz at all times, so why have that much processor power needing to be processed by the CPU scheduler?


    Attachment 7710

    Prior to me coming onboard they never knew about vROps, and the past week it has been running has been alarming. They score a whopping 6 out of 100.


    This is actually what I was thinking of - see, the Exchange box is using cores, but the way it's configured it's literally hogging up 8 sockets per Xeon E5-2697 v3. On top of that there are, geez, 30 other VMs with 4 vCPUs each. The cluster just can't sustain the load - that poor CPU scheduler.

    Yes, the CPUs aren't the only problem here. They do have storage issues; their SAN only has 2% free space out of 70 TB, with a 40 TB overprovision. They really needed this new VNX SAN - the 2007-era CLARiiON was showing its age.

    Attachment 7711

    Thanks for the feedback so far, guys. I've got a feeling that if we were all in a room someplace we could talk for hours.




    That's exactly what I've been thinking, and it's probably taxing the vmkernel. The Exchange box, as shown above, doesn't even use the MHz of one of the Xeon's cores - I think the max I saw was 1200 MHz. But even if it only needs one core, since it has 4 sockets and 4 cores per socket they all still have to be scheduled, and the wait time for that kind of resource just seems like a performance hit and a waste of CPU cycles.
    - Say a host's got 2 sockets with 8 cores each; you have a total of 16 cores. How many total vCPUs (add up the vCPUs from all VMs) do you have in that cluster? I suggest 4:1 for most environments to begin with, unless otherwise needed. You can go 6:1 or even 8:1 (for a mostly single-vCPU workload cluster, though there aren't too many of those these days) before you really start to stretch the limit. So what ratio do you have? Remember there are multiple hosts in the cluster.

    - What DRS level do you have and what do the other hosts look like? Has DRS tried to move VMs around? I've seen people leave DRS off (not having enough knowledge) and then wonder why their hosts and/or VMs are underperforming.

    - 40 TB overprovisioned!! Jeez... That may have been the issue all along. Remember it's not only about the disks being overprovisioned; it can also be about what the FA ports are doing.

    You need to do a thorough review, to be honest - don't go with trial and error. This isn't a home lab!
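
    And if it helps, the ratio maths is just this - a quick Python sketch with hypothetical counts, plug your own in:

    # Rough sketch: cluster-wide vCPU-to-pCPU consolidation ratio.
    # All counts below are hypothetical examples.

    hosts = 4
    sockets_per_host = 2
    cores_per_socket = 8
    physical_cores = hosts * sockets_per_host * cores_per_socket  # 64 pCPU cores

    total_vcpus = 16 + 8 + 30 * 4   # e.g. one 16-vCPU VM, one 8-vCPU VM, 30 x 4 vCPUs

    ratio = total_vcpus / physical_cores
    print(f"{total_vcpus} vCPU : {physical_cores} pCPU = {ratio:.2f}:1")  # 144 : 64 = 2.25:1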
