  1. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #76
    Pre posting tomorrow's question:

    You currently have 1 VSS with 2 vmnics as uplinks in place for your vSphere environment. Your company recently bought Enterprise Plus licenses to leverage the PVLAN features of the DVS. Tell me how you would non-disruptively migrate the following network types to the DVS.
    • Management traffic
    • vMotion traffic
    • VM traffic
    Last edited by tomtom1; 03-27-2014 at 06:36 AM.

  3. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #77
    Quote Originally Posted by tomtom1 View Post
    - Is there something missing on the hardware side of things? Discuss.
    Assuming the design is leaning more towards a physical design than a logical design, I'm missing some redundancy in the pNICs and the FC switches. Only 1 NIC is drawn per host, connecting to a single Fibre Channel switch.

    - Discuss their HA settings. Specifically talk about
    • Admission Control. Enable or disable? What policy setting do they need to ensure all VM's start up successfully in case either datacenter (the entire datacenter, that is) fails?
    • I would go with the option for Power Off, because the vSphere 5 default of Leave Powered On could create something you would avoid at all costs: a split-brain scenario. The immediate power off in a host failure event would ensure that the hosts on the other side could start the VM's.
    • How many datastores should they use for their heartbeating? What advanced HA setting is needed?
    • According to the Metro Cluster Case Study you should use a minimum of 4 datastores, 2 per site. To increase the default of 2 datastores, you need to set the HA advanced setting das.heartbeatDsPerHost to 4.
    • What will happen to the VM's running on the far left host if it fails?
    The other host in the same site will restart these VM's, if you specify this with DRS rules.


    - Discuss their DRS settings. Specifically talk about
    • How do they ensure that the VM workload is balanced across the stretched cluster?
    • Create DRS 'should' rules to ensure that a part of the workload specifically runs on either the left or the right half of the stretched cluster.
    • How do they also ensure that VM's successfully start up when a host (or hosts) fails? Hint: talk about DRS rules
    • By using DRS 'should' rules, you can ensure that the hosts local to the site run the workload first, unless that fails too. Because a 'should' rule isn't a hard rule, HA and things like maintenance mode will continue to work even after both hosts in a site have failed.
    - Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
    I'd say so, assuming the storage is capable of the correct replication.
    Great answer there, Tom. I'll add a few bits here and there.

    VMware HA:

    Admission Control: I'd set it to Enable. You always want to ensure that your cluster is able to restart all your VM's on another host if a HA event occurs. Setting Admission Control to Disable would allow you to power on more VM's than can be restarted in case of host failure. The only use cases I'd see for this are a test lab situation or when you don't care about high availability and are trying to make maximum use of your hardware (again, a test lab really!).

    In addition, I'd set the Admission Control Policy to %age reserved and reserve 50% of the resources to be used only in the event of a complete site failure or during a planned migration ahead of an impending catastrophic event. I've seen people set the %age reserved to 30% for both CPU and memory and then wonder why all their VM's didn't start up when one of their datacenters (say Building B) fell over completely. Sure, you may think a 50% reservation is overkill, but do you want all your VM's protected or not? That's the thing about a stretched cluster: you are probably running production workloads in both datacenters and you want your VM's to be highly available.
    Isolation response: I'd recommend setting the isolation response according to your requirements and constraints. Isolation response is just that - how should your cluster respond when a host is isolated. In a well-designed network environment it's very unlikely that a host will be isolated; there'll be some redundant path the host can use. I'd leave the isolation response at "Leave Powered On", especially in an environment that uses FC as its storage protocol. In an environment that uses iSCSI and/or NFS, the recommended option is "Power Off". With a network-based storage protocol, it's likely that a disruption that caused host isolation will also prevent the host from reaching its datastores. Hence the need to quickly power off your VM's and have HA spin 'em up on another host.

    Another thing to keep in mind is that when your VM's are powered up by HA (based on your choice of isolation response), they can be restarted in the other datacenter. DRS rules will come into play here and will move the VM's back over to their home datacenter. There'll be some latency while the VM's run in the distant datacenter.

    Split-brain scenario: This may exist for a very short time only when the two datacenters have their networking re-established. HA will recognise this immediately and VM's with no access to their files will be powered-off.
    Workload mobility:

    Yes, this is the whole purpose of a stretched cluster. You should be able to move your VM's around if needed. However, this kind of setup should be set up with care and requires regular monitoring to ensure VM locality, otherwise you may experience latency and discover your VM's don't restart successfully in case of host or storage failure. Host and datastore affinities should be set up carefully and checked regularly.
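
    For anyone wanting to script the HA bits above, a minimal PowerCLI sketch (the cluster name is made up and Connect-VIServer is assumed; the percentage-based admission control policy itself isn't exposed as a simple Set-Cluster parameter, so set that via the client or the API):

    Code:
    $cluster = Get-Cluster -Name "StretchedCluster01"

    # Enable HA and Admission Control; DoNothing = "Leave powered on" (FC storage),
    # swap to PowerOff for IP-based storage as per the isolation response discussion
    Set-Cluster -Cluster $cluster -HAEnabled $true -HAAdmissionControlEnabled $true `
        -HAIsolationResponse DoNothing -Confirm:$false

    # Raise the heartbeat datastore count from the default of 2 to 4
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.heartbeatDsPerHost" -Value 4 -Confirm:$false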
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  4. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #78
    Quote Originally Posted by tomtom1 View Post
    Pre posting tomorrow's question:

    You currently have 1 VSS with 2 vmnics as uplinks in place for your vSphere environment. Your company recently bought Enterprise Plus licenses to leverage the PVLAN features of the DVS. Tell me how you would non-disruptively migrate the following network types to the DVS.
    • Management traffic
    • vMotion traffic
    • VM traffic
    1. Remove one NIC from the VSS (if port channels are used, make sure you change the failover policy away from IP Hash)

    2. Create a DVS, add hosts with the now available NIC to the DVS

    3. Create portgroups with the relevant VLANs matching Management, vMotion and VM Traffic

    4. If multiple vmkernel ports are used (for multi-NIC vMotion, for example), make sure you create two portgroups, each with one uplink active and the other unused:

    - Portgroup 1
    - Active Uplink dvuplink1
    - Unused Uplink dvuplink2

    - Portgroup 2
    - Active Uplink dvuplink2
    - Unused Uplink dvuplink1

    5. Migrate the vmkernel interface to dvs
    - Either do this when adding the host
    - Add host without migrating and migrate later (Configuration > Networking > vDS > Manage Virtual Adapters > Add > Migrate)

    6. Migrate Virtual Machine Networking
    - Change NIC assignments manually per VM or
    - Home > Networking > vDS > Commands > Migrate Virtual Networking

    7. Remove VSS

    8. Add now unused vmnic to vDS (Configuration > Networking > vDS > Manage Physical Adapters > Add)

    Make sure that the correct configuration is applied to the vDS - this includes, but is not limited to, port channels and failover policy, MTU and VLANs. If iSCSI port binding is used you will need to remove the binding, which may or may not cause an interruption to the storage network, so I would suggest evacuating a host and removing / re-adding the iSCSI layer, making sure you follow the same uplink rules as the vMotion interfaces.
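
    A rough PowerCLI sketch of steps 2, 5 and 6, in case anyone wants to script it (host, switch and portgroup names are made up; the vDS and its portgroups are assumed to exist already):

    Code:
    $esx = Get-VMHost -Name "esx01.lab.local"
    $vds = Get-VDSwitch -Name "dvSwitch01"

    # Step 2: add the host to the vDS using the freed-up vmnic
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
    $vmnic1 = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic1"

    # Step 5: migrate vmk0 (management) to its distributed portgroup together
    # with the physical uplink in one operation, so connectivity isn't dropped
    $vmk0   = Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name "vmk0"
    $mgmtPg = Get-VDPortgroup -VDSwitch $vds -Name "dvPG-Management"
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic1 `
        -VMHostVirtualNic $vmk0 -VirtualNicPortgroup $mgmtPg -Confirm:$false

    # Step 6: move VM networking for everything still on the old "VM Network" portgroup
    Get-VM -Location $esx | Get-NetworkAdapter |
        Where-Object { $_.NetworkName -eq "VM Network" } |
        Set-NetworkAdapter -Portgroup (Get-VDPortgroup -VDSwitch $vds -Name "dvPG-VM") -Confirm:$false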
    Last edited by jibbajabba; 03-27-2014 at 09:08 AM.

  5. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #79
    Question 12

    You are the virtualization gun for an SMB that currently has its gear sitting in Datacenter Mickey. Due to increased growth they are looking at buying more hardware and sticking it in a new datacenter, Datacenter Minnie. Mickey is owned by the SMB so space wasn't an issue; Minnie, however, is a 3rd-party facility where rack space is at a premium, as are cooling and power.

    Your company's position on budget/gear:

    - Tight budget for the first 12-18 months.
    - Only sufficient to purchase one blade chassis.
    - No money for training staff in blade management.

    Your company's requirements are:

    - Use minimum rack space.
    - Be able to scale-up if needed because they anticipate a potential client will have these massive SQL VM's.
    - No single point of failure.
    - Lower entry-cost point with regards to their ESXi hosts

    Future considerations:

    - The company anticipates winning a large VDI project for another client, though the tender process and the rest of negotiations aren't expected to finish for about 20 months. The chances of winning the project are not that high, contrary to what some douches in the company believe.

    Suggest whether the company should go with physical rack-mount servers or blade servers while taking into consideration your company's current monetary position, its requirements and future plans.

    Answer:

    A tight budget and a lower entry-cost point are usually enough to sway someone in favour of rackmount servers. Couple that with this particularly tight-arse company not coughing up enough coin for 2 chassis for redundancy's sake, and rackmount servers are the only option for them.

    Let's look at this in more detail. Blade chassis systems are only cost effective if you fully (or mostly) populate them with blades. The initial cost of the system is usually prohibitive enough to deter many customers, but there are several advantages:

    - far less cabling
    - reduced rack usage (higher density)
    - easy to replace a failed blade, just chuck a new one in, assign profile and away you go
    - great for a scale-out model and in VDI deployments

    Even if the company had had sufficient budget, the prospective client with massive SQL VM's on their books may have been enough to sway them towards rackmount servers anyway. Nowadays blade servers easily come with 256-512GB RAM, but if you need more than that for your monster VM's, then rackmounts will be the way to go.

    As always, it's important to tailor your solution to the needs, the constraints and the future requirements of your customer. You don't want to be in a situation where you run out of pSwitch ports and/or storage. While we are at it: a company my team was resolving problems for had this massive virtualization project. They thought (or at least in their minds they did) that they had a grip on everything - NO! When they finished their P2V project, things were running satisfactorily, but then they had a new initiative which required these massive VM's (128 GB ones with 16 vCPU's), and those absolutely killed the storage and their hosts. The VM's were all over their hosts' NUMA boundaries, the storage was on its knees and there were regular datastore drops. Wasn't a pretty situation. Plan ahead, plan ahead, plan ahead! If you can't, call me!
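
    To put some purely illustrative numbers on the NUMA point: on a typical 2-socket host with, say, 8 cores and 128 GB RAM per socket, a 16 vCPU / 128 GB VM can't fit inside a single NUMA node, so its memory gets spread across both nodes and a chunk of its memory accesses become remote (and slower). Keep monster VM's within a single NUMA node where you can, or at least size them as a clean multiple of the node size.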

    Last edited by Essendon; 03-29-2014 at 07:36 AM.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  6. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #80
    Buy a couple Nutanix blocks! /project over WHAT ELSE YA GOT ESSENDON!?!?

    Mad props for using 'douches' in the description as well. I'd +1 if I could but I need to spread some love elsewheres
    Last edited by QHalo; 03-28-2014 at 03:01 AM.

  7. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #81
    LOL! There are too many people of that particular category nowadays mate, it makes me mad. I particularly love the title "Solutions Architect" - some of these architects can't tell a server apart from a sewing machine, mate. I had one call me the other day:

    she said - I'd like to pick up a virtual server on the way home.
    Me - ummm right, why and what for?
    She - apparently, Facebook and Instagram run better on your phone I hear if you have a virtual server at home.
    Me - what??? who told you that, are you serious?
    She - O yes, we were discussing buying a bunch of them during our morning smoko.
    Me - complete silence.

    As for Nutanix, I'm all for it too!! That thing kicks arse, read the Nutanix bible by Steve Poitras, and man was I impressed!
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  8. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #82
    Definitely go with rackservers; there are a few risks and requirements here that prohibit the use of blade servers:

    1) Only one blade chassis, which is in fact a single point of failure. The chances of this failing are slim, but they exist.
    2) A lower entry-cost point, which cannot easily be met with blade servers, since you need to buy a chassis plus some blades.

    Constraints are:
    1) Tight budget (the amount is not specified) for the initial 1 to 1.5 years.

    Risks are:
    1) No money for staff training on blades, thus leaving them at risk of a chassis problem they don't know how to solve.

    Also, future growth is uncertain at this point, and combined with all these risks, constraints and requirements, I'd say rackservers. Love to hear somebody else's view on this.
    Last edited by tomtom1; 03-29-2014 at 07:43 AM.

  9. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #83
    Question 13:

    Your company has invested in Dell EqualLogic storage. Upon verification after your implementation, you see that all EQL iSCSI disks are correctly being claimed by the right SATP, but the PSP associated with this SATP is set to VMW_PSP_MRU, whilst Dell best practice is to use VMW_PSP_RR. Using esxcli, how would you fix this?


    Relevant information:
    Code:
    naa.6019cba11285a36e682655755d74fde8
       Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Has Settable Display Name: true
       Size: 307200
       Device Type: Direct-Access
       Multipath Plugin: NMP
       Devfs Path: /vmfs/devices/disks/naa.6019cba11285a36e682655755d74fde8
       Vendor: EQLOGIC
       Model: 100E-00
       Revision: 6.0
       SCSI Level: 5
       Is Pseudo: false
       Status: on
       Is RDM Capable: true
       Is Local: false
       Is Removable: false
       Is SSD: false
       Is Offline: false
       Is Perennially Reserved: false
       Queue Full Sample Size: 0
       Queue Full Threshold: 0
       Thin Provisioning Status: yes
       Attached Filters: VAAI_FILTER
       VAAI Status: supported
       Other UIDs: vml.02000000006019cba11285a36e682655755d74fde8313030452d30
       Is Local SAS Device: false
       Is Boot USB Device: false
       No of outstanding IOs with competing worlds: 32
    
    
       Device Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Storage Array Type: VMW_SATP_EQL
       Storage Array Type Device Config: SATP VMW_SATP_EQL does not support device configuration.
       Path Selection Policy: VMW_PSP_MRU
       Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
       Path Selection Policy Device Custom Config:
       Working Paths: vmhba38:C1:T4:L0, vmhba38:C0:T4:L0

  10. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #84
    Quote Originally Posted by tomtom1 View Post
    Definitely go with rackservers; there are a few risks and requirements here that prohibit the use of blade servers:

    1) Only one blade chassis, which is in fact a single point of failure. The chances of this failing are slim, but they exist.
    2) A lower entry-cost point, which cannot easily be met with blade servers, since you need to buy a chassis plus some blades.

    Constraints are:
    1) Tight budget (the amount is not specified) for the initial 1 to 1.5 years.

    Risks are:
    1) No money for staff training on blades, thus leaving them at risk of a chassis problem they don't know how to solve.

    Also, future growth is uncertain at this point, and combined with all these risks, constraints and requirements, I'd say rackservers.
    Couldnt agree more mate. Added a few more lines in the answer area of the question, included a client's situation I dealt with some time back.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  11. DoWork
    Join Date
    Jun 2010
    Location
    A major Illinois hospital system near you
    Posts
    1,468

    Certifications
    vExpert, VCAP5-DCA/DCD, VCP5-DCV, VCIX-NV, VCP-NV, BSTM
    #85
    Quote Originally Posted by tomtom1 View Post
    Question 13:

    Your company has invested in Dell EqualLogic storage. Upon verification after your implementation, you see that all EQL iSCSI disks are correctly being claimed by the right SATP, but the PSP associated with this SATP is set to VMW_PSP_MRU, whilst Dell best practice is to use VMW_PSP_RR. Using esxcli, how would you fix this?


    Relevant information:
    Code:
    naa.6019cba11285a36e682655755d74fde8
       Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Has Settable Display Name: true
       Size: 307200
       Device Type: Direct-Access
       Multipath Plugin: NMP
       Devfs Path: /vmfs/devices/disks/naa.6019cba11285a36e682655755d74fde8
       Vendor: EQLOGIC
       Model: 100E-00
       Revision: 6.0
       SCSI Level: 5
       Is Pseudo: false
       Status: on
       Is RDM Capable: true
       Is Local: false
       Is Removable: false
       Is SSD: false
       Is Offline: false
       Is Perennially Reserved: false
       Queue Full Sample Size: 0
       Queue Full Threshold: 0
       Thin Provisioning Status: yes
       Attached Filters: VAAI_FILTER
       VAAI Status: supported
       Other UIDs: vml.02000000006019cba11285a36e682655755d74fde8313030452d30
       Is Local SAS Device: false
       Is Boot USB Device: false
       No of outstanding IOs with competing worlds: 32
    
    
       Device Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Storage Array Type: VMW_SATP_EQL
       Storage Array Type Device Config: SATP VMW_SATP_EQL does not support device configuration.
       Path Selection Policy: VMW_PSP_MRU
       Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
       Path Selection Policy Device Custom Config:
       Working Paths: vmhba38:C1:T4:L0, vmhba38:C0:T4:L0
    Modify the SATP default claim rule so that vendor=EQLOGIC devices are claimed with the PSP VMW_PSP_RR.
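
    In esxcli terms that would look something like this (the naa ID is the one from the listing above; run it per host, and note the default-PSP change only affects devices claimed after it, so already-claimed devices need the device-level set or a reclaim/reboot):

    Code:
    # Change the default PSP for the EqualLogic SATP
    esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR

    # Fix the already-claimed device(s)
    esxcli storage nmp device set --device=naa.6019cba11285a36e682655755d74fde8 --psp=VMW_PSP_RR

    # Verify
    esxcli storage nmp device list --device=naa.6019cba11285a36e682655755d74fde8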

  12. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #86
    Quote Originally Posted by QHalo View Post
    Modify the SATP default claim rule so that vendor=EQLOGIC devices are claimed with the PSP VMW_PSP_RR.
    Exact syntaxes please

  13. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #87
    Bringing the difficulty down a notch or two

    Question 14

    A company has hired you as their virtualization administrator and is looking at P2V'ing some application servers. The problem they are facing is that the application is a multi-tier application with various components depending on each other. They are concerned that they wouldn't be able to control the power-on order of the various VM's that host the application.

    - How will you help them overcome their fears?
    - In addition, they are adamant that the application servers have memory dedicated to them. How will you do this? Discuss the consequences for other VM's.
    - How will you determine the number of hosts and the grunt they need?
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  14. kj0
    Apple and VMware kj0's Avatar
    Join Date
    Apr 2012
    Location
    Brisbane, Australia.
    Posts
    733

    Certifications
    vExpert x 4 | Apple Mac OS X Associate | Cert III - IT.
    #88
    Quote Originally Posted by Essendon View Post
    Bringing the difficulty down a notch or two

    Question 14

    A company has hired you as their virtualization administrator and is looking at P2V'ing some application servers. The problem they are facing is that the application is a multi-tier application with various components depending on each other. They are concerned that they wouldn't be able to control the power-on order of the various VM's that host the application.

    - How will you help them overcome their fears?
    - In addition, they are adamant that the application servers have memory dedicated to them. How will you do this? Discuss the consequences for other VM's.
    - How will you determine the number of hosts and the grunt they need?
    Create a Resource pool that has a "power On" order set.

    Inside the Resource Pool, set reserved Memory levels.
    2017 Goals: VCP6-DCV | VCIX
    Blog: http://readysetvirtual.wordpress.com

  15. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #89
    Elaborate please, kj0, when you have a moment
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  16. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #90
    Quote Originally Posted by kj0 View Post
    Create a Resource pool that has a "power On" order set.

    Inside the Resource Pool, set reserved Memory levels.
    You mean a vApp. One thing I would most definitely stay away from is guest VM reservations, since it messes up HA slot sizes and not in a good way. Even if the company is currently not leveraging HA, it might in the future.

    A vApp has some nasty implications though, if you decide to use one, you should understand the impact it has on the shares in times of resource contention. If you leave the shares at the default, and you start coming close to a saturated host, you might run into some problems with the calculation behind shares.

    To determine the resources necessary to complete this project, run some analysis tools (i.e. capacity planner, perfmon) on the current physical machines and determine:
    • Peak CPU usage
    • Average CPU usage
    • Peak memory usage
    • Average memory usage

  17. kj0
    Apple and VMware kj0's Avatar
    Join Date
    Apr 2012
    Location
    Brisbane, Australia.
    Posts
    733

    Certifications
    vExpert x 4 | Apple Mac OS X Associate | Cert III - IT.
    #91
    Quote Originally Posted by tomtom1 View Post
    You mean a vApp. One thing I would most definitely stay away from is guest VM reservations, since it messes up HA slot sizes and not in a good way. Even if the company is currently not leveraging HA, it might in the future.

    A vApp has some nasty implications though, if you decide to use one, you should understand the impact it has on the shares in times of resource contention. If you leave the shares at the default, and you start coming close to a saturated host, you might run into some problems with the calculation behind shares.

    To determine the resources necessary to complete this project, run some analysis tools (i.e. capacity planner, perfmon) on the current physical machines and determine:
    • Peak CPU usage
    • Average CPU usage
    • Peak memory usage
    • Average memory usage
    HAHA... Yeah, vApp is what I meant. Head's all over the shop at the moment with all this study - vMotion and DRS at the moment.

    When I get a second I'll do what I was originally going to. Do up some screenshots of the answer with vApps.


    Inside your vApps you can set the boot priority for the order in which your VMs will start up. 120 seconds between each is generally the ballpark.

    You can then set memory reservations for the VMs inside the vApp, so that when you start a VM it is guaranteed that memory and can hold onto it.


    I think that's right.
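
    A quick PowerCLI sketch of that approach, for reference (all names are hypothetical; the per-VM start order and delay live in the vApp's settings, which PowerCLI only reaches through the API views, so they're not shown here):

    Code:
    $cluster = Get-Cluster -Name "Production"
    $vapp    = New-VApp -Name "App-Stack" -Location $cluster

    # Move the application tiers into the vApp
    Get-VM -Name "app-db01", "app-mid01", "app-web01" | Move-VM -Destination $vapp -Confirm:$false

    # Dedicate memory to the database tier (mind the HA slot-size caveat tomtom mentioned above)
    Get-VM -Name "app-db01" | Get-VMResourceConfiguration |
        Set-VMResourceConfiguration -MemReservationMB 8192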
    2017 Goals: VCP6-DCV | VCIX
    Blog: http://readysetvirtual.wordpress.com

  18. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #92
    My take on guest RAM reservations:

    I steer away from them as much as possible, but there are cases where for political/business reasons you have to have them (SLA's). After all, if a critical workload needs a guarantee of RAM in times of contention, you have to be able to give it what it needs. The key there lies in the term "contention". Contention shouldn't normally occur in a well-designed and well looked-after environment. Ironically, I've seen contention being caused by memory reservations themselves! Ill-informed admins keep dishing out RAM reservations only to discover ballooning beginning to occur (this first starts showing up on SQL VM's and heavily used app VM's). So use them very sparingly, on a case-by-case basis, and evaluate every RAM reservation request closely.

    Most environments will almost certainly employ some RAM reservations, and if you do, you need to use %age reserved for failover as your admission control policy. This policy doesn't suffer from the restrictiveness of the slot-based policy (which is the default). The only downside to the %age reserved policy is the need to re-evaluate your %ages from time to time, especially when you add a host to the cluster. You MUST change the %age when a host is added/removed.
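
    To make that concrete with illustrative numbers: with 4 equally-sized hosts, tolerating one host failure (N+1) means reserving 25% of cluster resources; add a fifth host and the same N+1 only needs 20%, while dropping back to 3 hosts needs 33%. Leave the percentage untouched after such a change and you're either stranding capacity or under-protecting the cluster.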

    While we are at it, there are 2 more things I'd like to add. The more RAM you allocate to a VM, the larger the:

    - size of its .vswp file.
    - size of its memory overhead.

    You may think - oh, it can't be too bad. Err, no. If you have, say, a VM with 32GB RAM, it's going to need a 32GB .vswp file (which can sit in a datastore of your choice). Have 10 VM's like that across your environment and there goes 320GB of your swap datastore. Right, now let's reserve all their RAM so there are no .vswp files for these 10 VM's. BUT now your cluster has 320GB less RAM to play with, and other VM's which may need RAM are going to be starved of it. The size of the memory overhead isn't insignificant either: a 32GB RAM, 4 vCPU VM is going to have a fair overhead, and with 10 or so of these VM's you'll see around 2GB RAM (probably more) taken away by overheads alone.

    Moral of the story - Employ reservations, but very sparingly. And you must use %age reserved for failover whether or not you use reservations.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  19. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #93
    To stir the discussion up a bit further: what is your (everybody's) take on .vswp files in general? vSphere 5 also added VMX swap to let the VMX (VM overhead) memory swap to a file, so more and more swap files seem to appear. I've taken a quick peek at our production cluster (small managed hosting provider) and we're running about 123 GB worth of VM memory. I've managed to keep away from reservations, but at the same time that means 123 GB of our SAN storage is being "wasted" on swap files that are currently not being used, because the environment is not (and I hope never will be) under contention.

    Used this nice PowerCLI script found on the VMware forums for this calculation; it sums up, per host, the host memory consumed by its VMs:

    Code:
    Get-Cluster Production | Get-VMHost |
        Select-Object Name, @{N="Memory used MB"; E={ ($_ | Get-VM |
            ForEach-Object { $_.ExtensionData.Summary.QuickStats.HostMemoryUsage } |
            Measure-Object -Sum).Sum }}
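
    That shows what the VM's are consuming on the hosts; to estimate the .vswp footprint itself (configured RAM minus any reservation, per powered-on VM), a rough sketch along these lines gives the number directly:

    Code:
    Get-Cluster Production | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
        ForEach-Object { $_.MemoryMB - ($_ | Get-VMResourceConfiguration).MemReservationMB } |
        Measure-Object -Sum | Select-Object @{N="Swap footprint (MB)"; E={$_.Sum}}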
    You have a few options when configuring swap files:
    • Leave it at the default, which is to store the swap file in the virtual machine's working directory. Simple, but as in our situation, we've lost 123 GB to .vswp files.
    • Create a dedicated .vswp datastore on shared storage. Pros: you can ensure that the .vswp files end up on cheap storage. Cons: requires extra configuration.
    • Create a dedicated .vswp datastore on local storage. Pros: easy to use a single local disk (either SSD or slower local storage). Cons: adds vMotion overhead, as the .vswp file needs to be copied at every vMotion operation; used in conjunction with DRS, this can create significant overhead.
    I'm currently leaning towards the second option for our production cluster, since we use a mixture of RAID-5 and RAID-10 volume groups, and since we've got no contention it's really a waste of space to have .vswp files on the more expensive RAID-10 storage sets.

    Your thoughts?

  20. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #94
    Great discussion topic, this, Tom. As with most things, there's a trade-off. I'd go with option 2 every time - there's some manual configuration, but the benefits of a separate datastore far outweigh the slight disadvantage.

    1. Array replication is probably the most notable benefit: by having a separate swap datastore that you do not replicate, you save on bandwidth and capacity.

    2. Like Tom's said, you can put the .vswp files on cheap storage, we have ours on RAID 6 storage.

    3. At the DR site, I'd recommend you don't dedicate a separate datastore for .vswp files - this datastore will always be sitting vacant, waiting for a site failover to occur. Just let the VM's .vswp file sit with the VM's config file (default setting); .vswp files are recreated when a VM starts up at another site.

    4. I'd avoid putting swap files on local storage as much as possible. vMotion performance is affected because the swap files also need to be copied to the destination host.

    However, if you are not replicating to another site (i.e. you are not using vSphere Replication or array-based replication), it's probably better to store the swap files with the VM. The same applies to SRM: if you don't have SRM, it's okay to store the swap files with the VM. It's all about how your environment is set up and what your future plans are.
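
    If you do go with option 2, the knobs are per cluster and per host; a quick sketch (the swap datastore name is made up and obviously needs to be visible to every host):

    Code:
    # Tell the cluster to use a host-specified swap datastore instead of the VM's directory
    Set-Cluster -Cluster (Get-Cluster "Production") -VMSwapfilePolicy InHostDatastore -Confirm:$false

    # Point every host at the cheap, non-replicated swap datastore
    Get-Cluster "Production" | Get-VMHost |
        Set-VMHost -VMSwapfileDatastore (Get-Datastore -Name "SwapDS01") -Confirm:$false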
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  21. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #95
    Quote Originally Posted by kj0 View Post
    HAHA... Yeah, vApp is what I meant. Head's all over the shop at the moment with all this study - vMotion and DRS at the moment.

    When I get a second I'll do what I was originally going to. Do up some screenshots of the answer with vApps.


    Inside your vApps you can set the boot priority for the order in which your VMs will start up. 120 seconds between each is generally the ballpark.

    You can then set memory reservations for the VMs inside the vApp, so that when you start a VM it is guaranteed that memory and can hold onto it.


    I think that's right.
    Gotta be careful about resource pools too, bud. They are a PITA as far as I'm concerned - the need to keep them going (manual adjustment of the CPU/RAM values) is enough to put me off them. Consider the following:

    1. In this case, there are 4 VM's in the High resource pool and 8 in the Normal resource pool. You've split your cluster into these two pools, allocating more resources to the High pool and fewer to the Normal pool. All's well as long as the number of VM's in the pools stays more or less constant.

    Pie1.JPG

    2. As time goes on, and as things follow their course, you have admins who put VM's in the High pool at a system owner/manager's request, or who are just careless and chuck every VM in the High pool thinking there's plenty to go around.

    Pie2.JPG
    Now, though, there are more VM's in the same High pool. More VM's sharing the same pie means there's less to go around - fewer resources per VM. Duncan Epping's called this the Resource Pie Paradox: more consumers, less resources.
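
    Rough numbers to illustrate (purely made up): say High has 8000 CPU shares and Normal has 4000. With 4 VM's in High, each is effectively backed by 2000 shares' worth of priority versus 500 for each of the 8 VM's in Normal. Let High balloon to 16 VM's and each is down to 500 - no better off than the "Normal" VM's, despite sitting in the "High" pool.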

    So in essence, steer away from resource pools; if you need to have them around, revisit their resource allocations regularly to ensure your VM's aren't being starved.

    Hope this helps.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  22. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #96
    Quote Originally Posted by tomtom1 View Post
    You mean a vApp. One thing I would most definitely stay away from is guest VM reservations, since it messes up HA slot sizes and not in a good way. Even if the company is currently not leveraging HA, it might in the future.

    A vApp has some nasty implications though, if you decide to use one, you should understand the impact it has on the shares in times of resource contention. If you leave the shares at the default, and you start coming close to a saturated host, you might run into some problems with the calculation behind shares.
    And HA doesn't respect vApp startup order - you'll find HA powers VM's up in a random manner without regard to your vApp's power-up order.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  23. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #97
    Question 15:

    You're still working for the same old company you helped with the EqualLogic storage PSP. Since you took over from another admin, you've made some nice improvements to the environment. You notice that a lot of the VM's are currently in snapshot mode - some even go back as far as 2012!

    You know from your studies that this poses the following risks:
    • Increased capacity consumption on the storage LUNs.
    • Decreased performance (increased read overhead).
    Which tools could you use to identify VM's that are in snapshot mode? Multiple valid answers exist.

  24. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #98
    - vCOps
    - Alan Renouf's vCheck is particularly awesome!
    - Storage Views tab at the DC, Cluster, host and VM level
    - Presence of <vmname>-000001.vmdk delta files in the datastore the VM resides in; perhaps you can check for these via some esxcli command too, IDK
    - Snapshot Manager, but looking at each VM will be like watching paint dry
    - Get-VM | Get-Snapshot | ...
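
    For the PowerCLI route, a fleshed-out one-liner would be something like:

    Code:
    Get-VM | Get-Snapshot |
        Select-Object VM, Name, Created, SizeGB |
        Sort-Object Created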

    Off the top of my head, anyone have more ways? Great question
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  25. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #99
    this turns into a Q&A session between TomTom1 and Essendon

  26. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP,SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #100
    Quote Originally Posted by jibbajabba View Post
    this turns into a Q&A session between TomTom1 and Essendon
    Everybody can post their take on these questions though - in fact, I (we)'d love to see more participants! How we monitor snapshots is with an alarm in vCenter, configured like this:

    Screen Shot 2014-03-31 at 17.54.08.jpg Screen Shot 2014-03-31 at 17.54.18.jpg

    The pro of this solution is that it leverages vCenter itself, so there's no need for external tools or scripts that add complexity to the environment. Smaller VM's will slip under the thresholds at first, but they'll reach warning or error state soon enough.
