  1. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #101
    Question 16

    You somehow continue working for the same company (as in the previous question). You rock up one morning and see that several VMs across several datastores are in the "paused" state. Upon furiously clicking around and scratching your head, you discover that the datastores the VMs were on are full. These datastores are thin-provisioned at the vSphere layer and thick-provisioned at the storage layer.

    a). How can you resume your VMs? (multiple options exist)
    b). How can you prevent this issue in the future?
    c). Discuss the pros and cons of:

    - thin provisioning at both the storage layer and the vSphere layer
    - thin provisioning at the storage layer and thick provisioning at the vSphere layer
    - thick provisioning at the storage layer and thin provisioning at the vSphere layer
    - thick provisioning at both the storage layer and the vSphere layer
    Last edited by Essendon; 03-31-2014 at 11:16 PM.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  3. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #102
    Quote Originally Posted by jibbajabba View Post
    this turns into a Q&A session between TomTom1 and Essendon
    LOL! Certainly not the intention!

    I encourage everyone to participate - just type out your answer, no one's going to ridicule you. No one knows everything and I'm out here to learn too. So let's make this a good learning method. I do realize the difficulty of the questions needs to be lowered for greater participation, but harder questions enhance the learning experience, don't they?!
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  4. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #103
    Quote Originally Posted by Essendon View Post
    Question 16

    You somehow continue working for the same company (as in the previous question). You rock up one morning and see that several VMs across several datastores are in the "paused" state. Upon furiously clicking around and scratching your head, you discover that the datastores the VMs were on are full. These datastores are thin-provisioned at the vSphere layer and thick-provisioned at the storage layer.

    a). How can you resume your VMs? (multiple options exist)
    b). How can you prevent this issue in the future?
    c). Discuss the pros and cons of:

    - thin provisioning at both the storage layer and the vSphere layer
    - thin provisioning at the storage layer and thick provisioning at the vSphere layer
    - thick provisioning at the storage layer and thin provisioning at the vSphere layer
    - thick provisioning at both the storage layer and the vSphere layer

    Datastores are thin provisioned at the vSphere layer? Surely you mean the VMDKs are thin provisioned?

    a.1) Extend the LUN
    a.2) Add an extend to the LUN
    a.3) Remove unneeded VMs
    a.4) Move powered-off VMs via SSH to another LUN (Storage vMotion is unlikely to work as it still requires free space on the LUN; the same goes for removing snapshots.)

    b.1) Set appropriate alerts / notifications (a quick sketch follows below)
    b.2) Check for snapshots regularly
    b.3) Use thick provisioning so the remaining space is immediately apparent

    As for the pros and cons of each: I have done this over and over again, and to be honest that is too much for me to write at the moment as I have to jump into a meeting, but a good article about this can be found >> Here <<
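    For point b.1, a rough first-pass monitoring sketch could look like the following. This is just one way to do it, using the pyVmomi SDK; the vCenter address, credentials and the 85% threshold are placeholders to adjust for your own environment:

```python
# A hedged sketch, not a full monitoring solution: walk every datastore that
# vCenter knows about and flag the ones running low on free space.
# The vCenter address, credentials and the 85% threshold are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def low_space_datastores(si, warn_pct=85):
    """Return (name, used_pct) for every datastore above the usage threshold."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    flagged = []
    for ds in view.view:
        summary = ds.summary
        if not summary.accessible or not summary.capacity:
            continue
        used_pct = 100.0 * (summary.capacity - summary.freeSpace) / summary.capacity
        if used_pct >= warn_pct:
            flagged.append((summary.name, used_pct))
    return flagged

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()            # lab only: skip cert checks
    si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        for name, pct in low_space_datastores(si):
            print(f"WARNING: datastore {name} is {pct:.1f}% full")
    finally:
        Disconnect(si)
```

    Run it from cron or your monitoring tool of choice and you at least get a shout before a thin datastore fills up and pauses VMs.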

  5. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #104
    Quote Originally Posted by jibbajabba View Post
    Datastores are thin provisioned at the vSphere layer? Surely you mean the VMDKs are thin provisioned?

    a.1) Extend the LUN
    Should be, increase the LUN, or add an extent to it. Otherwise great answer.

  6. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #105
    Quote Originally Posted by tomtom1 View Post
    Should be, increase the LUN, or add an extent to it. Otherwise great answer.
    I pull my "I am German" card on that one

  7. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #106
    Quote Originally Posted by jibbajabba View Post
    I pull my "I am German" card on that one
    I'm not a native speaker, so go right ahead, but allow me to use that one too

  8. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #107
    Quote Originally Posted by tomtom1 View Post
    I'm not a native speaker, so go right ahead, but allow me to use that one too
    Always .. great card to have ... I usually shoot myself when people don't know the meaning of "whom" or don't know the difference between lose and loose or their and they're
    Last edited by jibbajabba; 04-01-2014 at 06:00 PM.

  9. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #108
    Quote Originally Posted by jibbajabba View Post
    As for the pros and cons of each: I have done this over and over again, and to be honest that is too much for me to write at the moment as I have to jump into a meeting, but a good article about this can be found >> Here <<
    Going further on this one, what do you use? In my environments, I mainly do thin (VMDKs) on thick (storage devices) because that gives me the best compatibility with stuff like SDRS. I have situations where I'm doing thin on thin, but if I were to do a redesign, I'd go with thin on thick here. Since we design and manage smaller clusters, the vSphere and storage administrator are the same person (mostly me).

    From a personal point of view, I'd like to spend less time monitoring storage usage on my SAN (hence the thick storage devices here) and more in my VMware environment, where I can leverage SDRS and Storage vMotion to compensate for the lack of storage tiering.

    Love to hear your take too, jibba.

  10. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #109
    Quote Originally Posted by tomtom1 View Post
    Love to hear your take too, jibba.
    We use thin on thin ... I don't know who made that decision .. I am the guy who is supposed to support the infrastructure, but I was on holiday when it was implemented. Anyway, I don't like thin on thin, simply because I have seen customers all of a sudden using a heck of a lot more storage than anticipated. If your pool on the SAN runs low, there is usually not much you can do to fix it unless you buy another shelf.

    So yes, alerting / monitoring is very important (and staff not ignoring the mails for weeks *sigh*). As for thin provisioned LUNs: we use 4TB LUNs (again, for no real reason I think) and I personally think it is pointless to have thin provisioned LUNs at that size given the average VM size in our environment (we have 250GB - 1TB VMs). In cases like that I would either

    a. Use 4TB LUNs - thick provisioned VMDKs
    b. Use larger LUNs - thin provisioned VMDKs

    and in both cases thick LUNs.

    Simply put, I like to know where I am at with the storage. The problem, I think, is the business in most cases. Buy small, sell big, hope it works out.

    I was "beta testing" Dell EqualLogics years back and I managed to make the Dell guy speechless. He was all over the thin on thin thing. We received an early model (a pre-production 4000 series, I think), so he left it with us (not leaving our office for the day, obviously) and let me play with it. Took me 15 minutes to "blow it up".

    I left the setup as is - at that point 500GB LUNs, thin on thin - and we had 10 VMs or so. So I kicked off some clones ... some snapshots, some removals, some more clones, some uploads, and deployed some OVFs.

    I "overran" the system to the point where the alerting was overwhelmed and the whole thing locked up, lol ..

    Needless to say the Dell guy lost the colour in his face ... It was not recoverable ..

    OK, it was a pre-release model and it was probably unlikely to happen in production anyway. My point is, you rely very heavily on proper alerting when using thin on thin, and in my experience even thin on thick has caused problems eventually.

    I would LOVE to hear from an architect about environments where either implementation made perfect sense. VCDX anyone?

    By the way - speaking of thin provisioning and Storage vMotion - depending on your SAN, you may find you have to keep reclaiming space using VAAI because of all those svMotions ...
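    If you end up scripting that periodic reclaim, a rough sketch is to fire the UNMAP primitive over SSH. This assumes vSphere 5.5 or later (where "esxcli storage vmfs unmap" is available), SSH enabled on the host, and the paramiko library; the host name, credentials and datastore label are placeholders:

```python
# A hedged sketch of the periodic reclaim: SSH to an ESXi host and run the
# VAAI UNMAP primitive against one datastore. Assumes vSphere 5.5+ (where
# "esxcli storage vmfs unmap" exists), SSH enabled on the host, and the
# paramiko library. Host name, credentials and datastore label are placeholders.
import paramiko

def reclaim_dead_space(host, user, password, datastore_label):
    """Run esxcli UNMAP for one VMFS datastore and return the exit code."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab only
    client.connect(host, username=user, password=password)
    try:
        cmd = f"esxcli storage vmfs unmap --volume-label={datastore_label}"
        stdin, stdout, stderr = client.exec_command(cmd)
        return stdout.channel.recv_exit_status()      # blocks until it finishes
    finally:
        client.close()

if __name__ == "__main__":
    rc = reclaim_dead_space("esxi01.example.local", "root", "changeme", "Datastore01")
    print("unmap finished" if rc == 0 else f"unmap failed (exit {rc})")
```

    Best kicked off in a quiet window; the unmap run generates a fair amount of extra I/O on the array.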

  11. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #110
    In my environment, I have a mix of thin-on-thin, thin-on-thick and thick-on-thick. Boot LUNs are thick, the swap file LUNs are thick on storage and so are the VMDKs. There are several thin (storage) LUNs with thin VMDKs, and with VAAI support not being the best with our HP 3PAR SAN, there have been a few instances of out-of-storage issues. Oh, and dead space reclamation with the 3PAR SAN is like trying to wake a dead person - it just won't work. There's plenty of free space left on the SAN, well over twice as much free as used. Thin-on-thick for me too if I could do this all over again, but I inherited this storage. My team needs to keep an eye on the SAN management console at all times; the alerting isn't all that great either.

    My employer goes with what jibba said too - just hope it works out. Buy small, promise big and hope for the best!
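    If watching the SAN management console gets old, a companion sketch to the one earlier in the thread (pyVmomi again, with placeholder vCenter details) can at least report how far each datastore is overcommitted from the vSphere side:

```python
# A hedged companion sketch: report how overcommitted each datastore is
# (space provisioned to VMs vs. real capacity). Placeholders as before.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def overcommit_report(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if not s.accessible or not s.capacity:
            continue
        # provisioned = space already written + space promised to thin disks
        provisioned = (s.capacity - s.freeSpace) + (s.uncommitted or 0)
        ratio = provisioned / s.capacity
        print(f"{s.name}: {ratio:.2f}x provisioned "
              f"({provisioned / 2**30:.0f} GiB promised on {s.capacity / 2**30:.0f} GiB)")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()            # lab only
    si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        overcommit_report(si)
    finally:
        Disconnect(si)
```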
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  12. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #111
    Quote Originally Posted by Essendon View Post
    Thin-on-thick for me too if I could do this all over again, but I inherited this storage. My team needs to keep an eye on the SAN management console at all times; the alerting isn't all that great either.
    Weird, since 3PAR is marketed as an enterprise-grade SAN.

    Question 17:

    Your company is leveraging the dvSwitch since it has Enterprise Plus licensing, and creating every port group on every host is just a pain in the ***. However, you have noticed that traffic isn't spread very evenly. What would be a nice way to ensure some load balancing on the active uplinks? If at all possible, you don't want to involve the network team.

  13. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #112
    Well, yeah, they claim 3PAR, Data Protector and Virtual Connect to be enterprise grade, but in reality they are far from it. They make great hardware if you ask me, but the software has generally been just as shocking. Don't get me started on the error messages in Data Protector; my favourite is - "An unknown critical error has occurred." But I digress, let's stick to our QOTD here.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  14. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #113
    Quote Originally Posted by tomtom1 View Post
    Question 17:

    Your company is leveraging the dvSwitch since it has Enterprise Plus licensing, and creating every port group on every host is just a pain in the ***. However, you have noticed that traffic isn't spread very evenly. What would be a nice way to ensure some load balancing on the active uplinks? If at all possible, you don't want to involve the network team.
    A vDS would fit the bill here. The first pain point is having to create port groups over and over on multiple hosts. While doing this, you run the slight risk of human error resulting in different port group names across hosts, and problems with vMotion can ensue. Host profiles are a good way to fix this particular problem: you create a host profile from a reference host and apply it to every other host, and they all get the same settings (keep in mind a host needs to be in maintenance mode for a host profile to be applied to it).

    Coming back to the question though, you need to be able to load balance across the active uplinks. A vDS with the load balancing policy "Route based on physical NIC load" should be used here. This particular policy requires very little config and certainly requires no involvement from the network team, so no config is needed on your pSwitches. When ESXi detects that utilization of a NIC exceeds 75% over a 30-second interval, it moves traffic over to another NIC (see the sketch below for scripting this).

    Your only requirement is Enterprise Plus licensing; this level of licensing gives you the use of the vDS. Load balancing based on pNIC load is not available on a vSS.
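    If you would rather push that policy out with a script than click through every port group, a rough pyVmomi sketch could look like the following. This is only one possible approach, not the official procedure; the vCenter details and the port group name "dvPG-Production" are placeholders:

```python
# A rough pyVmomi sketch of switching one distributed port group to
# "Route based on physical NIC load" (load-based teaming). One possible
# approach only; vCenter details and the port group name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def enable_lbt(si, pg_name):
    """Set load-based teaming on the named dvPortgroup."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == pg_name)

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")   # LBT

    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion      # required by the reconfigure call
    spec.defaultPortConfig = port_cfg
    return pg.ReconfigureDVPortgroup_Task(spec=spec)

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()            # lab only
    si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        enable_lbt(si, "dvPG-Production")             # hypothetical port group name
    finally:
        Disconnect(si)
```

    As jibba points out in the next reply, check for upstream port channels before flipping the policy.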
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  15. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #114
    Quote Originally Posted by Essendon View Post
    Your only requirement is Enterprise Plus licensing; this level of licensing gives you the use of the vDS. Load balancing based on pNIC load is not available on a vSS.
    You will still have to talk to your network team in case there are port channels set up.

    For example, if you currently use IP hash because of port channels, changing to physical NIC load might disconnect you from the network until you disable all but one port in the port channel.

  16. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #115
    @jibba - agreed. Bloody Networks team!
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  17. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #116
    Question 18:

    You are the Virtualization Architect for your company. Your company is embarking on a new virtualization initiative in which they wish to virtualize a remote office. The remote office runs a mix of physical Windows and Linux servers, all running critical applications that generate revenue for them.

    The physical Windows servers are HP DL380 G7s, 3 GHz 2-way (8 cores per proc.) with 96GB RAM and 2 x 10GbE cards, with hyperthreading enabled in the BIOS. There are 4 of these servers.

    The physical Linux servers are also HP DL380 G7s, 3 GHz 2-way (8 cores per proc.) with 96GB RAM and 2 x 10GbE cards, with hyperthreading enabled in the BIOS. There are 4 of these servers too.

    A detailed capacity analysis of this environment over a period of 2 weeks has yielded the following info:

    - The average RAM utilization on the Windows servers is 12GB and the peak utilization is 36GB.
    - The average CPU utilization on the Windows servers is 0.5 GHz and the peak utilization is 1.5 GHz.
    - The average RAM utilization on the Linux servers is 8GB and the peak utilization is 32GB.
    - The average CPU utilization on the Linux servers is 0.5 GHz and the peak utilization is 1.5 GHz.

    Constraints:

    - The company wishes to reuse the physical servers as ESXi servers.

    Requirements:

    - In the first phase of this project, the first VMs you create must be able to support the existing workload
    - In the second phase of the project, you will also need to build more Windows VMs with the following specs:

    o 20 VMs, each with 16GB RAM and 2 vCPUs
    o 10 VMs, each with 8GB RAM and 1 vCPU
    o 1 monster VM with 96GB RAM and 12 vCPUs

    How many ESXi hosts will you need all up? Leave some room for overhead and spikes in usage (up to 25%).

    How many clusters will you create?

    What admission control policy will you use? You need to ensure that the company makes maximum use of its hosts but can still restart all VMs from a failed host.

    ** The quotes are just to identify the questions. **


    The company will rebuild the applications on these existing physical servers from backups. For the purposes of this question, ignore how they are going to do this.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  18. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #117
    Alright, here goes:


    Requirements gathering:
    -> The platform should support the existing workload.
    -> The platform should account for new virtual machines that need to be deployed. Workload unknown at this point.


    Assumptions:
    -> New workloads will run at a maximum of 50% of their configured size.

    Constraints
    -> Existing hardware should be reused in this project.



    Capacity Analysis - existing workload:
    Current number of workloads: 8 (the existing 8 servers)

    CPU capacity per host = 16 cores * 3 GHz = 48 GHz.
    Peak CPU demand = 8 * 1.5 GHz = 12 GHz
    Capacity necessary to support the current physical infrastructure: 12 GHz

    Memory capacity per host = 96 GB
    Peak memory demand = 8 * 34 GB ((36 + 32) / 2) = 272 GB
    Capacity necessary to support the current physical infrastructure: 272 GB.


    Capacity Analysis - new workload:
    Expected growth: 31 VMs (based on the information given). The assumption is that these VMs will not exceed 50% of their configured capacity.

    Maximum CPU demand, large VMs = 1 vCPU * 3 GHz * 20 = 60 GHz.
    Maximum CPU demand, small VMs = 0.5 vCPU * 3 GHz * 10 = 15 GHz.
    Maximum CPU demand, monster VM = 6 vCPU * 3 GHz = 18 GHz.
    Capacity necessary to support the new workload: 93 GHz

    Maximum memory demand, large VMs = 8 GB * 20 = 160 GB.
    Maximum memory demand, small VMs = 4 GB * 10 = 40 GB.
    Maximum memory demand, monster VM = 48 GB.
    Capacity necessary to support the new workload: 248 GB.


    Total CPU capacity necessary: 105 GHz
    Total memory capacity necessary: 520 GB

    Total CPU capacity available (keeping 25% headroom): 8 * 48 GHz = 384 GHz; 75% of 384 GHz = 288 GHz
    Total memory capacity available (keeping 25% headroom): 8 * 96 GB = 768 GB; 75% of 768 GB = 576 GB


    We can handle the current and new workload with the resources we have available, but once we factor in the necessary redundancy (N+1) we come up short on memory.


    N.B. This is based on the assumption that the new workload will not exceed 50% of configured capacity and the fact that memory reclamation techniques (specifically TPS) are left out of the calculation; on those terms we indeed have insufficient memory to support the necessary redundancy. If we lower the estimated capacity, sufficient resources become available.


    Since we have homogeneous hosts I would create one cluster, and only deviate from this if there is a specific requirement laid out by the business.


    For admission control, I would go with the percentage-based approach, as this allows for differently sized VMs in the cluster and lets the business get the most out of the vSphere cluster.


    Reserved memory resources: 13%
    Reserved CPU resources: 13%


    When the VMs are sized correctly, the performance degradation should be non-existent to slim when an HA event occurs.
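    For completeness, here is a throwaway re-run of the arithmetic above in Python (same inputs, same 50% workload and 25% headroom assumptions); it just makes the N+1 shortfall explicit:

```python
# A small worked re-run of the numbers above (same inputs, same 50% and
# 25%-headroom assumptions) just to make the N+1 check explicit. Nothing here
# is new data; it only re-does the arithmetic.
HOSTS = 8
HOST_GHZ = 2 * 8 * 3.0          # 2 sockets x 8 cores x 3 GHz = 48 GHz per host
HOST_RAM_GB = 96
HEADROOM = 0.75                 # keep 25% free for overhead / usage spikes

# Existing workload (peaks from the 2-week capacity analysis)
existing_cpu_ghz = 8 * 1.5                       # 12 GHz
existing_ram_gb = 4 * 36 + 4 * 32                # 272 GB

# New workload, assumed to run at 50% of its configured size
new_cpu_ghz = (20 * 2 + 10 * 1 + 1 * 12) * 3.0 * 0.5    # 93 GHz
new_ram_gb = (20 * 16 + 10 * 8 + 1 * 96) * 0.5          # 248 GB

need_cpu = existing_cpu_ghz + new_cpu_ghz        # 105 GHz
need_ram = existing_ram_gb + new_ram_gb          # 520 GB

for hosts in (HOSTS, HOSTS - 1):                 # full cluster vs. one host down (N+1)
    avail_cpu = hosts * HOST_GHZ * HEADROOM
    avail_ram = hosts * HOST_RAM_GB * HEADROOM
    verdict = "OK" if (avail_cpu >= need_cpu and avail_ram >= need_ram) else "short"
    print(f"{hosts} hosts: {avail_cpu:.0f} GHz / {avail_ram:.0f} GB available, "
          f"need {need_cpu:.0f} GHz / {need_ram:.0f} GB -> {verdict}")
```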

  19. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #118
    Nice write-up .. But since the hosts are all the same size, surely you could still go with the N+1 (host failures tolerated) admission control policy?

    What % would you set for admission control? 12.5%? 25%?

  20. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #119
    Quote Originally Posted by jibbajabba View Post
    Nice write-up .. But since the hosts are all the same size, surely you could still go with the N+1 (host failures tolerated) admission control policy?

    What % would you set for admission control? 12.5%? 25%?
    Admission control percentage at 13%, like I said, which is basically 12.5% rounded up. With 25% you get an N+2 solution. I understand what you mean (I thought of that), and I remember having seen a table somewhere that said: with X number of hosts in the cluster, it's best to have Y number of failover hosts. I've been looking for that table since, though.
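    The 13% and 25% figures fall straight out of the percentage math; for example:

```python
# Percentage-based admission control: reserve ceil(100 * failover_hosts / hosts) percent.
import math

hosts = 8
for failover in (1, 2):                    # N+1 and N+2
    pct = math.ceil(100 * failover / hosts)
    print(f"N+{failover} on {hosts} hosts -> reserve {pct}% of CPU and memory")
# N+1 on 8 hosts -> reserve 13%
# N+2 on 8 hosts -> reserve 25%
```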

  21. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #120
    What if we build this scenario out? Do something which relates to all areas of an infrastructure design:

    -> Compute
    -> Network
    -> Storage
    -> Management
    -> Guest VM

    Perhaps that's an idea? We can discuss pros and cons per (high-level) solution?

  22. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #121
    Great answer and great idea mate. Let's do that.

    Just FYI - Gregg Robertson has something very similar going on his website www.saffageek.co.uk, worth checking that out too.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  23. Senior Member tomtom1's Avatar
    Join Date
    Feb 2014
    Posts
    374

    Certifications
    JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #122
    I know, but he never finished it, so that's a real shame.

    @jibbajabba: This was the table I mentioned; found it on Josh Odgers' website.


    So we should be good with a percentage of 13%, seeing as in his opinion N+2 only becomes relevant at the 9th host.
    Last edited by tomtom1; 04-04-2014 at 06:29 AM.

  24. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #123
    Wonder what the rationale is for N+2 at the 9th host? More hosts = more chances of failure? Or just a cover-your-arse kind of thing?
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  25. Google Ninja jibbajabba's Avatar
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #124
    Quote Originally Posted by Essendon View Post
    Wonder what the rationale is for N + 2 at the 9th host? More hosts = more chances of failure? Or just cover your arse kinda thing?
    To make sure you can take one host out / into maintenance mode and still have at least N+1? I've noticed a lot of the time that clusters are nice and N+1, but are technically at risk if you need to perform maintenance on or upgrade a host.

  26. VCDX in 2017 Essendon's Avatar
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #125
    Yep, that'll be why, and it makes sense - but then why not have N+2 always (based on Josh Odgers' reasoning)?
    Last edited by Essendon; 04-04-2014 at 08:53 AM.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com
