Results 126 to 150 of 408
  1. Essendon (VCDX in 2017)
    Join Date: Sep 2007
    Location: Melbourne
    Posts: 4,489
    Certifications: VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #126
    Question 19:

    Continuing from Question 18 above, the time has come for you to look at the networking aspect of things. You need to architect a solution based on the following:

    Constraints:

    - No money to buy additional cards
    - Use the existing 2 x 10 GbE cards

    Assumptions:

    - Assume the top-of-rack switches can do 10GbE. The top-of-rack switches are Brocades.

    Requirements:

    - Ease of management. The company's vSphere admins would like something that reduces the amount of work they do
    - Ensure that one type of traffic doesn't overwhelm the others. The admin team would like to set limits on the various traffic types
    - Use Jumbo Frames
    - Be able to view upstream info

    Architect a networking solution for them based on the above. Make fair assumptions where necessary.

    Side questions

    - Would you use multi-NIC vMotion here?
    - Are Limits a good idea? Suggest alternatives, if any
    - Can they enable beacon probing here?
    - What network failover detection mechanism (within vSphere) should they use?
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  3. tomtom1 (Senior Member)
    Join Date: Feb 2014
    Posts: 374
    Certifications: JNCIP-SP, JNCIS-SP, CCNP, VCAP5-DCA, VCP5, MCITP 2008 SA, CCNA
    #127
    Alrighty, here we go again.

    Additional assumptions:
    -> Top of rack switches support LLDP
    -> vSphere Enterprise Plus licenses are in place
    -> There are 2 top of rack switches, to achieve both a higher performance and redundancy in the network design.
    -> Top of rack switches support Jumbo Frames

    Logical network design:
    [Attachment: Untitled Diagram.jpg]

    Technical solution:
    Create a distributed vSwitch to ensure ease of management and meet the requirement to be able to view upstream information from non-Cisco devices (LLDP). This type of discovery is only supported on the distributed vSwitch.

    Enable Jumbo frames end-to-end (DvSwitch -> Physical Switches -> Storage Array).

    With the DVS in place, you can use Network I/O Control and either create custom user-defined network resource pools or edit the standard ones. By using shares instead of limits, you ensure priority is given to the right resource pools only in case of contention. If there is no contention and sufficient bandwidth is available, all traffic types can simply use the bandwidth they need.

    You could configure it so that in times of contention, both the IP storage and the VM traffic resource pools get 50% (together) and the rest is split over the remaining pools. A logical example of this:

    [Attachment: Resource Pools.png]
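To make the shares-versus-limits behaviour concrete, here is a minimal, illustrative sketch. The numbers and the `nioc_allocate` helper are made up, and real NIOC also redistributes unused entitlement between pools; this only shows the core idea that shares bite under contention only:

```python
def nioc_allocate(capacity_gbps, pools):
    """Toy model of share-based allocation: shares only matter under contention.

    pools maps a traffic type to (shares, demand_gbps)."""
    total_demand = sum(demand for _, demand in pools.values())
    if total_demand <= capacity_gbps:
        # No contention: every traffic type gets exactly what it asks for.
        return {name: demand for name, (_, demand) in pools.items()}
    # Contention: the link is divided in proportion to the configured shares.
    total_shares = sum(shares for shares, _ in pools.values())
    return {name: capacity_gbps * shares / total_shares
            for name, (shares, _) in pools.items()}

# 16 Gbps of demand on a 10 GbE uplink, so the shares kick in:
split = nioc_allocate(10, {"vm": (100, 6), "ip_storage": (100, 6), "vmotion": (50, 4)})
# vm and ip_storage each end up with 4 Gbps, vmotion with 2 Gbps
```

A limit of, say, 3 Gbps on vMotion would cap it even on an idle link, which is exactly the drawback shares avoid.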

    When vMotion runs over a 10 GbE network and the adapter is detected as 10 Gbit, vSphere allows 8 simultaneous vMotions. That means vMotion can saturate a full 10 GbE link, which leads to the following decision:

    Don't use multi-NIC vMotion, and use load balancing (teaming) based on physical NIC load. When a big vMotion event occurs (DRS or maintenance mode), vSphere should be smart enough to route the vMotion traffic through one dvUplink and all other services through the other, avoiding contention and the need for the share mechanism to kick in.

    Beacon probing is a method to identify link failure by sending out broadcast frames for link testing. The other uplinks in the same broadcast domain receive these frames as well and can keep checking this way whether the network is healthy.

    In a situation with only 2 uplinks, beacon probing cannot identify the failed uplink with 100% accuracy, since there are only 2 nodes. A third one (call it a witness, if you will, like Microsoft does) can help identify which NIC (path) has failed. So I'd leave it at the default (link status only) here.
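The two-uplink ambiguity can be shown with a tiny model. This is purely illustrative - the `candidates` helper and the observation model are hypothetical, not vSphere internals: a beacon sent by uplink S arrives on uplink R only if both are healthy, and we ask which single failures would explain what was observed.

```python
from itertools import permutations

def candidates(uplinks, failed):
    """Return every single-uplink failure that explains the observed beacons.

    Model: a beacon sent by S is heard on R iff both S and R are healthy."""
    def beacons_with(dead):
        return {(s, r) for s, r in permutations(uplinks, 2)
                if s != dead and r != dead}
    observed = beacons_with(failed)
    return {suspect for suspect in uplinks
            if beacons_with(suspect) == observed}

# With 2 uplinks the observation is identical whichever one died:
print(candidates(["uplink1", "uplink2"], "uplink1"))        # ambiguous: both remain suspects
# A third uplink acts as the witness and pins the failure down:
print(candidates(["uplink1", "uplink2", "uplink3"], "uplink1"))
```

With two uplinks, every beacon path touches the failed NIC, so the surviving uplink only knows that beacons stopped, not whose fault it is; the third uplink breaks the symmetry.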

  4. Essendon (#128)
    Comprehensive answer (as always!) here already, I'll pitch in with a couple more lines.

    As for limits, they are almost never a good idea. Instead, shares should be used to ensure a fair distribution of resources when there's contention.

    Repeating what's been said already: jumbo frames MUST be enabled end-to-end. Otherwise you'll find yourself scratching your head trying to work out why the heck you can't see them on a device.

  5. tomtom1 (#129)
    A bit lighter than a complete design:

    Question 20

    Your voice team has recently asked for a piece of Cisco software, which has been installed as a VM in your environment. Following a health check performed by Cisco, it was concluded that, in line with their best practices, CPU affinity needs to be configured on the VM.

    1) Which disadvantages does this give you when you look at the availability, scalability and performance of this VM?
    2) Where would you configure this? If you don't see this option, why not?

  6. kj0 (Apple and VMware)
    Join Date: Apr 2012
    Location: Brisbane, Australia
    Posts: 733
    Certifications: vExpert x 4 | Apple Mac OS X Associate | Cert III - IT
    #130
    1) Which disadvantages does this give you when you look at the availability, scalability and performance of this VM?
    DRS/HA/vMotion will not be able to run because of the CPU affinity.

    2) Where would you configure this? If you don't see this option, why not?
    You will not be able to set CPU affinity if you have DRS enabled.

    2017 Goals: VCP6-DCV | VCIX
    Blog: http://readysetvirtual.wordpress.com

  7. tomtom1 (#131)
    Quote Originally Posted by kj0 View Post

    DRS/HA/vMotion will not be able to run due to having CPU affinity.

    You will not be able to set CPU affinity if you have DRS enable.
    A few small discrepancies:

    1) HA and CPU affinity are, luckily, supported: CPU Affinity and vSphere HA

    2) Only DRS in fully automated mode will hide the affinity settings; other modes (e.g. Manual or Partially Automated) still show the option.

  8. kj0 (#132)
    Quote Originally Posted by tomtom1 View Post
    A few small discrepancies:

    1) HA and CPU affinity are, luckily, supported: CPU Affinity and vSphere HA

    2) Only DRS in fully automated mode will hide the affinity settings; other modes (e.g. Manual or Partially Automated) still show the option.
    Mike Laverick was having this issue the other week on Twitter.

  9. tomtom1 (#133)
    Quote Originally Posted by kj0 View Post
    Mike Laverick was having this issue the other week on Twitter.
    Ah, I see it. Remember: only DRS in fully automated mode hides the CPU affinity option. Have you ever had the pleasure of meeting Mike in person? Really great guy!

  10. kj0 (#134)
    Quote Originally Posted by tomtom1 View Post
    Have you ever had the pleasure of meeting Mike in person? Really great guy!
    I've got many, many of his videos (VMUGs and whatnot). Going through his Back-To-Basics series at the moment to get another view of a 5.5 setup, to cover everything for my exam. I've had quite a few sessions with him on Twitter. To be honest, it's one of my goals to eventually meet him as a fellow vExpert, just to say "Thanks" to him. A beer with him will suffice.

  11. Essendon (#135)
    Mad respect for Mike. His SRM book is beyond awesome!

  12. kj0 (#136)
    Quote Originally Posted by Essendon View Post
    Mad respect for Mike. His SRM book is beyond awesome!
    Not just SRM, his End User Computing book as well: Building End-User Computing Solutions with VMware View by Mike Laverick (eBook, proceeds to UNICEF).

  13. Essendon (#137)
    Question 21


    Circling back to you being the Virtualization Architect for your company: now pull your socks up and do a storage design for them.


    Constraints:


    - Reuse the current FC active-active storage array
    - Maximum number of datastores = 256/host
    - The backup software can only handle 10 backup jobs at a time

    Assumptions:

    - VM reservations are not to be used, where possible
    - VAAI-enabled array
    - A VADP-based backup solution is in use
    - Dual HBA cards in every host (each host running vSphere 5.0)

    Requirements:

    - Minimize replication traffic and backup times

    - Avoid de-duplication where possible

    - The admin team would like a way to simply pick suitable storage for particular VMs based on the VMs' performance needs

    - There is going to be an MSCS cluster in the mix
    • Assign appropriate RDMs to the nodes
    • Assign an appropriate multipathing method
    • They must be on separate hosts at all times
    - There are going to be some heavy-hitting SQL and Exchange VMs in the mix. Discuss their placement keeping in mind the expected high I/O

  14. tomtom1 (#138)
    Additional assumptions
    -> vSphere Enterprise Plus licenses (a requirement for profile-driven storage) are in place.


    - The admin team would like there to be a way where they can just pick suitable storage for particular VM's based on VM's performance needs.


    That is an immediate use case for Profile Driven Storage.

    Is the MSCS virtual - virtual or virtual - physical?

    I don't have a complete answer ready now, will need to do some more reading, but I'll let others kick in where I left off.

  15. Essendon (#139)
    The MSCS cluster is a virtual cluster.

  16. tomtom1 (#140)
    Quote Originally Posted by Essendon View Post
    The MSCS cluster is a virtual cluster.
    A few more assumptions:
    -> Customer is using vSphere 5.5
    -> Redundant paths are available to the FC storage array
    -> DRS is available (vSphere Enterprise minimum)

    Logical storage design:
    [Attachment: ESXi Host.png]

    Technical Solution
    Since the requirements lay out that a virtual-only MSCS cluster is being used, the most flexibility can be achieved by using virtual mode RDMs. With virtual mode RDMs, we keep some of our VMware advantages (such as snapshots) whilst still allowing a direct connection to the LUN.

    Based on our assumption that the customer is on vSphere 5.5, we can use Round Robin as the multipathing policy (active-active FC SAN) for the RDM-based disks. Seeing as RR as a path selection policy addresses 2 of the 5 AMPRS qualities (Availability and Performance), this is the best choice. It protects the customer against the loss of a path, and while both paths are available, the workload is balanced across them.

    Since the requirements state the VMs need to be separated at all times, we configure a DRS anti-affinity rule to keep them apart.

    Group the VMs on the datastores (clusters) by RTO/RPO and by SLA. In doing so, you match the availability and recoverability infrastructure qualities.

  17. Essendon (#141)
    Anyone else wishing to chuck in their ideas before I type out a comprehensive answer? Just throw in your ideas!

    You can talk about replication times, backup times, de-dupe and performance-based placement of your heavy-hitting VMs (Tom has already hinted at this).

  18. tomtom1 (#142)
    I'll hold off on my part of today's question (which again is a bit more administrator-oriented) until you (or someone else) has posted a more complete answer.

  19. Essendon (#143)
    Unable to write a detailed answer today, it's been an incredibly busy day at work... tomorrow, yes!
    Last edited by Essendon; 04-09-2014 at 01:41 AM.

  20. Essendon (#144)
    Continuing the technical solution tomtom1's already presented -

    Minimizing replication and backup times - Note how the assumptions state that VM reservations are not to be used; this means every VM will have a .vswp file the size of its configured memory. This becomes more significant when you think of the heavy VMs, your SQL and Exchange VMs. Say they have 96GB RAM each: that in turn means a 96GB .vswp file, plus the overhead swap file (many people overlook the overhead, DON'T!). To minimize replication and backup times, put the swap files and the VMs' own page files on a separate, non-replicated datastore. Think of a failover scenario: the swap files are recreated when VMs are powered on at the destination, so there's no point in replicating them.

    Pros - lower replication times, lower backup times.
    Cons - more administration (mostly a one-off).

    The pros far outweigh the cons, so there you go.
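A quick back-of-the-envelope check of those savings. The `vswp_gb` helper is hypothetical, the sizes are from the example above, and the per-VM memory overhead swap is ignored for simplicity:

```python
def vswp_gb(configured_mem_gb, reservation_gb=0):
    """The .vswp file is sized at configured memory minus the memory reservation."""
    return configured_mem_gb - reservation_gb

# Ten 96 GB SQL/Exchange VMs with no reservations:
saved_gb = 10 * vswp_gb(96)      # 960 GB of swap that never needs replicating
# A 32 GB reservation per VM would shrink each .vswp to 64 GB instead.
```

Nearly a terabyte of data excluded from every replication cycle and backup window, just by relocating swap.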

    Avoiding de-dupe - Again, put your swap and page files on separate datastore(s) and disable de-dupe on the associated LUNs. De-dupe ratios are going to be very low (probably zero gains) on such LUNs, hence no need for it.

    Profile-driven storage - This is a very good use case for storage profiles; check Tom's explanation of it. The only problem I have seen is actually enforcing it. Some admins have been known to chuck VMs and vmdks wherever, so you can spin up a PowerCLI script that detects whether any VMs are non-compliant.
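In practice that would be a PowerCLI script against vCenter; the detection logic itself is simple enough to sketch (Python here for illustration - the inventory structure and the `non_compliant` helper are hypothetical, not a VMware API):

```python
def non_compliant(vms, datastore_tier):
    """List VMs that have a disk on a datastore outside their assigned storage profile.

    vms: {vm_name: {"profile": tier, "datastores": [datastore names]}}
    datastore_tier: {datastore_name: tier}"""
    return sorted(
        name for name, info in vms.items()
        if any(datastore_tier.get(ds) != info["profile"]
               for ds in info["datastores"])
    )

inventory = {
    "sql01":  {"profile": "gold",   "datastores": ["Gold_FC_01"]},
    "file01": {"profile": "bronze", "datastores": ["Gold_FC_01"]},  # wrong tier
}
tiers = {"Gold_FC_01": "gold", "Bronze_SATA_01": "bronze"}
print(non_compliant(inventory, tiers))   # ['file01']
```

Run on a schedule, this gives the admins a report of drift instead of relying on everyone placing vmdks correctly by hand.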

    MSCS clustering - The customer has an active-active array. Multipathing can be Round Robin or Fixed; it's best to check with your array vendor for the preferred method. One of my clients has a 3PAR: before upgrading the OS on their 3PAR, their multipathing was set to Fixed - after the upgrade, however, they were asked to change it to RR. So check with your storage vendor.
    • vRDM - you retain some of the VMware functionality; snapshots are an immediate thought. vFRC supports vRDMs.
    • pRDM - SAN-based snapshots/mirroring are an immediate thought. Another one: vFRC doesn't support pRDMs.
    Again, go with your customer's requirements. In this question you have a virtual-virtual cluster, so you'd likely go with vRDMs. Be aware of the size limitations of vRDMs vs pRDMs (if on vSphere 5.0).

    It's almost a no-brainer to separate your MSCS cluster nodes onto different hosts. Utilize DRS "should" rules. If you used "must" rules, HA would respect them: if the hosts a node is pinned to aren't up, HA will not power up your VMs. You may think: what if HA starts them up on the same host? Well, that's not a problem - your trusty friend DRS will move one node to another host to respect your "should" rule.

    Here's a must-read article on MSCS clustering > VMware KB: Microsoft Clustering on VMware vSphere: Guidelines for supported configurations

    VM placement considerations:

    Too often, storage is designed and provisioned based on size and not performance. Size is not normally an issue: you can chuck some more spindles into your disk shelves and get over your size issues. Performance, however, is more involved. You've got to find out whether your VMs will be performing more reads or writes, sequential or random. Keep in mind the RAID levels and the disks backing your LUNs. Perfmon is your friend (average disk read and write counters, for example), and so are vscsiStats and esxtop.

    Here's a good article I have bookmarked > Troubleshooting Storage Performance in vSphere
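One concrete piece of that performance homework is the RAID write penalty: each front-end write costs several back-end I/Os depending on the RAID level. A small sketch using the standard rule-of-thumb penalties (the `backend_iops` helper is made up for illustration):

```python
# Typical write penalties: back-end I/Os consumed per front-end write.
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def backend_iops(read_iops, write_iops, raid_level):
    """Back-end disk IOPS a workload really needs, given the RAID write penalty."""
    return read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]

# A 2000 IOPS workload at 70% reads on RAID 5:
need = backend_iops(1400, 600, 5)    # 1400 + 600 * 4 = 3800 back-end IOPS
# The same workload on RAID 10 needs only 1400 + 600 * 2 = 2600.
```

This is why a write-heavy SQL VM sized purely on capacity can fall flat: the spindle count behind the LUN has to cover the penalized write load, not just the front-end number.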

    Generally though you'd want to separate your heavy VM's into their own datastores. Let's make this an interactive discussion.

  21. tomtom1 (#145)
    Question 22:

    You have a Storage DRS cluster, but far too often you see storage migrations being performed during working hours. Your storage is fast enough, but still, this is an unwanted situation. Can you prevent this and, if so, how would you configure Storage DRS to do so?

  22. jibbajabba (Google Ninja)
    Join Date: Jun 2008
    Location: Ninja Cave
    Posts: 4,240
    Certifications: TechExam Certified Alien Abduction Professional
    #146
    Quote Originally Posted by tomtom1 View Post
    Can you, and if possible at all how would you configure Storage DRS to behave like this?
    I presume you mean how to avoid situations like this ...

    I'd change the Storage DRS schedule so it runs outside office hours.


    My own knowledge base made public: http://open902.com

  23. Essendon (#147)
    Question 23

    Continuing Question 22: the company plans to implement Storage DRS for their vSphere cluster. This vSphere 5.0 environment also has an SRM implementation protecting a variety of workloads, namely file services, Exchange, SQL and some proprietary applications. The company has a DR datacenter in another city, to which they replicate their LUNs for recovery purposes.

    The file services are sitting in the following datastores:
    • File_DC1_FC_01
    • File_DC1_FC_02
    • File_DC1_FC_03


    The applications VM's are sitting in the following datastores:
    • App_DC1_FC_01
    • App_DC1_FC_02
    • App_DC1_FC_03


    Only the first 2 datastores for both the file services and applications VMs have been picked up by the array's SRA. The company's vSphere administrator (let's call her Dorothy) has pulled the trigger and enabled Storage DRS for both the file services and applications VMs (creating Datastore_Cluster1 and Datastore_Cluster2). Dorothy has enabled all options, so the clusters are now balanced by utilized space and I/O. She's very happy with the results, with all six datastores sitting at reasonable utilization levels, and the clusters now intelligently place VMs when her server admins build them. The server admins are completely gung-ho over this new development; one even took her out to dinner one night.

    As luck would have it, a cyclone comes along and flattens her primary datacenter. But Dorothy is unperturbed by this natural calamity and reaches her recovery site. There are lots of worried faces that meet her there, but their faces light up the moment they see her. "Dorothy, the saviour's here!!" - they say. She goes into the SRM console and fires up her Recovery Plan.

    Will she be able to recover all file services and applications VM's and why?

    Will Dorothy be the hero of the day or will she be fired and never seen again? You tell me!
    Last edited by Essendon; 04-10-2014 at 06:11 AM.

  24. jibbajabba (#148)
    She likely has to update her CV very quickly, as she will soon notice the 'not configured' error on her protection group. Storage vMotion of protected VMs is not supported with SRM on 5.0; support was only introduced in 5.5.

    She might be lucky and recover some VMs which weren't moved from their original location after protection was enabled.

    SRM 5.0 is simply not aware of the location of a VM once it has moved.

    I remember big outcries back in the day; it was originally said to be supported but then quickly changed to not supported.

    Edit: yep, even Duncan fell into that trap:

    http://www.yellow-bricks.com/2011/09...s-yes-it-does/
    Last edited by jibbajabba; 04-10-2014 at 07:13 AM.

  25. tomtom1 (#149)
    Additional assumption:
    -> Dorothy has enabled SDRS in fully automated mode (all options)

    Also, an impact of using SRM is that you group VMs based on RTO/RPO, and the SRA replicates on a per-datastore basis. Seeing as you have less control over VMDK placement - SDRS balances on capacity and latency, not RPO/RTO - it is most probable that Dorothy will not be able to save the day. Too bad, lol.

  26. Essendon (#150)
    Yep, Dorothy will soon be placing her resume on TE for a critique.

    Great answers gents.
