
Thread: 2 boxes vs 1

  1. Member
    Join Date
    Feb 2011
    Posts
    71
    #1

    2 boxes vs 1

    I really want to start learning VMware and am looking at setting up a lab. However, I am having a hard time deciding if I should build one powerful desktop to run nested VMs on, or build two physical boxes running ESXi.

    The cost is going to be about the same, so I'm really trying to figure out which would be more beneficial. This will be for lab purposes only and more than likely powered off or suspended when not in use.

    Does anyone have any advice?

  3. VCDX in 2017 Essendon
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #2
    Build one decent computer and save on power. 16GB of RAM, an i5/i7 and an SSD or two - this is a good combo for labbing ESXi.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  4. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #3
    I'm doing the same thing, and one thing I realized real quick is that if the lab is running ESXi on the metal then there needs to be more than two boxes. You'll need the two ESXi servers but also at least one storage server and a management computer. Do you intend to run the file servers on a computer you already have?

    I don't have any real advice to offer, just that I realized running ESXi on the metal will likely mean a minimum of four computers in a lab network. I've been warned about the network usage in an on-the-metal ESXi lab - do you have gigabit Ethernet on your computers?

    I'm debating which path to take as well. I'm leaning towards a single-box solution to avoid hardware compatibility headaches. With ESXi running in a virtual environment, the issues of having the right NIC, drive controller, or even the right keyboard all disappear.

  5. Senior Member
    Join Date
    Apr 2012
    Location
    Sahuarita AZ
    Posts
    472

    Certifications
    MCSE
    #4
    As long as you have CPU cores, RAM, and spindles you are good. If you can do that with one computer, then that is a good way to go. If you have the budget to set up iSCSI on some sort of SAN, then go with two boxes so you can practice migrations and imports.

  6. Member
    Join Date
    Feb 2011
    Posts
    71
    #5
    Thanks for the quick replies. I guess I should include some more info about my current setup.

    At the moment I have a Synology NAS (it can do iSCSI and NFS) and I can dedicate 2 bays just to VMs. I was planning on putting SSDs in them. As for the network, there are currently some open ports on a managed gigabit switch that I can dedicate to these computers. While I do have another computer that I can use for management/daily purposes, I would also like the lab available via VPN since I get some downtime at work during the later hours.

    As for specs of the boxes:
    Nested VM box: i7 3770, 32GB RAM
    vs.
    2 physical boxes: i5 2400, 16GB RAM
    Last edited by mayhem87; 06-13-2012 at 06:12 AM.

  7. VCDX in 2017 Essendon
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #6
    For a SAN, you can run StarWind's free iSCSI SAN, which installs on a Windows VM and works without a problem. You can do migrations and imports and whatever else to your heart's content. You really don't need more than 1 box.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  8. Not IT n00b dave330i
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,945

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #7
    You can buy a server (Dell, HP, etc.) or two off eBay pretty cheap.
    2017 Certification Goals: Fun filled world of AWS
    "Simplify, then add lightness" -Colin Chapman

  9. Member
    Join Date
    Feb 2011
    Posts
    71
    #8
    Think I'm going to go with nested and also use the computer for GNS3.

  10. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #9
    Quote Originally Posted by Essendon View Post
    Build one decent computer and save on power. 16GB of RAM, an i5/i7 and an SSD or two - this is a good combo for labbing ESXi.
    I've seen a spec list like this many times; the SSD seems to be a critical component for speed in a nested VM lab. One question though: if going with a multiple-box lab with ESXi on the metal, are SSDs just as critical?

  11. Not IT n00b dave330i
    Join Date
    May 2011
    Location
    NoVA
    Posts
    1,945

    Certifications
    VCIX-NV, VCAP5-DCD/DCA, VCAP-CID/DTD, VCP4, Security+, MCTS-640, EMCISA, ITILv3 Foundation
    #10
    Quote Originally Posted by MacGuffin View Post
    I've seen a spec list like this many times; the SSD seems to be a critical component for speed in a nested VM lab. One question though: if going with a multiple-box lab with ESXi on the metal, are SSDs just as critical?
    Usually the 2 biggest performance bottlenecks are RAM & IOPS. SSDs have ~2000 IOPS while a 15k SAS drive is around 180 IOPS.
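
    To put those numbers in perspective, here's a rough back-of-the-envelope sketch in Python. The per-VM IOPS figure is an assumed ballpark for a mostly idle lab VM, not a measurement:
    Code:
    # Rough capacity estimate: how many lightly loaded lab VMs a single device
    # can feed before disk becomes the bottleneck. Figures are ballpark values.
    SSD_IOPS = 2000        # as quoted above; many consumer SSDs do far more
    SAS_15K_IOPS = 180     # a single 15k SAS spindle
    IOPS_PER_LAB_VM = 30   # assumed average demand of a mostly idle lab VM

    for name, iops in (("SSD", SSD_IOPS), ("15k SAS", SAS_15K_IOPS)):
        print(f"{name}: roughly {iops // IOPS_PER_LAB_VM} lab VMs per device")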
    2017 Certification Goals: Fun filled world of AWS
    "Simplify, then add lightness" -Colin Chapman

  12. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #11
    Quote Originally Posted by dave330i View Post
    Usually the 2 biggest performance bottlenecks are RAM & IOPS. SSDs have ~2000 IOPS while a 15k SAS drive is around 180 IOPS.
    Right - since SSDs don't have to move a read head across a spinning platter, seek times are essentially zero. That means the number of operations they can perform in a given time can be much higher.

    What I'm considering is getting three or four small servers: two would be ESXi boxes, one a file server, and any beyond that might switch roles between file server and ESXi box depending on need. I can get these servers with an SSD or an HD, but there is a difference in price, performance, and size. What kind of performance boost could I expect with an SSD? Is this performance boost worth the money and/or loss in drive space?

    I realize much of what I am asking is subjective, but I have to start somewhere. I suppose I could get one with an HD and another with an SSD and test them out myself.

  13. VCDX in 2017 Essendon
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #12
    The performance difference is quite significant - you'll have your VMs booting up in like 15 seconds. You don't need to assign too much disk to your VMs anyway; just go with a gig or so more than what the OS requires.

    While I agree that by researching you learn more, I reckon you should keep it simple and just get a desktop machine that runs an i5/i7 with 16GB or more of RAM and an SSD or two. Run nested ESXi and a StarWind iSCSI SAN in a VM on the host ESXi and you're good to go. You can create any number of vNICs on your nested ESXi VMs and play with vSwitches and dvSwitches as much as you want. That way you're not strapped by the number of pNICs you have on your physical machine. The space and power savings are a no-brainer too.
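
    If you'd rather script some of that vSwitch tinkering than click through the client, here's a minimal sketch using the pyVmomi Python SDK. The hostname, credentials, and switch/port group names are placeholders, and the unverified SSL context is a lab-only shortcut:
    Code:
    # Minimal sketch: create a standard vSwitch and a port group on an ESXi host
    # with pyVmomi. Hostname, credentials and object names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
    si = SmartConnect(host="esxi-lab.local", user="root", pwd="changeme", sslContext=ctx)

    # Connected straight to a host there is one datacenter / compute resource.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net = host.configManager.networkSystem

    # A vSwitch with no physical uplinks is fine for nested-lab traffic.
    net.AddVirtualSwitch(vswitchName="vSwitch-nested",
                         spec=vim.host.VirtualSwitch.Specification(numPorts=128))

    # Port group that the nested ESXi VMs' vNICs will attach to.
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="nested-esxi-net", vlanId=0, vswitchName="vSwitch-nested",
        policy=vim.host.NetworkPolicy()))

    Disconnect(si)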
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  14. Senior Member MentholMoose
    Join Date
    Sep 2009
    Location
    CA
    Posts
    1,550
    #13
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.
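
    As a concrete illustration of that layout, here's a small sketch (again pyVmomi, with placeholder hostnames and paths) that points a diskless ESXi host at an NFS export on the NAS so all VMs live there rather than on a local datastore:
    Code:
    # Sketch: mount an NFS export from the NAS as a shared datastore on a
    # diskless ESXi host. Hostnames and paths below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esxi-lab.local", user="root", pwd="changeme",
                      sslContext=ssl._create_unverified_context())
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    host.configManager.datastoreSystem.CreateNasDatastore(
        spec=vim.host.NasVolume.Specification(
            remoteHost="nas.lab.local",     # the Synology / file server
            remotePath="/volume1/vm-lab",   # exported path (placeholder)
            localPath="nas-vm-lab",         # datastore name as seen by the host
            accessMode="readWrite"))

    Disconnect(si)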

  15. Senior Member nhan.ng
    Join Date
    Jul 2010
    Location
    Orange County, CA
    Posts
    184

    Certifications
    CompTIA A+, Network+, Project+, Security+, 70-680, MCDST
    #14
    I'm gonna go with 2 boxes plus a storage box. This is how most companies have their setup. You can play with the networking portion to see how it affects your VM performance. There's a lot more that goes into keeping everything running smoothly - troubleshooting network performance, failover setup, etc. - than just ESXi itself.

  16. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #15
    Quote Originally Posted by Essendon View Post
    The performance difference is quite significant - you'll have your VMs booting up in like 15 seconds. You don't need to assign too much disk to your VMs anyway; just go with a gig or so more than what the OS requires.

    While I agree that by researching you learn more, I reckon you should keep it simple and just get a desktop machine that runs an i5/i7 with 16GB or more of RAM and an SSD or two. Run nested ESXi and a StarWind iSCSI SAN in a VM on the host ESXi and you're good to go. You can create any number of vNICs on your nested ESXi VMs and play with vSwitches and dvSwitches as much as you want. That way you're not strapped by the number of pNICs you have on your physical machine. The space and power savings are a no-brainer too.
    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?

    Other than my confusion on the need or desire for dual SSDs I agree with your points.

    Quote Originally Posted by MentholMoose View Post
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.
    Using USB sticks as the only persistent storage is intriguing in many ways. In my search I have not seen any servers in my price range that lack storage. As pointed out in other threads I've started, I'm reluctant to build a server for many reasons.

    How does booting from a USB stick compare to HD and SSD when it comes to performance? I assume it lies somewhere in between. Anything else I should know about stripping out the SATA drives before I grab my screwdriver?

    One idea that just crossed my mind is that I could order two, three, or more servers with identical specs and move all the drives into the file server. This could be a nice way to keep my costs low and performance high. Preconfigured systems tend to cost less than customized systems, even if that means removing one drive from one system and adding an identical drive to another.

    Quote Originally Posted by nhan.ng View Post
    I'm gonna go with 2 boxes plus a storage box. This is how most companies have their setup. You can play with the networking portion to see how it affects your VM performance. There's a lot more that goes into keeping everything running smoothly - troubleshooting network performance, failover setup, etc. - than just ESXi itself.
    I'm tending to agree here. There's more to managing a virtual machine network than just setting up ESXi; there's hardware to manage as well. In a completely virtual environment there's something lost.

    I'm not sure how the cost difference between the two setups works out. I'm looking at two or three small servers for about $2500, each with 2GB or 4GB RAM, a dual core i5 or so, and a small HD. Add in a display, a KVM switch of some sort, an Ethernet switch, USB sticks, and maybe some other stuff and it adds up to the $3000 range. On the other hand I could go with a single desktop computer: quad core i7, 16GB RAM, an SSD in the 200 - 500GB range, display, keyboard, and other stuff, and it also adds up to the $3000 range. A laptop would be about the same price as well, but I'd lose a bit on processor speed, drive space, screen size, and maybe other areas, and gain in portability, power consumption, noise, and convenience. With some sacrifice in performance I could probably bring the price for any option down to about $2000, but I don't believe I'd want to go any lower than that.

    I'm thinking about a new laptop if only because this project gives me an excuse to replace my current one, which is starting to have issues - it's just plain getting worn out.

  17. VCDX in 2017 Essendon
    Join Date
    Sep 2007
    Location
    Melbourne
    Posts
    4,489

    Certifications
    VCIX-NV, VCAP5-DCD/DTA/DCA, VCP-5/DT, MCSA: 2008, MCITP: EA, MCTS x5, ITIL v3, MCSA: M, MS in Telecom Engg
    #16
    Quote Originally Posted by MacGuffin View Post
    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?

    By two I meant spread the load around, two independent volumes. Just more space. A lappie is not a bad idea too. RAM and disk IOPS are really what you need to take into consideration; any solution would do.
    VCDX: DCV - Round 2 rescheduled (by VMware) for December 2017.

    Blog >> http://virtual10.com

  18. Member
    Join Date
    Feb 2011
    Posts
    71
    #17
    @Macguffin
    You can build i5s for cheap.

    Here's what I built up on Newegg:
    i5 3450S (quad core) $200
    G.Skill 16GB RAM $95
    ASRock H77 mobo $90
    Case w/ 420W PSU $80

    After shipping it came to around $475. I was going to multiply this by 2 for two desktops, so $950

    or

    The one powerful desktop:
    i7 3770S (quad core + HT) $320
    G.Skill 16GB RAM $220
    ASRock H77 mobo $90
    Case w/ 420W PSU $80

    Plus shipping = $719.95

    Obviously you're still going to need some HDDs or SSDs somewhere in the mix.
    Last edited by mayhem87; 06-14-2012 at 06:13 AM.

  19. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #18
    Quote Originally Posted by Essendon View Post
    By two I meant spread the load around, two independent volumes. Just more space.
    That's what I thought you meant, just wanted to be clear on it.

    Quote Originally Posted by Essendon View Post
    A lappie is not a bad idea too. RAM and disk IOPS are really what you need to take into consideration; any solution would do.
    The laptops I'm looking at are higher end and so will have the 16GB RAM, quad core i7, and SSD that so many recommend. The real pricey part is the SSD storage. I can keep the laptop price under $3000 with a 256GB SSD; anything bigger and I can easily go over $4000. With a desktop I can keep it under $2000 if I don't get the SSD, but after seeing some demonstrations and benchmarks I don't believe I'll be very happy with spinning disks.

    My current laptop has a 500GB spinning disk. If I do without the dual boot partition and don't move over my music library I should be happy with 256GB on a new laptop for both ESXi labbing and my everyday computing.

    I'm still wondering, though: what kind of performance hit would I see from using HDs when running multiple ESXi servers? I'm guessing that since the storage would live on a separate file server in most cases, the ESXi servers won't see the performance drop directly. Would the gigabit Ethernet overhead mask any performance lost with HDs versus SSDs? I'm thinking I could also make up for some of the loss by RAID mirroring the drives in the file server. If I strip the drives from the computers running ESXi then I should have plenty of HDs to stack in the file server for a RAID.
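
    For a rough sense of whether the wire or the disk gives out first, here's a quick sketch with assumed ballpark figures:
    Code:
    # Ballpark sketch: does gigabit Ethernet hide the HD-vs-SSD gap when the VMs
    # live on a file server? All figures below are rough assumed values.
    GIGE_MB_S = 125.0   # theoretical ceiling of a 1 Gb link in MB/s
    BLOCK_KB = 4        # typical small random I/O size for VM workloads

    for name, iops in (("7200rpm HD", 100), ("15k SAS", 180), ("SATA SSD", 2000)):
        mb_s = iops * BLOCK_KB / 1024.0
        print(f"{name}: ~{iops} random IOPS = {mb_s:.1f} MB/s "
              f"({mb_s / GIGE_MB_S:.0%} of a gigabit link)")

    # Random I/O barely dents a gigabit link, so the network won't hide a slow
    # disk; only big sequential transfers get capped at ~125 MB/s either way.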

    Thanks to everyone for the help here. Lots of good stuff but still plenty to think about.

  20. Google Ninja jibbajabba
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #19
    Your vCenter will need x64 Windows anyway, so you might as well go for 2008 R2 and install the free iSCSI target - quick enough for labbing. Single-server setups tend to have the disk as the bottleneck, so an SSD is a good, needed choice.

  21. Google Ninja jibbajabba
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #20
    Oh, and for disks: I run an Adaptec 5805 with 2 arrays. 4x 300GB 15k SAS in RAID 10 gets decent IOPS. The second array uses 1TB 7200rpm spindles in RAID 10, and I am struggling with I/O when running nested VMs. So yes, storage will be the bottleneck, and I am not sure that SSDs are THAT much quicker. It all depends on how many VMs you intend to run and what those VMs will be doing. I got a SLIGHT performance gain by using eager zeroed thick disks, but that means you run out of storage quickly, since presumably your SSDs aren't the biggest, most expensive ones.

    I also run a Fusion-io card and that thing flies, but it's surely way above budget.
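
    A rough way to estimate what arrays like those can deliver (per-spindle IOPS and the 7200rpm spindle count are assumptions, not measurements from this setup):
    Code:
    # Rough RAID 10 estimate for the two arrays above. Per-spindle IOPS and the
    # 7200rpm spindle count are assumptions, not measurements from this setup.
    def raid10_iops(spindles, per_disk_iops, read_fraction=0.7):
        """Mixed-workload IOPS: every spindle serves reads, but each write
        lands on a mirror pair (write penalty of 2)."""
        reads = spindles * per_disk_iops
        writes = reads / 2.0
        return 1.0 / (read_fraction / reads + (1 - read_fraction) / writes)

    print(f"4x 15k SAS, RAID 10:      ~{raid10_iops(4, 180):.0f} IOPS")
    print(f"4x 7200rpm SATA, RAID 10: ~{raid10_iops(4, 80):.0f} IOPS")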

  22. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #21
    Quote Originally Posted by mayhem87 View Post
    @Macguffin
    You can build i5s for cheap.
    I've explained this in other threads, and I don't want to drive this thread too far off topic, so I'll just simply say that building a computer has been considered but is not likely a good option for me.

    Quote Originally Posted by mayhem87 View Post
    Obviously you're still going to need some HDDs or SSDs somewhere in the mix.
    I'll need other stuff too. I won't bore you with details, but I'll simply say a lot of my computer equipment is just plain getting old. I used to have a stack of spare mice, keyboards, and so on, but a lot of that stuff is now nearly worn out or just plain obsolete. Unless these computers you specced out have PCI slots, VGA, and PS/2 ports, I'll have to consider the price of a new display, keyboard, mouse, and perhaps a few other things. That's just part of the reason why I believe building a computer would be a poor choice for me.
    Last edited by MacGuffin; 06-14-2012 at 07:55 AM. Reason: getting tyred, can't spel

  23. Senior Member
    Join Date
    Mar 2012
    Location
    Iowa, USA
    Posts
    224
    #22
    Quote Originally Posted by jibbajabba View Post
    Your vCenter will need x64 Windows anyway, so you might as well go for 2008 R2 and install the free iSCSI target - quick enough for labbing. Single-server setups tend to have the disk as the bottleneck, so an SSD is a good, needed choice.
    Really? I need x64 Windows for the ESXi management software? I could have sworn I saw it run on x86 Windows XP. I did plan on running Win2008 Server for some stuff in the time-limited evaluation mode. I've got a stack of WinXP computers around here too that I can use for some things. Did I mention my stack of old hardware?


    Quote Originally Posted by jibbajabba View Post
    Oh, and for disks: I run an Adaptec 5805 with 2 arrays. 4x 300GB 15k SAS in RAID 10 gets decent IOPS. The second array uses 1TB 7200rpm spindles in RAID 10, and I am struggling with I/O when running nested VMs. So yes, storage will be the bottleneck, and I am not sure that SSDs are THAT much quicker. It all depends on how many VMs you intend to run and what those VMs will be doing. I got a SLIGHT performance gain by using eager zeroed thick disks, but that means you run out of storage quickly, since presumably your SSDs aren't the biggest, most expensive ones.

    I also run a Fusion-io card and that thing flies, but it's surely way above budget.
    Some of the servers I was looking at did have 15k HDs as standard equipment. I should be able to RAID the drives or otherwise spread the load among the drives somehow. Your experience makes me feel better about that option.

  24. Senior Member
    Join Date
    Aug 2008
    Posts
    3,951
    #23
    Quote Originally Posted by MentholMoose View Post
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.
    This is what I did. I just replaced my big noisy-ass DL385s with a pair of custom-built boxes: AMD FX-6100s on ASUS boards with 16 gigs of RAM and additional NICs. I already had a storage NAS (Synology 1511+), so all I needed were boxes to provide proc and mem. Built both boxes for about 900 bucks and installed ESXi 5 on USB thumb drives.

  25. Google Ninja jibbajabba
    Join Date
    Jun 2008
    Location
    Ninja Cave
    Posts
    4,240

    Certifications
    TechExam Certified Alien Abduction Professional
    #24
    Quote Originally Posted by MacGuffin View Post
    Really? I need x64 Windows for the ESXi management software? I could have sworn I saw it run on x86 Windows XP.

    The client, maybe, but not the server - that requires 64-bit and enough oomph for SQL Express.

    VMware KB: Minimum system requirements for installing vCenter Server

    With v4 you can get away with 32-bit, but v5 requires 64-bit.
    Last edited by jibbajabba; 06-14-2012 at 08:03 AM.

  26. Senior Member MentholMoose
    Join Date
    Sep 2009
    Location
    CA
    Posts
    1,550
    #25
    Quote Originally Posted by MacGuffin View Post
    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?
    Besides those possibilities, another is price. Currently there is a huge jump between 256 GB and 512 GB SSDs, so two 256 GB SSDs should be significantly cheaper than one 512 GB SSD. Now is a great time to buy a 256 GB SSD as there have been many deals lately. I've seen 256GB Crucial M4 and Samsung 830 SSDs (both are well regarded) going for $200 or less (I couldn't resist picking one up).

    Quote Originally Posted by MacGuffin View Post
    How does booting from a USB stick compare to HD and SSD when it comes to performance? I assume it lies somewhere in between. Anything else I should know about stripping out the SATA drives before I grab my screwdriver?
    Performance of what? The disk the hypervisor runs from does not need to be fast, unless you will be rebooting it constantly and really need fast boot times. And booting ESXi from a USB stick is not all that slow anyway. If you mean performance of VMs running on a USB stick, ESXi won't let you do this, and if there is some hack to allow it I assume performance would be poor. But like I said, labbing with VMs on a local datastore is not useful anyway.

    Quote Originally Posted by MacGuffin View Post
    I'm not sure how the cost difference between the two setups works out. I'm looking at two or three small servers for about $2500, each with 2GB or 4GB RAM, a dual core i5 or so, and a small HD.
    I'd aim a bit higher. 2 GB RAM is just too low to be useful, and 4 GB is better but still limiting. I recommend at least 8 GB RAM. Have you checked out Dell Outlet? Refurb Dell R210 II servers with specs like that (Core i3, 2 GB RAM, 500 GB SATA disk drive) are under $700, and a guaranteed compatible 8 GB RAM kit is $85 from Crucial. The R210 II is a nice server (I have access to some at work) and on the VMware HCL.
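
    As a rough illustration of why 2 GB runs out so fast, here's a sample RAM budget for a small lab. The per-VM allocations are assumptions; adjust to taste:
    Code:
    # Rough RAM budget for a small bare-metal vSphere lab. Per-VM allocations
    # are illustrative assumptions, not requirements from any vendor document.
    lab_vms_gb = {
        "vCenter Server (x64 Windows + SQL Express)": 4,
        "Domain controller / DNS": 2,
        "Windows test VM": 2,
        "Linux test VM": 1,
    }
    print(f"Total VM allocations: {sum(lab_vms_gb.values())} GB")
    for host_gb in (2, 4, 8, 16):
        usable = host_gb - 1   # assume ~1 GB for the hypervisor itself
        print(f"{host_gb} GB host: roughly {usable} GB left for VMs")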
