  1. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #1

    Setting up an Exchange 2003 cluster

    I'm going to be part of an Exchange 2003 cluster installation soon, and I was wondering if anybody has done this themselves in the past? I've got quite a bit of theory on how to implement this, but if somebody could direct me to a step-by-step guide on how to deploy it properly, that would be great!

    From what I've seen so far...

    * 2 network cards in each server: one for heartbeat/cluster traffic, the other for standard network traffic.

    * Enterprise Windows 2003 AND Enterprise Exchange 2003 (clustering needs the Enterprise editions of both).

    * ForestPrep has to be run if Exchange hasn't been in the forest before?
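    For reference, the AD preparation steps in that last bullet are run from the Exchange 2003 media before the first install. A quick sketch; the D: drive letter and path are just the usual media layout, so check yours:

    ```shell
    REM Run once per forest, with Schema Admins + Enterprise Admins rights:
    D:\setup\i386\setup.exe /forestprep

    REM Run once in each domain that will host Exchange (or Exchange users),
    REM with Domain Admins rights:
    D:\setup\i386\setup.exe /domainprep
    ```
    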

  3. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #2
    http://technet.microsoft.com/en-us/library/bb123612(EXCHG.65).aspx

    Shared storage is probably going to be your biggest hurdle to overcome.

  4. New Member royal's Avatar
    Join Date
    Jul 2006
    Location
    Chicago, IL
    Posts
    3,373
    #3
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/d...DisplayLang=en

    Here's the Technet Library Version:
    http://technet.microsoft.com/en-us/l.../cc778252.aspx

    Here is the "latest" documentation on how to install the MSDTC (not required in Exchange 2007):
    http://technet.microsoft.com/en-us/l.../bb124059.aspx

  5. Member
    Join Date
    Jul 2006
    Location
    Wheatfield, NY
    Posts
    46

    Certifications
    A+, Network+, MCSE - 2003
    #4
    Quote Originally Posted by royal
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/d...DisplayLang=en

    This is the same document that I used when setting up my 2-node cluster at work. Mine was for file shares with an EMC back end. This document worked great.

  6. New Member royal's Avatar
    Join Date
    Jul 2006
    Location
    Chicago, IL
    Posts
    3,373
    #5
    Quote Originally Posted by penberth
    Quote Originally Posted by royal
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/d...DisplayLang=en

    This is the same document that I used when setting up my 2-node cluster at work. Mine was for file shares with an EMC back end. This document worked great.
    Yep, it's the best clustering doc out there.

  7. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #6
    Cool, I've downloaded that and will be reading it along the way, cheers. So do I need to set up the 2-node cluster first, then install Exchange after?

  8. wibble! bertieb's Avatar
    Join Date
    Jun 2007
    Location
    Up and down the UK
    Posts
    1,029

    Certifications
    MCSE:CP&I, SI, MCITPx2, MCSAx2, MCTSx7, VCP6/5/4/3(DCV), EMCISA, Sec+, ITILv3F, legacy MS
    #7
    Quote Originally Posted by royal
    Here is the "latest" documentation on how to install the MSDTC (not required in Exchange 2007):
    http://technet.microsoft.com/en-us/l.../bb124059.aspx
    Thanks for pointing that one out, Royal. I've previously had a few small issues installing/configuring MSDTC on clusters (mostly following one of the MS links contained in that one), so I'll try this process on the next one I build to see if it smooths things out a bit.

  9. wibble! bertieb's Avatar
    Join Date
    Jun 2007
    Location
    Up and down the UK
    Posts
    1,029

    Certifications
    MCSE:CP&I, SI, MCITPx2, MCSAx2, MCTSx7, VCP6/5/4/3(DCV), EMCISA, Sec+, ITILv3F, legacy MS
    #8
    Quote Originally Posted by mr2nut
    Cool, I've downloaded that and will be reading it along the way, cheers. So do I need to set up the 2-node cluster first, then install Exchange after?
    Yep. As Royal says, refer to his links for 'installing the cluster', then refer to dynamik's link for the actual Exchange 2003 install.

  10. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #9
    Cheers, these docs are great.

    One thing that confuses me a bit...

    You have 2 identical servers with Enterprise 2003 installed, with 2 network adapters each. I would simply go for a crossover cable for the heartbeat. Then the other 2 network cards, carrying the normal traffic for the users, would need to be on a different subnet, correct? I understand that bit, but the RAID setup confuses me slightly.

    You have the OS on a SCSI RAID controller in each machine, that bit I understand, but does the cluster data itself have to live in an external RAID box (like a NAS box with 2 network adapters) so both servers can connect to it? I would assume the cluster RAID has to be external to either server, in case the server holding the cluster RAID went down, which would obviously destroy the idea of clustering.

    I know that may sound a bit confusing, but that's because I am. I don't know how else to explain what I'm getting at. Maybe a diagram of a 2-node cluster would help?

  11. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #10
    Are you doing this with VMs or physical machines? Like I said, shared storage can be difficult for home lab work. You might want to set up iSCSI in something like Openfiler and connect both machines to that.

    You can't use NAS. I believe you can use SCSI, iSCSI or FC, but I'm not 100% on that.

  12. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #11
    I'd like to do it with physical machines, but would VMs be a lot cheaper? Also, what are the benefits of each method?

    I'm still a bit lost about the whole 2-node cluster thing. Does the same data reside on each server on a separate RAID within the server, or do the servers both have to look at an external storage device via the cluster network cards?

  13. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #12
    VMs just let you get by with less physical hardware. They're just more convenient if you have a sufficiently powerful machine. If you have a couple of physical machines lying around, they'll work too. It really doesn't matter; I was just curious what your setup was like.

    Also, you don't need to use RAID. If the guides mention it, it's probably just because it's a best practice. You don't need it in a lab. The data will reside on a shared storage device. As I mentioned earlier, I think you can use SCSI, iSCSI, and FC. You can't use SMB/CIFS/NFS (NAS).

  14. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #13
    Quote Originally Posted by dynamik
    VMs just let you get by with less physical hardware. They're just more convenient if you have a sufficiently powerful machine. If you have a couple of physical machines lying around, they'll work too. It really doesn't matter; I was just curious what your setup was like.

    Also, you don't need to use RAID. If the guides mention it, it's probably just because it's a best practice. You don't need it in a lab. The data will reside on a shared storage device. As I mentioned earlier, I think you can use SCSI, iSCSI, and FC. You can't use SMB/CIFS/NFS (NAS).
    Ahh, so you can't use a RAID NAS box because they don't use SCSI, right? So the main part of getting clusters running is that it HAS to be a SCSI controller with the drive(s) on there, preferably RAID but possibly a single drive? That's cleared things up a bit now.

    Here's a quick diagram of what I THINK should be going on. Right or wrong?


  15. New Member royal's Avatar
    Join Date
    Jul 2006
    Location
    Chicago, IL
    Posts
    3,373
    #14
    Looks good.

    With Server 2003 clustering I would do the following:
    In the binding order of the NIC properties, make sure the public NIC is on top.
    In the cluster properties, for the heartbeat, make sure the NIC order has the private NIC on top, and set it to be used only for heartbeat/private traffic. Make sure the public NIC is set to mixed (it can be set to public only, but I always use mixed for fault tolerance: if something bad happens to a heartbeat NIC, you can then temporarily send heartbeats over the public network). In Server 2008, your public NIC is forced to be mixed.
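    Those network roles can also be applied from the command line with the built-in cluster.exe tool. A sketch, assuming your cluster networks are named "Private" and "Public" (substitute whatever yours are actually called):

    ```shell
    REM Role values: 1 = internal (heartbeat) only, 2 = client access only, 3 = mixed
    cluster network "Private" /prop Role=1
    cluster network "Public" /prop Role=3

    REM Verify the result:
    cluster network /prop
    ```
    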

  16. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #15
    Ideally, you'd want to put the iSCSI traffic on its own network/NICs. You shouldn't have a lot of congestion on your heartbeat network. It looks fine for a lab, though. The ability to add hardware, such as additional NICs, is another nice feature of VMs. You should start experimenting with them if you get a chance. You'll likely find them useful in your studies.

  17. New Member royal's Avatar
    Join Date
    Jul 2006
    Location
    Chicago, IL
    Posts
    3,373
    #16
    I agree. With iSCSI in production, you always want that traffic going over its own dedicated network. Ideally this would be Gigabit, but 10 Gigabit for iSCSI is here (or if it's not here just yet, it will be soon). But yeah, for labs, there's no sense in that unless it's a pretty big test environment that mimics most/all of your production. For a small VMware lab, run it over your regular NICs. Nothing bad will happen.

  18. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #17
    Quote Originally Posted by royal
    I agree. With iSCSI in production, you always want that traffic going over its own dedicated network. Ideally this would be Gigabit, but 10 Gigabit for iSCSI is here (or if it's not here just yet, it will be soon). But yeah, for labs, there's no sense in that unless it's a pretty big test environment that mimics most/all of your production. For a small VMware lab, run it over your regular NICs. Nothing bad will happen.
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. Would be much appreciated!

  19. Senior Member
    Join Date
    Jul 2008
    Location
    Los Angeles
    Posts
    115
    #18
    I agree with dynamik that shared storage is going to be your toughest hurdle. Well, it may not be the toughest, but it will probably be the most expensive.

    There are a number of options for shared storage. The cheapest is going to be directly connected SCSI. You can find some decent and fairly inexpensive options from companies like Promise; most of these are SCSI-connected and use IDE or SATA drives within the enclosure, which keeps your costs lower. I work at a shop that is primarily HP, so we make a lot of use of the MSA enclosures, and these are easy to get in SCSI or Fibre Channel. You also have a number of options with EMC and NetApp, as both have iSCSI and Fibre Channel offerings. We have had good luck with the NetApp line (I haven't had that much personal experience with EMC). Another nice NetApp feature is SnapMirror, which we use for quick restores as well as for syncing with the NetApp at our warm backup location.

    The cluster configuration is not hard, just confusing the first time you have a go at it. I prefer to use crossovers for the heartbeat and keep it private, as the heartbeat can create a fair amount of chatter. Also, if you are going to keep your heartbeat on a private range, I would recommend keeping that range far away from any ranges you currently have in use (just in case you decide to change the heartbeat config later).

    A little planning can go a long way toward making your cluster install easier. Identify your addresses ahead of time: if memory serves me correctly, you will need 4 IPs for the publics (2 for the physical machines and 2 for the virtual/cluster instances), as well as your IP range for the heartbeats.

    If you are going to run an antivirus package on the Exchange cluster, check whether whatever you are running is cluster-aware. I know that most of the Norton/Symantec corporate packages are, but IIRC you will have to create a separate resource within the cluster, and it will have some dependencies (check your vendor documentation).

    Partitioning the shared storage can be simple or complex. Simple is two partitions: the first is your basic data store, where you house your Exchange data, and the second is the quorum, a small partition the cluster uses to hold its state information. I believe the MS recommendation is something small, like a couple hundred megs, but we always make the quorum partition 1GB, which is our personal preference. Take a few minutes to map out how you are going to store your data: everything on one partition, or logs separated out, etc.

    You may also want to have a separate account created and ready to roll as the service account for any cluster-related services.

    For the most part the cluster configuration is by the book. Good luck and have fun!
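    The address planning described above can be sketched out ahead of time. A minimal example in Python; the two ranges are made up for illustration (a public subnet plus a deliberately distant private heartbeat range), not recommendations:

    ```python
    import ipaddress

    # Example ranges only: a public subnet, and a heartbeat range kept far away from it.
    public = ipaddress.ip_network("192.168.10.0/24")
    heartbeat = ipaddress.ip_network("10.200.200.0/24")

    # 4 public IPs: node 1, node 2, the cluster name, and the Exchange virtual server.
    public_ips = [str(ip) for ip in list(public.hosts())[:4]]
    # 2 heartbeat IPs, one per node.
    heartbeat_ips = [str(ip) for ip in list(heartbeat.hosts())[:2]]

    # The heartbeat range must stay clear of anything in use.
    assert not public.overlaps(heartbeat)
    print(public_ips)     # first four hosts of the public range
    print(heartbeat_ips)  # first two hosts of the heartbeat range
    ```
    
    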

  20. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #19
    Quote Originally Posted by mr2nut
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. Would be much appreciated!
    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.

  21. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #20
    Quote Originally Posted by dynamik
    Ideally, you'd want to put the iSCSI traffic on it's own network/NICs. You shouldn't have a lot of congestion on your heartbeat network. Looks fine for a lab though. The ability to add hardware, such as additional NICs, is another nice feature of VMs. You should start experimenting with them if you get a chance. You'll likely find them to be useful in your studies.
    I've had a go with MS Virtual Server recently, installing Server 2k3 on one VM and XP Pro on another for a test environment. I've heard VMware is better, though? As we're a Microsoft partner we have Enterprise Server and Exchange discs, so I could try using Virtual PC, right?

    We normally use HP ML350 servers for SBS installations; reckon these would be fine purely for a small Exchange environment? Around 50 users.

    Also, would this work as an iSCSI device between the two nodes?

    http://www.bizrate.co.uk/harddrives/oid594052202.html

  22. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #21
    Probably. You can also use Openfiler, which is a free, open-source NAS/SAN appliance. If performance doesn't matter, you can install it in a VM.
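    Once the Openfiler (or similar) target is up, each node points the Microsoft iSCSI Initiator (a separate download on Server 2003) at it. A sketch using the initiator's iscsicli tool; the portal address and IQN below are placeholders:

    ```shell
    REM 192.168.20.5 is a placeholder for the target box's address; 3260 is the default port
    iscsicli AddTargetPortal 192.168.20.5 3260

    REM List the targets the portal exposes, then log in to the one you want
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2006-01.com.openfiler:tsn.example
    ```
    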

  23. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #22
    Quote Originally Posted by dynamik
    Quote Originally Posted by mr2nut
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. Would be much appreciated!
    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.
    Ah, I see: so put a crossover cable between the two nodes with a third NIC, right? Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?

  24. Senior Member
    Join Date
    Mar 2007
    Posts
    12,308
    #23
    Quote Originally Posted by mr2nut
    Ah, I see: so put a crossover cable between the two nodes with a third NIC, right?
    I actually found a good picture on Wikipedia: http://en.wikipedia.org/wiki/Image:C...Scheme_New.JPG

    The cabling doesn't really matter. It could be straight-through cables connected to a switch. You just want that traffic on its own network.

    Quote Originally Posted by mr2nut
    Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?
    Ah, how easy it is to deceive people over the internet. Not so much. Royal's the expert. He's out doing it while I'm here talking about it. I just know a few best practices and a few other tricks.

    Honestly, a real server would probably have something like two 4-port NICs with one port on each card for storage, public, and private traffic, and each NIC would likely be connected to a different switch. That protects against failure of a switch, cable, or NIC.

  25. Senior Member
    Join Date
    Feb 2008
    Location
    West Yorkshire, UK
    Posts
    269

    Certifications
    A+, N+, 70-270, 70-290, 70-291, 70-293, 70-294, 70-298. MCSE 2003! 70-620
    #24
    Quote Originally Posted by contentpros
    I agree with dynamik that shared storage is going to be your toughest hurdle. Well, it may not be the toughest, but it will probably be the most expensive.

    There are a number of options for shared storage. The cheapest is going to be directly connected SCSI. You can find some decent and fairly inexpensive options from companies like Promise; most of these are SCSI-connected and use IDE or SATA drives within the enclosure, which keeps your costs lower. I work at a shop that is primarily HP, so we make a lot of use of the MSA enclosures, and these are easy to get in SCSI or Fibre Channel. You also have a number of options with EMC and NetApp, as both have iSCSI and Fibre Channel offerings. We have had good luck with the NetApp line (I haven't had that much personal experience with EMC). Another nice NetApp feature is SnapMirror, which we use for quick restores as well as for syncing with the NetApp at our warm backup location.

    The cluster configuration is not hard, just confusing the first time you have a go at it. I prefer to use crossovers for the heartbeat and keep it private, as the heartbeat can create a fair amount of chatter. Also, if you are going to keep your heartbeat on a private range, I would recommend keeping that range far away from any ranges you currently have in use (just in case you decide to change the heartbeat config later).

    A little planning can go a long way toward making your cluster install easier. Identify your addresses ahead of time: if memory serves me correctly, you will need 4 IPs for the publics (2 for the physical machines and 2 for the virtual/cluster instances), as well as your IP range for the heartbeats.

    If you are going to run an antivirus package on the Exchange cluster, check whether whatever you are running is cluster-aware. I know that most of the Norton/Symantec corporate packages are, but IIRC you will have to create a separate resource within the cluster, and it will have some dependencies (check your vendor documentation).

    Partitioning the shared storage can be simple or complex. Simple is two partitions: the first is your basic data store, where you house your Exchange data, and the second is the quorum, a small partition the cluster uses to hold its state information. I believe the MS recommendation is something small, like a couple hundred megs, but we always make the quorum partition 1GB, which is our personal preference. Take a few minutes to map out how you are going to store your data: everything on one partition, or logs separated out, etc.

    You may also want to have a separate account created and ready to roll as the service account for any cluster-related services.

    For the most part the cluster configuration is by the book. Good luck and have fun!
    Took a while to read all that. Cheers!

    So when you say directly connected SCSI, do you mean there is a separate SCSI controller (apart from the OS SCSI controller) in each server, and you connect these together with a SCSI cable? If I've grasped this right, do you install everything you need on the first node, and then it replicates to the second node by itself? Like you say, I've never even touched clustering before, but I've looked into it lately, mainly because a client requested it, and I'm determined to learn it now; I hate things beating me!

  26. New Member royal's Avatar
    Join Date
    Jul 2006
    Location
    Chicago, IL
    Posts
    3,373
    #25
    Quote Originally Posted by mr2nut
    Quote Originally Posted by dynamik
    Quote Originally Posted by mr2nut
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. Would be much appreciated!
    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.
    Ah, I see: so put a crossover cable between the two nodes with a third NIC, right? Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?
    If you're doing a cluster, ALWAYS have at least 2 NICs: one for corporate communication (top of the binding order) and one for the heartbeat. If you're doing iSCSI, you would add a 3rd NIC for iSCSI communication, as you want to separate the iSCSI traffic. So no iSCSI = 2 NICs minimum, and iSCSI = 3 NICs minimum.

    Now, if you want more information on storage that isn't really pertinent to what you're doing, read on. If you're doing DAS on a shared storage cabinet, you'll just want a SAS controller card in your server with SAS cables. A lot of SAS controller cards have multiple ports, so you can essentially drive multiple SAS cabinets with 1 controller card. A SAS connection can typically support 100 or so disks, but you'll never really do that: most of the SMB storage shelves have at most 24 disks in a cabinet, which 1 SAS controller card can easily handle. So let's say your Exchange cluster design calls for shared storage that requires 100 disks across 8 cabinets: you would need 8 single-port SAS controller cards, or 4 controller cards with 2 ports each, etc.

    When you start moving into the enterprise storage space, you'll almost always be doing either iSCSI or Fibre Channel, which are actual SAN technologies. Fibre Channel requires a Host Bus Adapter in each server, which can be costly if you have a bunch of servers, so a lot of companies have been moving to iSCSI as it can be less expensive overall. Note that Fibre Channel is both a disk technology and a connection method: you can use Fibre Channel disks or SAS disks in a shelf but connect to it via either iSCSI or Fibre Channel. It really depends on what the controllers and shelves support. For instance, some NetApp SANs let you use SAS disks or Fibre Channel disks and connect via either iSCSI or Fibre Channel.
