Thursday, May 29, 2014

Querying Oracle Linked Servers in SQL Server – Issues and Fixes

Manual Oracle Uninstall

A number of people have contacted me regarding problems uninstalling Oracle products. The two methods listed below should only be used as a last resort and will remove *all* Oracle software allowing a re-install. If you make any mistakes they can be quite destructive so be careful.

Windows

In the past I've had many problems uninstalling all Oracle products from Windows systems. Here's my last resort method:
  • Uninstall all Oracle components using the Oracle Universal Installer (OUI).
  • Run regedit.exe and delete the HKEY_LOCAL_MACHINE\SOFTWARE\Oracle key. This contains registry entries for all Oracle products.
  • If you are running 64-bit Windows, you should also delete the HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Oracle key if it exists.
  • Delete any references to Oracle services left behind in the following part of the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Ora*). It should be pretty obvious which ones relate to Oracle.
  • Reboot your machine.
  • Delete the "C:\Oracle" directory, or whatever directory is your ORACLE_BASE.
  • Delete the "C:\Program Files\Oracle" directory.
  • If you are running 64-bit Windows, you should also delete the "C:\Program Files (x86)\Oracle" directory.
  • Remove any Oracle-related subdirectories from the "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\" directory.
  • Empty the contents of your "C:\temp" directory.
  • Empty your recycle bin.
At this point your machine will be as clean of Oracle components as it can be without a complete OS reinstall.
Remember, manually editing your registry can be very destructive and force an OS reinstall so only do it as a last resort.
If some DLLs can't be deleted, try renaming them, then delete them after a reboot.
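
If you prefer to script the registry and file system cleanup, the following PowerShell sketch performs the same steps from an elevated prompt. It assumes the default "C:\Oracle" ORACLE_BASE used above; adjust the paths to match your installation, and treat it with the same caution as manual registry editing.

    # Remove the Oracle registry keys (the Wow6432Node key exists only on 64-bit Windows)
    Remove-Item -Path 'HKLM:\SOFTWARE\Oracle' -Recurse -Force -ErrorAction SilentlyContinue
    Remove-Item -Path 'HKLM:\SOFTWARE\Wow6432Node\Oracle' -Recurse -Force -ErrorAction SilentlyContinue
    # List any leftover Oracle services so you can review them before deleting their keys
    Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Services' |
        Where-Object { $_.PSChildName -like 'Ora*' } | Select-Object PSChildName
    # After the reboot, remove the Oracle directories
    Remove-Item -Path 'C:\Oracle', 'C:\Program Files\Oracle', 'C:\Program Files (x86)\Oracle' -Recurse -Force -ErrorAction SilentlyContinue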

UNIX

Uninstalling all products from UNIX is a lot more consistent. If you do need to resort to a manual uninstall you should do something like:

  • Uninstall all Oracle components using the Oracle Universal Installer (OUI).
  • Stop any outstanding processes using the appropriate utilities.
    # oemctl stop oms user/password
    # agentctl stop
    # lsnrctl stop
    Alternatively you can kill them using the kill -9 pid command as the root user.
  • Delete the files and directories below the $ORACLE_HOME.
    # cd $ORACLE_HOME
    # rm -Rf *
  • With the exception of the product directory, delete directories below the $ORACLE_BASE.
    # cd $ORACLE_BASE
    # rm -Rf admin doc jre o*
  • Delete the /etc/oratab file. If using 9iAS delete the /etc/emtab file also.
    # rm /etc/oratab /etc/emtab

Sunday, May 25, 2014

Windows 7, Windows Server 2008 R2 and the Group Policy Central Store

ADMX Files and the Group Policy Central Store

Microsoft introduced the ADMX file format with Windows Vista and Windows Server 2008. This XML-based file format replaced the token-based ADM file format used by earlier versions of Windows to define administrative templates. Group Policy uses administrative templates to represent registry-based policy settings that appear when editing Group Policy. The content included in administrative templates describes the user interface used by Group Policy editors and registry locations where Windows stores policy settings. Windows Server 2008 R2 and Windows 7 provide a new set of administrative template files in the ADMX format.
Windows 7 ADMX files now include support for two registry types: REG_MULTI_SZ and REG_QWORD. The REG_MULTI_SZ registry data type represents multiple string entries within a single registry value. The REG_QWORD registry data type represents a 64-bit number, which is twice the size of the 32-bit number stored in REG_DWORD. These new aspects of the ADMX syntax are only viewable when using the GPMC and Group Policy editors from Windows Server 2008 R2 or the Windows 7 Remote Server Administration Tools (RSAT). Group Policy editors and the GPMC from Windows Vista cannot read ADMX files containing this new syntax.

The Central Store

Earlier versions of Group Policy that used ADM files suffered from a symptom known as SYSVOL bloat. These versions of Windows copied the set of ADM files into each Group Policy object stored on SYSVOL. Each set of ADM files required approximately 4MB of disk space. A domain can realistically have 100 Group Policy objects. One hundred Group Policy objects multiplied by 4 megabytes of disk space equates to 400MB of redundant data—what a waste. Windows Server 2008 and Vista introduced the concept of the Group Policy Central Store to overcome SYSVOL bloat. The Group Policy Central Store is a single folder on each domain controller's SYSVOL that stores one set of ADMX files for the entire domain. The central store effectively relieves the symptoms of SYSVOL bloat and reduces the amount of data transferred during SYSVOL replication when new Group Policy objects are created. Some documentation refers to the Group Policy Central Store as an alternate location to store ADMX files (the other location is the local store found in %SYSTEMROOT%\PolicyDefinitions). A more accurate description of the Central Store is the preferred location.
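
Creating or refreshing a Central Store is essentially a file copy into SYSVOL. Here is a minimal PowerShell sketch; the contoso.com domain name is a placeholder, and it assumes you are copying from a machine whose local store already contains the ADMX and ADML files you want to publish.

    $central = '\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions'
    New-Item -Path $central -ItemType Directory -Force | Out-Null
    # Copies the ADMX files plus the language subfolders (for example en-US) containing the ADML files
    Copy-Item -Path "$env:SystemRoot\PolicyDefinitions\*" -Destination $central -Recurse -Force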

So what’s the Problem?

The Group Policy Management Console and the Group Policy Management Editor always use the Group Policy Central Store when it is present. The pro here is that all instances of the GPMC and GPME use the same set of ADMX files. The con is that servicing ADMX files is difficult. Also, GPMC cannot use the local store as long as a Group Policy Central Store exists, so adding an ADMX set for a single computer is not possible when using a central store. When we released Windows 7 and Windows Server 2008 R2, we also released a new set of ADMX files (within the operating system). These new ADMX files expose new Windows 7 and Windows Server 2008 R2 policy settings as well as policy settings for previous operating systems. Therefore, you need these files to configure Windows 7 Group Policies. Here's where the dilemma continues.

A Central Store and Windows 7

If you have a central store (presumably hosted with Windows Server 2008 ADMX files), then you have two choices: upgrade the ADMX files or remove the central store.

Updating the Central Store

Updating the Central Store affects all users in the domain that use GPMC and its editor. It is important to understand this because newer ADMX files may not be compatible with older versions of the Group Policy tools, as is the case with Windows Server 2008 R2. The screen capture below appears on Windows Vista and Windows Server 2008 computers attempting to read a Group Policy Central Store hosted with Windows Server 2008 R2 ADMX files.
Screen capture: Group Policy error reading the Windows Server 2008 R2 TerminalServer-Server.adml file
The Windows Server 2008 R2 ADMX files, in this example TerminalServer-Server.adml, contain an element that older Group Policy tools do not recognize. This element represents the REG_MULTI_SZ implementation that is new with Windows 7 and Windows Server 2008 R2. Newer ADMX files can contain new features, which older Group Policy tools may not understand. This is why it is always a best practice to use the latest Group Policy tools to manage Group Policy. Backwards compatibility is an important aspect of Group Policy; however, forward compatibility is not.
Also, you may be using Windows 7, but do not see Windows 7 policy settings. Remember, GPMC prefers the Group Policy Central Store over the local store. The Windows 7 GPMC (actually RSAT) uses the Group Policy Central Store (hosted with Windows Vista or Windows Server 2008 ADMX files) over its local store that hosts the Windows 7 ADMX. If you want to see Windows 7 policy settings, then you’ll need to upgrade your central store or remove it.
Note: I have successfully used Windows Vista RSAT with an upgraded Group Policy Central Store. However, the ADMX and ADML files were from a Windows 7 computer. Using Windows Server 2008 R2 ADMX files produces the error in the preceding image using GPMC from Windows Server 2008 or Windows Vista RSAT.

Removing the Group Policy Central Store

Removing the Central Store directs all Group Policy tools to use their local store for ADMX files. This allows Windows 7 RSAT and Windows Server 2008 R2 computers to use their own ADMX files. Windows Vista RSAT and Windows Server 2008 use their local ADMX files. Windows Vista computers cannot manage or report on Windows 7 policy settings.
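
If you decide to remove (or temporarily disable) the Central Store, renaming the PolicyDefinitions folder in SYSVOL is the least destructive approach because it is easy to roll back. A sketch, reusing the placeholder domain name from the earlier example:

    Rename-Item -Path '\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions' -NewName 'PolicyDefinitions.disabled'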

An Alternative to the Central Store

There is a way for us to “have our cake and eat it too”: Terminal Services. I often suggest to customers that have many people managing Group Policy that they set up a GPMC Terminal Server. Dedicating a single server as the means to manage Group Policy provides:
  • The concept of a central store
  • A single point of Group Policy management
  • Easy to audit and maintain
A dedicated Group Policy Terminal Server can provide the look and feel of a Group Policy Central Store without implementing a Central Store. ADMX files are located in one location, the terminal server. GPMC does not load the ADMX files from a network location. Domain controllers do not need to replicate the additional content of a Central Store—all the benefits of a Central Store, without creating one.

Group Policy is a critical part of the enterprise and yet it seems little is done to reduce its exposure. A dedicated Terminal Server running GPMC provides a true single point of management for the entire Group Policy experience. Terminal Services security can be implemented to reduce the number of people having access to GPMC. Auditing interactive logons can further assist with identifying changes made to Group Policy. Combine this with using Group Policy to prevent other computers from opening GPMC and you've effectively limited the exposure of Group Policy to only the people who actually need it.

Friday, May 23, 2014

Failover Cluster Step-by-Step Guide: Configuring a Two-Node Print Server Failover Cluster

Applies To: Windows Server 2008
A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.
This guide describes the steps for installing and configuring a print server failover cluster that has two nodes. By creating the configuration in this guide, you can learn about failover clusters and familiarize yourself with the Failover Cluster Management snap-in interface in Windows Server® 2008 Enterprise or Windows Server® 2008 Datacenter.
Note
The failover cluster feature is not available in Windows Web Server 2008 or Windows Server 2008 Standard.

In Windows Server 2008, the improvements to failover clusters (formerly known as server clusters) are aimed at simplifying clusters, making them more secure, and enhancing cluster stability. Cluster setup and management are easier. Security and networking in clusters have been improved, as has the way a failover cluster communicates with storage. For more information about improvements to failover clusters, see http://go.microsoft.com/fwlink/?LinkId=62368.

Overview for a two-node print server cluster

Servers in a failover cluster can function in a variety of roles, including the roles of file server, print server, mail server, or database server, and they can provide high availability for a variety of other services and applications. This guide describes how to configure a two-node print server cluster.
A failover cluster usually includes a storage unit that is physically connected to all the servers in the cluster, although any given volume in the storage is only accessed by one server at a time. The following diagram shows a two-node failover cluster connected to a storage unit.
Two-node failover cluster connected to storage
Storage volumes or logical unit numbers (LUNs) exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. The following diagram illustrates this.
Failover clusters with no overlap of LUNs
Note that for the maximum availability of any server, it is important to follow best practices for server management—for example, carefully managing the physical environment of the servers, testing software changes before fully implementing them, and carefully keeping track of software updates and configuration changes on all clustered servers.
The following scenario describes how a print server failover cluster can be configured.

Requirements for a two-node failover cluster

To create a failover cluster with two nodes (regardless of the service or application that the nodes provide), you need the hardware, software, accounts, and network infrastructure described in the sections that follow.
We recommend that you first use the information provided in this guide in a test lab environment. A Step-by-Step guide is not necessarily meant to be used to deploy Windows Server features without the accompanying documentation (as listed in the Additional references section), and it should be used with discretion as a stand-alone document.

Hardware requirements for a two-node failover cluster

You will need the following hardware for a two-node failover cluster:
  • Servers: We recommend that you use a set of matching computers that contain the same or similar components.

    Important
    You should use only hardware components that are compatible with Windows Server 2008.
  • Network adapters and cable (for network communication): The network hardware, like other components in the failover cluster solution, must be compatible with Windows Server 2008. If you use iSCSI, your network adapters must be dedicated to either network communication or iSCSI, not both.

    In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.

    Note
    If you connect cluster nodes with a single network, the network will pass the redundancy requirement in the Validate a Configuration Wizard. However, the report from the wizard will include a warning that the network should not have single points of failure.
    For more details about the network configuration required for a failover cluster, see Network infrastructure and domain account requirements for a two-node failover cluster, later in this topic.
  • Device controllers or appropriate adapters for the storage:

    • For Serial Attached SCSI or Fibre Channel: If you are using Serial Attached SCSI or Fibre Channel, in all clustered servers, all components of the storage stack should be identical. It is required that the multipath I/O (MPIO) software and Device Specific Module (DSM) software components be identical.  It is recommended that the mass-storage device controllers—that is, the host bus adapter (HBA), HBA drivers, and HBA firmware—that are attached to cluster storage be identical. If you use dissimilar HBAs, you should verify with the storage vendor that you are following their supported or recommended configurations.

      Note
      With Windows Server 2008, you cannot use parallel SCSI to connect the storage to the clustered servers.
    • For iSCSI: If you are using iSCSI, each clustered server must have one or more network adapters or host bus adapters that are dedicated to the cluster storage. The network you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical, and we recommend that you use Gigabit Ethernet or higher.

      For iSCSI, you cannot use teamed network adapters, because they are not supported with iSCSI.

      For more information about iSCSI, see the iSCSI FAQ on the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=61375).
  • Storage: You must use shared storage that is compatible with Windows Server 2008.

    For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs), configured at the hardware level. One volume will function as the witness disk (described in the next paragraph). One volume will contain the print drivers, print spooler directory, and print monitors. Storage requirements include the following:

    • To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
    • We recommend that you format the partitions with NTFS (for the witness disk, the partition must be NTFS).
    • For the partition style of the disk, you can use either master boot record (MBR) or GUID partition table (GPT).
    The witness disk is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. (A witness disk is part of some, not all, quorum configurations.) For this two-node cluster, the quorum configuration will be Node and Disk Majority, the default for a cluster with an even number of nodes. Node and Disk Majority means that the nodes and the witness disk each contain copies of the cluster configuration, and the cluster has quorum as long as a majority (two out of three) of these copies are available.

Deploying storage area networks with failover clusters

When deploying a storage area network (SAN) with a failover cluster, follow these guidelines:
  • Confirm compatibility of the storage: Confirm with manufacturers and vendors that the storage, including drivers, firmware, and software used for the storage, are compatible with failover clusters in Windows Server 2008.

    Important
    Storage that was compatible with server clusters in Windows Server 2003 might not be compatible with failover clusters in Windows Server 2008. Contact your vendor to ensure that your storage is compatible with failover clusters in Windows Server 2008.
    Failover clusters include the following new requirements for storage:

    • Because improvements in failover clusters require that the storage respond correctly to specific SCSI commands, the storage must follow the standard called SCSI Primary Commands-3 (SPC-3). In particular, the storage must support Persistent Reservations as specified in the SPC-3 standard.
    • The miniport driver used for the storage must work with the Microsoft Storport storage driver.
  • Isolate storage devices, one cluster per device: Servers from different clusters must not be able to access the same storage devices. In most cases, a LUN that is used for one set of cluster servers should be isolated from all other servers through LUN masking or zoning.
  • Consider using multipath I/O software: In a highly available storage fabric, you can deploy failover clusters with multiple host bus adapters by using multipath I/O software. This provides the highest level of redundancy and availability. For Windows Server 2008, your multipath solution must be based on Microsoft Multipath I/O (MPIO). Your hardware vendor will usually supply an MPIO device-specific module (DSM) for your hardware, although Windows Server 2008 includes one or more DSMs as part of the operating system.

    Important
    Host bus adapters and multipath I/O software can be very version sensitive. If you are implementing a multipath solution for your cluster, you should work closely with your hardware vendor to choose the correct adapters, firmware, and software for Windows Server 2008.

Software requirements for a two-node failover cluster

The servers for a two-node failover cluster must run the same version of Windows Server 2008, including the same hardware version (32-bit, x64-based, or Itanium architecture-based). They should also have the same software updates (patches) and service packs.

Network infrastructure and domain account requirements for a two-node failover cluster

You will need the following network infrastructure for a two-node failover cluster and an administrative account with the following domain permissions:
  • Network settings and IP addresses: When you use identical network adapters for a network, also use identical communication settings on those adapters (for example, Speed, Duplex Mode, Flow Control, and Media Type). Also, compare the settings between the network adapter and the switch it connects to and make sure that no settings are in conflict.

    If you have private networks that are not routed to the rest of your network infrastructure, ensure that each of these private networks uses a unique subnet. This is necessary even if you give each network adapter a unique IP address. For example, if you have a cluster node in a central office that uses one physical network, and another node in a branch office that uses a separate physical network, do not specify 10.0.0.0/24 for both networks, even if you give each adapter a unique IP address.

    For more information about the network adapters, see Hardware requirements for a two-node failover cluster, earlier in this guide.
  • DNS: The servers in the cluster must be using Domain Name System (DNS) for name resolution. The DNS dynamic update protocol can be used.
  • Domain role: All servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role (either member server or domain controller). The recommended role is member server.
  • Domain controller: We recommend that your clustered servers be member servers. If they are, you need an additional server that acts as the domain controller in the domain that contains your failover cluster.
  • Clients: As needed for testing, you can connect one or more networked clients to the failover cluster that you create, and observe the effect on a client when you move or fail over the clustered print server from one cluster node to the other.
  • Account for administering the cluster: When you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights and permissions on all servers in that cluster. The account does not need to be a Domain Admins account, but can be a Domain Users account that is in the Administrators group on each clustered server. In addition, if the account is not a Domain Admins account, the account (or the group that the account is a member of) must be given the Create Computer Objects and Read All Properties permissions in the domain.

    Note
    There is a change in the way the Cluster service runs in Windows Server 2008, as compared to Windows Server 2003. In Windows Server 2008, there is no Cluster service account. Instead, the Cluster service automatically runs in a special context that provides the specific permissions and privileges that are necessary for the service (similar to the local system context, but with reduced privileges).

Steps for installing a two-node print server cluster

You must complete the following steps to install a two-node print server failover cluster.
Step 1: Connect the cluster servers to the networks and storage
Step 2: Install the failover cluster feature
Step 3: Validate the cluster configuration
Step 4: Create the cluster
If you have already installed the cluster nodes and want to configure a print server failover cluster, see Steps for configuring a two-node print server cluster, later in this guide.

Step 1: Connect the cluster servers to the networks and storage

Use the following instructions to connect your selected cluster servers to networks and storage.
Note
Review Hardware requirements for a two-node failover cluster earlier in this guide, for details about the kinds of network adapters and device controllers that you can use with Windows Server 2008.

For a failover cluster network, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure (If you use a network for iSCSI, you must create this network in addition to the other networks).
For a two-node print server cluster, when you connect the servers to the cluster storage, you must expose at least two volumes (LUNs). You can expose additional volumes as needed for thorough testing of your configuration. Do not expose the clustered volumes to servers that are not in the cluster.

To connect the cluster servers to the networks and storage

  1. Review the details about networks in Hardware requirements for a two-node failover cluster and Network infrastructure and domain account requirements for a two-node failover cluster, earlier in this guide.
  2. Connect and configure the networks that the servers in the cluster will use.
  3. If your test configuration includes clients or a non-clustered domain controller, make sure that these computers can connect to the clustered servers through at least one network.
  4. Follow the manufacturer's instructions for physically connecting the servers to the storage.
  5. Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers that you will cluster (and only those servers). You can use any of the following interfaces to expose disks or LUNs:
    • The interface provided by the manufacturer of the storage.
    • If you are using iSCSI, an appropriate iSCSI interface.
    • Microsoft Storage Manager for SANs (part of the operating system in Windows Server 2008). To use this interface, you need to contact the manufacturer of your storage for a Virtual Disk Service (VDS) provider package that is designed for your storage.
  6. If you have purchased software that controls the format or function of the disk, follow instructions from the vendor about how to use that software with Windows Server 2008.
  7. On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.) In Disk Management, confirm that the cluster disks are visible.
  8. If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.
    For volumes smaller than 2 terabytes, instead of using GPT, you can use the partition style called master boot record (MBR).
    Important
    You can use either MBR or GPT for a disk that is used by a failover cluster, but you cannot use a disk that you converted to dynamic by using Disk Management.
    If you purchased software that controls the format or function of the disk, contact the vendor for instructions about how to use that software with Windows Server 2008.

  9. Check the format of any exposed volume or LUN. We recommend NTFS for the format (for the witness disk, you must use NTFS).
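
As an optional cross-check of steps 7 and 9, you can also list the disks and volumes that each prospective node can see from PowerShell using WMI classes available on Windows Server 2008. This is only a sketch and does not replace the Disk Management checks above.

    # Run on each server that will join the cluster
    Get-WmiObject -Class Win32_DiskDrive | Select-Object Index, Model, Size
    Get-WmiObject -Class Win32_Volume | Select-Object Name, Label, FileSystem, Capacity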

Step 2: Install the failover cluster feature

In this step, you install the failover cluster feature. The servers must be running Windows Server 2008.

To install the failover cluster feature on the servers

  1. If you recently installed Windows Server 2008, the Initial Configuration Tasks interface is displayed, as shown in the following illustration.
    Initial Configuration Tasks interface
    If this interface is displayed, under Customize This Server, click Add features. Then skip to step 3.
  2. If the Initial Configuration Tasks interface is not displayed and Server Manager is not running, click Start, click Administrative Tools, and then click Server Manager. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
    Server Manager interface
    In Server Manager, under Features Summary, click Add Features.
  3. In the Add Features Wizard, click Failover Clustering, and then click Install.
  4. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.
  5. Repeat the process for each server that you want to include in the cluster.
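
The feature can also be installed from the command line. On Windows Server 2008 R2 the ServerManager PowerShell module provides Add-WindowsFeature; on the original Windows Server 2008 release you would use ServerManagerCmd.exe instead. A minimal sketch:

    # Windows Server 2008 R2
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering
    # Windows Server 2008 (RTM) equivalent:
    # ServerManagerCmd.exe -install Failover-Clustering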

Step 3: Validate the cluster configuration

Before creating a cluster, we strongly recommend that you validate your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

To validate the failover cluster configuration

  1. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
    Failover Clusters snap-in
  2. Confirm that Failover Cluster Management is selected and then, in the center pane under Management, click Validate a Configuration.
    Validate a Configuration wizard
  3. Follow the instructions in the wizard to specify the two servers and the tests, and then run the tests. To fully validate your configuration, run all tests before creating a cluster.
  4. The Summary page appears after the tests run. To view Help topics that will help you interpret the results, click More about cluster validation tests.
  5. While still on the Summary page, click View Report and read the test results.
    To view the results of the tests after you close the wizard, see
    SystemRoot\Cluster\Reports\Validation Report date and time.html
    where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).
  6. As necessary, make changes in the configuration and rerun the tests.
  7. To view Help topics about cluster validation after you close the wizard, in Failover Cluster Management, click Help, click Help Topics, click the Contents tab, expand the contents for the failover cluster Help, and click Validating a Failover Cluster Configuration.
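
On Windows Server 2008 R2 (or from the Windows 7 RSAT tools), the same validation can be run with the FailoverClusters PowerShell module; the node names below are placeholders for your own servers.

    Import-Module FailoverClusters
    # Runs all validation tests and produces an HTML report you can review afterwards
    Test-Cluster -Node node1.contoso.com, node2.contoso.com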

Step 4: Create the cluster

To create a cluster, you run the Create Cluster wizard.

To run the Create Cluster wizard

  1. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
  2. Confirm that Failover Cluster Management is selected and then, in the center pane under Management, click Create a cluster.
    Create Cluster wizard
    Follow the instructions in the wizard to specify:
    • The servers to include in the cluster.
    • The name of the cluster.
    • Any IP address information that is not automatically supplied by your DHCP settings.
  3. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.
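
For reference, the wizard's work can also be scripted on Windows Server 2008 R2 with the FailoverClusters module; the cluster name, node names, and static address below are placeholders.

    Import-Module FailoverClusters
    New-Cluster -Name PrintClus1 -Node node1.contoso.com, node2.contoso.com -StaticAddress 10.0.0.50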

Steps for configuring a two-node print server cluster

To configure a two-node print server failover cluster, follow these steps:

To configure a two-node print server failover cluster

  1. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
  2. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Management, click Manage a Cluster, and then select the cluster you want to configure.
  3. In the console tree, click the plus sign next to the cluster that you created to expand the items underneath it.
  4. If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI), then under Networks, right-click that network, click Properties, and then click Do not allow the cluster to use this network. Click OK.
  5. Click Services and Applications. Under Actions (on the right), click Configure a Service or Application.
    High Availability Wizard
  6. Review the text on the first page of the wizard, and then click Next.
    High Availability wizard, page 2
  7. Click Print Server, and then click Next.
  8. Follow the instructions in the wizard to specify the following details:
    • A name for the clustered print server
    • Any IP address information that is not automatically supplied by your DHCP settings—for example, a static IPv4 address for this clustered print server
    • The storage volume or volumes that the clustered print server should use
  9. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.
  10. To close the wizard, click Finish.
  11. In the console tree, make sure Services and Applications is expanded, and then select the clustered print server that you just created.
  12. Under Actions, click Manage Printers.
    An instance of the Failover Cluster Management interface appears with Print Management in the console tree.
  13. Under Print Management, click Print Servers and locate the clustered print server that you want to configure.
    Always perform management tasks on the clustered print server. Do not manage the individual cluster nodes as print servers.
  14. Right-click the clustered print server, and then click Add Printer. Follow the instructions in the wizard to add a printer.
    This is the same wizard you would use to add a printer on a nonclustered server.
  15. When you have finished configuring settings for the clustered print server, to close the instance of the Failover Cluster Management interface with Print Management in the console tree, click File and then click Exit.
  16. To perform a basic test of failover, right-click the clustered print server instance, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice.
    You can observe the status changes in the center pane of the snap-in as the clustered print server instance is moved.
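
On Windows Server 2008 R2, the FailoverClusters PowerShell module can perform the equivalent of steps 5 through 10 and the failover test in step 16. This is a sketch only; the role name, disk name, IP address, and node name are placeholders for your own values.

    Import-Module FailoverClusters
    # Create the clustered print server (the equivalent of the High Availability Wizard)
    Add-ClusterPrintServerRole -Name CluPrint1 -Storage 'Cluster Disk 2' -StaticAddress 10.0.0.51
    # Basic failover test: move the clustered print server to the other node
    Move-ClusterGroup -Name CluPrint1 -Node node2.contoso.com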

Additional references

The following resources provide additional information about failover clusters:

Print Services Migration: Preparing to Migrate

Applies To: Windows Server 2008 R2

Access the migration tools

The Printer Migration Wizard and the Printbrm.exe command-line tool support all migrations to Windows Server 2008 R2.
Note
Although the Printer Migration Wizard supports migrations from servers running Windows Server 2003 or a Server Core installation, it cannot run on these servers directly. For information about migrating from servers running these operating systems, see Print Services Migration: Preparing to Migrate.

To access the Printer Migration Wizard

Open the Print Management snap-in to access the Printer Migration Wizard:
  1. If necessary, enable the Administrative Tools menu, which is hidden by default on Windows-based client operating systems.

    1. Right-click Start, and then click Properties. The Start Menu and Taskbar Properties dialog box opens.
    2. On the Start Menu tab, click Customize. The Customize Start Menu dialog box opens.
    3. Under System Administrative Tools, select Display on the All Programs menu or Display on the All Programs menu and the Start menu.
  2. In the Administrative Tools menu, click Print Management.
Caution
The Print Management snap-in filter settings will not be migrated and need to be saved independently of the printer migration.

To access the Printbrm.exe command-line tool

  1. To open a Command Prompt window, click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type:

    %WINDIR%\System32\Spool\Tools\Printbrm.exe
    
To view the complete syntax for this command, at a command prompt, type:
Printbrm.exe /?
For a listing of the available syntax for the Printbrm.exe command, see Print Services Migration: Appendix A - Printbrm.exe Command-Line Tool Syntax.
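
As a rough example of the most common usage from an elevated prompt (server and file names are placeholders; see Appendix A for the full switch reference): -s selects the print server to operate on, -b backs up to a settings file, and -r restores from one.

    # Back up the source server's queues, drivers, and ports to a .printerExport file
    & "$env:WINDIR\System32\Spool\Tools\Printbrm.exe" -s \\OldPrintSrv -b -f C:\Migration\printers.printerExport
    # Restore the settings file onto the destination server
    & "$env:WINDIR\System32\Spool\Tools\Printbrm.exe" -s \\NewPrintSrv -r -f C:\Migration\printers.printerExport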

Prepare the destination server

The second step in the migration process is to prepare the destination server.

Hardware requirements for the destination server

There are no specific hardware requirements for being a print server beyond those for the version of the server operating system you are using.
The amount of disk space needed for a migration is dependent on the number of print drivers to be migrated and the size of the drivers. Because print drivers can vary widely in size, the amount of disk space can range from one megabyte to several gigabytes.

Software requirements for the destination server

Verify that hard drive space is sufficient on the destination server for the backup.
No additional software is needed other than the necessary drivers required for the printers to be hosted. Migrate these drivers from the source server.
For cross-architecture migrations, verify that the destination server contains drivers for each supported architecture.

Installing the Print and Document Services role on the destination server

You must install the Print and Document Services role on the destination server before you begin the migration process. For more information on installing this and other server roles, see the Server Manager documentation (http://go.microsoft.com/fwlink/?LinkId=133026).

Preparing for cross-architecture migrations

If you are migrating from the x86-based architecture of Windows Server 2003 or Windows Server 2008 to the x64-based architecture of Windows Server 2008 R2, you should install x64-based drivers on the source server before creating the backup file. The migration process copies all installed drivers from the source server to the destination server. The printer queues can be re-created on the destination server only if the printer settings file contains the x64-based drivers.
Verify that each print queue on the source server has a driver installed for the operating system on the destination server before creating the printer settings file. For example, if you are migrating an x86-based source print server to an x64-based destination print server, verify that each print queue has an x64-based driver installed before you create the printer settings file. Any print queue that does not have a cross-architecture driver installed will not be migrated to the destination server.
To install cross-architecture drivers for a printer, you can use:
  • The Add Printer Driver Wizard, which is available in the Print Management snap-in.
  • The Printer Properties dialog box, which is available through the Printers folder in the Control Panel.
As a best practice, install the cross-architecture driver using the same name as the native-architecture driver. To add the x86-based driver to the x64-based destination server, use an x86-based client to remotely open the x64-based server in Windows Explorer, navigate to the remote printer folder, and add the driver. To install an x64-based driver on the x86-based source server, use an x64-based client to remotely open the x86-based server in Windows Explorer, navigate to the remote printer folder, and add the driver.
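
Before creating the printer settings file, you can quickly review which driver architectures are already installed on the source server. A sketch using WMI (the server name is a placeholder); the Name value of each Win32_PrinterDriver instance includes its environment, for example "...,3,Windows x64" or "...,3,Windows NT x86".

    Get-WmiObject -Class Win32_PrinterDriver -ComputerName OldPrintSrv |
        Select-Object Name | Sort-Object Name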

Preparing for additional scenarios

In the following instances, installing a feature on your destination server may require additional preparation before you migrate to it:
  • The server hosts Line Printer Remote (LPR) printers.
  • The server offers Internet Printing Protocol (IPP) printer connections.
  • The server hosts Web Services on Devices (WSD) printers.
  • The server is in a server cluster.
  • The server hosts plug and play printers.
For more information on these scenarios, see Print Services Migration: Appendix B - Additional Destination Server Scenarios.

Prepare the source server

Simple system-to-system migrations require no preparation of the source server; however, additional preparation is required for cross-architecture migrations. If performing the migration as quickly as possible is a priority, remove unused drivers, ports, and queues before starting the migration to improve its speed, after verifying with users that the items to be removed are no longer in use. In general, however, minimize changes to the source server environment to ensure you can roll back to this environment if necessary.
Caution
If your source server is running multiple roles, renaming the source server or changing its IP address can cause other roles that are running on the source server to fail.

Note
You should delete native print drivers that are not currently associated with a print queue because these drivers increase the size of the printer settings file unnecessarily. The print spooler will not allow a native print driver that is currently associated with a print queue to be deleted.
The Print Spooler service also uses non-native drivers: it delivers them to clients of a non-native architecture when they connect to a print queue and have to download a driver. You should still remove any unused drivers and print queues.
Do not delete a non-native driver whose corresponding native print driver is associated with a print queue. In this instance, the Print Spooler service will not prevent the non-native driver from being deleted, so you must take care not to delete it yourself. This is especially important when the non-native driver's architecture matches the destination server's architecture. Cross-architecture drivers never appear to be loaded by the Print Spooler service, so administrators should delete them only after confirming that the drivers are no longer needed.

To install cross-architecture drivers using the Print Management snap-in on computers running Windows Vista and Windows Server 2008

  1. Open the Print Management snap-in. Click Start, click Administrative Tools, and then click Print Management.
  2. In the Print Management tree, under Print Servers, click the print server you want.
  3. Under the print server, right-click Drivers and then select Add Driver to open the Add Printer Driver Wizard.
  4. Follow the steps as indicated by the wizard.

To install cross-architecture drivers by using only the Printer Properties dialog box on computers running Windows XP and Windows Server 2003

  1. Click Start, click Control Panel, and double-click Printers.
  2. Right-click the printer that you want to configure, and then click Sharing.
  3. Click Additional Drivers and select the processor architecture from the list.
  4. Follow the instructions in the dialog boxes to install the correct driver. Only install the driver associated with the printer you are administering.
Note
You can only add a cross-architecture driver if you have already installed a native architecture version of the same driver.


Understanding Quorum Configurations in a Failover Cluster

Applies To: Windows Server 2008 R2
For information about how to configure quorum options, see Select Quorum Options for a Failover Cluster.

How the quorum configuration affects the cluster

The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain. If an additional failure occurs, the cluster must stop running. The relevant failures in this context are failures of nodes or, in some cases, of a disk witness (which contains a copy of the cluster configuration) or file share witness. It is essential that the cluster stop running if too many failures occur or if there is a problem with communication between the cluster nodes. For a more detailed explanation, see Why quorum is necessary later in this topic.
Important
In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster.

Note that full function of a cluster depends not just on quorum, but on the capacity of each node to support the services and applications that fail over to that node. For example, a cluster that has five nodes could still have quorum after two nodes fail, but the level of service provided by each remaining cluster node would depend on the capacity of that node to support the services and applications that failed over to it.

Quorum configuration choices

You can choose from among four possible quorum configurations:
  • Node Majority (recommended for clusters with an odd number of nodes)

    Can sustain failures of half the nodes (rounding up) minus one. For example, a seven node cluster can sustain three node failures.
  • Node and Disk Majority (recommended for clusters with an even number of nodes)

    Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six node cluster in which the disk witness is online could sustain three node failures.

    Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six node cluster with a failed disk witness could sustain two (3-1=2) node failures.
  • Node and File Share Majority (for clusters with special configurations)

    Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.

    Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see "Additional considerations" in Start or Stop the Cluster Service on a Cluster Node.
  • No Majority: Disk Only (not recommended)

    Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.
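
On Windows Server 2008 R2, the current quorum model can be inspected and changed with the FailoverClusters PowerShell module. In this sketch the cluster, disk, and share names are placeholders, and only one of the Set-ClusterQuorum lines would be used, matching the model you choose from the list above.

    Import-Module FailoverClusters
    Get-ClusterQuorum -Cluster PrintClus1
    # Pick the line that matches the desired model:
    Set-ClusterQuorum -NodeAndDiskMajority 'Cluster Disk 1'
    # Set-ClusterQuorum -NodeMajority
    # Set-ClusterQuorum -NodeAndFileShareMajority '\\fs1\witness'
    # Set-ClusterQuorum -DiskOnly 'Cluster Disk 1'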

Illustrations of quorum configurations

The following illustrations show how three of the quorum configurations work. A fourth configuration is described in words, because it is similar to the Node and Disk Majority configuration illustration.
Note
In the illustrations, for all configurations other than Disk Only, notice whether a majority of the relevant elements are in communication (regardless of the number of elements). When they are, the cluster continues to function. When they are not, the cluster stops functioning.

Cluster with Node Majority quorum configuration
As shown in the preceding illustration, in a cluster with the Node Majority configuration, only nodes are counted when calculating a majority.
Cluster with Node and Disk Majority quorum
As shown in the preceding illustration, in a cluster with the Node and Disk Majority configuration, the nodes and the disk witness are counted when calculating a majority.
Node and File Share Majority Quorum Configuration
In a cluster with the Node and File Share Majority configuration, the nodes and the file share witness are counted when calculating a majority. This is similar to the Node and Disk Majority quorum configuration shown in the previous illustration, except that the witness is a file share that all nodes in the cluster can access instead of a disk in cluster storage.
Cluster with Disk Only quorum configuration
In a cluster with the Disk Only configuration, the number of nodes does not affect how quorum is achieved. The disk is the quorum. However, if communication with the disk is lost, the cluster becomes unavailable.

Why quorum is necessary

When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network but not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this "split" situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many "votes" constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5, being a minority, stop running as a cluster. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.


Configuring a Service or Application for High Availability

Applies To: Windows Server 2008 R2
This topic provides an overview of the task of configuring specific services or applications for failover clustering by using the High Availability Wizard. Instructions for running the wizard are provided in Configure a Service or Application for High Availability.
Important
If you want to cluster a mail server or database server application, see the application's documentation for information about the correct way to install it in a cluster environment. Mail server and database server applications are complex, and they might require configuration steps that fall outside the scope of this failover clustering Help.


Applications and services listed in the High Availability Wizard

A variety of services and applications can work as "cluster-aware" applications, functioning in a coordinated way with cluster components.
Note
When configuring a service or application that is not cluster-aware, you can use generic options in the High Availability Wizard: Generic Service, Generic Application, or Generic Script. For information about using these options, see Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster.

In the High Availability Wizard, you can choose from the generic options described in the previous note, or you can choose from the following services and applications:
  • DFS Namespace Server: Provides a virtual view of shared folders in an organization. When a user views the namespace, the folders appear to reside on a single hard disk. Users can navigate the namespace without needing to know the server names or shared folders that are hosting the data.
  • DHCP Server: Automatically provides client computers and other TCP/IP-based network devices with valid IP addresses.
  • Distributed Transaction Coordinator (DTC): Supports distributed applications that perform transactions. A transaction is a set of related tasks, such as updates to databases, that either succeed or fail as a unit.
  • File Server: Provides a central location on your network where you can store and share files with users.
  • Internet Storage Name Service (iSNS) Server: Provides a directory of iSCSI targets.
  • Message Queuing: Enables distributed applications that are running at different times to communicate across heterogeneous networks and with computers that may be offline.
  • Other Server: Provides a client access point and storage only. Add an application after completing the wizard.
  • Print Server: Manages a queue of print jobs for a shared printer.
  • Remote Desktop Connection Broker (formerly TS Session Broker): Supports session load balancing and session reconnection in a load-balanced remote desktop server farm. RD Connection Broker is also used to provide users access to RemoteApp programs and virtual desktops through RemoteApp and Desktop Connection.
  • Virtual Machine: Runs on a physical computer as a virtualized computer system. Multiple virtual machines can run on one computer.
  • WINS Server: Enables users to access resources by a NetBIOS name instead of requiring them to use IP addresses that are difficult to recognize and remember.


Wednesday, May 21, 2014

Outlook 2010's Clean Up Feature

How many times do you get a long string of back-and-forth emails that are completely unnecessary to keep? Outlook 2010 offers a new feature called Clean Up, which deletes redundant messages in a conversation and keeps only the most recent. This is helpful because the most recent email contains the thread of all of the quoted replies in it. Therefore, there is no need to keep all of the other emails when their contents are quoted in the last email.
On the Home tab, click on the Clean Up button to see the menu of options available.  See the screenshots below.

Screenshot of Outlook 2010's Clean Up menu
Outlook 2010's Clean Up button and menu options

Screenshot of Outlook 2010's Clean Up button
Outlook 2010's Clean Up button, condensed view

These are the menu options:
  • Clean Up Conversation only cleans up the conversation that is selected.
  • Clean Up Folder cleans up all of the conversations in the currently shown folder.
  • Clean Up Folder & Subfolders cleans up all of the conversations in the current folder and its subfolders.


Reduce Inbox Spam in Outlook

Set Up Junk Mail Filtering

Outlook 2007
  1. Go to the Actions menu.
  2. Select Junk Mail.
  3. From the menu that pops out, select Junk Mail Options.
  4. On the Options tab, select the level of filtering you would like. (I keep mine on Low.)
  5. Click OK.
Outlook 2010
  1. Change to the Mail view (if you aren’t there already).
  2. Go to the Home tab of the ribbon.
  3. In the Delete section, click on the Junk button.
  4. From the menu that pops out, select Junk Mail Options.
  5. On the Options tab, select the level of filtering you would like. (I keep mine on Low.)
  6. Click OK.
With filtering turned on, junk messages will go to the Junk E-mail folder in Outlook. I recommend that you check this folder regularly. I have found that on occasion, even with the filtering set on Low, emails from friends or from my organization have been sent to my Junk folder.

Block An Email Address

If you regularly receive spam from a specific email address, you can block the sender. To block a sender:

  1. Right click on the email.
  2. Select Junk Email.
  3. Select Add Sender to Blocked Senders List (wording is slightly different in 2010, but similar).