2 Preparing Your Cluster
This chapter contains the information that your system administrator and network administrator need to help you, as the DBA, configure two nodes in your cluster. This chapter assumes a basic understanding of the Linux operating system. In some cases, you may need to refer to details in Oracle Grid Infrastructure Installation and Upgrade Guide for your operating system. In addition, you must have the following privileges to perform certain tasks in this chapter: root or sudo privileges on UNIX and Linux systems, and Administrator privileges on Windows systems.
Topics:
- Verifying System Requirements
  Before you begin your installation, check to ensure that your system meets the requirements for Oracle Real Application Clusters (Oracle RAC).
- Preparing the Server
  After you have verified that your system meets the basic requirements for installing Oracle RAC, the next step is to configure the server in preparation for installation.
- Configuring the Network
  Before you install Oracle Grid Infrastructure and Oracle RAC, you need to decide on network names and configure IP addresses.
- Preparing the Operating System and Software
  When you install the Oracle software on your server, Oracle Universal Installer expects the operating system to have specific packages and software applications installed.
- Configuring Installation Directories and Shared Storage
  Oracle RAC requires access to a shared file system for storing Oracle Clusterware files. You must also determine where the Oracle software and database files will be installed.
2.1 Verifying System Requirements
Before you begin your installation, check to ensure that your system meets the requirements for Oracle Real Application Clusters (Oracle RAC).
Topics:
- Checking Operating System Certifications
  You must ensure that you have a certified combination of the operating system and the Oracle Database software.
- About Hardware Requirements
  Each node you make part of your cluster, or Oracle Clusterware and Oracle RAC installation, must satisfy the minimum hardware requirements of the software.
- About Shared Storage
  Oracle Grid Infrastructure and Oracle RAC require access to shared storage, so each node and instance in the cluster can access the same set of files.
- About Network Hardware Requirements
  You must have the hardware to support communication, both public and private, between all the nodes in the cluster.
- About IP Address Requirements
  Oracle Grid Infrastructure and Oracle RAC use a variety of IP addresses for communication.
- Verifying Operating System and Software Requirements
  Ensure your system meets the operating system version and other software requirements.
2.1.1 Checking Operating System Certifications
You must ensure that you have a certified combination of the operating system and the Oracle Database software.
For a list of the certified operating systems and Oracle Database software, refer to My Oracle Support certification, which is located at the following website:
https://support.oracle.com
You can find certification information by selecting the Certifications tab. You can also search for support notes that contain instructions on how to locate the Certification information for your platform.
Note:
Oracle Universal Installer verifies that your server and operating system meet the listed requirements. However, you should check the requirements before you start Oracle Universal Installer to ensure your server and operating system meet the requirements. This helps to avoid delays in the software installation process that you might incur if your hardware or software is not certified.
2.1.2 About Hardware Requirements
Each node you make part of your cluster, or Oracle Clusterware and Oracle RAC installation, must satisfy the minimum hardware requirements of the software.
Note:
When you install Oracle software, Oracle Universal Installer (OUI) automatically performs hardware prerequisite checks and notifies you if they are not met.
The minimum hardware requirements are:
- At least 4 GB of RAM for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC
- An amount of swap space that is at least equal to the amount of RAM
- At least two network switches, each at least 1 GbE, to support the public and private networks
- Servers that run in either runlevel 3 or runlevel 5
- Temporary space (at least 1 GB) available in /tmp
- A processor type (CPU) that is certified with the release of the Oracle software being installed (64-bit)
- A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
- The same chip architecture on all servers in the cluster, for example, all 64-bit processors
- Access to either Storage Area Network (SAN) or Network-Attached Storage (NAS)
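A quick way to confirm several of these minimums before running OUI is to query the kernel directly. The following is a minimal sketch, not an Oracle-provided script; the thresholds in the comments are the values listed above:

```shell
# Sketch: spot-check RAM, swap, /tmp space, and CPU architecture
# against the minimums listed above. /proc/meminfo reports values in kB.
grep MemTotal /proc/meminfo    # need at least 4 GB (4194304 kB)
grep SwapTotal /proc/meminfo   # should be at least equal to RAM
df -h /tmp                     # need at least 1 GB of free space
uname -m                       # must report a 64-bit architecture, such as x86_64
```

The Cluster Verification Utility (cluvfy) shipped with Oracle Grid Infrastructure performs these checks far more thoroughly; this sketch is only a quick first pass.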
You must also have sufficient disk space in the software installation locations to store the Oracle software, as described in the following table.
Location | Amount | Purpose |
---|---|---|
Grid home directory | At least 8 GB | Software installation of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) |
Grid home directory | 100 GB is recommended | Additional disk space for the associated log files and patches |
Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) | At least 10 GB | Oracle Clusterware and Oracle ASM log files and for diagnostic collections generated by Trace File Analyzer (TFA) Collector |
Oracle home | At least 6.4 GB | Oracle Database software binaries |
Note:
Refer to the Oracle Grid Infrastructure Installation and Upgrade Guide and the Oracle Real Application Clusters Installation Guide for your operating system for the actual disk space requirements. The amount of disk space used by the Oracle software can vary, and might be higher than what is listed in this guide.
2.1.3 About Shared Storage
Oracle Grid Infrastructure and Oracle RAC require access to shared storage, so each node and instance in the cluster can access the same set of files.
An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation demonstrated in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files.
Oracle Clusterware achieves superior scalability and high availability by using the following components:
- Voting disk–Manages cluster membership and arbitrates cluster ownership between the nodes in case of network failures. The voting disk is a file that resides on shared storage. For high availability, Oracle recommends that you have multiple voting disks, and that you have an odd number of voting disks. If you define a single voting disk, then use mirroring at the file system level for redundancy.
- Oracle Cluster Registry (OCR)–Maintains cluster configuration information and configuration information about any cluster database within the cluster. The OCR contains information such as which database instances run on which nodes and which services run on which databases. The OCR also stores information about processes that Oracle Clusterware controls. The OCR resides on shared storage that is accessible by all the nodes in your cluster. Oracle Clusterware can multiplex, or maintain multiple copies of, the OCR, and Oracle recommends that you use this feature to ensure high availability.
- Grid Infrastructure Management Repository–Supports Oracle Database Quality of Service management, Memory Guard, and Cluster Health Monitor.
These Oracle Clusterware components require the following disk space on a shared file system:
- Three Oracle Cluster Registry (OCR) files, at least 400 MB for each volume, or 1.2 GB total disk space
- Three voting disk files, 300 MB for each volume, or 900 MB total disk space
- An additional 5.9 GB of disk space for the Grid Infrastructure Management Repository, which is stored in an OCR volume
See Also:
- The chapter on configuring storage in Oracle Grid Infrastructure Installation and Upgrade Guide
2.1.4 About Network Hardware Requirements
You must have the hardware to support communication, both public and private, between all the nodes in the cluster.
Oracle Clusterware requires that you connect the nodes in the cluster to a private network by way of a private interconnect. The private interconnect is a separate network that you configure between cluster nodes. The interconnect serves as the communication path between nodes in the cluster. This interconnect should be a private interconnect, meaning it is not accessible to nodes that are not members of the cluster.
The interconnect used by Oracle RAC is the same interconnect that Oracle Clusterware uses. Each cluster database instance uses the interconnect for messaging to synchronize the use of shared resources by each instance. Oracle RAC also uses the interconnect to transmit data blocks that are shared between the instances.
When you configure the network for Oracle RAC and Oracle Clusterware, each node in the cluster must meet the following requirements:
- Each node must have at least two network interface cards (NICs), or network adapters. One adapter is for the public network interface and the other adapter is for the private network interface (the interconnect). Install additional network adapters on a node if that node meets any of the following conditions:
  - Does not have at least two network adapters
  - Has two network interface cards but is using network-attached storage (NAS). You should have a separate network adapter for NAS.
  - Has two network cards, but you want to use the Redundant Interconnect Usage feature, which allows multiple network adapters to be accessed as a single adapter. Redundant Interconnect Usage enables load-balancing and high availability across multiple (up to 4) private networks (also known as interconnects).
- For easier management, public interface names should be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you should configure eth0 as the public interface on all nodes. Network interface names are case-sensitive.
- The network adapter for the public interface must support TCP/IP.
- The network adapter for the private interface must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better).
  Note:
  - UDP is the default interface protocol for Oracle RAC and Oracle Clusterware.
  - You must use a switch for the interconnect. Oracle recommends that you use a dedicated network switch. Token-rings or crossover cables are not supported for the interconnect.
  - Loopback devices are not supported.
- For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster must be able to connect to every private network interface in the cluster.
- The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
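As a quick sanity check of these requirements, you can list the interface names the kernel sees on each node and confirm that the host name is RFC 952-compliant. A minimal sketch (interface names such as eth0 are examples only):

```shell
# Sketch: confirm that the same public and private interface names
# (for example, eth0 and eth1) exist on every cluster node.
ls /sys/class/net        # network interface names visible to the kernel
hostname                 # must contain only letters, digits, and hyphens
```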
See Also:
- Oracle Grid Infrastructure Installation and Upgrade Guide for Linux for more information about the configuration requirements for Redundant Interconnect Usage
2.1.5 About IP Address Requirements
Oracle Grid Infrastructure and Oracle RAC use a variety of IP addresses for communication.
When performing an advanced installation of Oracle Grid Infrastructure for a cluster, you can choose to use Grid Naming Service (GNS) and Dynamic Host Configuration Protocol (DHCP) for virtual IPs (VIPs). GNS uses multicast Domain Name Server (mDNS) to enable the cluster to assign host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional network address configuration in the domain name server (DNS).
You can configure the public network interfaces for cluster nodes to use IPv4, IPv6, or both types of IP addresses. During installation, the nodes can be connected to networks that have both address types, but you cannot configure the VIPs to use both address types during installation. After installation, you can configure cluster member nodes with a mixture of IPv4 and IPv6 addresses. Oracle Grid Infrastructure and Oracle RAC support the standard IPv6 address notations specified by RFC 2732.
This guide documents how to perform a typical installation, which does not use GNS and uses only IPv4 addresses. You must configure the following addresses manually in your corporate DNS:
- A fixed public IP address for each node
- A virtual IP address for each node
- Three single client access name (SCAN) addresses for the cluster
Note:
Oracle Clusterware uses interfaces marked as private as the cluster interconnects. You do not need to specify IP addresses for the interconnect.
During installation, a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The public IP addresses, SCAN addresses, and VIP addresses must all be on the same subnet. The SCAN must be unique within your network. The SCAN addresses should not respond to ping commands before installation. In a Typical installation, the SCAN is also the name of the cluster, so the SCAN must meet the same requirements as the cluster name and can be no longer than 15 characters.
During installation of the Oracle Grid Infrastructure for a cluster, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, then the network configuration files on the client computer do not have to be modified when nodes are added to or removed from the cluster. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that form the cluster. Clients can connect to the cluster database using the easy connect naming method and the SCAN.
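For example, with a hypothetical SCAN of docrac-scan.example.com, a listener port of 1521, and a database service named orcl, an easy connect string would look like the following (all names here are illustrative, not values from this guide):

```
sqlplus system@//docrac-scan.example.com:1521/orcl
```

Because the client names only the SCAN, the connection succeeds regardless of which cluster nodes are currently running the service.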
See Also:
- Oracle Database Net Services Administrator's Guide for information about the easy connect naming method
- Oracle Grid Infrastructure Installation and Upgrade Guide for your platform for more information about configuring GNS and IPv6 network addresses for your cluster
2.1.6 Verifying Operating System and Software Requirements
Ensure your system meets the operating system version and other software requirements.
Refer to the Oracle Grid Infrastructure Installation and Upgrade Guide and the Oracle Real Application Clusters Installation Guide for your platform for information about specific requirements for your environment.
Topics:
- About Operating System and Software Requirements
  Before installing the Oracle software, check that your operating system meets the requirements.
- About Installation Fixup Scripts
  Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called Fixup scripts, to finish incomplete system configuration steps.
- Checking the Current Operating System Configuration
  Instead of waiting for the installation to notify you of an incorrect configuration, you can use operating system commands to manually check the operating system configuration before installation. This helps you determine if additional time will be needed to update the operating system before starting the Oracle Grid Infrastructure installation.
2.1.6.1 About Operating System and Software Requirements
Before installing the Oracle software, check that your operating system meets the requirements.
The operating system and software requirements might include:
- The operating system release
- The kernel version of the operating system
- Modified values for kernel parameters
- Installed packages, patches, or patch sets
- Installed compilers and drivers
- Web browser type and version
- Additional application software requirements
If you are currently running an operating system release that is not supported by Oracle Database 12c release 2 (12.2), then you must first upgrade your operating system before installing Oracle Real Application Clusters.
If you are using Oracle Linux as your operating system, then you can use the Oracle Validated RPM system configuration script to configure your system.
2.1.6.2 About Installation Fixup Scripts
Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called Fixup scripts, to finish incomplete system configuration steps.
If OUI detects an incomplete task, then it generates a Fixup script (runfixup.sh). You can run the script after you click Fix and Check Again.
Fixup scripts do the following:
- If necessary, set kernel parameters to values required for successful installation, including:
  - Shared memory parameters
  - Open file descriptor and UDP send/receive parameters
- Create and set permissions on the Oracle Inventory (central inventory) directory.
- Create or reconfigure primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.
- Set shell limits, if necessary, to required values.
2.1.6.3 Checking the Current Operating System Configuration
Instead of waiting for the installation to notify you of an incorrect configuration, you can use operating system commands to manually check the operating system configuration before installation. This helps you determine if additional time will be needed to update the operating system before starting the Oracle Grid Infrastructure installation.
To determine if the operating system requirements for Oracle Linux have been met:
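On Oracle Linux, the checks typically begin with commands such as the following sketch. The package names shown are examples only, not the complete required list; see the installation guide for your release:

```shell
# Sketch: identify the distribution, kernel version, and a few packages.
cat /etc/os-release            # distribution name and release
uname -r                       # kernel version
# Query a few sample packages on RPM-based systems (example names):
command -v rpm >/dev/null && rpm -q binutils gcc glibc || echo "rpm not available"
```

Compare the output against the operating system and package requirements documented for your platform before starting the installation.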
2.2 Preparing the Server
After you have verified that your system meets the basic requirements for installing Oracle RAC, the next step is to configure the server in preparation for installation.
Topics:
- About Operating System Users and Groups
  Depending on whether this is the first time Oracle software is installed on this server, you may have to create operating system groups and users.
- Configuring Operating System Users and Groups on Linux Systems
  This task describes how to create the grid and oracle users prior to installing the software.
- Configuring Secure Shell on Linux Systems
  To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes.
- About Configuring the Software Owner's Shell Environment on Linux Systems
  On Oracle Linux, you run Oracle Universal Installer (OUI) from the grid account. OUI obtains information from the environment variables configured for the grid user.
2.2.1 About Operating System Users and Groups
Depending on whether this is the first time Oracle software is installed on this server, you may have to create operating system groups and users.
Topics:
- Required Operating System Users and Groups
  To install Oracle Grid Infrastructure for a cluster and Oracle Real Application Clusters (Oracle RAC), you must create certain operating system groups and users.
- Separate Operating System Users and Groups for Oracle Software Installations on UNIX and Linux Systems
  Instead of using a single operating system user as the owner of every Oracle software installation, you can use multiple users, each owning one or more Oracle software installations.
- Separate Operating System Users and Groups for Oracle Software Installations on Windows Systems
  On the Windows platform, management of operating system users and groups is different from Linux and UNIX systems.
- Optional Operating System Users and Groups
  You can use additional users and groups to divide administrative access privileges to the Oracle Grid Infrastructure for a cluster installation from other administrative users and groups associated with other Oracle installations.
2.2.1.1 Required Operating System Users and Groups
To install Oracle Grid Infrastructure for a cluster and Oracle Real Application Clusters (Oracle RAC), you must create certain operating system groups and users.
Table 2-1 Required Operating System Users and Groups for Oracle Grid Infrastructure and Oracle RAC Installations
Operating System User or Group | Description |
---|---|
The Oracle Inventory group for all Oracle software installations (oinstall) | The Oracle Inventory group must be the primary group for Oracle software installation owners on Linux and UNIX platforms. Members of the Oracle Inventory group have access to the Oracle Inventory directory. This directory is the central inventory record of all Oracle software installations on a server and the installation logs and trace files from each installation. On Windows systems, this group is created and managed automatically. |
An Oracle software owner, or installation user. This is the user account you use when installing the software. | If you want to use a single software owner for all installations, then typically the user name is oracle. Using separate users to install Oracle Grid Infrastructure and Oracle RAC is the recommended configuration. |
The OSRAC group for Oracle Database authentication | The OSRAC group is a system privileges group whose members are granted the SYSRAC privilege to administer Oracle Database and the SYSASM privilege to administer Oracle Clusterware and Oracle ASM. To provide fine-grained control of administrative privileges you can create multiple operating system groups as described in "Optional Operating System Users and Groups". |
Note:
When installing Oracle RAC on a Microsoft Windows platform:
- OUI automatically creates the groups used for operating system authentication of access to the Oracle software.
- The user performing the installation must be an Administrator user.
- If you specify an Oracle Home user during installation, this user must be a domain user.
On Linux and UNIX systems, if you use one installation owner for both Oracle Grid Infrastructure for a cluster and Oracle RAC, then when you want to perform administration tasks, you must change the value of the ORACLE_HOME environment variable to match the instance you want to administer (Oracle ASM, in the Grid home, or a database instance in the Oracle home). To change the ORACLE_HOME environment variable, use a command syntax similar to the following example, where /u01/app/12.2.0/grid is the Oracle Grid Infrastructure for a cluster home:

ORACLE_HOME=/u01/app/12.2.0/grid; export ORACLE_HOME

If you try to administer an instance using SQL*Plus, LSNRCTL, or ASMCMD while ORACLE_HOME is set to an incorrect binary path, then you will encounter errors. The Oracle home path does not affect SRVCTL commands.
On Windows systems, to perform administration tasks, you simply run the appropriate utility for either Oracle Grid Infrastructure or Oracle RAC, and ORACLE_HOME is configured automatically.
2.2.1.2 Separate Operating System Users and Groups for Oracle Software Installations on UNIX and Linux Systems
Instead of using a single operating system user as the owner of every Oracle software installation, you can use multiple users, each owning one or more Oracle software installations.
A user created to own only the Oracle Grid Infrastructure for a cluster installation is called the grid user. This user owns both the Oracle Clusterware and Oracle Automatic Storage Management binaries. A user created to own either all Oracle software installations (including Oracle Grid Infrastructure for a cluster), or only Oracle Database software installations, is called the oracle user.
You can also use different users for each Oracle Database software installation. Additionally, you can specify a different OSDBA group for each Oracle Database software installation. By using different operating system groups for authenticating administrative access to each Oracle Database installation, users have SYSDBA or SYSRAC privileges for the databases associated with their OSDBA or OSRAC group, rather than for all the databases on the system.

Members of the OSDBA group can also be granted the SYSASM system privilege, which gives them administrative access to Oracle ASM. As described in "Optional Operating System Users and Groups", you can configure a separate operating system group for Oracle ASM authentication to separate users with SYSASM access to the Oracle ASM instances from users with SYSDBA access to the database instances.
If you want to create separate Oracle software owners so you can use separate users and operating system privilege groups for the different Oracle software installations, then note that each of these users must have the Oracle central inventory group (oinstall) as their primary group. Members of this group have the required write privileges to the Oracle Inventory directory.
Note:
The Oracle Grid Infrastructure for a cluster installation can be owned by only one user. You cannot have one user that owns the Oracle Clusterware installed files and a different user that owns the Oracle ASM installed files.
2.2.1.3 Separate Operating System Users and Groups for Oracle Software Installations on Windows Systems
On the Windows platform, management of operating system users and groups is different from Linux and UNIX Systems.
You create at least one Oracle Installation user (an Administrator user who installs Oracle software) when you install Oracle Grid Infrastructure. You can use this Oracle Installation user or a different Oracle Installation user when installing Oracle Database software.
You should create additional non-privileged Windows user accounts to use as Oracle Home users during installation. An Oracle Home User is different from an Oracle Installation user. An Oracle Installation user is the user who needs administrative privileges to install Oracle products. An Oracle Home user is a low-privileged Windows User Account specified during installation that runs most of the Windows services required for the Oracle home. Different Oracle homes on a system can share the same Oracle Home user or use different Oracle Home users. You do not need to create any operating system groups because they are created and managed automatically by the Oracle software.
The Oracle Home user can be a Built-in Account or a Windows domain user. The Windows domain user should be a low-privileged, non-administrative account to ensure that the Oracle Database services running under Oracle Home User have only those privileges required to run Oracle products. Create separate Oracle Home users for each Oracle software product to have separate administrative controls for the different Oracle Homes.
If you use different Oracle Home users for each Oracle Database software installation, then you can use the ORA_HOMENAME_RAC group that is associated with each Oracle home to separate SYSRAC privileges for each installation. A member of an ORA_HOMENAME_RAC group can use operating system authentication to log in to only the Oracle Databases that run from the Oracle home with the name HOMENAME. Members of the ORA_RAC group can use operating system authentication to log in to any Oracle database with SYSRAC privileges.

You can also use the Oracle ASM access control feature to enable role separation for Oracle ASM management. In previous releases, this feature was disabled on Windows because all Oracle Database services ran as Local System. For Oracle ASM administration, the new groups ORA_ASMADMIN, ORA_ASMDBA, and ORA_ASMOPER are automatically created and populated during Oracle Grid Infrastructure installation.
2.2.1.4 Optional Operating System Users and Groups
You can use additional users and groups to divide administrative access privileges to the Oracle Grid Infrastructure for a cluster installation from other administrative users and groups associated with other Oracle installations.
You implement separate administrative access for users by specifying membership in different operating system groups. You implement separate installation privileges by using different installation owners for each Oracle installation. The optional users and groups you can use are:
- The Oracle Automatic Storage Management Group, or OSASM group (typically asmadmin for Linux and ORA_ASMADMIN for Windows)
- The ASM Database Administrator group, or OSDBA for ASM group (typically asmdba for Linux and ORA_ASMDBA for Windows)
- The OSOPER for Oracle Database group (typically oper for Linux and ORA_OPER for Windows)
- The OSOPER for Oracle ASM group (typically asmoper for Linux and ORA_ASMOPER for Windows)
- The OSRACDBA group for the cluster agents to use the SYSRAC role instead of the SYSDBA role (typically racdba for Linux and ORA_RAC for Windows)
- The OSBACKUPDBA group for Oracle Database
- The OSDGDBA group for Oracle Data Guard
- The OSKMDBA group for encryption key management
See Also:
- About Configuring the Software Owner's Shell Environment on Linux Systems
- Oracle Database Security Guide for information about the privileges associated with administrative users and groups
- Oracle Grid Infrastructure Installation and Upgrade Guide for your operating system for more information about configuring separation of privileges by role
2.2.2 Configuring Operating System Users and Groups on Linux Systems
This task describes how to create the grid and oracle users prior to installing the software.
The instructions in this guide use one software owner for the Oracle Grid Infrastructure installation, and a separate user for the Oracle Database (Oracle Real Application Clusters) installation. The users are named grid and oracle. The oracle user belongs to the oinstall and dba operating system groups.
To create software owners with all operating system-authenticated administration privileges:
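A minimal sketch of this task, run as root on each node, follows. The group and user IDs shown are examples only; what matters is that you use identical names and IDs on every node so that user equivalency holds:

```shell
# Sketch (run as root on every node): create the groups, then the two
# software owners. IDs are example values; keep them the same on all nodes.
groupadd -g 54321 oinstall          # Oracle Inventory group
groupadd -g 54322 dba               # OSDBA group
useradd -u 54331 -g oinstall -G dba grid     # Grid Infrastructure owner
useradd -u 54332 -g oinstall -G dba oracle   # Oracle Database owner
id grid
id oracle
# Afterwards, set a password for each user: passwd grid ; passwd oracle
```

Both users have oinstall as their primary group, as required for Oracle software installation owners.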
2.2.3 Configuring Secure Shell on Linux Systems
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes.
OUI uses the ssh and scp commands during installation to run remote commands on and copy files to the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. SSH is also used by the configuration assistants, Enterprise Manager, and when adding nodes to the cluster.
You can configure SSH from the Oracle Universal Installer (OUI) interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
On Linux systems, to enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass phrase, you must have user equivalency in the cluster. User equivalency exists in a cluster when the following conditions exist on all nodes in the cluster:
- A given user has the same user name, user ID (UID), and password.
- A given user belongs to the same groups.
- A given group has the same group ID (GID).
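If you configure SSH manually rather than through OUI, the setup for the grid user can be sketched as follows (node2 is a hypothetical host name; repeat the copy step for every node in the cluster):

```shell
# Sketch: passwordless SSH for the grid user. Run as the grid user.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
# Generate a passphrase-less key if one does not already exist:
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
# Copy the public key to every other node, then verify there is no prompt:
# ssh-copy-id grid@node2
# ssh grid@node2 date
```

Run the verification from every node to every other node, including each node to itself, so that no first-connection host-key prompts remain.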
See Also:
- "Configuring Operating System Users and Groups on Linux Systems" for information about configuring user equivalency
- Oracle Grid Infrastructure Installation Guide for your platform for more information about manually configuring SSH
2.2.4 About Configuring the Software Owner's Shell Environment on Linux Systems
On Oracle Linux, you run Oracle Universal Installer (OUI) from the grid account. OUI obtains information from the environment variables configured for the grid user.
Before running OUI, you must make the following changes to the shell startup file for the software owner of Oracle Grid Infrastructure for a cluster:
- Set the default file mode creation mask (umask) of the installation user (grid) to 022 in the shell startup file. Setting the mask to 022 ensures that the user performing the software installation creates files with 644 permissions.
- Set ulimit settings for file descriptors (nofile) and processes (nproc) for the installation user (grid).
- Set the DISPLAY environment variable in preparation for the Oracle Grid Infrastructure for a cluster installation.
- Remove any lines in the file that set values for the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variables.
After you have saved your changes, run the shell startup script to configure the environment.
Also, if the /tmp directory has less than 1 GB of available disk space, but you have identified a different, non-shared file system that has at least 1 GB of available space, then you can set the TEMP and TMPDIR environment variables to specify the alternate temporary directory on this file system.
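Taken together, the changes above might look like the following fragment of the grid user's shell startup file. This is a sketch, assuming a Bash login shell (~/.bash_profile); the limit values, display name, and temporary directory path are illustrative, not Oracle-mandated values for every release.

```shell
# Sketch of a grid user ~/.bash_profile fragment (values illustrative).
umask 022                         # new files created with 644 permissions

# Raise per-process limits for file descriptors and processes:
ulimit -n 65536                   # nofile: open file descriptors
ulimit -u 16384                   # nproc: user processes

export DISPLAY=workstation1:0.0   # hypothetical X display for OUI

# Optional: point temporary files at a file system with >= 1 GB free:
export TEMP=/mount1/tmp
export TMPDIR=/mount1/tmp

# Do NOT set ORACLE_SID, ORACLE_HOME, or ORACLE_BASE here; remove any
# lines that do.
```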
To review your current environment settings, use the env | more command as the grid user.
Note:
Remove any stty commands from hidden files (such as logon or profile scripts) before you start the installation. On Linux systems, if there are any such files that contain stty commands, then when these files are loaded by the remote shell during installation, OUI indicates an error and stops the installation.
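A simple scan can help you find leftover stty commands in a software owner's hidden startup files. This is a sketch; the file list covers common Bourne- and C-shell profiles, and you may need to extend it for your site.

```shell
#!/bin/sh
# Scan shell startup files for stty commands that would break the
# OUI SSH setup. Pass the home directory to scan; defaults to $HOME.
scan_profiles() {
  dir=${1:-$HOME}
  found=0
  for f in "$dir"/.profile "$dir"/.bash_profile "$dir"/.bashrc \
           "$dir"/.login "$dir"/.cshrc; do
    [ -f "$f" ] || continue
    if grep -n 'stty' "$f"; then     # print file line numbers that match
      found=1
    fi
  done
  return $found
}

if scan_profiles "$HOME"; then
  echo "no stty commands found"
else
  echo "remove the stty lines listed above"
fi
```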
See Also:
- Oracle Grid Infrastructure Installation Guide for your platform for more information on how to configure the Oracle software owner environment before installation
2.3 Configuring the Network
Before you install Oracle Grid Infrastructure and Oracle RAC, you must decide on network names and configure IP addresses.
Clients and applications connect to the database using a Single Client Access Name (SCAN). The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that form the cluster. A SCAN resolves to multiple IP addresses, each referencing a listener in the cluster that handles public client connections. The installation process creates these listeners, called SCAN listeners.
To configure the network in preparation for installing Oracle Grid Infrastructure for a cluster:
After you have completed the installation process, configure clients to use the SCAN to access the cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
Topics:
- Verifying the Network Configuration
After you have configured the network, perform verification tests to make sure it is configured properly. If there are problems with the network connection between nodes in the cluster, then the Oracle Clusterware installation fails.
2.3.1 Verifying the Network Configuration
After you have configured the network, perform verification tests to make sure it is configured properly. If there are problems with the network connection between nodes in the cluster, then the Oracle Clusterware installation fails.
To verify the network configuration on a two-node cluster that is running Oracle Linux:
2.4 Preparing the Operating System and Software
When you install the Oracle software on your server, Oracle Universal Installer expects the operating system to have specific packages and software applications installed.
Topics:
- About Setting the Time on All Nodes
Before starting the installation, ensure that the date and time settings on all the cluster nodes are set as closely as possible to the same date and time.
- About Configuring Kernel Parameters
OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC. For production database systems, Oracle recommends that you tune the settings to optimize the performance of your particular system.
- About Performing Platform-Specific Configuration Tasks
You may be required to perform special configuration steps that are specific to the operating system on which you are installing Oracle RAC, or for the components used with your cluster.
2.4.1 About Setting the Time on All Nodes
Before starting the installation, ensure that the date and time settings on all the cluster nodes are set as closely as possible to the same date and time.
A cluster time synchronization mechanism ensures that the internal clocks of all the cluster members are synchronized. For Oracle RAC on Linux, you can use either the Network Time Protocol (NTP) or the Oracle Cluster Time Synchronization Service.
NTP is a protocol designed to synchronize the clocks of servers connected by a network. When using NTP, each server on the network runs client software to periodically make timing requests to one or more servers, referred to as reference NTP servers. The information returned by the timing request is used to adjust the server's clock. All the nodes in your cluster should use the same reference NTP server.
Note:
If you use NTP on Linux or UNIX platforms, then you must configure it using the -x flag. See the Oracle Grid Infrastructure Installation and Upgrade Guide for your platform for details on how to configure NTP.
If you do not configure NTP, then Oracle configures and uses the Cluster Time Synchronization Service (CTSS) to keep the internal clocks of all the cluster members synchronized. CTSS designates the first node in the cluster as the master and then synchronizes all other nodes in the cluster to have the same time as the master node. CTSS does not use any external clock for synchronization.
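One quick way to confirm that the -x (slew) flag is set is to inspect the ntpd options. The sketch below assumes the config file layout typical of Oracle Linux and Red Hat Enterprise Linux (/etc/sysconfig/ntpd with an OPTIONS= line); other distributions store the options elsewhere.

```shell
#!/bin/sh
# Check whether the ntpd options include the -x (slew) flag.
# The config path is distribution-specific; /etc/sysconfig/ntpd is
# typical on Oracle Linux and Red Hat Enterprise Linux.
ntp_slew_enabled() {
  conf=${1:-/etc/sysconfig/ntpd}
  [ -f "$conf" ] && grep -q '^OPTIONS=.*-x' "$conf"
}

# Demonstration against an illustrative config file:
demo=$(mktemp)
echo 'OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"' > "$demo"
if ntp_slew_enabled "$demo"; then
  echo "-x flag present"
else
  echo "-x flag missing"
fi
rm -f "$demo"
```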
Note:
Using NTP or CTSS does not protect your system against human error resulting from a change in the system time for a node.
2.4.2 About Configuring Kernel Parameters
OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC. For production database systems, Oracle recommends that you tune the settings to optimize the performance of your particular system.
Note:
If you find parameter settings or shell limit values on your system that are already greater than the values required by OUI, then do not modify those settings.
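A check of this kind can be scripted with sysctl. In the sketch below, the parameter names and minimum values are illustrative examples only; take the real minimums from the installation guide for your release.

```shell
#!/bin/sh
# Compare current kernel parameter values against required minimums.
# Parameter names and minimums below are illustrative examples.
meets_minimum() {
  [ "$1" -ge "$2" ]
}

check_kernel_param() {
  param=$1; minimum=$2
  current=$(sysctl -n "$param" 2>/dev/null || true)
  if [ -z "$current" ]; then
    echo "$param: not available on this system"
  elif meets_minimum "$current" "$minimum"; then
    echo "$param = $current (meets minimum $minimum)"
  else
    echo "$param = $current (below minimum $minimum)"
  fi
}

check_kernel_param fs.aio-max-nr 1048576
check_kernel_param fs.file-max   6815744
```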
2.4.3 About Performing Platform-Specific Configuration Tasks
You may be required to perform special configuration steps that are specific to the operating system on which you are installing Oracle RAC, or for the components used with your cluster.
The following list provides examples of operating system-specific installation tasks:
- Configure the use of Huge Pages on SUSE Linux, Red Hat Enterprise Linux, or Oracle Linux.
- Set shell limits for the oracle user on Red Hat Linux or Oracle Linux systems to increase the number of files and processes available to Oracle Clusterware and Oracle RAC.
- Create X library symbolic links on HP-UX.
- Configure network tuning parameters on AIX Based Systems.
2.5 Configuring Installation Directories and Shared Storage
Oracle RAC requires access to a shared file system for storing Oracle Clusterware files. You must also determine where the Oracle software and database files will be installed.
You must complete certain storage configuration tasks before you start Oracle Universal Installer.
Note:
Additional configuration is required to use the Oracle Automatic Storage Management Library Driver (ASMLIB) with third-party vendor multipath disks. Refer to the Oracle Grid Infrastructure Installation Guide for your operating system for more details about these requirements.
Topics:
- About the Oracle Inventory Directory
The Oracle Inventory (oraInventory) directory is the central inventory record of all Oracle software installations on a server.
- Locating the Oracle Inventory Directory
If you have an existing Oracle Inventory, then ensure that you use the same Oracle Inventory for all Oracle software installations, and ensure that all Oracle software users you intend to use for installation have permissions to write to this directory.
- Creating the Oracle Grid Infrastructure for a Cluster Home Directory
During installation, you are prompted to provide a path to a home directory in which to place the software for Oracle Grid Infrastructure for a cluster. OUI installs Oracle Clusterware and Oracle ASM into a directory referred to as Grid_home.
- Creating the Oracle Base Directory
Before installing any Oracle software, you must configure an Oracle base directory. OUI uses the Oracle base directory to determine the location of the Oracle Inventory directory, and also for installing Oracle RAC.
- About the Oracle Home Directory
The Oracle home directory is the location in which the Oracle RAC software is installed.
- Configuring Shared Storage
Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files.
- Creating Files on a NAS Device for Use with Oracle Automatic Storage Management
If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.
- About Oracle ASM with Oracle ASM Filter Driver
During Oracle Grid Infrastructure installation, you can choose to install and configure Oracle Automatic Storage Management Filter Driver (Oracle ASMFD). Oracle ASMFD helps prevent corruption in Oracle ASM disks and files within the disk group.
- Using Oracle ASMFD to Configure Disks for Oracle ASM
Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
2.5.1 About the Oracle Inventory Directory
The Oracle Inventory (oraInventory) directory is the central inventory record of all Oracle software installations on a server.
The oraInventory directory contains the following:
- A registry of the Oracle home directories (Oracle Grid Infrastructure for a cluster and Oracle Database) on the system
- Installation logs and trace files from installations of Oracle software. These files are also copied to the respective Oracle homes for future reference.
OUI creates the oraInventory directory during installation. By default, the Oracle Inventory directory is not installed under the Oracle base directory. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas each user that performs a software installation can use a separate Oracle base directory.
2.5.2 Locating the Oracle Inventory Directory
If you have an existing Oracle Inventory, then ensure that you use the same Oracle Inventory for all Oracle software installations, and ensure that all Oracle software users you intend to use for installation have permissions to write to this directory.
To determine if you have an Oracle central inventory directory (oraInventory) on your system:
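On Linux, the pointer file /etc/oraInst.loc (when present) records the central inventory location in its inventory_loc entry. The sketch below reads that entry; on some platforms the pointer file lives elsewhere, for example /var/opt/oracle/oraInst.loc on Oracle Solaris.

```shell
#!/bin/sh
# Print the central inventory location recorded in an oraInst.loc
# pointer file, if one exists.
inventory_location() {
  loc_file=${1:-/etc/oraInst.loc}
  if [ -f "$loc_file" ]; then
    awk -F= '$1 == "inventory_loc" { print $2 }' "$loc_file"
  else
    echo "no oraInst.loc found; no central inventory on this node" >&2
    return 1
  fi
}

# Demonstration against a sample pointer file:
sample=$(mktemp)
printf 'inventory_loc=/u01/app/oraInventory\ninst_group=oinstall\n' > "$sample"
inventory_location "$sample"
rm -f "$sample"
```

If the file exists, use the reported directory as the Oracle Inventory for all further installations on that node.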
See Also:
"About the Oracle Inventory Directory"
2.5.3 Creating the Oracle Grid Infrastructure for a Cluster Home Directory
During installation, you are prompted to provide a path to a home directory in which to place the software for Oracle Grid Infrastructure for a cluster. OUI installs Oracle Clusterware and Oracle ASM into a directory referred to as Grid_home.
Ensure that the directory path you provide for the Grid_home meets the following requirements:
- It should be created in a path outside existing Oracle homes.
- It should not be located in a user home directory.
- It should be created either as a subdirectory in a path where all files can be owned by root, or in a unique path.
- If you create the path before installation, then it should be owned by the installation owner of Oracle Grid Infrastructure for a cluster (oracle or grid), and set to 775 permissions.
Before you start the installation, you must have sufficient disk space on a file system for the Oracle Grid Infrastructure for a cluster directory. The file system that you use for the Grid home directory must have at least 4.5 GB of available disk space.
The path to the Grid home directory must be the same on all nodes. As the root user, you should create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that OUI can select that directory during installation.
To create the Grid home directory:
Log in to the operating system as the root user, then enter the following commands, where grid is the name of the user that will install the Oracle Grid Infrastructure software:
# mkdir -p /u01/app/12.2.0/grid
# chown -R grid:oinstall /u01/app/12.2.0/grid
Note:
On Linux and UNIX systems, you must ensure the Grid home directory is not a subdirectory of the Oracle base directory. Installing Oracle Clusterware in an Oracle base directory causes installation errors.
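Before creating the directory, you can confirm that the target file system has the required 4.5 GB free. The sketch below parses portable df -P output (sizes in 1 KB blocks); 4.5 GB is roughly 4718592 KB.

```shell
#!/bin/sh
# Verify that the file system holding a path has at least the given
# number of kilobytes available (4.5 GB is about 4718592 KB).
has_free_space() {
  path=$1; need_kb=$2
  avail_kb=$(df -P "$path" 2>/dev/null | awk 'NR == 2 { print $4 }')
  [ -n "$avail_kb" ] && [ "$avail_kb" -ge "$need_kb" ]
}

# Demonstration against /tmp; for a real check use the Grid home's
# file system, for example: has_free_space /u01 4718592
if has_free_space /tmp 4718592; then
  echo "/tmp has at least 4.5 GB available"
else
  echo "/tmp has less than 4.5 GB available"
fi
```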
2.5.4 Creating the Oracle Base Directory
Before installing any Oracle software, you must configure an Oracle base directory. OUI uses the Oracle base directory to determine the location of the Oracle Inventory directory, and also for installing Oracle RAC.
Oracle Universal Installer (OUI) creates the Oracle base directory for you in the location you specify. This directory is owned by the user performing the installation. The Oracle base directory (ORACLE_BASE) helps to facilitate the organization of Oracle installations, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
OFA guidelines recommend that you use a path similar to the following for the Oracle base directory:
/mount_point/app/user
In the preceding path example, the variable mount_point is the mount point directory for the file system where you intend to install the Oracle software, and user is the Oracle software owner (typically oracle). For OUI to recognize the path as an Oracle software path, it must be in the form u0[1-9]/app, for example, /u01/app.
The path to the Oracle base directory must be the same on all nodes. The permissions on the Oracle base directory should be at least 750.
Assume you have determined that the file system mounted as /u01 has sufficient room for both the Oracle Grid Infrastructure for a cluster and Oracle RAC software. You have decided to make the /u01/app/oracle/ directory the Oracle base directory. The user installing all the Oracle software is the oracle user.
To create the Oracle base directory:
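The commands mirror those shown for the Grid home. The following is a sketch to be run as root on every node; the path and the oracle:oinstall ownership follow the example above and should be adjusted for your site.

```shell
#!/bin/sh
# Create an OFA-compliant Oracle base directory and set its ownership
# and permissions. Run as root on every node in the cluster.
create_oracle_base() {
  base=$1
  mkdir -p "$base"
  chmod 775 "$base"
  # Requires root and an existing oracle user and oinstall group:
  chown -R oracle:oinstall "$base" 2>/dev/null || \
    echo "note: rerun the chown as root once the oracle user exists"
}

# Real invocation would be: create_oracle_base /u01/app/oracle
# Demonstration against a scratch location:
demo=$(mktemp -d)
create_oracle_base "$demo/u01/app/oracle"
ls -ld "$demo/u01/app/oracle"
```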
2.5.5 About the Oracle Home Directory
The Oracle home directory is the location in which the Oracle RAC software is installed.
You can use an Oracle home directory created in the local file system, for example, /u01/app/oracle/product/12.2.0/dbhome_1. The same directory must exist on every node in the cluster. You do not have to create these directories before installation. By default, the installer suggests a subdirectory of the Oracle base directory for the Oracle home.
You can also use a shared Oracle home. The location of the shared Oracle home can be on network storage, or a supported cluster file system such as Oracle Automatic Storage Management Cluster File System (Oracle ACFS). For more information about Oracle ACFS, see Oracle Automatic Storage Management Administrator's Guide.
If you use the local file system for the Oracle home directory, and you want to install a different release of Oracle RAC or Oracle Database on the same server, then you must use a separate Oracle home directory for each software installation. Multiple releases of the same product or different products can run from different Oracle homes concurrently. Products installed in one Oracle home do not conflict or interact with products installed in another Oracle home.
Using different Oracle homes for your installed software enables you to perform maintenance operations on the Oracle software in one home without affecting the software in another Oracle home. However, it also increases your software maintenance costs, because each Oracle home must be upgraded or patched separately.
2.5.6 Configuring Shared Storage
Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files.
The supported types of shared storage depend upon the platform you are using, for example:
- Oracle Automatic Storage Management (strongly recommended)
- A supported cluster file system, such as OCFS2 for Linux, or General Parallel File System (GPFS) on IBM platforms
- Network file system (NFS), which is not supported on Linux on POWER or on IBM zSeries Based Linux
- (Upgrades only) Shared disk partitions consisting of block devices. Block devices are disk partitions that are not mounted using the Linux file system. Oracle Clusterware and Oracle RAC write to these partitions directly.
Note:
You cannot use OUI to install Oracle Clusterware files on block or raw devices. You cannot put Oracle Clusterware binaries and files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
If you decide to use OCFS2 to store the Oracle Clusterware files, then you must use the proper version of OCFS2 for your operating system version. OCFS2 works with Oracle Linux and Red Hat Linux kernel version 2.6.
For all installations, you must choose the storage option to use for Oracle Clusterware files and Oracle Database files. The examples in this guide use Oracle ASM to store the Oracle Clusterware and Oracle Database files. The Oracle Grid Infrastructure for a cluster and Oracle RAC software is installed on disks local to each node, not on a shared file system.
This guide describes two different methods of configuring the shared disks for use with Oracle ASM:
Note:
For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certifications pages on My Oracle Support at:
See Also:
-
Oracle Grid Infrastructure Installation and Upgrade Guide for your platform if you are using a cluster file system or NFS
2.5.7 Creating Files on a NAS Device for Use with Oracle Automatic Storage Management
If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.
2.5.8 About Oracle ASM with Oracle ASM Filter Driver
During Oracle Grid Infrastructure installation, you can choose to install and configure Oracle Automatic Storage Management Filter Driver (Oracle ASMFD). Oracle ASMFD helps prevent corruption in Oracle ASM disks and files within the disk group.
Oracle ASM Filter Driver (Oracle ASMFD) rejects write I/O requests that are not issued by Oracle software. This write filter helps to prevent users with administrative privileges from inadvertently overwriting Oracle ASM disks, thus preventing corruption in Oracle ASM disks and files within the disk group. For disk partitions, the area protected is the area on the disk managed by Oracle ASMFD, assuming the partition table is left untouched by the user.
Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
Note:
Oracle ASMFD is supported on Linux x86-64 and Oracle Solaris operating systems.
2.5.9 Using Oracle ASMFD to Configure Disks for Oracle ASM
Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
The overall process of configuring disks with Oracle ASMFD is:
- Unzip the Oracle Grid Infrastructure software into the Oracle Grid home directory.
- Use the Oracle ASM command tool (ASMCMD) to label the disks to use for the disk group during installation.
- Install and configure Oracle Grid Infrastructure. Select the disks and the option to use Oracle ASMFD during installation.
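The labeling step above uses ASMCMD. The command name below (asmcmd afd_label with the --init option for pre-installation labeling) follows the Oracle Grid Infrastructure 12.2 documentation, but the disk devices and label names are illustrative, and this wrapper only prints the commands unless APPLY=1 is set:

```shell
#!/bin/sh
# Dry-run sketch of labeling shared disks for Oracle ASMFD with ASMCMD.
# Disk devices and label names are examples; run for real as the grid
# user with the environment pointing at the unzipped Grid home.
label_disk() {
  label=$1; device=$2
  cmd="asmcmd afd_label $label $device --init"
  if [ "${APPLY:-0}" = "1" ]; then
    $cmd                       # actually label the disk
  else
    echo "would run: $cmd"     # dry run: show the command only
  fi
}

label_disk DATA1 /dev/sdb1
label_disk DATA2 /dev/sdc1
```

During the subsequent Oracle Grid Infrastructure installation, select the labeled disks and the option to configure Oracle ASMFD.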