Technology • March 17, 2026

Brief Steps to Set Up an Oracle RAC Database: Complete Guide

Alright, let's get down to brass tacks. Setting up Oracle Real Application Clusters (RAC) isn't exactly like installing a simple desktop app. It's complex, it's fiddly, and honestly, it can drive you up the wall if you miss a step. But when you need high availability and scalability for a critical database, nothing else quite cuts it. I remember my first solo RAC setup years ago – spent a whole weekend troubleshooting network timeouts! We won't let that happen to you. This guide lays out the brief steps to set up an Oracle RAC database clearly, based on real-world headaches and successes.

Before You Even Think About Installing

Jumping straight into the installer? Bad move. Been there, done that, got the T-shirt and the late-night outage calls. Planning is everything with RAC. It's like building a house – get the foundations wrong, and the whole thing wobbles.

The Absolute Must-Have Gear

You can't run RAC on a couple of old laptops in your basement. Well, technically you *might* get it to install, but performance and stability? Forget it. Here's the real hardware and OS landscape:

| Component | Minimum Requirement (Seriously, Bare Bones) | Realistic Recommendation (For Production) | Gotchas |
|---|---|---|---|
| Servers | 2 identical physical servers | 2+ identical physical servers (SAN-attached) | Virtualized? Possible (OVM/KVM/ESXi), but requires specific configs and vendor support. Hyper-threading nuances can bite. |
| CPU | 2 cores per server (x86_64) | 8+ cores per server (x86_64) | Architecture MUST match across nodes. Mixing AMD and Intel? Nope. |
| RAM | 4 GB per server | 64 GB+ per server (depends HEAVILY on DB size & workload) | Covers OS + GI + DB needs. Underspec RAM and expect constant swapping. |
| OS | Oracle Linux 7/8, RHEL 7/8, SLES 12/15 | Oracle Linux 8 (with UEK) for best support | Patch levels MUST match *exactly* across all nodes. Kernel params are critical. |
| Shared Storage | Any block device visible to all nodes | ASM over SAN/NVMe, NFS (v4.1+ w/ Direct NFS), iSCSI (w/ proper MPIO) | Latency kills performance. Network storage needs consistently low latency. |
| Networking | 1 network (public) | Separate NICs/bonds for: public, private (interconnect), storage (optional but recommended) | Private NICs MUST be dedicated, same speed, ideally 10GbE+. Jumbo frames (MTU 9000) are non-negotiable for the interconnect! |
| Swap Space | 1.5x RAM (old rule) | Follow the current Oracle doc (e.g., OL8: RAM > 16 GB, swap = 4 GB min.) | HugePages drastically reduce swap pressure. Set vm.swappiness=1. |
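The swap and memory notes above usually land in a kernel parameter file. Here's a minimal sketch of an `/etc/sysctl.d` fragment using the commonly documented 19c prerequisite values; the filename is illustrative, and you should verify every number against the install guide for your exact version and RAM size:

```ini
# /etc/sysctl.d/97-oracle.conf -- illustrative values from the 19c install
# prerequisites; confirm against Oracle's docs for your version and hardware.
vm.swappiness = 1
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
kernel.sem = 250 32000 100 128
# kernel.shmmax / kernel.shmall and vm.nr_hugepages are RAM- and SGA-dependent;
# compute them per the install guide rather than copying someone else's numbers.
```

Apply with `sysctl --system` and keep the file identical on every node, for the same reason patch levels must match.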

My Gear Stumble: Tried using mismatched 1GbE and 10GbE NICs for the interconnect once (don't ask why). Clusterware installed, then frequent node evictions started under load. Lesson: Uniformity and performance on the private network are non-negotiable.

Software Bits You Need to Gather

  • Oracle Grid Infrastructure (GI): This is the glue. Contains Clusterware (CRS) and Automatic Storage Management (ASM). You MUST get the version matching your planned Oracle DB version. Grab it from edelivery.oracle.com. Zip files usually look like `LINUX.X64_193000_grid_home.zip`.
  • Oracle Database Software: The actual RDBMS binaries. Again, match GI version. `LINUX.X64_193000_db_home.zip`.
  • Latest RU (Release Update) / RUR (Release Update Revision): Seriously, install the base, then patch IMMEDIATELY. Base releases often have annoying bugs. Find patches on My Oracle Support (MOS).
  • OPatch Utility: The tool to apply those patches. Make sure it's the version required for your GI/DB version (check MOS Doc ID 274526.1).

Why mention patching so early? Because I've seen setups fail spectacularly due to bugs fixed in the first RU. Installing the base and patching later adds complexity. Do it right after the initial GI config if possible.

The Networking Maze - Untangling Cables & Configs

This is where most people trip up. Get the network wrong, and RAC simply won't work reliably, or at all. Let's demystify.

What Needs an IP? Everything!

  • Public IPs: Each node gets one. Standard network for client access, admin. Resolvable via DNS *and* in `/etc/hosts` (Oracle *really* likes `/etc/hosts`).
  • Virtual IPs (VIPs): One per node. Lives on the public network. Critical for client failover. Must be unused, pingable, resolvable via DNS and `/etc/hosts`. This is what clients connect to!
  • SCAN (Single Client Access Name): One DNS name resolving to *three* IP addresses (SCAN VIPs) on the public network. Load balances client connections. DNS round-robin setup is crucial. *Do not* put SCAN IPs in `/etc/hosts` (unlike everything else!).
  • Private IPs: One per node, dedicated NICs/bonds. Used *only* for cluster heartbeat (cache fusion). Must be on a separate, isolated network segment. Absolutely NO DNS resolution needed. Put them in `/etc/hosts` on every node.

Biggest Networking Mistake: Putting the private interconnect on a VLAN shared with other traffic. The heartbeat traffic is constant and latency-sensitive. Any blip can cause a node to panic and evict itself ("split-brain" protection). Dedicated switches or isolated VLANs are best practice. Jumbo Frames (MTU 9000) must be enabled end-to-end for private NICs!

Here's how a typical `/etc/hosts` snippet might look (simplified):

# Public Nodes
192.168.1.101   racnode1.example.com    racnode1
192.168.1.102   racnode2.example.com    racnode2

# VIPs
192.168.1.201   racnode1-vip.example.com    racnode1-vip
192.168.1.202   racnode2-vip.example.com    racnode2-vip

# SCAN - Resolved ONLY by DNS!
# (scan.example.com should have 3 A records: 192.168.1.210, .211, .212)

# Private Interconnect
10.10.10.1      racnode1-priv.example.com   racnode1-priv
10.10.10.2      racnode2-priv.example.com   racnode2-priv
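Because an inconsistent hosts file on just one node can sink the install, it's worth scripting a quick sanity check. This is a sketch I'd run on each node; the node names match the sample above, so adjust them for your cluster:

```shell
#!/bin/sh
# Sketch: sanity-check an /etc/hosts file for the RAC naming scheme above.
# Node names (racnode1, racnode2) are from the sample; edit for your cluster.
check_hosts() {
  f="$1"
  for n in racnode1 racnode2; do
    grep -qE "[[:space:]]$n\$" "$f"      || { echo "missing public entry: $n"; return 1; }
    grep -qE "[[:space:]]$n-vip\$" "$f"  || { echo "missing VIP entry: $n-vip"; return 1; }
    grep -qE "[[:space:]]$n-priv\$" "$f" || { echo "missing private entry: $n-priv"; return 1; }
  done
  # SCAN must resolve via DNS only -- flag it if someone added it here.
  grep -qi "scan" "$f" && echo "warning: SCAN entry found in hosts file"
  echo "hosts file looks sane"
}
```

Run it on every node; files that drift apart between nodes are a classic source of the PRVF network-check failures covered later.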

Making Storage Play Nice: ASM is Your Friend

Shared storage is the heart of RAC. Oracle really pushes ASM (Automatic Storage Management), and honestly? It's worth learning. It handles striping, mirroring, and dynamic rebalancing for you. Trying to manage raw disks or filesystems across nodes is a world of pain I don't recommend.

Prepping Disks for ASM

  • Identify Disks/LUNs: Ensure each shared disk/LUN is visible and has the same path/name (like `/dev/sdb`) on ALL nodes. Multipathing software (like DM-MPIO, device-mapper-multipath) is essential for SAN/NAS to present consistent `/dev/mapper/mpathX` names.
  • Permissions: ASM disks need to be owned by the `grid` user and `asmadmin` group. Set via udev rules or ASMLib (Oracle's tool, less common now).
  • Partitioning (Optional but Recommended): Create a single partition per disk/LUN (`/dev/sdb1`, `/dev/mapper/mpathXp1`). Set partition type to `Linux (83)` or `Oracle ASM (F7)` if using GPT.

Here's a comparison of common shared storage options for RAC:

| Storage Type | Setup Complexity | Performance | Management Overhead | Oracle's Blessing | Good For |
|---|---|---|---|---|---|
| ASM on block (SAN/NVMe) | Medium | Highest | Low (once configured) | Preferred! | Most critical production |
| ASM on NFS (via dNFS) | Medium | High (w/ good NAS) | Low | Supported | Consolidated storage, simplicity |
| ASM on iSCSI | Medium/High | Good (w/ 10GbE+ & MPIO) | Medium | Supported | Cost-effective SAN alternative |
| OCFS2 / ACFS | High | Good | High | Supported | Shared Oracle homes (rare), non-DB files |
| Raw devices | High | High | Very High | Deprecated/avoid | Legacy systems only |

ASMLib vs udev: ASMLib used to be the way. Now, most folks (including Oracle) recommend using udev rules for persistent permissions. It's one less package to manage and ties into the OS better. Search MOS for "udev rules ASM" for templates specific to your OS.
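As a rough illustration of the udev approach (the filename, symlink name, and `<WWID>` are placeholders; pull real WWIDs from `multipath -ll` or `scsi_id`, and prefer an MOS template for your exact OS):

```ini
# /etc/udev/rules.d/99-oracle-asm.rules -- sketch only; <WWID> is a placeholder.
# Match each multipath device by its WWID and hand it to grid:asmadmin.
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-<WWID>", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="oracleasm/asmdisk1"
```

Reload with `udevadm control --reload-rules && udevadm trigger`, then confirm ownership on every node before pointing the GI installer at the disks.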

The Main Event: Installing Oracle Grid Infrastructure

Finally! This creates the cluster foundation. Run this installer ONLY as the `grid` user from the GI home. Unzip your GI software into `/u01/app/grid` (or similar) owned by `grid:oinstall`.

Launch `gridSetup.sh`. Key screens:

  • Setup Option: "Configure a Standard Cluster" (usually).
  • Cluster Type: "Typical Install" is fine for standard hardware if your networking/storage is solid. "Advanced Install" gives more control over redundancy (Voting Disks, OCR).
  • SCAN Name: Enter your pre-configured SCAN DNS name (e.g., `myclusterscan.example.com`). The installer will probe DNS for the 3 IPs.
  • Cluster Nodes: Add all nodes (`racnode1`, `racnode2`, etc.). Provide SSH connectivity details (usually passwordless SSH between grid users is already set up).
  • Network Interface Usage: CRITICAL! Assign your Public network to the correct NIC (like bond0) and the Private network to your dedicated interconnect NIC (like bond1).
  • Storage Option: "Configure ASM".
  • Create ASM Disk Group: Create your first disk group (e.g., `DATA`) for OCR (Oracle Cluster Registry) and Voting Disks. Choose Normal or High redundancy. SELECT THE PREPARED DISKS (e.g., `ORCL:DISK1`, `ORCL:DISK2`). These are your ASM candidates. Specify an ASM password.
  • ASM Password: Store with Oracle (Wallet) or OS Authentication.
  • Failure Isolation Support (FIS): Enabled usually (uses IPMI, etc., for fencing). Configure if your hardware supports it.
  • Prerequisite Checks: RUN THEM. Fix anything marked as "Fixable". Ignore warnings at your peril.

The installer will copy software to all nodes and configure the cluster. This takes time. Watch `/u01/app/grid/cfgtoollogs/gridSetupActions*.log` on the first node.

OCR/Voting Disk Placement: NEVER put them on a single disk group or failure domain. For High Availability:

  • Normal Redundancy: At least 2 failure groups, OCR/Voting Files spread across them.
  • High Redundancy: At least 3 failure groups.
Putting everything on one shared LUN? Recipe for disaster if that LUN dies.

Once GI install finishes, run `root.sh` scripts *in sequence* as root on each node, as instructed. Monitor the output!
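Once the last `root.sh` completes, verify the stack before moving on. These are standard Clusterware status commands (run from the GI home's `bin` directory; exact output formatting varies by version):

```shell
# Run as the grid user (or root) from $GRID_HOME/bin.
crsctl check cluster -all   # CRS, CSS, EVM should report online on every node
crsctl stat res -t          # resource table: ASM, listeners, SCAN VIPs, node VIPs
olsnodes -n -s              # node list with numbers and current status
```

If anything shows OFFLINE here, fix it now; the database install assumes a healthy cluster underneath it.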

Installing the Oracle Database Software

Now for the actual database bits. Unzip the DB software into `/u01/app/oracle/product/19.0.0/dbhome_1` (or similar), owned by `oracle:oinstall`. Run `runInstaller` as the `oracle` user.

Key Screens:

  • Option: "Set Up Software Only". We'll create the database later with DBCA.
  • Grid Installation Options: "Oracle Real Application Clusters database Installation". Select ALL cluster nodes.
  • Database Edition: Enterprise Edition.
  • Install Location: Verify Oracle Base and Software Location.
  • Privileged OS Groups: Usually `dba` (OSDBA), `oper` (OSOPER), `asmadmin` (OSASM - often shared with grid user's group).
  • Prerequisite Checks: Run them, fix fixable items.

After software install, run the `root.sh` scripts as root on each node, sequentially.

PATCH! Apply the latest GI RU first to the GI home, then the matching DB RU to the DB home. On 12.2+ the `opatchauto` wrapper (run as root) handles the rolling mechanics for both homes; standalone `opatch apply` as the home owner works for individual one-off patches. Seriously, do this before creating the database. It saves rolling patches later.
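A hedged sketch of the sequence (the patch number `12345678` and staging path are placeholders; use the real RU number from MOS, and note that `opatchauto` must run as root):

```shell
# Illustrative only -- substitute the real RU patch number and your homes.
export PATH=$PATH:/u01/app/grid/OPatch
opatchauto apply /stage/12345678 -oh /u01/app/grid                              # GI home
opatchauto apply /stage/12345678 -oh /u01/app/oracle/product/19.0.0/dbhome_1    # DB home
```

Check `opatch lsinventory` in each home afterwards to confirm the RU actually landed on every node.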

Creating Your RAC Database with DBCA

Time to make the magic happen. Run `dbca` as the `oracle` user.

  • Option: "Create Database".
  • Configuration Type: "Advanced configuration".
  • Deployment Type: "Oracle Real Application Clusters (RAC) database". Select ALL cluster nodes.
  • Database Type: "General Purpose or Transaction Processing".
  • Database Identification: Global Database Name (e.g., `mydb.example.com`), SID Prefix (e.g., `mydb`). Unique SIDs (`mydb1`, `mydb2`) will be generated per instance.
  • Management Options: Configure EM Express usually. Cloud Control if you have it.
  • Database Credentials: Set SYS, SYSTEM passwords. Different admin accounts.
  • Storage: "Use following for the database storage attributes" -> "Automatic Storage Management (ASM)".
  • Database Files Location: Select your +DATA disk group.
  • Fast Recovery Area Location: Select another ASM disk group (e.g., `+RECO`). Set size appropriately.
  • Database Options: Select what you need (Enterprise Manager Repository, Partitioning, OLAP etc.). Skip if unsure.
  • Initialization Parameters: Review Memory (SGA/PGA). Adjust `processes`, `sessions`. Character Set (AL32UTF8 recommended).
  • Creation Mode: "Create Database".

DBCA creates the database files in ASM, configures instances on all nodes, and starts the instances. Check logs in `$ORACLE_BASE/cfgtoollogs/dbca`.

Congratulations! You've navigated the brief steps to set up an Oracle RAC database.

Wait, It's Working... Now What? (Post-Install Must-Dos)

Don't just walk away! The real work begins.

  • Backups: TEST RMAN backups IMMEDIATELY. Configure a sensible schedule and retention. Back up OCR and Voting Disks too!
  • Monitoring: Set up alerts for critical cluster/database events (OEM, Cloud Control, custom scripts). Monitor ASM space, interconnect traffic/errors, cluster stability.
  • Patching Strategy: Plan quarterly RU/RUR application. Test in non-prod first! GI first, then DB homes.
  • Service Management: Use Services to manage workloads and failover (`srvctl add service`).
  • Documentation: Document EVERYTHING – IPs, versions, patch levels, passwords (securely!), custom settings. Future you will thank past you.
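For the services point above, here's a sketch of the `srvctl` approach. The service name `app_svc` and the failover settings are illustrative; the database and instance names follow the DBCA example earlier:

```shell
# Create a service that runs on instance mydb1 and fails over to mydb2.
srvctl add service -db mydb -service app_svc \
  -preferred mydb1 -available mydb2 \
  -failovertype SELECT -failovermethod BASIC
srvctl start service -db mydb -service app_svc
srvctl status service -db mydb -service app_svc
```

Point applications at the service (via the SCAN) rather than at a specific instance; that's what makes node failures survivable from the client side.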

Backup Horror Story: Assumed nightly RMAN backups were working. They weren't due to a typo in the script. Had a storage corruption 3 months later... lost a day's work restoring from older backups + redo. TEST YOUR BACKUPS WEEKLY.

Hitting Walls? Common RAC Setup Roadblocks

Even with the best planning, things break. Here are frequent trip-ups:

  • PRVF-0047 / PRVF-0051 / PRVF-0056: Network checks failing during GI install. Root causes: Missing `/etc/hosts` entries, DNS misconfiguration for SCAN, VIP conflict, network routes wrong, firewall blocking ports. Triple-check networking configs!
  • INS-20802 Cluster Verification Utility Failed: Prereq checks failing. Click "Fix & Check Again" if possible. Otherwise, manually resolve the listed issues (kernel params, packages, permissions).
  • CRS-0245 / CRS-0215: Errors during `root.sh`. Often permissions (`/u01/app` owner/groups wrong), missing packages, or environment variables (`ORACLE_HOME` set? It shouldn't be for root.sh!). Check the specific log mentioned.
  • ORA-15077 / ORA-15056: ASM disk discovery/permission issues. Verify disk permissions (`ls -l /dev/oracleasm/disks/*` or udev rules), ensure ASM instance started.
  • Node Evictions (reboots): Usually network "split-brain". Check private interconnect health (`oifcfg getif`, `ping -s 8972 racnode1-priv` test large packets), switch logs, MPIO config. Slow storage causing hang? Check ASM alert log.
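The large-packet ping test mentioned above is worth spelling out. A quick sketch, assuming `bond1` is your private interconnect NIC as in the GI install example:

```shell
# Verify jumbo frames actually pass end-to-end on the interconnect.
# 8972 = 9000 MTU minus 28 bytes of IP + ICMP headers; -M do forbids fragmentation.
ping -M do -s 8972 -c 3 racnode2-priv
ip link show bond1 | grep -o 'mtu [0-9]*'   # confirm the NIC itself is at MTU 9000
```

If the ping fails with "message too long" while a normal ping works, some hop (NIC, switch port, or VLAN) is not honoring MTU 9000, and that alone can cause evictions under load.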

Always consult logs! Primary locations:

  • GI/Clusterware: `$GRID_HOME/log/<hostname>/{crsd,ohasd,cssd,evmd}` (12.2+ moved most of this under `$ORACLE_BASE/diag/crs`)
  • ASM: `$ORACLE_BASE/diag/asm/+asm/+ASM<n>/trace/alert_+ASM<n>.log`
  • Database: `$ORACLE_BASE/diag/rdbms/<dbname>/<SID>/trace/alert_<SID>.log`
  • DBCA: `$ORACLE_BASE/cfgtoollogs/dbca/`
  • OUI (Installer): `$ORACLE_BASE/oraInventory/logs`

Your Burning Oracle RAC Questions Answered (FAQs)

Q: How long does it *really* take to install Oracle RAC?

A: Honestly? Days, not hours. Seriously. Planning, hardware setup, OS config easily takes 1-2 days if you're thorough. The GI and DB software installs plus patching might be 4-6 hours. DBCA another 1-2 hours. Budget at least 2 full days for a first attempt, testing included. Rushing leads to mistakes.

Q: Can I use VMware for Oracle RAC?

A: Officially supported? Yes, but ONLY under very specific conditions (vSphere Enterprise Plus, specific cluster configurations like vSAN or approved shared storage like FC/NFS with VAAI, no vMotion during DB ops, CPU affinity/pinning often needed). Performance overhead is real. Licensing gets complex (vCPUs!). For critical prod, physical still reigns or consider Oracle Cloud.

Q: Why is the private interconnect so critical?

A: It handles the cluster heartbeat and Cache Fusion traffic (sharing data blocks between nodes' SGAs). Latency or packet loss causes nodes to think others are dead ("split-brain"), forcing evictions/reboots. Jumbo Frames (MTU 9000) is essential to reduce overhead. Dedicated 10GbE+ NICs/switches are highly recommended. Think of it as the nervous system of the cluster.

Q: Is ASM absolutely mandatory?

A: Technically, no. OCFS2 or ACFS are supported alternatives for DB files. BUT... ASM is vastly superior for managing database storage (striping, mirroring, online rebalance, easy add/remove disks). Managing raw devices is archaic and painful. ASM integrates tightly with RAC and is Oracle's strategic direction. Just use ASM.

Q: What's the biggest cost surprise with RAC?

A: Besides licensing (which is eye-watering)? Often the network infrastructure. Needing multiple high-speed dedicated switches (public, private, sometimes storage) plus NICs can add significant cost. Also, shared SAN/NAS storage performance needs careful sizing – underspec it, and the whole cluster crawls. Factor in hardware redundancy (PSUs, switches).

Q: Are there simpler alternatives to full RAC?

A: Maybe. For HA, Oracle Data Guard (physical standby) is simpler to set up and manage and provides robust disaster recovery, but requires application reconnection on failover. RAC offers transparent continuous availability. For scaling reads, Active Data Guard can help alongside RAC or standalone. For scaling writes, RAC is still the main game in the Oracle world. Oracle RAC One Node provides a single-instance RAC deployment for easier patching/HA, but doesn't scale writes. Evaluate your *actual* HA and scaling needs carefully.

So, that's the real deal on getting an Oracle RAC database up. It's complex, demanding, but incredibly powerful when done right. These brief steps to set up an Oracle RAC database should give you a solid roadmap. Good luck, and may your cluster state always be ONLINE!
