The "RAC is Not High Availability" statement seems to be very misleading. Multiple RAC users use it for HA specifically, including FlashGrid SkyCluster customers on Azure. Can it provide 100% uptime guarantee? Of course not, nothing can. But 99.99% or even 99.999% is achievable (using multi-AZ, which is the default in SkyCluster). So, the question is whether RAC is the best HA tool for Oracle ...
As RAC isn't supported in any third-party cloud, Azure specialists will investigate solutions that do provide what is required; for Azure, Oracle Data Guard fits well with the HA features of the Azure cloud infrastructure.
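For orientation only, here is a minimal sketch of what the Data Guard broker side of such a setup looks like once a physical standby has been created; the configuration name azure_dg, the database names orclprod/orclstby and their connect identifiers are placeholders, and it assumes dg_broker_start is already TRUE on both databases.

$ dgmgrl sys@orclprod
DGMGRL> CREATE CONFIGURATION 'azure_dg' AS PRIMARY DATABASE IS 'orclprod' CONNECT IDENTIFIER IS orclprod;
DGMGRL> ADD DATABASE 'orclstby' AS CONNECT IDENTIFIER IS orclstby MAINTAIN AS PHYSICAL STANDBY;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;

SHOW CONFIGURATION should report SUCCESS once redo transport is healthy; fast-start failover and observer placement across Azure availability zones would be layered on top of this.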
Issues restarting cluster services in Oracle RAC after eviction of one node: Hello, we have the following scenario: in a 2-node Oracle RAC configuration, after a node eviction due to memory issues on node 1, the crs resource on node 2 remains in the state OFFLINE (CLEANING).
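As a rough triage sketch on the surviving node (standard Grid Infrastructure commands; the exact resource names will differ per cluster):

# as the grid owner: confirm the stack is up and see which resources are stuck
crsctl check crs
crsctl stat res -t

# if a resource stays in OFFLINE (CLEANING), a forced stop and restart of the
# stack on that node (as root) is the usual last resort, after checking the
# CRS alert log and the relevant agent logs for the hung process
# crsctl stop crs -f
# crsctl start crs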
Oracle DB 12.1.0.2, Solaris 11.4. Hello Team, I would like some help on how to increase the SGA on only the second node in RAC.
First node: dware11, sga_max_size 11200M, free memory 3 GB
Second node: dware12, sga_max_size 11200M, free memory 15 GB
What is the maximum amount by which I can increase the SGA on the second node?
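As a sketch of the mechanics only: a per-instance value can be written to the spfile with the SID clause and picked up when that one instance is restarted. The 14G figure is purely illustrative, and the database name dware and instance SID dware12 below are assumptions based on the node names in the post.

-- per-instance setting in the spfile; sga_max_size is static, so SCOPE=SPFILE
-- and a restart of only that instance are required
ALTER SYSTEM SET sga_max_size = 14G SCOPE=SPFILE SID='dware12';
ALTER SYSTEM SET sga_target   = 14G SCOPE=SPFILE SID='dware12';

-- then bounce just the second instance (database/instance names are placeholders)
-- srvctl stop instance -d dware -i dware12
-- srvctl start instance -d dware -i dware12

The practical ceiling on the second node is below the 15 GB of free memory, since room has to be left for the PGA, ASM and the operating system on that host.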
ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage.
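For reference, the usual cause on RAC is a snapshot controlfile sitting on a local file system, and the usual fix is to point it at shared storage. A minimal sketch, assuming an ASM disk group named +FRA and a database named ORCL (both placeholders):

RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/ORCL/snapcf_ORCL.f';

With the snapshot controlfile on storage visible to every instance, controlfile backups no longer fail when they run on a node other than the one holding the old local path.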
Hello, we are running Oracle 19c RAC as a 2-node cluster with a 2-node physical standby, all on RedHat 8.2. The SYS password shows an expiry date of 03-JAN-2022, and my understanding was that with physical standbys we no longer have to worry about copying the password file; but when I reset the password for SYS, the expiry date stays the same. I am changing the password using "alter user" but ...
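As a hedged sketch of how the expiry side of this can be checked (separate from the password file question), the reported date and the profile limit that drives it are visible in the dictionary; the DEFAULT profile below is an assumption:

-- where the reported expiry date comes from
SELECT username, account_status, expiry_date, profile
  FROM dba_users
 WHERE username = 'SYS';

-- the expiry is driven by PASSWORD_LIFE_TIME in that profile
SELECT resource_name, limit
  FROM dba_profiles
 WHERE profile = 'DEFAULT'
   AND resource_name = 'PASSWORD_LIFE_TIME';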
1. We have a 2-node Oracle RAC 19c (19.3.0.0) database installed on Oracle Linux 7.6 x86-64 (UEK). We gracefully shut down the cluster on each node and restarted them in turn. The second node started normally after the server restart and rejoined the cluster services.
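For context, a rolling restart like that is typically driven with the Grid Infrastructure commands below (a minimal sketch; crsctl is run as root on one node at a time):

# on the node being restarted: stop the whole Grid Infrastructure stack cleanly
crsctl stop crs

# ...reboot or patch the server, then bring the stack back up...
crsctl start crs

# confirm the node has rejoined before moving on to the next one
crsctl check cluster -all
crsctl stat res -t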