
Disaster Recovery in the cloud, Part 2

January 25, 2012

In Part 1 we talked about some general requirements for setting up a simple-to-use disaster recovery solution for Oracle databases.

Today, I set up the same solution inside the Amazon cloud, using different data centers in order to make the DR solution robust.

I used Amazon’s EC2 (Elastic Compute Cloud) for this. If you are new to Amazon EC2, have a look at this earlier series of blog posts covering the EC2 basics.

Specifically I used two different availability zones inside Europe, called eu-west-1a and eu-west-1b. Think of these as two independent data centers run by Amazon, both located in Europe. The same scenario would work with e.g. eu-west-1a and us-east-1a, another Amazon data center located somewhere in Virginia, USA. Or we could even use different cloud providers to work around major issues concerning a single provider.

While I could have set up my own Oracle Home installation, for ease of use I preferred one of Oracle’s pre-built Amazon Machine Images. A list of these is available at http://aws.amazon.com/amis/Oracle, and I used the one with AMI ID ami-8d97bcf9, containing an Oracle Database 11g Release 2 (11.2.0.1) Standard Edition – 64 Bit on Oracle Linux 5.4.

Setup is as follows:

  1. Created an EC2 security group (firewall ruleset) opening up ports 22 (SSH), 1521 (Oracle listener) and 8081 (HTTP for Dbvisit’s Web Console).
  2. Fired up two instances of AMI ID ami-8d97bcf9: one called PrimaryDB in availability zone eu-west-1a, one called StandbyDB in availability zone eu-west-1b.
  3. Reserved two Elastic IP addresses and assigned one of them to each instance.
  4. Logged in as root to the primary instance (the only login possible initially) and just followed the wizard to build a database. Alternatively, it’s possible to cancel the wizard and use Oracle’s DBCA to create the database.
  5. Logged in as root to the standby instance, followed the wizard, but answered “N” when it asked “Would you like to create a database now?” The standby database will be built later as a copy of the primary.
  6. Adjusted the listener.ora file on both instances to listen on the Public DNS name of the Elastic IP address, e.g. ec2-176-34-178-144.eu-west-1.compute.amazonaws.com for the Elastic IP address 176.34.178.144 (see the sketch after this list).
  7. Downloaded and installed the Dbvisit Standby software for Redhat Linux into both instances, under /u01/app/dbvisit.
  8. Established SSH public/private keys so that the oracle OS user can connect from each server to the other without interaction (also sketched below).
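
To make steps 6 and 8 more concrete, here is a minimal sketch (the hostname is the example Elastic IP DNS name from above; paths and the exact key handling may differ on your instances):

Step 6: the entry in $ORACLE_HOME/network/admin/listener.ora, listening on the Public DNS name instead of localhost:

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ec2-176-34-178-144.eu-west-1.compute.amazonaws.com)(PORT = 1521))
  )

Step 8: passwordless SSH for the oracle user, run on each of the two servers:

$ ssh-keygen -t rsa        # accept the defaults, empty passphrase
$ ssh-copy-id oracle@<Public DNS name of the other server>
$ ssh oracle@<Public DNS name of the other server> hostname    # must now work without a password prompt

(If password authentication is disabled on the AMI, append ~/.ssh/id_rsa.pub to ~oracle/.ssh/authorized_keys on the other server manually instead of using ssh-copy-id.)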

Then I browsed to Dbvisit’s Web console on the primary server, logged in and started the creation process for a standby database. This is basically a four-step process:

  1. Set up and configure the standby environment: The wizard asks for everything around the planned standby environment. After finishing all the questions, nothing happens on the database yet, but a Dbvisit database configuration file (DDC file) is built.

    Configuring the standby environment

  2. A few manual modifications to this DDC file were needed because of some peculiarities of the EC2 cloud:
    As the host name of an EC2 instance is not fixed, I instructed Dbvisit to use the Public DNS name instead of the regular host name. This is done by setting:
    HOSTNAME_CMD = /u01/app/dbvisit/return_eip_hostname.sh
    with return_eip_hostname.sh being a very small shell script containing:
    echo <Public DNS name of this server>
    (see the sketch after this list for a variant that looks the name up dynamically)
    As this Public DNS name is only valid with all its components (Fully Qualified Domain Name), we need to set one more parameter:
    USE_LONG_SERVER_NAME = Yes
  3. Then I created the standby database. This is one of the reasons I really like Dbvisit: it’s really just a matter of clicking a button, and it builds up the standby database!

    Creating the standby database

  4. Schedule transfer and apply jobs, e.g. let it transfer and apply archived logs in a 5-minute interval (see the cron sketch after this list).
    For the time being I wasn’t able to get this working in EC2, as the Web GUI got confused by the non-fixed hostnames.
    I reported this issue to Dbvisit and already got feedback that they are looking into it and will come back with a solution. As soon as this works, I will update this post!
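
A small variant for step 2: instead of hardcoding the name, return_eip_hostname.sh could ask the EC2 instance metadata service for it. This is just a sketch; note that the metadata service returns whatever public hostname is currently associated with the instance, so it only matches the Elastic IP’s DNS name once the address has actually been assigned:

#!/bin/sh
# /u01/app/dbvisit/return_eip_hostname.sh
# print the public DNS name currently associated with this instance
curl -s http://169.254.169.254/latest/meta-data/public-hostname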
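
And once the scheduling issue from step 4 is resolved, the transfer and apply jobs boil down to one crontab entry per server. The following is only a sketch: it assumes the main dbvisit executable under /u01/app/dbvisit takes the DDC name (here simply ORCL) as its argument, and the log file paths are made up, so check the Dbvisit documentation for the exact invocation:

# crontab of the oracle user on the primary: create and ship archived logs every 5 minutes
*/5 * * * * /u01/app/dbvisit/dbvisit ORCL >> /tmp/dbvisit_primary.log 2>&1

# crontab of the oracle user on the standby: apply the shipped logs every 5 minutes
*/5 * * * * /u01/app/dbvisit/dbvisit ORCL >> /tmp/dbvisit_standby.log 2>&1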

After setting up the service and the startup trigger as described in Part 1, let’s try to connect using this TNS entry, which contains the Public DNS names of both instances:

CLOUDDB =
   (DESCRIPTION=
     (ADDRESS_LIST=
       (LOAD_BALANCE=OFF)
       (FAILOVER=ON)
       (ADDRESS=(PROTOCOL=TCP)(HOST=ec2-xxx.com)(PORT=1521))
       (ADDRESS=(PROTOCOL=TCP)(HOST=ec2-yyy.com)(PORT=1521))
     )
     (CONNECT_DATA=
       (SERVICE_NAME=MYSERVICE)
     )
   )
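
As a quick check that the failover entry works, you can connect through it and ask the database which host you ended up on (the SYSTEM account is just an example; note that in EC2 the host name reported by the database is the internal one, not the Public DNS name):

$ sqlplus system@CLOUDDB
SQL> select instance_name, host_name from v$instance;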

 

Have fun trying it out!

 

Cheers

Patrick



9 Comments
  1. Hi Patrick,

    This is very interesting and informative. For Dbvisit Standby users (or potential users) I think that the most likely scenario is using AWS for the Standby site, keeping their own data center for the Primary site. Have you tried this? What issues do you see?

    –Mark

    Mark Ripma

    • pschwanke

      Hi Mark,

      great to hear that you liked it.
      I haven’t yet tried replicating from an on-premise primary to a cloud-based standby, but I agree it’s a very interesting use case.
      In this case it may be preferable to use Amazon’s VPC (Virtual Private Cloud), because it allows having on-premise and cloud servers in the same network segment, connected by VPN.
      I first wanted to get the current issue with the non-static hostnames fixed, because it generally affects working with the Amazon cloud here, pure cloud as well as mixed (on-premise and cloud) setups.
      As soon as this gets fixed, I will come back and try out what you proposed.

      Kind regards
      Patrick

    • JMB

      Hello all,

      I am also considering this scenario. But does anyone know an approximate actual price? Say for a 50 GB standby database, for example.

  2. nxt

    Can you go into more detail on the step 6 changes to support the Elastic IP address? It seems more complex than just changing the listener.ora. I cannot get it to work by just doing that and stopping and starting the listener.

    • pschwanke

      Actually it’s three steps that have to be done once, namely after the Elastic IP address has been assigned to the instance and is reflected in the Public DNS name of the instance:
      1. Stop the listener: $ lsnrctl stop
      2. Change the listener.ora appropriately, e.g. replace (HOST=localhost) with (HOST=ec2-176-34-178-144.eu-west-1.compute.amazonaws.com). If you want to script it, you could use something like this:
      $ publicdnsname=`ec2-describe-instances $INSTANCEID | awk '/INSTANCE/ {print $4}'`
      $ sedscript="s/(HOST[ ]*=[^)]\+)/(HOST = $publicdnsname)/g"
      $ listenerorapath="$ORACLE_HOME/network/admin/listener.ora"
      $ sed -i "$sedscript" "$listenerorapath"
      3. Restart the listener: $ lsnrctl start
      Now it’s listening on the instance’s IP address and can be reached from anywhere using the Public DNS address.
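
      As a quick check after restarting, lsnrctl status should then show the public name in its endpoints summary (hostname illustrative, output abbreviated):
      $ lsnrctl status
      ...
      Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ec2-176-34-178-144.eu-west-1.compute.amazonaws.com)(PORT=1521)))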

      Was this helpful for you?

      Cheers
      Patrick

      • Thanks for the swift reply. I did this, and trying to connect from outside AWS does not work. I will test from another instance inside AWS, which is what you did. I suspect this will work, as AWS translates the public IP to the internal private IP when you are inside AWS.

      • pschwanke

        Did you configure the EC2 security group to allow incoming traffic on the listener port?

        Also, did you check the iptables firewall inside the Amazon instance? Try stopping it:
        $ service iptables stop
        $ service ip6tables stop
        $ chkconfig --level 2345 iptables off
        $ chkconfig --level 2345 ip6tables off

        Then retry connecting!

        Patrick

  3. nxt

    Patrick,
    Connection works fine inside Amazon. Not sure what stops clients outside Amazon from connecting, although I only want to connect from inside Amazon, so that is fine. I suppose it’s a good default security feature. I also upgraded Ruby and put in automation scripts to automatically attach and mount the EBS volumes. I will post all my scripts soon on my website: http://ec2dream.blogspot.com

Trackbacks & Pingbacks

  1. Desaster Recovery in the cloud, Part 1 « DatabasesInCloud
