Thursday, June 30, 2011

Reverse engineering object DDL

This is a hands-on walkthrough that retrieves the creation DDL for the "EMPLOYEE" table
and the dependent objects (indexes, constraints, etc.) associated with it.
This is known as reverse engineering object DDL, which is very useful in
data migration and in understanding the default values that Oracle uses while
creating database objects.

This can be achieved with the DBMS_METADATA package, or with the GUI tool in TOAD.

1. Create a table and insert data
---------------------------------
SQL> create table employee as(
  2  select * from scott.emp);

Table created.

2. Create index and add constraints to employee table
-----------------------------------------------------
SQL> CREATE UNIQUE INDEX PK_EMPLOYEE ON EMPLOYEE ("EMPNO");

Index created.

SQL> CREATE INDEX MANAGER_IDX ON EMPLOYEE ("MGR");

Index created.

SQL> ALTER TABLE EMPLOYEE ADD CONSTRAINT PK_EMPLOYEE UNIQUE ("EMPNO");

Table altered.

3. Now let us try to fetch the DDL using the DBMS_METADATA package
--------------------------------------------------------------------------
To generate complete, uninterrupted output, set PAGESIZE to 0 and set LONG to some large number, as shown below.

note: You can categorize the DDL and spool it to a file if you want to build a SQL script.

SET LONG 2000000
SET PAGESIZE 0

SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','EMPLOYEE')
           FROM DUAL;

  CREATE TABLE "SAMPLE"."EMPLOYEE"
   (    "EMPNO" NUMBER(4,0),
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0),
         CONSTRAINT "PK_EMPLOYEE" UNIQUE ("EMPNO")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE"  ENABLE
 ) SEGMENT CREATION IMMEDIATE
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE"


SQL> SELECT DBMS_METADATA.GET_DEPENDENT_DDL
          ('INDEX','EMPLOYEE') FROM DUAL;

  CREATE UNIQUE INDEX "SAMPLE"."PK_EMPLOYEE" ON "SAMPLE"."EMPLOYEE" ("EMPNO")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "EXAMPLE"

  CREATE INDEX "SAMPLE"."MANAGER_IDX" ON "SAMPLE"."EMPLOYEE" ("MGR")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "EXAMPLE"

SQL> SELECT DBMS_METADATA.GET_DEPENDENT_DDL
           ('CONSTRAINT','EMPLOYEE') FROM DUAL;

  ALTER TABLE "SAMPLE"."EMPLOYEE" ADD CONSTRAINT "PK_EMPLOYEE" UNIQUE ("EMPNO")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "EXAMPLE"  ENABLE


!!!!FEELING LAZY???? THEN TRY TOAD'S "GENERATE SCHEMA SCRIPT" TOOL!!!!!

Tuesday, June 28, 2011

PARTITIONING AN EXISTING TABLE USING DATAPUMP


1. CREATE A SAMPLE SCHEMA (copy the statements below into sample.sql and execute it)
----------------------------------------------------------------------------

CREATE TABLE big_table (
  id            NUMBER(10),
  created_date  DATE,
  lookup_id     NUMBER(10),
  data          VARCHAR2(50)
);

DECLARE
  -- big_table.lookup_id is NUMBER(10); no separate LOOKUP table exists in this
  -- script, so declare the variable directly rather than as lookup.id%TYPE
  l_lookup_id    NUMBER(10);
  l_create_date  DATE;
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    IF MOD(i, 3) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -24);
      l_lookup_id   := 2;
    ELSIF MOD(i, 2) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -12);
      l_lookup_id   := 1;
    ELSE
      l_create_date := SYSDATE;
      l_lookup_id   := 3;
    END IF;
   
    INSERT INTO big_table (id, created_date, lookup_id, data)
    VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
  END LOOP;
  COMMIT;
END;
/

-- Apply some constraints to the table.
ALTER TABLE big_table ADD (
  CONSTRAINT big_table_pk PRIMARY KEY (id)
);

CREATE INDEX bita_created_date_i ON big_table(created_date);





-----------TABLE BIG_TABLE IS CREATED-----------------------------------
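
A quick sanity check of the generated date skew (one bucket per branch of the MOD logic above; this query is not part of the original script):

SQL> select to_char(created_date,'YYYY') yr, count(*) cnt
     from big_table
     group by to_char(created_date,'YYYY')
     order by yr;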

SQL>  select table_name, tablespace_name, partitioned
      from dba_tables where table_name like 'BIG_%';

TABLE_NAME                     TABLESPACE_NAME                PAR
------------------------------ ------------------------------ ---
BIG_TABLE                      EXAMPLE                        NO  ----->(No partition)

Export the table using datapump:
--------------------------------

expdp admin/admin@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4

Screen shot of my expdp:
--------------------------
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "ADMIN"."SYS_EXPORT_TABLE_01":  admin/********@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 55 MB
. . exported "SAMPLE"."BIG_TABLE"                        46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_TABLE_01 is:
  /u01/software/datapump/big_table.dmp
Job "ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at 14:52:38

NOW DROP AND RE-CREATE THE TABLE "BIG_TABLE" WITH PARTITION OPTIONS
--------------------------------------------------------------------

SQL> drop table big_table;

Table dropped.



SQL> CREATE TABLE big_table (
    id            NUMBER(10),
    created_date  DATE,
    lookup_id     NUMBER(10),
    data          VARCHAR2(50)
)
PARTITION BY RANGE (created_date)
(PARTITION big_table_2003 VALUES LESS THAN (TO_DATE('01/01/2004', 'DD/MM/YYYY')),
 PARTITION big_table_2004 VALUES LESS THAN (TO_DATE('01/01/2005', 'DD/MM/YYYY')),
 PARTITION big_table_2005 VALUES LESS THAN (MAXVALUE));
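
You can confirm the new layout from the dictionary before loading any data (a quick check against USER_TAB_PARTITIONS):

SQL> select partition_name, high_value
     from user_tab_partitions
     where table_name = 'BIG_TABLE'
     order by partition_position;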

SQL>  select table_name, tablespace_name, partitioned
      from dba_tables where table_name like 'BIG_%';

TABLE_NAME                     TABLESPACE_NAME                PAR
------------------------------ ------------------------------ ---
BIG_TABLE                      EXAMPLE                        YES ----> TABLE IS PARTITIONED

SQL> select count(*) from big_table;

  COUNT(*)
----------
         0  ----------> no data

IMPORT THE DATA USING DATAPUMP
-------------------------------

impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4

note: the job completes with one error because of the TABLE_EXISTS_ACTION=APPEND parameter
(the table already exists, so all dependent metadata is skipped). Exclude the metadata and
copy only the data to avoid the error, as shown below.
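
For example, a rows-only import sidesteps the warning entirely (a sketch using the standard Data Pump CONTENT parameter):

impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append content=data_only parallel=4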

screen shot:
-------------
Import: Release 11.2.0.1.0 - Production on Tue Jun 28 15:11:53 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01":  admin/********@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SAMPLE"."BIG_TABLE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SAMPLE"."BIG_TABLE"                        46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "ADMIN"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 15:12:49

SQL> select count(*) from big_table;

  COUNT(*)
----------
   1000000


SQL> insert into big_table (id, created_date, lookup_id, data)
  2  values(1001,'27-jun-03',5,'This is some old data');

1 row created.


SQL> select * from big_table partition(big_table_2004);

no rows selected

SQL> select * from big_table partition(big_table_2003);

        ID CREATED_D  LOOKUP_ID DATA
---------- --------- ---------- -------------------------
      1001 27-JUN-03          5 This is some old data

TESTING PARTITION PRUNING
-----------------------------------------

SQL>  SET AUTOTRACE TRACEONLY EXPLAIN
SQL> select * from big_table
  2  where created_date='27-JUN-03';

Execution Plan
----------------------------------------------------------
Plan hash value: 1710340555

----------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |     1 |    37 |     3   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE|           |     1 |    37 |     3   (0)| 00:00:01 |   KEY |   KEY |
|*  2 |   TABLE ACCESS FULL    | BIG_TABLE |     1 |    37 |     3   (0)| 00:00:01 |   KEY |   KEY |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CREATED_DATE"='27-JUN-03')
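
Note that the pruning above (Pstart/Pstop = KEY) is driven by a string literal, which relies on the session's NLS_DATE_FORMAT for the implicit conversion. An explicit TO_DATE is safer and prunes the same way (a small sketch):

SQL> select * from big_table
  2  where created_date = TO_DATE('27-JUN-2003','DD-MON-YYYY');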

Thursday, June 23, 2011

Goldengate Initial load (File to Replicat Method)


On Source:
------------
SQL> select count(*) from arun.sales;

  COUNT(*)
----------
         9

On Target:

SQL> select count(*) from arun.sales;

  COUNT(*)
----------
         3

On Source:
----------

GGSCI (rac2) 1> EDIT PARAMS ILOAD100

SOURCEISTABLE
USERID ggs_admin@oradb1, PASSWORD ggs_admin
RMTHOST rac1, MGRPORT 7809
RMTFILE ./dirdat/int2.dat, purge
TABLE SCOTT.EMP;
TABLE ARUN.SALES;

Execute the extract command from $GGS_HOME:

 $ extract paramfile /u01/app/oracle/ggs/dirprm/iload100.prm reportfile dirrpt/iload100.rpt

CHECK THE REPORT FILE

view /u01/app/oracle/ggs/dirrpt/iload100.rpt

On Target:
-----------

GGSCI (rac1) 3> edit params iload200

REPLICAT ILOAD200
SPECIALRUN
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID ggs_admin@oradb2, password ggs_admin
EXTFILE ./dirdat/int2.dat
DISCARDFILE ./dirdat/iload200.dsc, purge
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP ARUN.SALES, TARGET ARUN.SALES;
END RUNTIME

GGSCI (rac1) 4> add replicat iload200, extfile ./dirdat/int2.dat, checkpointtable ggs_admin.chkpt

note: the checkpoint table is optional here because the replicat is a one-time (SPECIALRUN) process in this case. Check my previous blog for checkpoint table creation instructions.


Now start the replicat process on target
----------------------------------------

GGSCI (rac1) 2> start replicat iload200
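
Because this is a one-time run, the easiest way to confirm what was applied is the process report (standard GGSCI command; output not shown here):

GGSCI (rac1) 3> view report iload200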

Now check both Source and target:
---------------------------------
On Source:

SQL> select count(*) from arun.sales;

  COUNT(*)
----------
         9

On Target:

SQL> select count(*) from arun.sales;

  COUNT(*)
----------
         9

At this point transaction synchronization can start.

Monday, June 13, 2011

GoldenGate Active-Active (Bidirectional) Replication


Project: GoldenGate Active-Active (Bidirectional) Replication

Scenario:
We have two locations with the same database configuration:
Location 1 = 11g R2 in archive log mode (one-node RAC) / OEL / VMware
Location 2 = 11g R2 in archive log mode (one-node RAC) / OEL / VMware
Data entry clerks are employed at both sites, entering orders into an
order management system. Both databases should be in sync after every
transaction.

--------------------------------------------------------------------------------------------------
On Location1:

Download and install the GoldenGate software and configure environment variables
-------------------------------------------------------------------------
Login as Oracle user
$ mkdir /u01/app/oracle/ggs  <------ (download the GoldenGate ZIP file to this directory, assuming this is shared storage)
$ unzip V18159-01.zip
Archive: V18159-01.zip
inflating: ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
$tar -xvf ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar

vi /home/oracle/.bash_profile

#user specific configuration for Oracle

export ORACLE_SID=oradb1   ------> note: on Location-2 the SID is oradb2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export GRID_HOME=/u01/app/11.2.0/grid   -----> note: this is my CRS_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin:.

#end of file

Create subdirectories:
---------------
$cd /u01/app/oracle/ggs
$./ggsci
GGSCI (rac1) 2> create subdirs

Create Goldengate administrator
-------------------------------
[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba

SQL> create tablespace ggs_data
     datafile '+data' size 200m;

Tablespace created.

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
GGS_DATA

6 rows selected.

SQL> create user ggs_admin identified by ggs_admin
  2  default tablespace ggs_data
  3  temporary tablespace temp;

User created.


SQL> grant dba to ggs_admin;
Grant succeeded.

CREATE CHECKPOINT TABLE FOR REPLICAT ON BOTH SOURCE AND TARGET
---------------------------------------------------------------

GGSCI (rac1) 1> edit params ./GLOBALS
CHECKPOINTTABLE ggs_admin.chkpt

GGSCI (rac1) 11> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.

GGSCI (rac1) 12> ADD CHECKPOINTTABLE ggs_admin.chkpt
Successfully created checkpoint table GGS_ADMIN.CHKPT.

confirm the table - SQL> desc ggs_admin.chkpt


Create manager Process:
------------------------
GGSCI (rac1) 1> EDIT PARAMS MGR

PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/ggs/dirdat/sa*, USECHECKPOINTS, MINKEEPHOURS 2
--AUTOSTART ( optional configuration)
--AUTORESTART (optional configuration)
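
If you later want the manager to start and restart the GoldenGate processes itself, the syntax is along these lines (a sketch; tune the retry values to suit):

AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60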

CONFIGURE VIP FOR THE GOLDENGATE APPLICATION (a virtual IP is needed in a RAC configuration for automatic failover)
------------------------------------------------------------------------------------------------------------
go to grid_home/bin   ------->grid_home is my CRS_HOME

./crs_profile -create ggsvip -t application -a ${GRID_HOME}/bin/usrvip -o oi=eth1,ov=192.168.1.240,on=255.255.255.0

note:( ggsvip - name of the application VIP
       oi=eth1 - the public interface
       ov=192.168.1.240 - the virtual IP address
       on=255.255.255.0 - the netmask )

./crs_register ggsvip 

[oracle@rac1 bin]$ su - root

/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -o root 
/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -u user:oracle:r-x   

[oracle@rac1 bin]$ /u01/app/11.2.0/grid/bin/crs_start ggsvip
Attempting to start `ggsvip` on member `rac1`
Start of `ggsvip` on member `rac1` succeeded.

Test the Goldengate VIP
-----------------------------------
[oracle@rac1 bin]$ ping 192.168.1.240
PING 192.168.1.240 (192.168.1.240) 56(84) bytes of data.
64 bytes from 192.168.1.240: icmp_seq=0 ttl=64 time=0.040 ms

Create the script ggs_action.sh for starting the GoldenGate processes on the cluster (crs_home/crs/public)
----------------------------------------------------------------------------------------------------

#!/bin/sh
#############################################################################
#@(#) Clusterware script to manage Golden Gate
#
# Author: Arundas kalathil
#
# Script to Manage Golden Gate from Clusterware
#############################################################################
GGS_HOME=/u01/app/oracle/ggs
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${GGS_HOME}
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export GGS_HOME LD_LIBRARY_PATH ORACLE_HOME

# Function runCmd: run a single command through ggsci
runCmd()
{
ggsciCmd=$1
result=`${GGS_HOME}/ggsci << EOF
${ggsciCmd}
exit
EOF`
}

# Function checkMgr: check the Golden Gate Manager process
# (reads the manager PID from MGR.pcm and looks for it in the process list)
checkMgr()
{
  if [ -f "${GGS_HOME}/dirpcs/MGR.pcm" ]
  then
    pid=`cut -f8 "${GGS_HOME}/dirpcs/MGR.pcm"`
    if [ "${pid}" = "`ps -e | grep ${pid} | grep mgr | cut -d ' ' -f2`" ]
    then
      exit 0
    elif [ "${pid}" = "`ps -e | grep ${pid} | grep mgr | cut -d ' ' -f1`" ]
    then
      exit 0
    else
      exit 1
    fi
  else
    exit 1
  fi
}

# Main code: dispatch on the action requested by Clusterware
case $1 in
'start') runCmd 'start manager'
         sleep 5
         checkMgr
         ;;
'stop')  runCmd 'stop er *'
         runCmd 'stop er *!'
         runCmd 'stop manager!'
         exit 0
         ;;
'check') checkMgr
         ;;
'clean') runCmd 'stop er *'
         runCmd 'stop er *!'
         runCmd 'kill er *'
         runCmd 'stop manager!'
         exit 0
         ;;
'abort') runCmd 'stop er *!'
         runCmd 'kill er *'
         runCmd 'stop manager!'
         exit 0
         ;;
esac
# End of Script

---------------------------
Copy the script to grid_home/crs/public and give it execute permission (chmod +x ggs_action.sh).


Create a profile to start it on the cluster
---------------------------------------
./crs_profile -create goldengate_app -t application -r ggsvip -a ${GRID_HOME}/crs/public/ggs_action.sh -o ci=10

Register and start the application
---------------------------------------
./crs_register goldengate_app
./crs_start goldengate_app



CREATE SUPPLEMENTAL LOGGING
-----------------------------
In order to extract the committed transactions from the online redo logs,
the database must be configured for supplemental logging on primary key columns.

[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba

SQL>  ALTER DATABASE  ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

Database altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;

(note: in RAC, use SQL> alter system archive log current;)
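
A quick dictionary check confirms the setting (standard V$DATABASE columns):

SQL> select supplemental_log_data_min, supplemental_log_data_pk
     from v$database;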

CONFIGURE GOLDENGATE TO ADD SUPPLEMENTAL LOG DATA TO THE SOURCE TABLES
-----------------------------------------------------------------------
VERY IMPORTANT --> add supplemental log data to the source tables with the ADD TRANDATA command


GGSCI (rac1) 2> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.

GGSCI (rac1) 3> ADD TRANDATA scott.emp
Logging of supplemental redo data enabled for table SCOTT.EMP.

note: every source table should have a primary key; otherwise GoldenGate
will use all viable columns to represent uniqueness. This increases the
volume of supplemental log data written to the redo logs and trail files.


CREATE PASSWORD FILE FOR ASM ON EACH NODE
------------------------------------------

cd $GRID_HOME/dbs
orapwd file=orapw+ASM1 password=ORACLE entries=20

ADD THE FOLLOWING TO LISTENER.ORA ON EACH NODE
-------------------------------------------

SID_LIST_LISTENER_rac =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = +ASM)
      (SID_NAME = +ASM1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )

note: this is needed because the GoldenGate extract process connects to the ASM instance through the listener (see TRANLOGOPTIONS ASMUSER below) to read the log files.

CREATE TNSNAMES ENTRIES FOR ASM
------------------------------------

ASM =
(DESCRIPTION =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.arun.com)(PORT = 1521))
  )
  (CONNECT_DATA =
    (SERVICE_NAME = +ASM)
  )
)

=================================================================================================================================
On Location-2, repeat the same procedure
=================================================================================================================================

Now import the tables/schema or tablespaces with Data Pump  --------> (use a network import via a database link)
CREATE A DATABASE LINK AND DATAPUMP ALL THE TABLES WITH NETWORK_LINK
---------------------------------------------------------------------
note: add the TNS entry to tnsnames.ora and test it with tnsping <service_name>;
in my case it is oradb2
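
For reference, the entry would look something like this (the host name here is hypothetical, patterned on the ASM entry above):

oradb2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.arun.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = oradb2)
    )
  )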


SQL>  create shared public database link torac1
  2  connect to system identified by oracle                     ------------> FOR RAC CONFIGURATION
  3  authenticated by system identified by oracle
  4  using 'oradb2';

Database link created.

 impdp system/oracle@oradb2 tables=scott.emp network_link=torac1

below is the screenshot

Import: Release 11.2.0.1.0 - Production on Fri Jun 10 14:07:45 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TABLE_01":  system/********@oradb2 tables=scott.emp network_link=torac1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EMP"                                   17 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_TABLE_01" successfully completed at 14:08:07
----------------------------------------------------------------------------------

SQL> connect sys/oracle@oradb1 as sysdba
Connected.
SQL> select count (*) from scott.emp;

  COUNT(*)
----------
        17

SQL> connect sys/oracle@oradb2 as sysdba
Connected.
SQL> select count (*) from scott.emp;

  COUNT(*)
----------
        17


Before configuring the initial load, configure the extract for data change capture
and define the local trail for the extract process.

note: for the initial load you need to create an additional extract and replicat
process with the SOURCEISTABLE and SPECIALRUN parameters. That is not used in this
example scenario. In the real world you need to configure it so that the source and
target are in sync, and replication can start from that point.

CONFIGURE REPLICATION FROM LOCATION-1 TO LOCATION-2 (BEFORE STARTING, MAKE SURE TO RUN ADD TRANDATA ----> supplemental logging)
==================================================================================================================================
on Location1(source):

GGSCI (rac1) 1> EDIT PARAMS ELOAD1

-- change data capture parameter file to extract
-- oltp table change
--
EXTRACT ELOAD1
SETENV (ORACLE_SID=ORADB1)
USERID ggs_admin@oradb1, password ggs_admin
EXTTRAIL ./dirdat/sa
GETAPPLOPS
IGNOREREPLICATES
TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD ORACLE
TABLE scott.emp;
-- end of file

GGSCI (rac1) 5>  add extract ELOAD1, tranlog, threads 1, begin now ----> note: THREADS is only needed in a RAC configuration
EXTRACT added.

GGSCI (rac1) 6> add exttrail ./dirdat/sa, extract ELOAD1, megabytes 50
EXTTRAIL added.

CREATE DATAPUMP PROCESS AND ASSOCIATE TO EXTTRAIL SOURCE
--------------------------------------------------------
GGSCI (rac1) 10> add extract EPMP01, exttrailsource ./dirdat/sa, begin now
EXTRACT added.

GGSCI (rac1) 11> edit params EPMP01

-- DATA PUMP PARAMETER FILE TO READ THE LOCAL
-- TRAIL OF TABLE CHANGES
--
EXTRACT EPMP01
PASSTHRU
RMTHOST rac2, MGRPORT 7809
RMTTRAIL ./dirdat/ta
TABLE SCOTT.EMP;

GGSCI (rac1) 30>  add rmttrail ./dirdat/ta, extract EPMP01
RMTTRAIL added.


ON LOCATION-2(TARGET):
======================

CREATE REPLICAT PROCESS AND CONFIGURE AN EXCEPTION HANDLER TO TRAP THE ERRORS. ALSO CONFIGURE LOOP DETECTION AND CONFLICT HANDLING
----------------------------------------------------------------------------------------------------------------------



GGSCI (rac2) 5> edit params RLOAD1

-- Replicator parameter file to apply changes
-- to tables
--
--
--
-- This starts the macro
--
MACRO #exception_handler
BEGIN
, TARGET ggs_admin.exceptions
, COLMAP ( rep_name = "RLOAD1"
, table_name = @GETENV ("GGHEADER", "TABLENAME")
, errno = @GETENV ("LASTERR", "DBERRNUM")
, dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
, optype = @GETENV ("LASTERR", "OPTYPE")
, errtype = @GETENV ("LASTERR", "ERRTYPE")
, logrba = @GETENV ("GGHEADER", "LOGRBA")
, logposition = @GETENV ("GGHEADER", "LOGPOSITION")
, committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
-- END OF THE MACRO
--
REPLICAT RLOAD1
ASSUMETARGETDEFS
SETENV (ORACLE_SID=ORADB2)
USERID ggs_admin@oradb2, PASSWORD ggs_admin
DISCARDFILE ./dirrpt/rload.dsc, PURGE
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP SCOTT.EMP #exception_handler()

--end of file

GGSCI (rac2) 14>  add replicat rload1, exttrail ./dirdat/ta, checkpointtable ggs_admin.chkpt, begin now
REPLICAT added.

SQL>create table ggs_admin.exceptions
( rep_name varchar2(8)
, table_name varchar2(61)
, errno number
, dberrmsg varchar2(4000)
, optype varchar2(20)
, errtype varchar2(20)
, logrba number
, logposition number
, committimestamp timestamp
);

SQL> create tablespace MY_INDEXES
     datafile '+data' size 200m;

ALTER TABLE ggs_admin.exceptions ADD (
  CONSTRAINT PK_CTS
 PRIMARY KEY
 (logrba, logposition, committimestamp) USING INDEX PCTFREE 0 TABLESPACE MY_INDEXES);
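
Once replication is running, any trapped error rows can be inspected directly (a simple query against the exceptions table created above):

SQL> select rep_name, table_name, errno, optype, errtype, committimestamp
     from ggs_admin.exceptions
     order by committimestamp desc;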

GGSCI (rac2) 33> START REPLICAT RLOAD1  ----> to start the replicat process

On Location-1, start the extract processes (ELOAD1 and EPMP01):
GGSCI (rac1) 3> START EXTRACT ELOAD1
GGSCI (rac1) 4> START EXTRACT EPMP01

CONFIGURE REPLICATION FROM LOCATION-2 TO LOCATION-1 ---------> Repeat the same process. In this case,
Location-2 will be the source and Location-1 will be the target.

CONFIRM ALL THE PROCESSES ARE RUNNING ON BOTH LOCATIONS:
-----------------------------------------------------

GGSCI (rac1) 56> INFO ALL

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ELOAD1      00:00:00      00:00:09
EXTRACT     RUNNING     EPMP01      00:00:00      00:00:04
REPLICAT    RUNNING     RLOAD1      00:00:00      00:00:05


NOW TEST THE ACTIVE-ACTIVE REPLICATION:
=======================================

ON LOCATION1:
-------------
SQL> SELECT COUNT(*) FROM EMP;

  COUNT(*)
----------
        17

SQL> UPDATE EMP
  2  SET SAL=5000         ------------->MODIFYING THE SALARY, NEW SALARY IS 5000
  3  WHERE ENAME='FORD';

1 row updated.

SQL> COMMIT;

Commit complete.


ON LOCATION2:
-------------
SQL> SELECT COUNT(*) FROM EMP;

  COUNT(*)
----------
        17
SQL> SELECT ENAME, SAL FROM EMP
  2  WHERE ENAME='FORD';

ENAME             SAL
---------- ----------
FORD             5000   ----------->UPDATED SALARY

Bingo!!!!

NOW LET US TEST THE REPLICATION FROM LOCATION-2 TO LOCATION-1
=======================================================

ON LOCATION-2:

SQL> INSERT INTO EMP(EMPNO,ENAME,JOB,SAL)
  2  VALUES(66,'KEN','DBA','6000');

1 row created.

SQL> COMMIT;

Commit complete.

ON LOCATION-1:

SQL> SELECT EMPNO,ENAME,JOB,SAL FROM EMP
  2  WHERE ENAME='KEN';

     EMPNO ENAME      JOB              SAL
---------- ---------- --------- ----------
        66 KEN        DBA             6000


NOTE: MORE ADVANCED CONFIGURATION IS NEEDED FOR TRIGGERS AND CONFLICT DETECTION, WHICH IS NOT COVERED IN THIS EXAMPLE.
=====================================================================================================================================