LOGICAL VOLUME MANAGER
------------------------------------
Logical volume management provides a higher-level view of the disk storage
on a computer system than the traditional view of disks and partitions.
This gives the system administrator much more flexibility in allocating storage
to applications and users. The method below shows how to create LVM partitions.
--------------------------------------------------------------------------------
[root@rac1 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 209 MB, 209715200 bytes
64 heads, 32 sectors/track, 200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
[root@rac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4) ----> hit p
Partition number (1-4): 1
First cylinder (1-200, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-200, default 200):
Using default value 200
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): l
0 Empty 1e Hidden W95 FAT1 80 Old Minix c1 DRDOS/sec (FAT-
1 FAT12 24 NEC DOS 81 Minix / old Lin c4 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 82 Linux swap c6 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 83 Linux c7 Syrinx
4 FAT16 <32M 40 Venix 80286 84 OS/2 hidden C: da Non-FS data
5 Extended 41 PPC PReP Boot 85 Linux extended db CP/M / CTOS / .
6 FAT16 42 SFS 86 NTFS volume set de Dell Utility
7 HPFS/NTFS 4d QNX4.x 87 NTFS volume set df BootIt
8 AIX 4e QNX4.x 2nd part 8e Linux LVM e1 DOS access
9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e3 DOS R/O
a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e4 SpeedStor
b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS eb BeOS fs
c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi ee EFI GPT
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD ef EFI (FAT-12/16/
f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD f0 Linux/PA-RISC b
10 OPUS 55 EZ-Drive a7 NeXTSTEP f1 SpeedStor
11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f4 SpeedStor
12 Compaq diagnost 5c Priam Edisk a9 NetBSD f2 DOS secondary
14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot fb VMware VMFS
16 Hidden FAT16 63 GNU HURD or Sys b7 BSDI fs fc VMware VMKCORE
17 Hidden HPFS/NTF 64 Novell Netware b8 BSDI swap fd Linux raid auto
18 AST SmartSleep 65 Novell Netware bb Boot Wizard hid fe LANstep
1b Hidden W95 FAT3 70 DiskSecure Mult be Solaris boot ff BBT
1c Hidden W95 FAT3 75 PC/IX bf Solaris
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 209 MB, 209715200 bytes
64 heads, 32 sectors/track, 200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 200 204799+ 8e Linux LVM
Note the Id and System columns (8e, Linux LVM).
Now we prepare our new partitions for LVM:
-----------------------------------------------
[root@rac1 ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
Note: to remove a physical volume you can use the pvremove command, as sketched below.
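For reference, a minimal removal sketch (a PV can only be removed while it is not part of a volume group; if it has been added to one, take it out first with vgreduce):
vgreduce datavgrp /dev/sdb1     (only if the PV was added to datavgrp)
pvremove /dev/sdb1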
Now run the pvdisplay command to see the current state of your physical volume
--------------------------------------------------------------------------
[root@rac1 ~]# pvdisplay
"/dev/sdb1" is a new physical volume of "200.00 MB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 200.00 MB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID dfcH5C-A7sh-RKDw-dL80-8ln4-nb7l-9N157z
Create a volume group. In my case I created a group called "datavgrp"
---------------------------------------------------------------------
[root@rac1 ~]# vgcreate datavgrp /dev/sdb1
Volume group "datavgrp" successfully created
The vgdisplay command will show the status of your volume group
------------------------------------------------------------
[root@rac1 ~]# vgdisplay
--- Volume group ---
VG Name datavgrp
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 196.00 MB
PE Size 4.00 MB
Total PE 49
Alloc PE / Size 0 / 0
Free PE / Size 49 / 196.00 MB
VG UUID iyNgDB-6q11-3g1d-AcBN-OMYX-zK0f-wwuoWB
Now create the logical volumes
-------------------------------
[root@rac1 ~]# lvcreate --name bkp_vol1 --size 20M datavgrp
Logical volume "bkp_vol1" created
[root@rac1 ~]# lvcreate --name bkp_vol2 --size 100M datavgrp
Logical volume "bkp_vol2" created
[root@rac1 ~]# lvcreate --name bkp_vol3 --size 70M datavgrp
Rounding up size to full physical extent 72.00 MB
Logical volume "bkp_vol3" created
[root@rac1 ~]# lvscan
ACTIVE '/dev/datavgrp/bkp_vol1' [20.00 MB] inherit
ACTIVE '/dev/datavgrp/bkp_vol2' [100.00 MB] inherit
ACTIVE '/dev/datavgrp/bkp_vol3' [72.00 MB] inherit
Now we have three logical volumes, but they contain no filesystems,
and without a filesystem we cannot store anything on them. Therefore we create
an ext3 filesystem on each volume.
lvdisplay will give you the details of the logical volumes
------------------------------------------------------
[root@rac1 dev]# lvdisplay
--- Logical volume ---
LV Name /dev/datavgrp/bkp_vol1
VG Name datavgrp
LV UUID jJUDn4-eJzb-SvPz-sWIz-XmWM-PC9s-eYqWWi
LV Write Access read/write
LV Status available
# open 0
LV Size 20.00 MB
Current LE 5
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/datavgrp/bkp_vol2
VG Name datavgrp
LV UUID 3t2bAn-GPEu-iQCD-YK2J-D3oZ-nJ2X-LsjZFJ
LV Write Access read/write
LV Status available
# open 0
LV Size 100.00 MB
Current LE 25
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Name /dev/datavgrp/bkp_vol3
VG Name datavgrp
LV UUID zvIUK8-feaL-N3v9-Vk0e-foq9-NdHu-qz3J1P
LV Write Access read/write
LV Status available
# open 0
LV Size 72.00 MB
Current LE 18
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
Make the filesystem
--------------------
[root@rac1 dev]# mkfs.ext3 /dev/datavgrp/bkp_vol1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
5136 inodes, 20480 blocks
1024 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=20971520
3 block groups
8192 blocks per group, 8192 fragments per group
1712 inodes per group
Superblock backups stored on blocks:
8193
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Repeat the same for the other volumes:
mkfs.ext3 /dev/datavgrp/bkp_vol2
mkfs.ext3 /dev/datavgrp/bkp_vol3
Now we can mount the volumes, or configure the mount points in /etc/fstab for persistence (a sketch follows the mount commands)
---------------------------------------------------------------------------
mount /dev/datavgrp/bkp_vol1 /var/backup1
mount /dev/datavgrp/bkp_vol2 /var/backup2
mount /dev/datavgrp/bkp_vol3 /var/backup3
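To make the mounts persistent across reboots, a minimal /etc/fstab sketch (the /var/backup1-3 mount points are assumed to exist already; create them with mkdir first):
/dev/datavgrp/bkp_vol1  /var/backup1  ext3  defaults  0 0
/dev/datavgrp/bkp_vol2  /var/backup2  ext3  defaults  0 0
/dev/datavgrp/bkp_vol3  /var/backup3  ext3  defaults  0 0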
------------------X-----------------X------------------X----------------------X
Tuesday, July 5, 2011
Configure Goldengate on Oracle Exadata
The Goldengate configuration process is exactly the same as in an 11g RAC environment, except that we use DBFS as the shared mount point for the Goldengate files. The manager process must run on only one node. To prevent extract and replicat processes from being started concurrently, mount DBFS on a single RAC node. Ensure the mount point configuration is added to each node's /etc/fstab file.
For example:
/sbin/mount.dbfs#/@DBconnectstring /mnt/oracle/dbfs fuse rw,user,noauto 0 0
If the Goldengate home is not on shared storage, ensure Goldengate is installed on each node in the cluster and that the parameter files exist in the local subdirectories. Checkpoint and trail files must be placed on shared storage.
Below are the steps for creating the goldengate configuration for shared storage.
1. Make sure DBFS is mounted. As the root user, create the directories "dirchk" and "dirdat"
mkdir /mnt/oracle/dbfs/ggs/dirchk
mkdir /mnt/oracle/dbfs/ggs/dirdat
2. As root, grant read/write permission on the ggs directory to the oracle user
chown -R oracle:oinstall /mnt/oracle/dbfs/ggs
3. On each node delete the dirchk and dirdat directories
rmdir $GGS_HOME/dirchk
rmdir $GGS_HOME/dirdat
4. Create symbolic links pointing to the DBFS mount point directories
ln -s /mnt/oracle/dbfs/ggs/dirchk dirchk
ln -s /mnt/oracle/dbfs/ggs/dirdat dirdat
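To confirm each node now resolves these directories to DBFS, a quick check (run from $GGS_HOME):
ls -l dirchk dirdat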
Now create the extract and replicat processes ----> refer to
my previous posts for the ASM configuration and for extract and replicat process creation.
Thursday, June 30, 2011
Reverse engineering object DDL
This is a hands-on exercise that retrieves the creation DDL for the "EMPLOYEE" table
and the dependents (indexes, constraints, etc.) associated with the table.
This is known as reverse engineering object DDL, which is very useful in
data migration and in understanding the default values that Oracle uses while
creating database objects.
This can be achieved with the DBMS_METADATA package or with the GUI tool in TOAD.
1. Create a table and insert data
---------------------------------
SQL> create table employee as(
2 select * from scott.emp);
Table created.
2. Create index and add constraints to employee table
-----------------------------------------------------
SQL> CREATE UNIQUE INDEX PK_EMPLOYEE ON EMPLOYEE ("EMPNO");
Index created.
SQL> CREATE INDEX MANAGER_IDX ON EMPLOYEE ("MGR");
Index created.
SQL> ALTER TABLE EMPLOYEE ADD CONSTRAINT PK_EMPLOYEE UNIQUE ("EMPNO");
Table altered.
3. Now let us try to fetch the DDLs using the DBMS_METADATA package
--------------------------------------------------------------------------
To generate complete, uninterrupted output, set PAGESIZE to 0 and set LONG to a large number, as shown.
Note: you may categorize the DDLs and spool them into a file if you want to build a SQL script (see the spool sketch after the examples).
SET LONG 2000000
SET PAGESIZE 0
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','EMPLOYEE')
FROM DUAL;
CREATE TABLE "SAMPLE"."EMPLOYEE"
( "EMPNO" NUMBER(4,0),
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
CONSTRAINT "PK_EMPLOYEE" UNIQUE ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DE
FAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE" ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE"
SQL> SELECT DBMS_METADATA.GET_DEPENDENT_DDL
('INDEX','EMPLOYEE') FROM DUAL;
CREATE UNIQUE INDEX "SAMPLE"."PK_EMPLOYEE" ON "SAMPLE"."EMPLOYEE" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE"
CREATE INDEX "SAMPLE"."MANAGER_IDX" ON "SAMPLE"."EMPLOYEE" ("MGR")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE"
SQL> SELECT DBMS_METADATA.GET_DEPENDENT_DDL
('CONSTRAINT','EMPLOYEE') FROM DUAL;
ALTER TABLE "SAMPLE"."EMPLOYEE" ADD CONSTRAINT "PK_EMPLOYEE" UNIQUE ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "EXAMPLE" ENABLE
FEELING LAZY? THEN TRY TOAD'S "GENERATE SCHEMA SCRIPT" TOOL!
Tuesday, June 28, 2011
PARTITIONING AN EXISTING TABLE USING DATAPUMP
------------------------------------------------------------------
1. CREATE A SAMPLE SCHEMA (copy the statements below to sample.sql and execute)
----------------------------------------------------------------------------
CREATE TABLE big_table (
id NUMBER(10),
created_date DATE,
lookup_id NUMBER(10),
data VARCHAR2(50)
);
DECLARE
l_lookup_id big_table.lookup_id%TYPE;
l_create_date DATE;
BEGIN
FOR i IN 1 .. 1000000 LOOP
IF MOD(i, 3) = 0 THEN
l_create_date := ADD_MONTHS(SYSDATE, -24);
l_lookup_id := 2;
ELSIF MOD(i, 2) = 0 THEN
l_create_date := ADD_MONTHS(SYSDATE, -12);
l_lookup_id := 1;
ELSE
l_create_date := SYSDATE;
l_lookup_id := 3;
END IF;
INSERT INTO big_table (id, created_date, lookup_id, data)
VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
END LOOP;
COMMIT;
END;
/
-- Apply some constraints to the table.
ALTER TABLE big_table ADD (
CONSTRAINT big_table_pk PRIMARY KEY (id)
);
CREATE INDEX bita_created_date_i ON big_table(created_date);
-----------TABLE BIG_TABLE IS CREATED-----------------------------------
SQL> select table_name, tablespace_name, partitioned
from dba_tables where table_name like 'BIG_%';
TABLE_NAME TABLESPACE_NAME PAR
------------------------------ ------------------------------ ---
BIG_TABLE EXAMPLE NO ----->(No partition)
Export the table using datapump:
--------------------------------
expdp admin/admin@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4
Screen shot of my expdp:
--------------------------
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "ADMIN"."SYS_EXPORT_TABLE_01": admin/********@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 55 MB
. . exported "SAMPLE"."BIG_TABLE" 46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_TABLE_01 is:
/u01/software/datapump/big_table.dmp
Job "ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at 14:52:38
NOW DROP AND RE-CREATE THE TABLE "BIG_TABLE" WITH PARTITION OPTIONS
--------------------------------------------------------------------
SQL> drop table big_table;
Table dropped.
SQL> CREATE TABLE big_table (
id NUMBER(10),
created_date DATE,
lookup_id NUMBER(10),
data VARCHAR2(50)
)
PARTITION BY RANGE (created_date)
(PARTITION big_table_2003 VALUES LESS THAN (TO_DATE('01/01/2004', 'DD/MM/YYYY')),
PARTITION big_table_2004 VALUES LESS THAN (TO_DATE('01/01/2005', 'DD/MM/YYYY')),
PARTITION big_table_2005 VALUES LESS THAN (MAXVALUE));
SQL> select table_name, tablespace_name, partitioned
from dba_tables where table_name like 'BIG_%';
TABLE_NAME TABLESPACE_NAME PAR
------------------------------ ------------------------------ ---
BIG_TABLE EXAMPLE YES ----> TABLE IS PARTITIONED
SQL> select count(*) from big_table;
COUNT(*)
----------
0 ----------> no data
IMPORT THE DATA USING DATAPUMP
-------------------------------
impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4
Note: the job will complete with one error (ORA-39152) because of the TABLE_EXISTS_ACTION=append parameter.
To avoid that error, exclude the metadata and copy only the data, as sketched below.
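A minimal sketch of the data-only variant (same dump file; CONTENT=DATA_ONLY is a standard impdp parameter):
impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append content=data_only parallel=4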
screen shot:
-------------
Import: Release 11.2.0.1.0 - Production on Tue Jun 28 15:11:53 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01": admin/********@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SAMPLE"."BIG_TABLE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SAMPLE"."BIG_TABLE" 46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "ADMIN"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 15:12:49
SQL> select count(*) from big_table;
COUNT(*)
----------
1000000
SQL> insert into big_table (id, created_date, lookup_id, data)
2 values(1001,'27-jun-03',5,'This is some old data');
1 row created.
SQL> select * from big_table partition(big_table_2004);
no rows selected
SQL> select * from big_table partition(big_table_2003);
ID CREATED_D LOOKUP_ID DATA
---------- --------- ---------- -------------------------
1001 27-JUN-03 5 This is some old data
TESTING PARTITION PRUNING
-----------------------------------------
SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> select * from big_table
2 where created_date='27-JUN-03';
Execution Plan
----------------------------------------------------------
Plan hash value: 1710340555
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 37 | 3 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1 | 37 | 3 (0)| 00:00:01 | KEY | KEY |
|* 2 | TABLE ACCESS FULL | BIG_TABLE | 1 | 37 | 3 (0)| 00:00:01 | KEY | KEY |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CREATED_DATE"='27-JUN-03')
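One caveat worth noting: the predicate above compares CREATED_DATE to the string '27-JUN-03', which relies on an implicit conversion via the session's NLS_DATE_FORMAT. A safer, equivalent form (my rewording, not from the original run) uses an explicit TO_DATE:
select * from big_table
where created_date = TO_DATE('27-JUN-2003','DD-MON-YYYY');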
Thursday, June 23, 2011
Goldengate Initial load (File to Replicat Method)
On Source:
------------
SQL> select count(*) from arun.sales;
COUNT(*)
----------
9
On Target:
SQL> select count(*) from arun.sales;
COUNT(*)
----------
3
On Source:
----------
GGSCI (RAC2) 1>EDIT PARAMS ILOAD100
SOURCEISTABLE
USERID ggs_admin@oradb1, PASSWORD ggs_admin
RMTHOST rac1, MGRPORT 7809
RMTFILE ./dirdat/int2.dat, purge
TABLE SCOTT.EMP;
table arun.sales;
Execute the extract command from $GGS_HOME:
$ extract paramfile /u01/app/oracle/ggs/dirprm/iload100.prm reportfile dirrpt/iload100.rpt
CHECK THE REPORT FILE
view /u01/app/oracle/ggs/dirrpt/iload100.rpt
On Target:
-----------
GGSCI (rac1) 3> edit params iload200
REPLICAT ILOAD200
SPECIALRUN
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID ggs_admin@oradb2, password ggs_admin
EXTFILE ./dirdat/int2.dat
DISCARDFILE ./dirdat/iload200.dsc, purge
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP ARUN.SALES, TARGET ARUN.SALES;
END RUNTIME
GGSCI (rac1) 4> add replicat iload200, extfile ./dirdat/int2.dat, checkpointtable ggs_admin.chkpt
Note: the checkpoint table is optional here because the replicat is a one-time run in this case. Check my previous blog post for checkpoint table creation instructions.
Now start the replicat process on target
----------------------------------------
GGSCI (rac1) 2> start iload200
Now check both Source and target:
---------------------------------
On Source:
SQL> select count(*) from arun.sales;
COUNT(*)
----------
9
On Target:
SQL> select count(*) from arun.sales;
COUNT(*)
----------
9
At this point transaction synchronization can start.
Monday, June 13, 2011
GoldenGate Active-Active(Bidirectional) Replication
Project: Goldengate Active-Active(Bidirectional) Replication:
Scenario:
We have two locations with the same database configuration:
Location1 = 11gR2 in archive log mode (one-node RAC) / OEL / VMware
Location2 = 11gR2 in archive log mode (one-node RAC) / OEL / VMware
Data entry clerks are employed at both sites, inputting orders into an
order management system. Both databases should be in sync after every
transaction.
--------------------------------------------------------------------------------------------------
On Location1:
Download, Install goldengate software and configure environment variables
-------------------------------------------------------------------------
Login as Oracle user
$ mkdir /u01/app/oracle/ggs <------ (download the goldengate ZIP file to this directory, assuming this is shared storage)
$ unzip V18159-01.zip
Archive: V18159-01.zip
inflating: ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
$tar -xvf ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
vi /home/oracle/.bash_profile
#user specific configuration for Oracle
export ORACLE_SID=oradb1 ------> note: on location-2 i have the SID=oradb2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export GRID_HOME=/u01/app/11.2.0/grid -----> note: this is my CRS_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin:.
#end of file
Create subdir:
---------------
$cd /u01/app/oracle/ggs
$./ggsci
GGSCI (rac1) 2> create subdirs
Create Goldengate administrator
-------------------------------
[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba
SQL> create tablespace ggs_data
datafile '+data' size 200m;
Tablespace created.
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
GGS_DATA
6 rows selected.
SQL> create user ggs_admin identified by ggs_admin
2 default tablespace ggs_data
3 temporary tablespace temp;
User created.
SQL> grant dba to ggs_admin;
Grant succeeded.
CREATE CHECKPOINT TABLE FOR REPLICAT ON BOTH SOURCE AND TARGET
---------------------------------------------------------------
GGSCI (rac1) 1> edit params ./GLOBAL
CHECKPOINTTABLE ggs_admin.chkpt
GGSCI (rac1) 11> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.
GGSCI (rac1) 12> ADD CHECKPOINTTABLE ggs_admin.chkpt
Successfully created checkpoint table GGS_ADMIN.CHKPT.
confirm the table - SQL> desc ggs_admin.chkpt
Create manager Process:
------------------------
GGSCI (rac1) 1> EDIT PARAM MGR
PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/ggs/dirdat/sa*, USECHECKPOINTS, MINKEEPHOURS 2
--AUTOSTART ( optional configuration)
--AUTORESTART (optional configuration)
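If you want the manager to start and restart the processes automatically, a typical (optional) form of those two parameters looks like this (the retry values are an assumption; tune them to your environment):
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60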
CONFIGURE A VIP FOR THE GOLDENGATE APPLICATION (a virtual IP is needed in a RAC configuration for automatic failover)
------------------------------------------------------------------------------------------------------------
go to grid_home/bin ------->grid_home is my CRS_HOME
./crs_profile -create ggsvip -t application -a ${GRID_HOME}/bin/usrvip -o oi=eth1,ov=192.168.1.240,on=255.255.255.0
note: ggsvip - name of the application VIP,
oi=eth1 - the public interface,
ov=192.168.1.240 - the VIP address,
on=255.255.255.0 - the netmask
./crs_register ggsvip
[oracle@rac1 bin]$ su - root
/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -o root
/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -u user:oracle:r-x
[oracle@rac1 bin]$ /u01/app/11.2.0/grid/bin/crs_start ggsvip
Attempting to start `ggsvip` on member `rac1`
Start of `ggsvip` on member `rac1` succeeded.
Test the Goldengate VIP
-----------------------------------
[oracle@rac1 bin]$ ping 192.168.1.240
PING 192.168.1.240 (192.168.1.240) 56(84) bytes of data.
64 bytes from 192.168.1.240: icmp_seq=0 ttl=64 time=0.040 ms
create the script ggs_action.sh for starting the goldengate process on cluster (crs_home/crs/public)
----------------------------------------------------------------------------------------------------
#!/bin/sh
#############################################################################
#@(#) Clusterware script to manage Golden Gate
#
#
#
# Author: Arundas kalathil
#
#
# Script to Manage Golden Gate from Clusterware
#############################################################################
GGS_HOME=/u01/app/oracle/ggs
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${GGS_HOME}
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export GGS_HOME LD_LIBRARY_PATH ORACLE_HOME
# Function runCmd to run the Golden Gate Script Execution
runCmd()
{
ggsciCmd=$1
result=`${GGS_HOME}/ggsci << EOF
${ggsciCmd}
exit
EOF`
}
# Function CheckMgr to check the Golden Gate Manager process
checkMgr()
#Configuring & Managing Golden Gate in Oracle Clusterware
{
if ( [ -f "${GGS_HOME}/dirpcs/MGR.pcm" ] )
then
pid=`cut -f8 "${GGS_HOME}/dirpcs/MGR.pcm"`
if [ ${pid} = `ps -e |grep ${pid} |grep mgr |cut -d " " -f2` ]
then
exit 0
else
if [ ${pid} = `ps -e |grep ${pid} |grep mgr |cut -d " " -f1` ]
then
exit 0
else
exit 1
fi
fi
else
exit 1
fi
}
# Main Code to get the input and run it
case $1 in
'start') runCmd 'start manager'
sleep 5
checkMgr
;;
'stop') runCmd 'stop er *'
runCmd 'stop er *!'
runCmd 'stop manager!'
exit 0
;;
'check') checkMgr
;;
'clean') runCmd 'stop er *'
runCmd 'stop er *!'
runCmd 'kill er *'
runCmd 'stop manager!'
exit 0
;;
'abort') runCmd 'stop er *!'
runCmd 'kill er *'
runCmd 'stop manager!'
exit 0
;;
esac
# End of Script
---------------------------
Copy the script to grid_home/crs/public and give it execute permission, for example:
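A minimal sketch of that step (paths follow this post's layout):
cp ggs_action.sh ${GRID_HOME}/crs/public/
chmod +x ${GRID_HOME}/crs/public/ggs_action.sh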
create a profile to start on cluster
---------------------------------------
./crs_profile -create goldengate_app -t application -r ggsvip -a ${GRID_HOME}/crs/public/ggs_action.sh -o ci=10
Register and start the application
---------------------------------------
./crs_register goldengate_app
./crs_start goldengate_app
CREATE SUPPLEMENTAL LOGGING
-----------------------------
In order to extract the committed transactions from the online redo logs,
the database must be configured for supplemental logging on primary key columns.
[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Database altered.
SQL> ALTER SYSTEM SWITCH LOGFILE;
(note: in RAC, use SQL> alter system archive log current;)
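To confirm the setting took effect, a quick check against V$DATABASE:
SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;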
CONFIGURE GOLDENGATE TO ADD SUPPLEMENTAL LOG DATA TO THE SOURCE TABLES
-----------------------------------------------------------------------
VERY IMPORTANT --> add supplemental log data to the source tables with the ADD TRANDATA command
GGSCI (rac1) 2> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.
GGSCI (rac1) 3> ADD TRANDATA scott.emp
Logging of supplemental redo data enabled for table SCOTT.EMP.
Note: every source table should have a primary key; otherwise Goldengate
will use all viable columns to represent uniqueness. This increases
the volume of supplemental log data written to the redo logs and trail files.
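You can verify the table-level logging afterwards with GGSCI's INFO TRANDATA command (run after DBLOGIN, as above):
GGSCI (rac1) 4> INFO TRANDATA scott.emp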
CREATE A PASSWORD FILE FOR ASM on each node
------------------------------------------
cd $GRID_HOME/dbs
orapwd file=orapw+ASM1 password=ORACLE entries=20
ADD THE FOLLOWING TO LISTENER.ORA ON EACH NODE
-------------------------------------------
SID_LIST_LISTENER_rac =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = +ASM)
      (SID_NAME = +ASM1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )
note: this is needed because the Goldengate extract process connects to the ASM instance through the listener (see TRANLOGOPTIONS ASMUSER below)
CREATE TNSNAMES ENTRIES FOR ASM
------------------------------------
ASM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.arun.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = +ASM)
)
)
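Before moving on, confirm the new entry resolves (a quick sanity check):
$ tnsping ASM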
=================================================================================================================================
On Location-2, repeat the same procedure
=================================================================================================================================
With datapump, import the tables/schemas or tablespaces --------> (use network import via a db_link)
CREATE A DATABASE LINK AND DATAPUMP ALL THE TABLES WITH NETWORK_LINK
---------------------------------------------------------------------
note: add the tns entry to tnsnames.ora and test with tnsping <service_name>;
in my case it is oradb2
SQL> create shared public database link torac1
2 connect to system identified by oracle ------------> FOR RAC CONFIGURATION
3 authenticated by system identified by oracle
4 using 'oradb2';
Database link created.
impdp system/oracle@oradb2 tables=scott.emp network_link=torac1
below is the output
Import: Release 11.2.0.1.0 - Production on Fri Jun 10 14:07:45 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/********@oradb2 tables=scott.emp network_link=torac1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EMP" 17 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_TABLE_01" successfully completed at 14:08:07
----------------------------------------------------------------------------------
SQL> connect sys/oracle@oradb1 as sysdba
Connected.
SQL> select count (*) from scott.emp;
COUNT(*)
----------
17
SQL> connect sys/oracle@oradb2 as sysdba
Connected.
SQL> select count (*) from scott.emp;
COUNT(*)
----------
17
Before configuring the initial load, configure the extract for data change capture and
define the local trail for the extract process.
Note: for the initial load you need to create an additional extract and replicat process with the SOURCEISTABLE and SPECIALRUN parameters.
That is not used in this example scenario. In the real world you need to configure it so that it brings the source and target in sync, and
replication can start from that point.
CONFIGURE REPLICATION FROM LOCATION-1 TO LOCATION-2 (BEFORE STARTING, MAKE SURE TO RUN ADD TRANDATA ----> supplemental logging)
==================================================================================================================================
on Location1(source):
GGSCI (rac1) 1> EDIT PARAMS ELOAD1
-- change data capture parameter file to extract
-- oltp table change
--
EXTRACT ELOAD1
SETENV (ORACLE_SID=ORADB1)
USERID ggs_admin@oradb1, password ggs_admin
EXTTRAIL ./dirdat/sa
GETAPPLOPS
IGNOREREPLICATES
TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD ORACLE
TABLE scott.emp;
-- end of file
GGSCI (rac1) 5> add extract ELOAD1, tranlog, threads 1, begin now ----> note: threads is only used in a RAC configuration
EXTRACT added.
GGSCI (rac1) 6> add exttrail ./dirdat/sa, extract ELOAD1, megabytes 50
EXTTRAIL added.
CREATE DATAPUMP PROCESS AND ASSOCIATE TO EXTTRAIL SOURCE
--------------------------------------------------------
GGSCI (rac1) 10> add extract EPMP01, exttrailsource ./dirdat/sa, begin now
EXTRACT added.
GGSCI (rac1) 11> edit params EPMP01
-- DATA PUMP PARAMETER FILE TO READ THE LOCAL
-- TRAIL OF TABLE CHANGES
--
EXTRACT EPMP01
PASSTHRU
RMTHOST rac2, MGRPORT 7809
RMTTRAIL ./dirdat/ta
TABLE SCOTT.EMP;
GGSCI (rac1) 30> add rmttrail ./dirdat/ta, extract EPMP01
RMTTRAIL added.
ON LOCATION-2(TARGET):
======================
CREATE REPLICAT PROCESS AND CONFIGURE EXCEPTION HANDLER TO TRAP THE ERRORS. ALSO CONFIGURE LOOP DETECTION AND CONFLICT
----------------------------------------------------------------------------------------------------------------------
GGSCI (rac2) 5> edit params RLOAD1
-- Replicator parameter file to apply changes
-- to tables
--
--
--
-- This starts the macro
--
MACRO #exception_handler
BEGIN
, TARGET ggs_admin.exceptions
, COLMAP ( rep_name = "RLOAD1"
, table_name = @GETENV ("GGHEADER", "TABLENAME")
, errno = @GETENV ("LASTERR", "DBERRNUM")
, dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
, optype = @GETENV ("LASTERR", "OPTYPE")
, errtype = @GETENV ("LASTERR", "ERRTYPE")
, logrba = @GETENV ("GGHEADER", "LOGRBA")
, logposition = @GETENV ("GGHEADER", "LOGPOSITION")
, committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
-- END OF THE MACRO
--
REPLICAT RLOAD1
ASSUMETARGETDEFS
SETENV (ORACLE_SID=ORADB2)
USERID ggs_admin@oradb2, PASSWORD ggs_admin
DISCARDFILE ./dirrpt/rload.dsc, PURGE
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP SCOTT.EMP #exception_handler()
--end of file
GGSCI (rac2) 14> add replicat rload1, exttrail ./dirdat/ta, checkpointtable ggs_admin.chkpt, begin now
REPLICAT added.
SQL>create table ggs_admin.exceptions
( rep_name varchar2(8)
, table_name varchar2(61)
, errno number
, dberrmsg varchar2(4000)
, optype varchar2(20)
, errtype varchar2(20)
, logrba number
, logposition number
, committimestamp timestamp
);
SQL> create tablespace MY_INDEXES
datafile '+data' size 200m;
ALTER TABLE ggs_admin.exceptions ADD (
CONSTRAINT PK_CTS
PRIMARY KEY
(logrba, logposition, committimestamp) USING INDEX PCTFREE 0 TABLESPACE MY_INDEXES);
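When a collision is trapped, the offending row lands in the exceptions table instead of abending the replicat. A quick way to review what was caught (a monitoring sketch):
SQL> select rep_name, table_name, errno, optype, errtype, committimestamp
  2  from ggs_admin.exceptions
  3  order by committimestamp;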
GGSCI (rac2) 33> START RLOAD1 ---->To start the replicat process
On Location-1, start the extract processes (ELOAD1 and EPMP01)
GGSCI (rac1) 3> START ELOAD1
GGSCI (rac1) 4> START EPMP01
CONFIGURE REPLICATION FROM LOCATION-2 TO LOCATION-1 ---------> Repeat the same process. In this case
Location-2 will be the source and Location-1 will be the target.
CONFIRM ALL THE PROCESS ARE RUNNING ON BOTH LOCATION:
-----------------------------------------------------
GGSCI (rac1) 56> INFO ALL
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ELOAD1 00:00:00 00:00:09
EXTRACT RUNNING EPMP01 00:00:00 00:00:04
REPLICAT RUNNING RLOAD1 00:00:00 00:00:05
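Beyond INFO ALL, two standard GGSCI commands are handy for watching a live pair (process names from this post):
GGSCI (rac1) 57> INFO EXTRACT ELOAD1, DETAIL
GGSCI (rac1) 58> STATS REPLICAT RLOAD1, TOTAL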
NOW TEST THE ACTIVE-ACTIVE REPLICATION:
=======================================
ON LOCATION1:
-------------
SQL> SELECT COUNT(*) FROM EMP;
COUNT(*)
----------
17
SQL> UPDATE EMP
2 SET SAL=5000 ------------->MODIFYING THE SALARY, NEW SALARY IS 5000
3 WHERE ENAME='FORD';
1 row updated.
SQL> COMMIT;
Commit complete.
ON LOCATION2:
-------------
SQL> SELECT COUNT(*) FROM EMP;
COUNT(*)
----------
17
SQL> SELECT ENAME, SAL FROM EMP
2 WHERE ENAME='FORD';
ENAME SAL
---------- ----------
FORD 5000 ----------->UPDATED SALARY
Bingo!!!!
NOW LET US TEST THE REPLICATION FROM LOCATION-2 TO LOCATION-1
=======================================================
ON LOCATION-2:
SQL> INSERT INTO EMP(EMPNO,ENAME,JOB,SAL)
2 VALUES(66,'KEN','DBA','6000');
1 row created.
SQL> COMMIT;
Commit complete.
ON LOCATION-1:
SQL> SELECT EMPNO,ENAME,JOB,SAL FROM EMP
2 WHERE ENAME='KEN';
EMPNO ENAME JOB SAL
---------- ---------- --------- ----------
66 KEN DBA 6000
NOTE: MORE ADVANCED CONFIGURATION IS NEEDED FOR TRIGGERS AND CONFLICT DETECTION, WHICH IS NOT COVERED IN THIS EXAMPLE.
=====================================================================================================================================
-----------------------------------------------------
Project: Goldengate Active-Active(Bidirectional) Replication:
Scenario:
We have two locations with same database configuration
Location1=11g-r2 in archive log mode (ONE NODE RAC)/OEL/VMWARE
Location2=11g=r2 in archive log mode (ONE NODE RAC)/OEL/VMWARE
Data entry clerks are employed at both sites inputting orders into a
Order management system. Both database should be in sync after every
Transaction.
--------------------------------------------------------------------------------------------------
On Location1:
Download, Install goldengate software and configure environment variables
-------------------------------------------------------------------------
Login as Oracle user
$ mkdir /u01/app/oracle/ggs <------ (downoad the goldengate ZIP file to this directory assuming this as a shared storage)
$ unzip V18159-01.zip
Archive: V18159-01.zip
inflating: ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
$tar -xvf ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
vi /home/oracle/.bash_profile
#user specific configuration for Oracle
export ORACLE_SID=oradb1 ------> note: on location-2 i have the SID=oradb2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export GRID_HOME=/u01/app/11.2.0/grid -----> note: this is my CRS_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin:.
#end of file
Create subdir:
---------------
$cd /u01/app/oracle/ggs
$./ggsci
GGSCI (rac1) 2> create subdirs
Create Goldengate administrator
-------------------------------
[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba
SQL> create tablespace ggs_data
datafile '+data' size 200m;
Tablespace created.
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
GGS_DATA
6 rows selected.
SQL> create user ggs_admin identified by ggs_admin
2 default tablespace ggs_data
3 temporary tablespace temp;
User created.
SQL> grant dba to ggs_admin;
Grant succeeded.
CREATE CHECKPOINT TABLE FOR REPLICAT ON BOTH SOURCE AND TARGET
---------------------------------------------------------------
GGSCI (rac1) 1> edit params ./GLOBAL
CHECKPOINTTABLE ggs_admin.chkpt
GGSCI (rac1) 11> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.
GGSCI (rac1) 12> ADD CHECKPOINTTABLE ggs_admin.chkpt
Successfully created checkpoint table GGS_ADMIN.CHKPT.
confirm the table - SQL> desc ggs_admin.chkpt
Create manager Process:
------------------------
GGSCI (rac1) 1> EDIT PARAM MGR
PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/ggs/dirdat/sa*, USECHECKPOINTS, MINKEEPHOURS 2
--AUTOSTART ( optional configuration)
--AUTORESTART (optional configuration)
cONFIGURE VIP FOR GOLDENGATE APPLICATION ( virtual IP is needed in RAC configuration for automatic failover)
------------------------------------------------------------------------------------------------------------
go to grid_home/bin ------->grid_home is my CRS_HOME
./crs_profile -create ggsvip -t application -a ${GRID_HOME}/bin/usrvip -o oi=eth1,ov=192.168.1.240,on=255.255.255.0
note:( ggsvip - name of the application vip
oi=eth1 - is the public interface
on=255.255.255.0 - mask)
./crs_register ggsvip
[oracle@rac1 bin]$ su - root
/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -o root
/u01/app/11.2.0/grid/bin/crs_setperm ggsvip -u user:oracle:r-x
[oracle@rac1 bin]$ /u01/app/11.2.0/grid/bin/crs_start ggsvip
Attempting to start `ggsvip` on member `rac1`
Start of `ggsvip` on member `rac1` succeeded.
Test the Goldengate VIP
-----------------------------------
[oracle@rac1 bin]$ ping 192.168.1.240
PING 192.168.1.240 (192.168.1.240) 56(84) bytes of data.
64 bytes from 192.168.1.240: icmp_seq=0 ttl=64 time=0.040 ms
create the script ggs_action.sh for starting the goldengate process on cluster (crs_home/crs/public)
----------------------------------------------------------------------------------------------------
#!/bin/sh
#############################################################################
#@(#) Clusterware script to manage Golden Gate
#
#
#
# Author: Arundas kalathil
#
#
# Script to Manage Golden Gate from Clusterware
#############################################################################
GGS_HOME=/u01/app/oracle/ggs
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${GGS_HOME}
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export GGS_HOME LD_LIBRARY_PATH ORACLE_HOME
# Function runCmd to run the Golden Gate Script Execution
runCmd()
{
ggsciCmd=$1
result=`${GGS_HOME}/ggsci << EOF
${ggsciCmd}
exit
EOF`
}
# Function CheckMgr to check the Golden Gate Manager process
checkMgr()
#Configuring & Managing Golden Gate in Oracle Clusterware
{
if ( [ -f "${GGS_HOME}/dirpcs/MGR.pcm" ] )
then
pid=`cut -f8 “${GGS_HOME}/dirpcs/MGR.pcm”`
if [ ${pid} = `ps -e |grep ${pid} |grep mgr |cut -d " " -f2` ]
then
exit 0
else
if [ ${pid} = `ps -e |grep ${pid} |grep mgr |cut -d " " -f1` ]
then
exit 0
else
exit 1
fi
fi
else
exit 1
fi
}
# Main Code to get the input and run it
case $1 in
‘start’) runCmd ‘start manager’
sleep 5
checkMgr
;;
‘stop’) runCmd ‘stop er *’
runCmd ‘stop er *!’
runCmd ‘stop manager!’
exit 0
;;
‘check’) checkMgr
;;
‘clean’) runCmd ‘stop er *’
runCmd ‘stop er *!’
runCmd ‘kill er *’
runCmd ‘stop manager!’
exit 0
;;
‘abort’) runCmd ‘stop er *!’
runCmd ‘kill er *’
runCmd ‘stop manager!’
exit 0
;;
esac
# End of Script
---------------------------
copy the script to grid_home/crs/public and give the execute permission
create a profile to start on cluster
---------------------------------------
./crs_profile -create goldengate_app -t application -r ggsvip -a ${GRID_HOME}/crs/public/ggs_action.sh -o ci=10
Register and start the application
---------------------------------------
./crs_register goldengate_app
./crs_start goldengate_app
CREATE SUPPLIMENTAL LOGGING
-----------------------------
In order to extract the commited transactions from the online redo logs,
the database must be configured for supplemental logging on primary key columns.
[oracle@rac1 admin]$ sqlplus sys/oracle@oradb1 as sysdba
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Database altered.
SQL> ALTER SYSTEM SWITCH LOGFILE;
(note: in RAC- SQL> alter system archive log current;)
CONFIGURE GOLDENGATE TO ADD SUPPLEMENTAL LOG DATA TO THE SOURCE TABLES
-----------------------------------------------------------------------
VERY IMPORTENT--> add supplemental log data to the source tables with ADD TRANDATA command
GGSCI (rac1) 2> dblogin userid ggs_admin@oradb1, password ggs_admin
Successfully logged into database.
GGSCI (rac1) 3> ADD TRANDATA scott.emp
Logging of supplemental redo data enabled for table SCOTT.EMP.
note: every source table must have primary key enabled else goldengate
will define all variable columns to represent uniqueness.This will increase
the volume of supplemental log being written to redologs and trailfiles.
cREATE PASSWORD FILE FOR ASM in each node
------------------------------------------
cd $GRID_HOME/dbs
orapwd file=orapw+ASM1 password=ORACLE entries=20
ADD THE FOLL: IN LISTENER.ORA IN EACH NODE
-------------------------------------------
SID_LIST_LISTENER_rac=
(SID_LIST=
(SID_DESC =
(GLOBAL_DBNAME = +ASM)
(SID_NAME = +ASM1)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
)
)
note: this is because goldengate extract process need to go through oracle user
CREATE TNSNAMES ENTRIES FOR ASM
------------------------------------
ASM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.arun.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = +ASM)
)
)
=================================================================================================================================
On Location-2 repeate the same procedure
=================================================================================================================================
With datapump import the TABLES/SCHEMA OR tablespaces -------->( user network import using db_link)
CREATE A DATABASE LINK AND DATAPUMP ALL THE TABLES WITH NETWORK_LINK
---------------------------------------------------------------------
note: add a TNS entry to tnsnames.ora and test it with tnsping <service_name>;
in my case it is oradb2
SQL> create shared public database link torac1
2 connect to system identified by oracle ------------> FOR RAC CONFIGURATION
3 authenticated by system identified by oracle
4 using 'oradb1';
Database link created.
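Before running the import, a quick way to confirm the link works (not part of the original output) is:
SQL> select * from dual@torac1;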
impdp system/oracle@oradb2 tables=scott.emp network_link=torac1
the output is shown below
Import: Release 11.2.0.1.0 - Production on Fri Jun 10 14:07:45 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/********@oradb2 tables=scott.emp network_link=torac1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EMP" 17 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_TABLE_01" successfully completed at 14:08:07
----------------------------------------------------------------------------------
SQL> connect sys/oracle@oradb1 as sysdba
Connected.
SQL> select count (*) from scott.emp;
COUNT(*)
----------
17
SQL> connect sys/oracle@oradb2 as sysdba
Connected.
SQL> select count (*) from scott.emp;
COUNT(*)
----------
17
Before configuring the initial load, configure the extract for data change capture
and define the local trail for the extract process.
note: for the initial load itself you need to create an additional extract and replicat process with the SOURCEISTABLE and SPECIALRUN parameters.
That is not used in this example scenario, but in the real world you need it to bring the source and target in sync so that
replication can start from that point; a minimal sketch follows below.
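For reference, here is a sketch of such an initial-load pair using the file-to-replicat method (the parameter file names eini.prm/rini.prm, the file ./dirdat/ia, and the method itself are illustrative assumptions, not part of this setup):
-- initial-load extract parameter file (eini.prm); SOURCEISTABLE reads the
-- source tables directly instead of the redo logs
SOURCEISTABLE
USERID ggs_admin@oradb1, PASSWORD ggs_admin
RMTHOST rac2, MGRPORT 7809
RMTFILE ./dirdat/ia, MEGABYTES 100
TABLE scott.emp;
-- initial-load replicat parameter file (rini.prm); SPECIALRUN makes it a
-- one-time batch run reading the extract file
SPECIALRUN
END RUNTIME
USERID ggs_admin@oradb2, PASSWORD ggs_admin
EXTFILE ./dirdat/ia
ASSUMETARGETDEFS
MAP scott.emp, TARGET scott.emp;
Both would be run once from the OS shell, along the lines of:
extract paramfile dirprm/eini.prm reportfile dirrpt/eini.rpt
replicat paramfile dirprm/rini.prm reportfile dirrpt/rini.rpt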
CONFIGURE REPLICATION FROM LOCATION-1 TO LOCATION-2 (BEFORE STARTING, MAKE SURE TO DO AN ADD TRANDATA ----> supplemental logging)
==================================================================================================================================
on Location1(source):
GGSCI (rac1) 1> EDIT PARAMS ELOAD1
-- change data capture parameter file to extract
-- oltp table change
--
EXTRACT ELOAD1
SETENV (ORACLE_SID=ORADB1)
USERID ggs_admin@oradb1, password ggs_admin
EXTTRAIL ./dirdat/sa
GETAPPLOPS
IGNOREREPLICATES
TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD ORACLE
TABLE scott.emp;
-- end of file
GGSCI (rac1) 5> add extract ELOAD1, tranlog, threads 1, begin now ----> note: THREADS is only used for RAC configurations
EXTRACT added.
GGSCI (rac1) 6> add exttrail ./dirdat/sa, extract ELOAD1, megabytes 50
EXTTRAIL added.
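note: the new extract and its trail can be confirmed at any time with, for example (prompt number illustrative):
GGSCI (rac1) 7> info extract ELOAD1, detail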
CREATE DATAPUMP PROCESS AND ASSOCIATE TO EXTTRAIL SOURCE
--------------------------------------------------------
GGSCI (rac1) 10> add extract EPMP01, exttrailsource ./dirdat/sa, begin now
EXTRACT added.
GGSCI (rac1) 11> edit params EPMP01
-- DATA PUMP PARAMETER FILE TO READ THE LOCAL
-- TRAIL OF TABLE CHANGES
--
EXTRACT EPMP01
PASSTHRU
RMTHOST rac2, MGRPORT 7809
RMTTRAIL ./dirdat/ta
TABLE SCOTT.EMP;
GGSCI (rac1) 30> add rmttrail ./dirdat/ta, extract EPMP01
RMTTRAIL added.
ON LOCATION-2(TARGET):
======================
CREATE REPLICAT PROCESS AND CONFIGURE AN EXCEPTION HANDLER TO TRAP ERRORS. ALSO CONFIGURE LOOP DETECTION AND CONFLICT HANDLING
----------------------------------------------------------------------------------------------------------------------
GGSCI (rac2) 5> edit params RLOAD1
-- Replicator parameter file to apply changes
-- to tables
--
--
--
-- This starts the macro
--
MACRO #exception_handler
BEGIN
, TARGET ggs_admin.exceptions
, COLMAP ( rep_name = "RLOAD1"
, table_name = @GETENV ("GGHEADER", "TABLENAME")
, errno = @GETENV ("LASTERR", "DBERRNUM")
, dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
, optype = @GETENV ("LASTERR", "OPTYPE")
, errtype = @GETENV ("LASTERR", "ERRTYPE")
, logrba = @GETENV ("GGHEADER", "LOGRBA")
, logposition = @GETENV ("GGHEADER", "LOGPOSITION")
, committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
-- END OF THE MACRO
--
REPLICAT RLOAD1
ASSUMETARGETDEFS
SETENV (ORACLE_SID=ORADB2)
USERID ggs_admin@oradb2, PASSWORD ggs_admin
DISCARDFILE ./dirrpt/rload.dsc, PURGE
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP SCOTT.EMP #exception_handler()
--end of file
GGSCI (rac2) 14> add replicat rload1, exttrail ./dirdat/ta, checkpointtable ggs_admin.chkpt, begin now
REPLICAT added.
SQL>create table ggs_admin.exceptions
( rep_name varchar2(8)
, table_name varchar2(61)
, errno number
, dberrmsg varchar2(4000)
, optype varchar2(20)
, errtype varchar2(20)
, logrba number
, logposition number
, committimestamp timestamp
);
SQL> create tablespace MY_INDEXES
datafile '+data' size 200m;
ALTER TABLE ggs_admin.exceptions ADD (
CONSTRAINT PK_CTS
PRIMARY KEY
(logrba, logposition, committimestamp) USING INDEX PCTFREE 0 TABLESPACE MY_INDEXES);
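When a replication error is trapped, the exception handler inserts a row into this table; a query along these lines (illustrative, not from the original run) shows what was caught:
SQL> select rep_name, table_name, errno, optype, errtype
2 from ggs_admin.exceptions
3 order by committimestamp;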
GGSCI (rac2) 33> START RLOAD1 ----> to start the replicat process
On Location-1, start the extract processes (ELOAD1 and EPMP01):
GGSCI (rac1) 3> START ELOAD1
GGSCI (rac1) 4> START EPMP01
CONFIGURE REPLICATION FROM LOCATION-2 TO LOCATION-1 ---------> Repeat the same process. In this case,
Location-2 will be the source and Location-1 will be the target.
CONFIRM ALL THE PROCESSES ARE RUNNING ON BOTH LOCATIONS:
-----------------------------------------------------
GGSCI (rac1) 56> INFO ALL
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ELOAD1 00:00:00 00:00:09
EXTRACT RUNNING EPMP01 00:00:00 00:00:04
REPLICAT RUNNING RLOAD1 00:00:00 00:00:05
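note: besides INFO ALL, per-process operation counts can be checked with, for example (prompt numbers illustrative):
GGSCI (rac1) 57> stats extract ELOAD1
GGSCI (rac2) 15> stats replicat RLOAD1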
NOW TEST THE ACTIVE-ACTIVE REPLICATION:
=======================================
ON LOCATION1:
-------------
SQL> SELECT COUNT(*) FROM EMP;
COUNT(*)
----------
17
SQL> UPDATE EMP
2 SET SAL=5000 ------------->MODIFYING THE SALARY, NEW SALARY IS 5000
3 WHERE ENAME='FORD';
1 row updated.
SQL> COMMIT;
Commit complete.
ON LOCATION2:
-------------
SQL> SELECT COUNT(*) FROM EMP;
COUNT(*)
----------
17
SQL> SELECT ENAME, SAL FROM EMP
2 WHERE ENAME='FORD';
ENAME SAL
---------- ----------
FORD 5000 ----------->UPDATED SALARY
Bingo!!!!
NOW LET US TEST THE REPLICATION FROM LOCATION-2 TO LOCATION-1
=======================================================
ON LOCATION-2:
SQL> INSERT INTO EMP(EMPNO,ENAME,JOB,SAL)
2 VALUES(66,'KEN','DBA','6000');
1 row created.
SQL> COMMIT;
Commit complete.
ON LOCATION-1:
SQL> SELECT EMPNO,ENAME,JOB,SAL FROM EMP
2 WHERE ENAME='KEN';
EMPNO ENAME JOB SAL
---------- ---------- --------- ----------
66 KEN DBA 6000
NOTE: MORE ADVANCED CONFIGURATION IS NEEDED FOR TRIGGERS AND CONFLICT DETECTION, WHICH IS NOT COVERED IN THIS EXAMPLE.
=====================================================================================================================================
Monday, January 31, 2011
Recover a deleted datafile when the Oracle database is running in NOARCHIVELOG mode.
On Unix/Linux, deleting a file does not destroy its data as long as some process still holds it open; the deleted file remains accessible through that process's file descriptors under /proc/<pid>/fd.
In the following example, we will see how we can restore the deleted datafile.
Let us create a tablespace, and populate a table in it.
SQL> create tablespace RECVR_TEST datafile '/oradata/tmp/recvr_test.dbf' size 10M;
Tablespace created.
SQL> create table DATA_TEST tablespace recvr_test as select * from dba_objects;
Table created.
SQL> select count(*) from DATA_TEST;
COUNT(*)
----------
12708
SQL> exit
Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0
ls -l /oradata/tmp/recvr_test.dbf
-rw-r----- 1 oracle dba 10493952 Sep 26 14:25 /oradata/tmp/recvr_test.dbf
Let us delete the datafile from the UNIX prompt.
rm /oradata/tmp/recvr_test.dbf
ls -l /oradata/tmp/recvr_test.dbf
ls: /oradata/tmp/recvr_test.dbf: no such file or directory
We lost the datafile.
Let us try to access the data
sqlplus / as sysdba
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0
SQL> REM and we check if table data is accessible:
SQL> select count(*) from DATA_TEST;
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/oradata/tmp/recvr_test.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
Now we have an "Oh my God" situation.
We cannot access the data because we lost the datafile. But we can check the file descriptors still held open by the Oracle background processes to locate the deleted file.
To check the dbwriter pid:
ps -edf | grep dbw
oracle 2661 1 0 Sep26 ? 00:00:06 xe_dbw0_XE
oracle 7044 7037 0 14:40 pts/1 00:00:00 /bin/bash -c ps -edf | grep dbw
oracle 7046 7044 0 14:40 pts/1 00:00:00 grep dbw
and we check its opened file descriptors for our file:
ls -l /proc/2661/fd | grep recvr_test
lrwx------ 1 oracle dba 64 Sep 26 14:02 66 -> /oradata/tmp/recvr_test.dbf (deleted)
here it is:
ls -l /proc/2661/fd/66
lrwx------ 1 oracle dba 64 Sep 26 14:02 /proc/2661/fd/66 -> /oradata/tmp/recvr_test.dbf (deleted)
On some other UNIX systems, lsof may be needed to map the file descriptor to the deleted file name.
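For example, something along these lines (illustrative, using the dbwriter PID found above):
lsof -p 2661 | grep -i deleted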
Now let us create a symbolic link so that Oracle can see the file as it was before the delete:
ln -s /proc/2661/fd/66 /oradata/tmp/recvr_test.dbf
Let us put the tablespace in read-only mode to restrict users from writing.
SQL> alter tablespace RECVR_TEST read only;
Tablespace altered.
Let us copy the file back to its original location. First remove the symbolic link, so that
cp creates a real file instead of writing through the link into the very file descriptor we are copying from.
rm /oradata/tmp/recvr_test.dbf
cp -p /proc/2661/fd/66 /oradata/tmp/recvr_test.dbf
ls -l /oradata/tmp/recvr_test.dbf
-rw-r----- 1 oracle dba 10493952 Mar 26 14:54 /oradata/tmp/recvr_test.dbf
Now we have the datafile back and let us put the tablespace in read/write mode.
SQL> alter tablespace recvr_test read write;
Tablespace altered.
Now let us check the data is still there:
SQL> select count(*) from DATA_TEST;
COUNT(*)
----------
12708
YES, WE RESTORED THE DATAFILE.
note: this trick only works while the process holding the file open is still running; once the instance shuts down, the deleted file's blocks are gone for good.