Tuesday, March 10, 2015

Restoring to a point in time to recover a few tables

So, I haven't had to do this in a long time.

A developer came to me and said he'd updated some tables in PROD by mistake, and could I please recover them from backup?

This is the point where I usually say nope, too hard, go away.

However, this guy is always helping me out with PL/SQL and general SQL queries, and I'd only recently restored and recovered the database for a DR exercise, so I said OK.

Most databases these days are so big that a restore could take several hours, but this is a small system, so the restore only took about 20 minutes.

In the DR environment, I restored the database using the site DR documentation but, instead of recovering to the latest log, recovered to a point in time:


RMAN> run {
2> restore datafile 1,2,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31;
3> switch datafile all;
4> }

RMAN> recover database until time "to_date('11-MAR-2015 11:00:00','dd-MON-rrrr hh24:mi:ss')";

media recovery complete, elapsed time: 00:00:41
Finished recover at 11-MAR-2015 12:07:33

SQL> alter database open resetlogs;

Database altered.
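
If I were scripting this again, I'd probably set the point in time once at the top of the run block, so the restore and the recovery can't disagree about it. A sketch only, reusing the same datafile list and timestamp as above:

RMAN> run {
2> set until time "to_date('11-MAR-2015 11:00:00','dd-MON-rrrr hh24:mi:ss')";
3> restore datafile 1,2,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31;
4> switch datafile all;
5> recover database;
6> }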

Then created a directory for the export, exported the tables and scp'd them to PROD:

SQL> create directory andy_exp as '/u1/app/oracle/mga/akh';

Directory created.

SQL> exit

cd /u1/app/oracle/mga/akh

expdp \"/ as sysdba\" directory=andy_exp dumpfile=andy.dmp logfile=andy.log tables=SCHEMA.TABLE1,SCHEMA.TABLE2,SCHEMA.TABLE3,SCHEMA.TABLE4

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCHEMA"."TABLE1"            9.660 MB   38913 rows
. . exported "SCHEMA"."TABLE2"            1015. KB    3023 rows
. . exported "SCHEMA"."TABLE3"            564.8 KB    1494 rows
. . exported "SCHEMA"."TABLE4"            8.367 KB      16 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /u1/app/oracle/mga/akh/andy.dmp
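
Side note: escaping the quotes around \"/ as sysdba\" on the command line is fiddly, and most of those parameters could go in a parameter file instead. An untested sketch, with a made-up parfile name:

cat > andy_exp.par <<EOF
directory=andy_exp
dumpfile=andy.dmp
logfile=andy.log
tables=SCHEMA.TABLE1,SCHEMA.TABLE2,SCHEMA.TABLE3,SCHEMA.TABLE4
EOF

expdp \"/ as sysdba\" parfile=andy_exp.par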

scp andy.dmp oracle@prodhost:/u1/app/oracle/mga/akh

andy.dmp           100% |***************************************************************************************************| 11620 KB    00:00
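
The progress bar says the copy completed, but a checksum on both ends is cheap insurance before touching anything in PROD. A sketch; run it on both hosts and compare:

md5sum andy.dmp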


In PROD, created backups of the tables:

SQL> create table SCHEMA.TABLE1_ANDY as select * from SCHEMA.TABLE1;

Table created.

SQL> create table SCHEMA.TABLE2_ANDY as select * from SCHEMA.TABLE2;

Table created.

SQL> create table SCHEMA.TABLE3_ANDY as select * from SCHEMA.TABLE3;

Table created.

SQL> create table SCHEMA.TABLE4_ANDY as select * from SCHEMA.TABLE4;

Table created.
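
Worth remembering that CTAS copies the data only — no indexes, grants, or triggers come with it — which is fine for throwaway safety copies like these. A quick count per table confirms each copy is complete. A sketch, shown for one table:

SQL> select (select count(*) from SCHEMA.TABLE1) cnt_live,
  2         (select count(*) from SCHEMA.TABLE1_ANDY) cnt_copy
  3  from dual;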


Then checked where the default Data Pump directory pointed, copied the dump file there, and imported, overwriting the existing data:

SQL> select * from dba_directories;

SYS                            DATA_PUMP_DIR
/u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

SQL> exit

cp andy.dmp /u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

cd /u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

impdp \"/ as sysdba\" directory=data_pump_dir dumpfile=andy.dmp logfile=andy.log full=y table_exists_action=replace

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCHEMA"."TABLE1"            9.660 MB   38913 rows
. . imported "SCHEMA"."TABLE2"            1015. KB    3023 rows
. . imported "SCHEMA"."TABLE3"            564.8 KB    1494 rows
. . imported "SCHEMA"."TABLE4"            8.367 KB      16 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 12:17:36

And that was it. 
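
A couple of things I'd consider doing differently next time, sketched only: a directory object pointing straight at the staging area would save the cp into DATA_PUMP_DIR, and naming the tables explicitly is safer than full=y if the dump ever contains more than you meant to load (andy_imp is a made-up name):

SQL> create directory andy_imp as '/u1/app/oracle/mga/akh';

impdp \"/ as sysdba\" directory=andy_imp dumpfile=andy.dmp logfile=andy_imp.log tables=SCHEMA.TABLE1,SCHEMA.TABLE2,SCHEMA.TABLE3,SCHEMA.TABLE4 table_exists_action=replace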

If the database or the tables had been much bigger, I don't think I could have done it.
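
Although, for a big database, RMAN's tablespace point-in-time recovery might be worth a look — it restores just the tablespaces holding the affected tables through an automatic auxiliary instance. Something like the below, where the tablespace name and auxiliary destination are made up:

RMAN> recover tablespace USERS
2> until time "to_date('11-MAR-2015 11:00:00','dd-MON-rrrr hh24:mi:ss')"
3> auxiliary destination '/u1/app/oracle/aux';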