Tuesday, March 10, 2015

Restoring to a point in time to recover a few tables

So, I haven't had to do this for a long time.

A developer came to me and said he'd updated some tables in PROD by mistake, and could I please recover them from backup?

This is the point where I usually say nope, too hard, go away.

However, this guy is always helping me out with PL/SQL and general SQL queries, and I'd only recently restored and recovered the database for a DR exercise, so I said OK.

Most databases these days are so big that a restore could take several hours, but this is a small system and the restore only took about 20 minutes.

In DR, I restored the database using the site DR documentation, and instead of recovering to the latest log, recovered to a point in time:


RMAN> run {
2> restore datafile 1,2,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31;
3> switch datafile all;
4> }

  
RMAN> recover database until time "to_date('11-MAR-2015 11:00:00','dd-MON-rrrr hh24:mi:ss')";

media recovery complete, elapsed time: 00:00:41
Finished recover at 11-MAR-2015 12:07:33

SQL> alter database open resetlogs;

Database altered.

Then I created a directory for the export, exported the tables, and scp'd the dump file to PROD:

SQL> create directory andy_exp as '/u1/app/oracle/mga/akh';

Directory created.

SQL> exit

cd /u1/app/oracle/mga/akh

expdp \"/ as sysdba\" directory=andy_exp dumpfile=andy.dmp logfile=andy.log tables=SCHEMA.TABLE1,SCHEMA.TABLE2,SCHEMA.TABLE3,SCHEMA.TABLE4

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCHEMA"."TABLE1"            9.660 MB   38913 rows
. . exported "SCHEMA"."TABLE2"            1015. KB    3023 rows
. . exported "SCHEMA"."TABLE3"              564.8 KB    1494 rows
. . exported "SCHEMA"."TABLE4"             8.367 KB      16 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /u1/app/oracle/mga/akh/andy.dmp

scp andy.dmp oracle@prodhost:/u1/app/oracle/mga/akh

andy.dmp           100% |***************************************************************************************************| 11620 KB    00:00


In PROD, I created backups of the tables:

SQL> create table SCHEMA.TABLE1_ANDY as select * from SCHEMA.TABLE1;

Table created.

SQL> create table SCHEMA.TABLE2_ANDY as select * from SCHEMA.TABLE2;

Table created.

SQL> create table SCHEMA.TABLE3_ANDY as select * from SCHEMA.TABLE3;

Table created.

SQL> create table SCHEMA.TABLE4_ANDY as select * from SCHEMA.TABLE4;

Table created.


Then I checked dba_directories for a directory to use, copied the dump file there, and imported, overwriting the existing data:

SQL> select * from dba_directories;

SYS                            DATA_PUMP_DIR
/u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

SQL> exit

cp andy.dmp /u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

cd /u1/app/oracle/product/10.2.0/Db_1/rdbms/log/

impdp \"/ as sysdba\" directory=data_pump_dir dumpfile=andy.dmp logfile=andy.log full=y table_exists_action=replace

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCHEMA"."TABLE1"            9.660 MB   38913 rows
. . imported "SCHEMA"."TABLE2"            1015. KB    3023 rows
. . imported "SCHEMA"."TABLE3"              564.8 KB    1494 rows
. . imported "SCHEMA"."TABLE4"             8.367 KB      16 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 12:17:36

And that was it. 

If the database / tables had been too big, I don't think I could have done it. 

Wednesday, February 11, 2015

APEX - Changing the colour of a row based on the value

I'm writing an app that shows the status of various database things, and I want the colour of the row to be red when there is an issue.

The solution is in an APEX forum post, but it took some finding, so I thought I'd put it here for my future reference.

The forum post, by Jari, is here: https://community.oracle.com/thread/2602690

In my app, I have a table (dbstatus) that is fed from a cron job that connects to the database and runs a script:

select instance_name,startup_time from v$instance;

The table just has the database name and the startup time as columns.

If the database is down and this fails, the startup_time will be null in the table, so that's what I check on.
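
For reference, the dbstatus table is about as simple as it gets, and the refresh boils down to a couple of statements. Here's a rough sketch, assuming for illustration a database link (proddb_link) to one of the databases (PRODDB) - the real cron job connects with sqlplus and does the equivalent:

create table dbstatus (
  dbname     varchar2(30) primary key,
  start_time date
);

-- Clear the old value first, then try to refresh it. If the remote
-- database is down, the second update errors out and start_time stays null.
update dbstatus set start_time = null where dbname = 'PRODDB';
commit;

update dbstatus
   set start_time = (select startup_time from v$instance@proddb_link)
 where dbname = 'PRODDB';
commit;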

The query in the report looks like this:

select dbname,start_time,
case when start_time is null
then 'red'
end as fcolor
 from dbstatus;



Fairly straightforward.

You now need to edit the report columns. You will see that a new column called FCOLOR has appeared - uncheck the 'SHOW' box so that it is hidden.

Click on the edit icon for the dbname column, scroll down to the "Column Formatting" section, and enter this into the 'HTML Expression' box:
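
It's just a span that wraps the column value and picks up the colour from the hidden FCOLOR column, something along these lines:

<span style="color:#FCOLOR#;">#DBNAME#</span>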




Do the same for the 'Start Time' column:
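
Same idea, just with the START_TIME substitution instead:

<span style="color:#FCOLOR#;">#START_TIME#</span>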





Now if the start time is null, the database name shows up in red.



Making the others show up in green is a simple change to the SQL - just add the "else 'green'" line:

select dbname,start_time,
case when start_time is null
then 'red'
else 'green'
end as fcolor
 from dbstatus;




I think this is pretty neat.