A user requested a database backup that needed to be kept for 3 months and then restored.
Oh. Gee. I've not had to do this for a while.
This was a non-Prod database where backups are taken every night and only one copy is kept, so the first thing I had to do was make sure the backup didn't get removed.
A bit of googling and this is the script I created:
run {
  allocate channel d1 device type disk;
  allocate channel d2 device type disk;
  backup format '/backup/oraflash/UIQUT1/rman/keep/%u_%s_%p'
    keep until time 'sysdate+100'
    database
  current controlfile
    format '/backup/oraflash/UIQUT1/rman/keep/uiqut1_ctrl_%u.rman'
    keep until time 'sysdate+100'
  spfile
    format '/backup/oraflash/UIQUT1/rman/keep/spfile_%u_%s_%p'
    keep until time 'sysdate+100'
  plus archivelog
    format '/backup/oraflash/UIQUT1/rman/keep/arc_%u_%s_%p'
    keep until time 'sysdate+100';
}
The first time I did this I only had the 'keep until time' clause on the last line - it kept the archived logs but removed the other backups - so you need it for each part of the backup.
I checked it the next day and the files were all there.
You should also change the 'control_file_record_keep_time' parameter to a figure that matches how long you want to keep the backups - not essential, because the backup files can be recatalogued, but worthwhile.
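For example, something like this (100 days is just a figure that comfortably covers the keep time used above):
SQL> alter system set control_file_record_keep_time=100 scope=both;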
I ran a 'restore database preview' which verified that the files were OK.
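For reference, that is just this, run from RMAN against the source database:
RMAN> restore database preview;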
However, the user was pretty adamant that it was essential that the database could be restored down the line - and the only way I could be sure was to actually do it. I obviously couldn't overwrite the existing database, so I tarred up the backup files and scp'd them to another host.
Once there, I had some preparation to do.
I created the directory (not 100% sure this is necessary, but worth doing):
mkdir /backup/oraflash/UIQUT1/rman/keep
and moved the tar file there and extracted it.
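For reference, the shell side of the copy looked something like this - the tar file name and destination hostname here are made up:
# on the source host
cd /backup/oraflash/UIQUT1/rman/keep
tar -cvf /tmp/uiqut1_keep.tar .
scp /tmp/uiqut1_keep.tar oracle@desthost:/tmp/
# on the destination host, after the mkdir above
cd /backup/oraflash/UIQUT1/rman/keep
tar -xvf /tmp/uiqut1_keep.tar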
I also copied the init.ora file to the $ORACLE_HOME/dbs directory and checked the paths of the controlfiles, log_archive_dest_1 and audit_file_dest, creating directories for those if they didn't exist:
mkdir /app/oracle/admin/UIQUT1
mkdir /app/oracle/admin/UIQUT1/adump
*.control_files='/app/oracle/admin/UIQUT1/control01.ctl','/app/oracle/admin/UIQUT1/control02.ctl','/app/oracle/admin/UIQUT1/control03.ctl'
*.log_archive_dest_1='LOCATION=/backup/oraflash/UIQUT1/archivelog/'
There was also a local listener parameter that I removed.
I added an entry to the oratab and set the environment with '. oraenv'.
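The oratab entry is just the usual SID:ORACLE_HOME:N line - the ORACLE_HOME path here is an assumption, use whatever is installed on the destination host:
UIQUT1:/app/oracle/product/11.2.0.4/dbhome_1:N
. oraenv
(enter UIQUT1 at the ORACLE_SID prompt)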
So the process to restore a backup like this is:
1. Startup nomount
2. Restore controlfile
3. Alter database mount
4. Restore datafiles
5. Recover database
6. Open resetlogs
But there are some more things to do in between.
First off though, restore the controlfile:
The script backed up the controlfile to a file with this format - uiqut1_ctrl_%u.rman - so find that file and restore it
rman target /
startup nomount;
restore controlfile from '/backup/oraflash/UIQUT1/rman/keep/uiqut1_ctrl_bhsmcleg.rman';
It should do the job.
Mount the database (in RMAN)
alter database mount;
You need to tell RMAN about the backups:
catalog start with '/backup/oraflash/UIQUT1/rman/keep/';
This should run through and register the files. If you 'list backup of database;' it should show the files.
An added complication here is that the host I am restoring to uses ASM, but the source database doesn't. Not really a big deal - you just have to generate the new file names. If you don't need to do this, just create the same directory structure as on the source server.
In SQLPlus:
set pages 9999
set lines 200
select 'set newname for datafile '''||name||''' to '''||replace(name, '/dat/oradata/UIQUT1/', '+STSORA050_DATA_01/UIQUT1/')||''';' FROM V$DATAFILE;
This will generate something like this:
set newname for datafile '/dat/oradata/UIQUT1/system01.dbf' to '+STSORA050_DATA_01/UIQUT1/system01.dbf';
set newname for datafile '/dat/oradata/UIQUT1/sysaux01.dbf' to '+STSORA050_DATA_01/UIQUT1/sysaux01.dbf';
set newname for datafile '/dat/orasys/UIQUT1/undotbs01.dbf' to '+STSORA050_DATA_01/UIQUT1/undotbs01.dbf';
Do the same for the TEMPFILEs (change V$DATAFILE to V$TEMPFILE)
Keep the output, you'll need it later.
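Since the datafiles here sit under more than one mount point (/dat/oradata and /dat/orasys), one option is to nest the replaces - a sketch, not necessarily exactly how I generated the output above:
select 'set newname for datafile '''||name||''' to '''||
       replace(replace(name, '/dat/oradata/UIQUT1/', '+STSORA050_DATA_01/UIQUT1/'),
               '/dat/orasys/UIQUT1/', '+STSORA050_DATA_01/UIQUT1/')||''';'
FROM V$DATAFILE;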
Do the same for the redo logs - generate the rename statements now, but they will be run later, after the recovery and just before opening the database:
alter database rename file '/dat/redoa/UIQUT1/redo03a.log' to '+STSORA050_DATA_01/UIQUT1/redo03a.log';
alter database rename file '/dat/redob/UIQUT1/redo03b.log' to '+STSORA050_DATA_01/UIQUT1/redo03b.log';
This is all that is needed.
Create a restore.rcv script and put this in it (if you need to rename the files - if not, just have the restore and recover commands)
run {
set newname for datafile '/dat/oradata/UIQUT1/system01.dbf' to '+STSORA050_DATA_01/UIQUT1/system01.dbf';
set newname for datafile '/dat/oradata/UIQUT1/sysaux01.dbf' to '+STSORA050_DATA_01/UIQUT1/sysaux01.dbf';
set newname for datafile '/dat/orasys/UIQUT1/undotbs01.dbf' to '+STSORA050_DATA_01/UIQUT1/undotbs01.dbf';
set newname for datafile '/dat/oradata/UIQUT1/users01.dbf' to '+STSORA050_DATA_01/UIQUT1/users01.dbf';
.
.
. (all the datafiles from the 'set newname' commands generated above)
.
.
set newname for tempfile '/dat/orasys/UIQUT1/temp01.dbf' to '+STSORA050_DATA_01/UIQUT1/temp01.dbf';
restore database;
switch datafile all;
switch tempfile all;
recover database;
}
Run it like this:
nohup rman target / cmdfile='/backup/oraflash/UIQUT1/rman/keep/restore.rcv' log='/backup/oraflash/UIQUT1/rman/keep/restore.log' &
This ensures it keeps going if your session disconnects.
Once complete, you might see this at the end of the logfile
RMAN-03002: failure of recover command at 12/19/2017 11:27:29
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 146352 and starting SCN of 505408658
This doesn't matter - it just means RMAN has applied the last archived log that was backed up and is now asking for the next one, which doesn't exist.
Go to SQLPlus, rename the log files, and open the database
alter database rename file '/dat/redoa/UIQUT1/redo03a.log' to '+STSORA050_DATA_01/UIQUT1/redo03a.log';
alter database rename file '/dat/redob/UIQUT1/redo03b.log' to '+STSORA050_DATA_01/UIQUT1/redo03b.log';
alter database open resetlogs;
Hopefully it will open.
All good.
Monday, August 21, 2017
Running Oracle in a Docker Container - Part 3
OK, my apologies - I have 2 blog posts on this and nowhere does it get to the actual point of running Oracle in a Docker Container. I got caught up with real work and although I have done it and documented it, I haven't got around to posting here until now.
So, this is what you've been waiting for!
Using a container with an Oracle database
A search for 'oracle' shows quite a few entries, including a few that have databases already installed:
I downloaded the 11g one to test:
I started it up (ignore the different sessions)
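The screenshots haven't survived, but the commands were along these lines - the image name here is just one of the 11g XE images that were on Docker Hub at the time, not necessarily the exact one I pulled:
docker pull wnameless/oracle-xe-11g
docker run -d --name oracle-xe wnameless/oracle-xe-11g
docker exec -it oracle-xe bash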
but there were some issues – vi wasn’t installed so I couldn’t edit files, plus the listener.ora pointed to a hostname that changes every time the container is restarted.
Install vi:
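The exact command depends on the image's base OS - one of these, roughly:
apt-get update && apt-get install -y vim   # Debian/Ubuntu based images
yum install -y vim                         # Oracle Linux / RHEL based images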
I tried to force the hostname by editing /etc/hostname and /etc/hosts, but it didn't work. I did notice, though, that every time the container restarted it added an entry for 172.17.0.2, so I put that IP in the listener.ora and it worked:
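So the relevant bit of the listener.ora ends up looking something like this:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.0.2)(PORT = 1521))
    )
  )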
I then tried starting the database:
It started fine, but was pretty slow. The Mac Docker app can change how much memory can be allocated, so I stopped the container, changed it, and restarted it.
But before that, I committed the container so that the changes would be saved:
I also uploaded it to my repository:
The upload took a while and timed out a few times, so I had to retry – but it knows where it got to, so it continued.
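In other words, commit the container to an image and push it - the container ID and repository names below are placeholders, substitute your own:
docker commit <container_id> <your_dockerhub_user>/oracle-xe-11g:v1
docker login
docker push <your_dockerhub_user>/oracle-xe-11g:v1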
So there, at last, you have Oracle XE running in a Docker container!
Connecting to the database with SQLDeveloper
A connection with SQL Developer is easy – but you need to restart the container and map a port:
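Assuming the committed image from above and the default listener port, that restart is something like:
docker run -d --name oracle-xe-1521 -p 1521:1521 <your_dockerhub_user>/oracle-xe-11g:v1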
Start the listener and database:
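Inside the container, that is just the usual:
lsnrctl start
sqlplus / as sysdba
SQL> startup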
Then start SQLDeveloper and use these connection details (I created an ‘andy’ user in the database):
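The screenshot is gone, but the details were essentially these (the SID is an assumption based on this being the XE image):
Hostname: 127.0.0.1
Port: 1521
SID: XE
Username: andy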
Even though the IP address in the container is 172.17.0.2, you need to use 127.0.0.1 here.
You can then connect.
REMEMBER: Any database updates will be lost unless you commit the changes in the container.
Running MySQL and Postgres in Docker
Yes, I know this is an Oracle DBA blog, but we have to keep up and learn new things.
With this in mind, I ran up MySQL and Postgres databases in Docker containers. It took about 30 minutes or so.
This assumes you have downloaded and installed Docker.
Start a Powershell session and type:
docker pull mysql
Once downloaded (only takes a few minutes), create the container by typing this on one line:
docker run --name andy-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=a_strong-password -d mysql:latest
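To check it came up OK:
docker ps
docker logs andy-mysql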
So that is a MySQL database running in a Docker container.
Download MySQL Workbench (you will also need to install some prerequisites – just follow the instructions).
Start MySQL Workbench and select ‘Database / Connect to database’.
Click on “OK”
It should prompt for the password:
Click “OK”
It will open a query window – click on “Server Status” to see the server info.
This is in the right-hand pane:
There is also a Performance Dashboard
Some useful Docker commands -
Stop (kill) any running containers:
docker kill $(docker ps -a -q)
Remove stopped containers:
docker rm $(docker ps -a -q)
Note that Docker containers are not persistent - any changes like adding users, tables or loading data will be lost when the container is removed. To keep them, use the Docker 'commit' command:
https://docs.docker.com/engine/reference/commandline/commit/
The process is the same for Postgres:
Open a Powershell session and type
docker pull postgres
Once downloaded, start the container by typing this on one line:
docker run --name andypg -e POSTGRES_PASSWORD=a_strong-password -d -p 5432:5432 postgres
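A quick check before moving on to pgAdmin:
docker ps
docker logs andypg
docker exec -it andypg psql -U postgres -c "select version();"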
Download the latest version of PgAdmin.
Start pgadmin and connect to Postgres:
Right-click on “Servers”, then “Create / Server”
Enter the name:
Click on the “Connection” tab, enter this information:
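The screenshot is missing, but the values are essentially these (host is 127.0.0.1 because of the port mapping above):
Host name/address: 127.0.0.1
Port: 5432
Maintenance database: postgres
Username: postgres
Password: the POSTGRES_PASSWORD set above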
Click on “Save”.
The server should appear in the list:
Click on it, stats appear in the right-hand pane:
That's it for Postgres. All very quick and easy, so get learning.
Wednesday, July 5, 2017
Creating an OEM12C Corrective Action to add a datafile
I've known about corrective actions in OEM for a while, but have been reluctant to implement them for reasons I can't really remember - maybe because I feel it's better to have a bit of control.
However, after getting woken up in the middle of the night only to have to add a datafile, I decided it was about time to do this.
So, a quick google, and I found...nothing. Went onto MOS, did a search...one entry for a comment that didn't explain how to do it.
So, here we are.
OEM13C has a pre-defined "Add Datafile" corrective action that looks really neat - you can add by percentage and it has lots of options.
But we aren't on OEM13C yet, and still prone to getting woken up.
In order to create this corrective action, you need a couple of things first:
1. You need to set the "db_create_file_dest" parameter.
2. You need to not care what the database datafiles are called.
So if you insist on your files being called "/dat/oradata/PRODDB/tbsname_data01.dbf" then this isn't for you, because from now on the files are going to be called something like "/dat/oradata/PRODDB/datafile/o1_mf_alert_te_dotwl2l7_.dbf"
So if you are OK with that...
Set the db_create_file_dest:
SQL> alter system set db_create_file_dest='/dat/oradata/';
Log into OEM and create the corrective action - in this example I've done it on one database as a test, but you can do it at the template level.
From the database home page, select 'Oracle Database / Monitoring / Metric and Collection Settings'
Scroll down to the "Tablespace Full" metric and select the pencil icon at the right of the page alongside "Tablespace Space Used (%)"
Click on the 'Edit' button here:
Check the 'Allow only one...' box, and then click on the 'Add' button next to 'Warning'
Use the drop-down to select 'SQL Script'
Then click the 'Continue' button (right of the page)
Add a suitable name and description
Click on the 'Parameters' tab
Add this to the SQL:
alter tablespace %key_value_name% add datafile;
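When the metric fires, %key_value_name% resolves to the name of the offending tablespace, so (using the test tablespace created later as an example) the statement that actually runs is effectively:
alter tablespace ALERT_TEST add datafile;
No file specification is needed because db_create_file_dest is set, so the new file is created as an OMF file.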
Click 'Continue'
The corrective action will be saved.
Change the Warning Threshold to 50%.
Click 'Continue', then 'OK' at the next page where you see this:
To test it, create a test tablespace in the target database, change the file so that it will fill up, and add data.
Create the tablespace:
SQL> create tablespace alert_test;
Tablespace created.
SQL> select file_name from dba_data_files where tablespace_name='ALERT_TEST';
FILE_NAME
--------------------------------------------------------------------------------
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf
Shrink the file:
SQL> alter database datafile '/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf' resize 10m;
Database altered.
Turn off autoextend:
SQL> alter database datafile '/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf' AUTOEXTEND OFF;
Database altered.
List the file(s):
SQL> select file_name from dba_data_files where tablespace_name='ALERT_TEST';
FILE_NAME
--------------------------------------------------------------------------------
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf
Create a table and generate data to push it over the warning threshold:
SQL> create table test_data (col1 number,col2 number,col3 number) tablespace alert_test;
Table created.
insert into test_data
select rownum, mod(rownum,100), trunc(rownum/10)
from sys.source$
where rownum < 320000;
commit;
319999 rows created.
Commit complete.
Force the OEM agent collection (it only scans every 30 minutes):
(DBAPT)/userhome/oracle/mga/akh> . oraenv
ORACLE_SID = [DBAPT] ? AGENT12
The /app/oracle/product/agent12c/core/12.1.0.5.0/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
(AGENT12)/userhome/oracle/mga/akh> emctl control agent runCollection DBAPT:oracle_database problemTbsp_10i_Loc
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully
Alert and email will be generated:
Host=testhost
Target type=Database Instance
Target name=DBAPT
Categories=Capacity
Message=Tablespace [ALERT_TEST] is [80 percent] full
Severity=Warning
Go back to the database and check the files:
SQL> select file_name from dba_data_files where tablespace_name='ALERT_TEST';
FILE_NAME
--------------------------------------------------------------------------------
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwr5gy_.dbf
A file has been added by the corrective action.
I then wanted to test what happened if it filled again.
SQL> alter database datafile '/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwr5gy_.dbf' resize 20m;
Database altered.
SQL> alter database datafile '/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwr5gy_.dbf' autoextend off;
Database altered.
Check the space:
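Something like this does the job:
select tablespace_name, round(sum(bytes)/1024/1024) free_mb
from dba_free_space
where tablespace_name = 'ALERT_TEST'
group by tablespace_name;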
Insert more data:
SQL> @insert_data
319999 rows created.
Force the collection:
(AGENT12)/userhome/oracle/mga/akh> emctl control agent runCollection DBAPT:oracle_database problemTbsp_10i_Loc
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully
Check the files:
SQL> select file_name from dba_data_files where tablespace_name='ALERT_TEST';
FILE_NAME
--------------------------------------------------------------------------------
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwl2l7_.dbf
/dat/oradata/DBAPT/datafile/o1_mf_alert_te_dotwr5gy_.dbf
There was no Warning alert triggered this time - even though it's set for 50%. This may be because I set the corrective action to only allow one, so that it didn't keep adding files ad infinitum, or because the number of occurrences is set to 1 within a given period to prevent flooding.
I will need to test it to make sure that if there is a runaway transaction OEM doesn't just keep adding files until the disk fills up.
However, at least the file add worked, which should mean I can stay in bed.