Wednesday, July 24, 2013

Oracle Database 12C Melbourne Launch


I went to the Oracle 12C database launch yesterday. There was a good turnout and I saw a few familiar faces.

The event was well attended - the MC said it was bigger than the Sydney launch.

The presenters were eager to give the impression that the new pluggable architecture is better than having databases on VMs - the idea being that 100 pluggable databases in a container database is better than 100 VMs with 1 database each. They argue the VM approach means more databases to manage, the management of the VM layer itself is an overhead, and it needs more hardware.

While there are some merits to this - for example, if you back up a container database it automatically backs up all the pluggable databases (and you can restore an individual pluggable database), and if you upgrade or patch the container database it automatically upgrades and patches all the pluggables - to me it also introduces some other problems.
If you lose the host, or the container database, you lose all 100 pluggable databases. They acknowledged this and said you need to have your HA / DR policies in place.
It also introduces the issue of patches / bug fixes. If one application running in a PDB (Pluggable Database) needs a patch or bug fix, you have to apply it to all the others. If it introduces problems in one of the other PDBs, what do you do? Although, to be fair, this has always been the case with multiple databases running from one ORACLE_HOME.

I'm not sure if organisations will feel the need to use these PDBs. It will be us (or architects) who advise on them, and I'm in two minds. While there are some nice features for DBAs (see below), and Oracle thinks it reduces the workload of DBAs (and therefore the need for so many), I think it introduces more work, particularly around resource management. At the moment, if I see a database in a VM chewing CPU and memory, I ask the Unix guys to add more. Now I have to look into the container / pluggable resource management, work out which PDB is doing the work, and fiddle around. The same goes for Automatic Data Optimisation - at the moment the application and the business decide what data gets archived; now I will have to generate the heat maps and suggest / implement policies myself.
If I want to set up Data Guard, I currently have a primary and a standby. Now I have to decide whether it's worth putting in a Far Sync standby.
New security features like data redaction and masking take that functionality out of the application and put it in the database, leaving me to implement it.

Actually, all this is good - with the number of jobs being outsourced, the more work we have the better, so if anyone asks whether they should go to V12, just say yes. :)

These are the notes:


3 Key database focus areas -

Big Data - Social, Blogs, Smart Meters.
Engineered systems - Exadata, Database Appliance.
Cloud - shared resources.

Oracle 12C Multitenant Architecture
CDB - Container Database, PDB - Pluggable Database.
Managing a Multitenant Environment

Pluggable databases - isolated from others, portable, no application changes required.
Reduces costs - less hardware, fewer databases.
Can upgrade older versions in place - V10.2, V11.2 and future versions - then plug into a container database.

There is resource management to share resources between PDBs and prevent one DB from stealing all the CPU and memory (see the sketch below).

Using Resource Manager with Cloud Control
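
For the record, the SQL-level side of this is the new CDB plans in DBMS_RESOURCE_MANAGER. A minimal sketch (my own, not from the demo - the plan and PDB names are made up), run from the root:

-- share CPU between the PDBs in the container
begin
  dbms_resource_manager.create_pending_area();

  dbms_resource_manager.create_cdb_plan(
    plan    => 'launch_demo_plan',
    comment => 'share CPU between the PDBs');

  -- give andyplg1 3 shares and cap it at 70% utilisation
  dbms_resource_manager.create_cdb_plan_directive(
    plan                  => 'launch_demo_plan',
    pluggable_database    => 'andyplg1',
    shares                => 3,
    utilization_limit     => 70,
    parallel_server_limit => 70);

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

-- activate the plan in the root
alter system set resource_manager_plan = 'launch_demo_plan';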


Can back up the entire container and restore just one pluggable database.
Restore PDB
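
In RMAN terms I take that to mean something like this (a sketch only - connected to the root, with a made-up PDB name):

RMAN> backup database plus archivelog;

# and to get just one PDB back:
RMAN> alter pluggable database pdb1 close;
RMAN> restore pluggable database pdb1;
RMAN> recover pluggable database pdb1;
RMAN> alter pluggable database pdb1 open;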

Patching the container patches all the pluggable databases (like patching an ORACLE_HOME).

Can take a PROD pluggable database and plug it into a DEV container, making a copy. Snapclone is almost instant.
Cloning a PDB
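
I assume the plugging part looks something like this (names and paths are mine, not from the demo; the unplugged XML and the datafiles have to be visible to the DEV container's host):

-- on the PROD container
alter pluggable database salespdb close immediate;
alter pluggable database salespdb unplug into '/tmp/salespdb.xml';

-- on the DEV container; COPY duplicates the files rather than re-using PROD's
create pluggable database salespdb_dev using '/tmp/salespdb.xml'
  copy file_name_convert = ('/u02/oradata/PROD/', '/u02/oradata/DEV/');
alter pluggable database salespdb_dev open;

The "Snapclone" bit is presumably the SNAPSHOT COPY clause of create pluggable database (or the EM12C Snap Clone feature), which uses storage snapshots rather than copying the files.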

Automatic Data Optimisation - policy-based compression and archiving, based on access time as well as age of the data.
Automatic Data Optimisation

OEM12C has PDB-aware commands / displays. SQL Developer has PDB-aware commands too.
Cloud Control




PDBs share the SGA and background processes of the CDB.
As PDBs are added to the container, only small increments of memory are required.
A PDB has SYSTEM, SYSAUX and an optional TEMP tablespace, plus application tablespaces can be added as required. PDBs don't have their own UNDO or redo; these are shared with the CDB.
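
You can see that split from the root with the new CDB_ views - a quick sketch (output will obviously vary):

-- which tablespaces belong to which container
select con_id, tablespace_name
from   cdb_tablespaces
order  by con_id, tablespace_name;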

Data Guard is at the container level - if you add a PDB to the primary, it automatically gets copied to the standby.

If you have 2 different versions of a CDB, i.e. V12.1 and V12.2, you can unplug a PDB from the 12.1 container and plug it into the 12.2 container and it is automatically upgraded (the PDB itself doesn't actually change).

CDBs work with RAC.

A new PDB can be created in about 1-2 minutes, uses a "seed" database which is in every CDB.

Can upgrade a V10.2.0.5, 11.2.0.1, 11.2.0.3 database to V12.1 directly, then plug it into a container and it becomes a PDB. No application changes required.
SQL Developer has the functionality to turn an upgraded database into a PDB.
Can use expdp and impdp to migrate as well.



New Security features

Redacting / masking of data is built into the database with policies and roles. Specific data / fields can be removed from reports using OEM12C, and it becomes effective immediately (see the sketch after these notes).
Real Application Security allows the database to enforce application security.
Unified Auditing - new roles - audit viewer and audit administrator.
Auditing turned on by default, new syntax for auditing. DB Performance not affected.
Privilege Analysis with Database Vault - reports on the privileges / roles actually used and identifies any that are not used and can then be removed.
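
As an example of the redaction side, the policies are created with the new DBMS_REDACT package - a sketch along these lines (the schema, table, column and policy names are made up):

begin
  dbms_redact.add_policy(
    object_schema => 'APP',
    object_name   => 'CUSTOMERS',
    column_name   => 'CREDIT_CARD_NO',
    policy_name   => 'redact_cards',
    function_type => dbms_redact.full,   -- replace the whole value with the default mask
    expression    => '1=1');             -- apply to every query against the column
end;
/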


Maximum Availability Architecture

Distant datacentre locations cannot use Data Guard synchronous replication due to the lag.
New feature called Far Sync Standby - a small instance sitting between the primary and the remote standby, holding only control files and redo logs. Zero data loss, no performance impact.
Far Sync
No licence required.
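
From the doco, setting one up goes roughly like this (a sketch only - the paths and service names are mine):

-- on the primary: generate the controlfile for the far sync instance
alter database create far sync instance controlfile as '/tmp/farsync01.ctl';

-- copy it to the far sync host and start the small instance from it, then ship
-- redo SYNC to the far sync, which forwards it ASYNC to the distant standby
alter system set log_archive_dest_2 =
  'service=farsync1 sync affirm valid_for=(online_logfiles,primary_role) db_unique_name=farsync1';
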
New feature called Application Continuity and Transaction Guard - coders currently have to allow for failures themselves and handle the exceptions. Transaction Guard has this built in, so code doesn't need to try to cater for every event.
Transaction Guard and Application Continuity
True Transparent Application Failover - can now do DML, but with some conditions (it can only use a specific JDBC driver).

RMAN can now do table restore.

RMAN Recover Table
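
The syntax is along these lines (my own sketch - RMAN creates an auxiliary instance under the covers, so you give it somewhere to put it; the table, SCN and paths are made up):

RMAN> recover table scott.emp
        until scn 2786542
        auxiliary destination '/u01/aux'
        remap table 'SCOTT'.'EMP':'EMP_RECOVERED';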

New feature allows online datafile move.
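
The syntax for that is simply (made-up paths; it works while the file is in use, and you can add KEEP to leave the old copy behind):

alter database move datafile '/u02/oradata/ANDYPLG/users01.dbf'
  to '/u05/oradata/ANDYPLG/users01.dbf';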

Automatic Data Optimisation

Maintains a database 'heat map' of frequently and less frequently accessed data. This can be displayed in OEM12C.
Can compress the older / less used data automatically based on policies. 3 levels - Advanced Row Compression, Columnar Query Compression, Columnar Archive Compression.
Syntax:
"alter table sales ilm add row store compress advanced row after 3 days of no modification;"
I got the impression that this requires a licence.
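
For reference, the heat map tracking itself is switched on with an init parameter and can be queried from the new views - roughly (the schema name is made up):

alter system set heat_map = on scope=both;

-- segment-level access tracking, which the ADO policies work from
select object_name, segment_write_time, full_scan, lookup_scan
from   dba_heat_map_segment
where  owner = 'ANDY';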


Thursday, July 11, 2013

Fixing "status pending" in OEM 12C after an agent re-install

I was having trouble with an OEM 12C host agent that wasn't behaving, so I thought the
easiest thing to do would be to uninstall and re-install. Little did I know...

The first thing I learnt was that you don't just remove the 12C agent home. No, you need to go to the agent home's oui/bin directory and run the installer (yes, run the installer to de-install it):

/u01/app/oracle/product/12.1.0.1/core/12.1.0.1.0/oui/bin/runInstaller

Make sure you have an X-Windows server running.

Click on "Installed Products", then select all the "oracle_sysman*" items and de-install them first.

If you try to de-install the agent12c1 or sbin12c1 items you'll get an error. Once all the "oracle_sysman*" ones have been de-installed, do the sbin12c1 and then the agent12c1.

Now you can rm -rf the agent directory.

Go to the OEM 12C console and re-install the agent in the usual manner (setup, add target, add target manually).

Assuming all goes well, you should see the agent in the OEM console, but it will probably be showing "status pending" with the clock icon. If it isn't, congratulations, stop reading this and go and do something else.

If it is, then you will need to resynchronise the agent and unblock it from OEM.

You will also see that an "emctl status agent" on the host will show no files have been uploaded to the OEM, and that an "emctl pingOMS" will show a "blocked" message. Any attempts to do "emctl upload" will result in a timeout.
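
For reference, these checks are run from the agent's bin directory (the path below is just an example - use wherever your agent is installed):

cd /u01/app/oracle/agent12c/agent_inst/bin
./emctl status agent      # shows that nothing has been uploaded
./emctl pingOMS           # shows the "blocked" message
./emctl upload agent      # times out while the agent is blocked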

The Oracle documentation says you need to resynchronize and unblock the agent, but I couldn't find any screenshots showing where to do this; eventually I poked around enough and found it.

Go to the OEM console and select "Setup, Agents". Click on the "Misconfiguration Alerts and Blocked Agents" tab and you should see the blocked agent there. Note: in V12.1.0.3 the menu has changed to Setup, Manage Cloud Control, Agents.



It took me ages to find out how to do the resynchronize - the main agent page has lots of clickable buttons, but none that say "Resynchronize".



Click on the agent to select it and the main host page will open. There is a small "Agent" icon at the top left of the page. Click on that and the drop-down menu will show a "Resynchronization" item. Click on that, then "Continue" on the next page. This will submit a job, which you can then check to see whether it worked.

 


If it works, it will unblock the agent and it should no longer show up as "status pending".


This is the Oracle documentation: http://docs.oracle.com/cd/E25178_01/doc.1111/e24473/ha_outages.htm#BABHJCED

Management Agent Reinstall Using the Same Port

A Management Agent is monitoring multiple targets. The Agent installation is lost.
  1. De-install the Agent Oracle Home using the Oracle Universal Installer.
    Note:
    This step is necessary in order to clean up the inventory.
  2. Install a new Management Agent or use the Management Agent clone option to reinstall the Management Agent through Enterprise Manager. Specify the same port that was used by the crashed Agent. The location of the install must be the same as the previous install.
    The OMS detects that the Management Agent has been re-installed and blocks the Management Agent.
  3. Initiate Management Agent Resynchronization from the Management Agent homepage.
    All targets in the Management Repository are pushed to the new Management Agent. The Agent is instructed to clear backlogged files and then do a clearstate. The Agent is then unblocked.
  4. Reconfigure User-defined Metrics if the location of User-defined Metric scripts have changed.
  5. Verify that the Management Agent is operational and all target configurations have been restored using the following emctl commands:
    emctl status agent 
    emctl upload agent 
    
    There should be no errors and no XML files in the backlog.





Monday, July 1, 2013

A quick dip into Oracle 12C containers and pluggable databases

I've been playing around with 12C in the evening at home (what a sad, sad life I lead). I had a look through the documentation : http://docs.oracle.com/cd/E16655_01/server.121/e17633/cdbovrvw.htm#CEGEEIJG

This is well worth reading, it explains what all this container and pluggable stuff is all about.

I don't like the way Oracle appears to be at war with DBAs - from Oracle 10g which was billed as "the self-managing database" (Ha!), to this comment in the doco:

"Figure 17-2 depicts the databases in Figure 17-1 after consolidation onto one computer. The DBA team is reduced from five to three, with one CDB administrator managing the CDB while two PDB administrators split management of the PDBs."

Anyway, on with the tale.

When you create the database, you need to specify that it's an old-fashioned one (a non-CDB), or a new-fangled Container Database.

This is either with the "enable pluggable database" clause in the "create database" statement, or by checking the appropriate box in the dbca.

I've not done much investigation, but created a CDB and then a pluggable database within that.

I used the DBCA and checked the box to create it as a container database, and called it ANDYPLG (I should really have called this ANDYCDB but I didn't).

Once created, I then restarted the DBCA and selected "Manage Pluggable Databases" and then "Create a Pluggable Database" - called it ANDYPLG1.

That was it. I now had an Oracle 12C container database that had a pluggable database inside it.

I didn't get a chance to do too much with it, but found a few things:

The pluggable database has no OS processes.

You can see the databases using the v$services view:

SQL> select name from v$services;

NAME
----------------------------------------------------------------
andyplg1
ANDYPLGXDB
ANDYPLG
SYS$BACKGROUND
SYS$USERS
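
You can also list the PDBs themselves with the new v$pdbs view (output omitted - it shows the container ID, name and open mode of each PDB):

SQL> select con_id, name, open_mode from v$pdbs;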

You switch from the container to the pluggable database with the alter session command:

SQL> alter session set container=andyplg1;

Session altered.

SQL> select name from v$services;

NAME
----------------------------------------------------------------
andyplg1

I then created a new pluggable database from the one I'd created earlier - this is one of the big advantages of pluggable databases, the ability to clone them very easily. However, I did notice that the source database has to be read-only:

SQL> create pluggable database andyplg2 from andyplg1;
create pluggable database andyplg2 from andyplg1
*
ERROR at line 1:
ORA-65081: database or pluggable database is not open in read only mode

SQL> alter pluggable database andyplg1 close;

Pluggable database altered.

SQL> alter pluggable database andyplg1 open read only;

Pluggable database altered.

SQL> create pluggable database andyplg2 from andyplg1;

Pluggable database created.

SQL> alter pluggable database andyplg1 close;

Pluggable database altered.

SQL> alter pluggable database andyplg1 open read write;

Pluggable database altered.



SQL> select name from v$services;

NAME
----------------------------------------------------------------
andyplg2
andyplg1
ANDYPLGXDB
ANDYPLG
SYS$BACKGROUND
SYS$USERS

Voila! The andyplg2 was created as a clone of andyplg1.

My next investigations will be around how much control we have over creating the pluggable databases - for instance, look at the datafile names of andyplg1:

/u04/oradata/ANDYPLG/E06734C67F8740C5E043161816ACD7D0/datafile/o1_mf_system_8x1bhcnw_.dbf

Not very nice.
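
From the doco it looks like you can take control of this with FILE_NAME_CONVERT when creating the PDB - a sketch I haven't tested yet, with made-up paths:

create pluggable database andyplg3
  admin user pdbadmin identified by oracle
  file_name_convert = ('/u04/oradata/ANDYPLG/pdbseed/',
                       '/u04/oradata/ANDYPLG/andyplg3/');

There is also a PDB_FILE_NAME_CONVERT initialisation parameter that does much the same thing by default.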

I'll also look into how to manage resources within the pluggable databases.