Wednesday, September 11, 2024

Using ChatGPT to make my life easier as an Oracle DBA

I've been trying to think of ways to use ChatGPT to make my life easier as an Oracle DBA. So far I haven't found anything game-changing, but I have found a few things that make day-to-day tasks easier.

Coding / Scripting / SQL 

The most obvious thing is getting ChatGPT to write SQL and code. It is really good, but doesn't always get things 100% correct. One piece of code told me to use a DBMS package that doesn't exist; another did a select on a table column that wasn't there.

But it is generally very good. In the past, if I wanted a script or some SQL, I'd Google, find something that was nearly correct, hack it about, test it, go back to Google, change it again, and so on - it could take quite a while.

With ChatGPT you can do it all in the same place and prompt it for enhancements.

Here's an example - a Python script I got it to write. We have monitoring set up at our client sites (shell scripts, SQL and OEM) and these send us emails every time they run. A long time ago we settled on a standard subject line: CLIENT:HOSTNAME:DBNAME:METRIC:RESULT.

So something like WIDGETCO:PROD1:PRODDB:RMAN BACKUP:SUCCESS

Someone manually goes through these emails every day to look for WARNING or CRITICAL errors and checks them off on a spreadsheet to make sure we get the emails we're expecting.

I asked ChatGPT if there was a way to read my mail and look for subjects containing the words "SUCCESS", "WARNING" and "CRITICAL" over the last 24 hours. It gave me a Python script, which I ran. I then asked it to extend the window to 72 hours (to cater for weekends), which it did. I then told it to remove duplicates, then to colour code the results, then to add the time each email was received, and so on. Eventually it gave me exactly what I wanted. I just run the script every day in a command prompt window and it gives me a colour-coded summary of the monitoring emails.


I won't post the script here - it might not serve your purpose - but all you need to do is go to ChatGPT and ask it to create what you want: "Can you write a script that connects to my email and lists all subject lines over the last 72 hours that contain the words 'SUCCESS', 'SUCCEEDED', 'FAILED', 'WARNING', 'CRITICAL' and colour code them..." etc.

You can run it and ask ChatGPT to enhance it as you go.

Obviously you can also ask it to write SQL for you instead of searching Google. I often just ask it stuff like "write an Oracle SQL script to...." and it does it. 

It can also fix SQL. I was given some scripts by a client and one of them failed with ORA-00920 (invalid relational operator). Google would tell me what the error meant, but I'd still have to figure out where in the SQL it was breaking. Instead, I just pasted it into ChatGPT and told it what was wrong - "this fails with ora-00920, fix it" - and it gave me the correct code.

9 times out of 10, if you ask ChatGPT for a script it will give you something that works. If it isn't exactly how you want it, just tell it and it will re-write it for you. Saves so much time and messing around.

It can also help with things that might take ages to Google before finding something that works. Here are some examples I've used.

A client wanted yesterday's alert log data. The alert log goes back to May 2020, so I opened the log, searched for the current date and noted the line number of the first entry for that day. I then just needed to extract from that line to the end.

How do I extract a section of a unix file from line 7717529 to the end?

It gave me three examples; I used this:

tail -n +7717529 filename > alert_log_extract.txt

Another example:

how to estimate the rman backup size of an oracle container database?

It gave me some SQL, but I wasn't sure if that included the pluggables, so I asked

does this take into account multiple pluggable databases?

And it gave me better SQL.
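For reference, the shape of query you end up with is something like this - a sketch only, and really an upper bound, since RMAN skips never-used blocks and can compress. In the root of a CDB the v$ views report across all containers, so this covers the PDBs too:

-- Datafile size per container, as a rough ceiling on full backup size
select c.name as container,
       round(sum(d.bytes)/1024/1024/1024, 2) as size_gb
from v$datafile d
     join v$containers c on c.con_id = d.con_id
group by c.name
order by c.name;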

Formatting output

We're currently doing some migrations using XTTS, and there is a config file that needs the tablespace names on one line, separated by commas - and there were about 130 tablespace names. I ran 'select tablespace_name from dba_tablespaces order by 1;' and it returned the list as a column, as you'd expect:

TABLESPACE_NAME
TBS1
TBS2
TBS3
TBS4
etc.

I went to ChatGPT and said "make these comma-separated all on one line" and pasted in the output, and it did it. But it left a space after each comma, so I said "remove the space after the comma" and it did. I then copied and pasted the output straight into the config file. Again, so much time saved.
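For what it's worth, this particular one can also be done in the database with LISTAGG - though half the appeal of ChatGPT is not having to remember that:

-- Comma-separated tablespace list on one line, no spaces
select listagg(tablespace_name, ',') within group (order by tablespace_name) as ts_list
from dba_tablespaces;

(LISTAGG output is capped at 4000 bytes unless extended string sizes are enabled, which is plenty for 130 tablespace names.)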

Another neat trick I found: a client sent a screenshot of a table, and I wanted it in CSV format so I could paste it into an Excel spreadsheet. Of course, ChatGPT could do it - I pasted the screenshot in, asked for it in CSV format, and copied the result straight into Excel.

Just ask it stuff

Some more examples of what I've asked it:

what are the deprecated features between oracle database v11 and v19c

Which PL/SQL packages are deprecated in 23C

What are the hard limits for Oracle Table size, index size, number of extents, maximum number of tables, maximum number of indexes, sequence hard limits or any other hard limits that we need to be aware of.

What licence options are required for Oracle AWR

how to check the oracle timezone file in use

how do i change this

can I get a time from an oracle scn

show oracle disabled or invalid constraints

Write a script to find all SQL taking more than 10 seconds for Oracle 19C using v$ tables including the SQL statement, number of executions and runtime
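For that last one, the sort of script it comes back with looks roughly like this - a sketch, with the threshold and column choices illustrative; note that elapsed_time in the v$ views is in microseconds:

-- SQL averaging more than 10 seconds per execution
select sql_id,
       executions,
       round(elapsed_time/1e6, 1) as total_secs,
       round(elapsed_time/nullif(executions, 0)/1e6, 1) as secs_per_exec,
       sql_text
from v$sqlstats
where elapsed_time/nullif(executions, 0) > 10 * 1e6
order by secs_per_exec desc;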


Basically ChatGPT has made a lot of things easier for me, especially getting it to write a command or script to do something that I only need to do once. Before, I'd be going back and forth with Google; ChatGPT usually gives me something I can use immediately, and if it isn't quite what I want it can give me something else in the same session.

Monday, September 9, 2024

Using Oracle pluggable databases to make patching easier

We have a client that has multiple applications in one database (I know), each separated into different tablespaces. We inherited this and they want to stick to it.

The problem is that we haven't been able to patch for years, because while some of the application vendors say we can, others say not yet.

I thought a solution might be to use pluggable databases. They have an existing 19c Standard Edition database, which is not a container database.

The theory is that we create a Container database and patch that to the latest release, then migrate the applications that can be patched into pluggable databases - we can export / import initially and then use the pluggable features to move the databases.

Eventually we'll have all the applications in their own pluggable database in the container running the latest patch, with the original non-container database now not in use. We drop that, recreate it as a container database and leave it empty.

The next patch comes out, we patch the empty one and then unplug and plug in the application PDBs that are able to be patched. 

Eventually all the application PDBs will be in this updated one, the other one empty and we just rinse and repeat.

There are a couple of issues. First, we need enough space on the host to install another ORACLE_HOME. Second, Oracle allows 3 PDBs in a container DB for free - any more and a multitenant licence is needed. Note: during testing I discovered that Standard Edition (which the client is running) is limited to 2 PDBs, so I suspect this idea isn't going to fly.

I did wonder about application connectivity, but we should be able to use the same service name, especially since the databases are on the same host.

They use VM snapshots for DR, so that won't be a problem. If Data Guard was running we'd need to consider that as well - some config changes would probably be needed, or the configuration might need recreating each time, and you'd probably also want 2 separate listeners.

On to the testing. I created a new VM (I use Proxmox) with enough space for 2 ORACLE_HOMEs, and installed V19 in each.

I then created a database in each home - one a container database, the other a non-container database to replicate the existing client database. I then patched the container database to the n-1 release (April 2024).


I created 3 tablespaces, then 3 tables, in the non-container database to simulate the different applications in the client environment.
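Something along these lines - the names and sizes were just for the test, and this assumes OMF is configured so the datafiles don't need explicit paths:

create tablespace data1 datafile size 100m;
create table test1 (id number) tablespace data1;
-- repeated for data2/test2 and data3/test3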


I tried to create 3 PDBs in the container database, but Standard Edition is limited to 2. 
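The creates themselves are one-liners from the seed (the admin user name is illustrative; again assuming OMF, so no file name conversion is needed). The first two worked; the third is where Standard Edition said no:

create pluggable database data1 admin user pdbadmin identified by <password>;
alter pluggable database data1 open;
-- same again for data2; the third create failed with the SE limit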


I then exported the data from the non-container DB and imported it into the PDBs.
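The export side is a standard Data Pump job on the source - file names are illustrative, and this assumes the default DATA_PUMP_DIR directory object:

expdp system/<password> directory=DATA_PUMP_DIR dumpfile=data1.dmp logfile=data1_exp.log tablespaces=DATA1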


I needed to create a user in the pdb to allow for the import:

create user impuser identified by <password>;
grant create session to impuser;
grant dba to impuser;

Then imported the data:
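Again roughly like this, connecting to the PDB via its service name - it assumes the data1 tablespace has already been created in the PDB:

impdp impuser/<password>@data1 directory=DATA_PUMP_DIR dumpfile=data1.dmp logfile=data1_imp.log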


So there is now a PDB with the data from the source data1 tablespace. I repeated the process for data2.


So there are now 2 PDBs with the data from the source. The applications can now be pointed at this database, which is on a recent patch.

We can now drop the original source database and recreate it as a container. I just used dbca to both drop and create.

This is where I encountered something strange. I created the data1 pluggable database, then when I came to create the data2 PDB I got this message:

But the message I got when I did this before said I could only create 2 in a Standard Edition database, and there are 2:


Turns out this is a bug, and a second PDB can be created using SQLPlus, so I did that.
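That is, where DBCA refused, this worked fine in SQL*Plus:

create pluggable database data2 admin user pdbadmin identified by <password>;
alter pluggable database data2 open;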



Now we have 2 container databases - one patched to April 2024 running the live databases, the other on the base 19.3 release - both with data1 and data2 pluggable databases.

I then patched the new container database to the July 2024 patch, and will copy the pluggable databases across from the April 2024 patched database.


I'll use a db link to clone the PDBs.

Log in to the source and create a user to clone the databases
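Something like this - the user name is illustrative; as a common user it needs the c## prefix, plus the CREATE PLUGGABLE DATABASE privilege for the clone to work:

create user c##clone_user identified by <password>;
grant create session, create pluggable database to c##clone_user container=all;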


Shut down the pluggable database, then open it in read-only mode
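That is, from the source CDB root:

alter pluggable database data1 close immediate;
alter pluggable database data1 open read only;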


Create tnsnames.ora entries on both the source and target hosts
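The entry just needs to resolve to the data1 service on the source - the host, port and alias here are illustrative:

DATA1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = data1))
  )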


Check a tnsping
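From the target host, using the alias above:

tnsping DATA1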


Create a database link in the target database pointing to the data1 PDB
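In the target CDB root - the link name is illustrative, using the tnsnames alias created above:

create database link clone_link connect to c##clone_user identified by <password> using 'DATA1';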


Drop the TARGET pluggable database
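The target container still has its own (now redundant) copy of data1 from the earlier testing, so that goes first:

alter pluggable database data1 close immediate;
drop pluggable database data1 including datafiles;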


Create the pluggable database from the source using the database link, then open it
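The clone itself is then a single statement (assuming OMF on the target, so no FILE_NAME_CONVERT is needed):

create pluggable database data1 from data1@clone_link;
alter pluggable database data1 open;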


You can now drop the source pluggable database, then clone the other one across when it's ready to move to the latest patched database, and drop that too.

The next patch can be applied to the now empty database and the pluggable databases moved across when ready.

If this patching method is considered, you'd need to test it in non-prod first with databases of the same size to determine the time it takes. If you are able to get an outage for all the databases in the same ORACLE_HOME at the same time, it may be that patching the home takes less time than a PDB clone, in which case this method may not be of use.