Sunday, October 6, 2024

Using NotebookLM to make my life easier as an Oracle DBA

I thought ChatGPT was good (yeah, it is), then yesterday I discovered NotebookLM which I think is even better.

NotebookLM is an experimental Google feature. It's free, but you need a Google account (which pretty much everyone has).

Go here: https://notebooklm.google.com/

And sign in to Google if you're not already. You'll be shown a page like this



Click on 'New Notebook' and you'll get this



You can add sources - at the moment it only accepts certain formats, so I ended up converting what I needed to PDF and uploading them. 

When you upload a source, the magic starts. Depending on the size of the document, it pretty quickly gives you a summary, the document itself, and some starter questions. 

I uploaded a PDF of the Oracle ODA guide from here

https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/19.6/cmtxn/index.html



You can also upload Google Docs and YouTube videos, or copy and paste text.

After a while, it generates this


You can then ask questions - "What are the steps involved in patching an Oracle Database Appliance using the Browser User Interface?"


So this is a great way to summarize large documents and to ask specific questions without having to hunt for the answer by reading the entire document.

Two other uses I've found so far. The first: take an AWR report, convert it to PDF, and then ask questions like "What are the performance bottlenecks?" and "What were the top 5 most CPU intensive SQLs?"

The next one is also pretty amazing - I extracted the audio from a Teams meeting to an MP3 file and uploaded it.

It creates a transcript, and you can ask questions. 



With any of these Notebooks you can create an audio file that is a podcast with 2 AI-generated voices. It's incredible, though not suited to every Notebook.


The last thing I tried was uploading a video from YouTube. You just provide the link and it grabs the video, creates a transcript and generates questions. This one is on the 23c new SQL features.

https://www.youtube.com/watch?v=_NyvswWLWQE

I've just scratched the surface of this, but I think it's better than ChatGPT in some areas. Let's hope Google maintain and improve it.


Wednesday, September 11, 2024

Using ChatGPT to make my life easier as an Oracle DBA

I've been trying to think of ways to use ChatGPT to make my life easier as an Oracle DBA. So far I haven't really found anything game-changing but have managed to find a few things that have made certain things easier.

Coding / Scripting / SQL 

The most obvious thing is getting ChatGPT to write SQL and code. It is really good, but doesn't always get things 100% correct. One piece of code told me to use a DBMS package that doesn't exist. Another did a select on a table column that wasn't there.

But it is generally very good. In the past, if I wanted a script or some SQL, I'd Google, find something that was nearly correct, hack it about, test it, go back to Google, change the script again, and so on; it could take quite a while.

With ChatGPT you can do it all in the same place and prompt it for enhancements.

Here's an example of a Python script I got it to write. We have monitoring set up at our client sites, a mix of shell scripts, SQL and OEM. These send us emails every time they run. A long time ago we settled on a standard subject line: CLIENT:HOSTNAME:DBNAME:METRIC:RESULT.

So something like WIDGETCO:PROD1:PRODDB:RMAN BACKUP:SUCCESS

Someone manually goes through these emails every day to look for WARNING or CRITICAL errors and checks them off on a spreadsheet to make sure we get the emails we're expecting.

I asked ChatGPT if there was a way to read my mail and look for subjects containing the words "SUCCESS", "WARNING" and "CRITICAL" from the last 24 hours. It gave me a Python script, which I ran. I then asked it to make the window 72 hours (to cater for weekends), which it did. Then I told it to remove duplicates. Then to colour code them. Then to add the time each email was received. And so on. Eventually it gave me exactly what I wanted. I just run the script every day in a command prompt window and it gives me this


I won't post the script here, as it might not serve your purpose, but all you need to do is go to ChatGPT and ask it to create what you want: "Can you write a script that connects to my email and lists all subject lines over the last 72 hours that contain the words "SUCCESS","SUCCEEDED","FAILED","WARNING","CRITICAL" and colour code them..." etc.

You can run it and ask ChatGPT to enhance it as you go.

Obviously you can also ask it to write SQL for you instead of searching Google. I often just ask it stuff like "write an Oracle SQL script to...." and it does it. 

It can also fix SQL. I was given some scripts by a client and one of them failed with ORA-00920. Google would tell me what the error meant, but I'd still have to figure out where in the SQL it was breaking. Instead, I just pasted it into ChatGPT and told it what was wrong ("this fails with ora-00920, fix it") and it gave me the correct code.
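
For reference, ORA-00920 is "invalid relational operator", and the usual cause is a mistyped comparison. A contrived example of my own (not the client's script):

select * from employees where salary => 50000;  -- fails with ORA-00920

select * from employees where salary >= 50000;  -- the corrected version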

9 times out of 10, if you ask ChatGPT for a script it will give you something that works. If it isn't exactly how you want it, just tell it and it will re-write it for you. Saves so much time and messing around.

It can also help with stuff that might take ages of Googling to get something that works. Here are some examples I've used.

A client wanted yesterday's alert log data, but the alert log goes back to May 2020. I opened the log, searched for the current date and it took me to the first entry for that day, so I wanted to extract everything from that line to the end.

How do I extract a section of a unix file from line 7717529 to the end?

It gave me 3 examples; I used this:

tail -n +7717529 filename > alert_log_extract.txt

Another example:

how to estimate the rman backup size of an oracle container database?

It gave me some SQL, but I wasn’t sure if that included the pluggables, so asked

does this take into account multiple pluggable databases?

And it gave me better SQL.
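
For what it's worth, the better SQL was along these lines (my reconstruction, not ChatGPT's exact output). Queried from the root container, v$datafile lists files for every container, so grouping by CON_ID shows the pluggables are included. It's a rough upper bound: an actual RMAN backup will be smaller once unused blocks and compression are taken into account.

select con_id, round(sum(bytes)/1024/1024/1024, 1) as size_gb
from   v$datafile
group  by con_id
order  by con_id;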

Formatting output

We're currently doing some migrations using XTTS, and there is a config file that needs the tablespace names on one line, separated by commas; there were about 130 tablespace names. I ran 'select tablespace_name from dba_tablespaces order by 1;' and it returned the list as a column, as you'd expect:

TABLESPACE_NAME
TBS1
TBS2
TBS3
TBS4
etc.

I went to ChatGPT and said "make these comma-separated all on one line" and pasted in the output, and it did it. But it left a space after each comma so I said "remove the space after the comma" and it did. So I just copied and pasted the output into the config file. Again, so much time saved.
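
As an aside, you can also get the database to do this itself with LISTAGG, something like this (fine for 130-odd names as long as the result stays under the 4000 byte limit):

select listagg(tablespace_name, ',') within group (order by tablespace_name)
from   dba_tablespaces;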

Another neat trick I found: a client sent a screenshot of a table, and I wanted it in CSV format so I could paste it into an Excel spreadsheet. Of course, ChatGPT could do it. Something like this

So I pasted the screenshot into ChatGPT and asked for it in CSV format, and it worked, so I could copy and paste the result into Excel.

Just ask it stuff

Some more examples of what I've asked it

what are the deprecated features between oracle database v11 and v19c

Which PL/SQL packages are deprecated in 23C

What are the hard limits for Oracle Table size, index size, number of extents, maximum number of tables, maximum number of indexes, sequence hard limits or any other hard limits that we need to be aware of.

What licence options are required for Oracle AWR

how to check the oracle timezone file in use

how do i change this

can I get a time from an oracle scn

show oracle disabled or invalid constraints

Write a script to find all SQL taking more than 10 seconds for Oracle 19C using v$ tables including the SQL statement, number of executions and runtime
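
For that last one, what you get back tends to look something like this; a sketch of the kind of script it produces, not ChatGPT's exact output (note that ELAPSED_TIME in v$sql is in microseconds, and SQL_TEXT is just the first 1000 characters; v$sqltext has the full statement):

select sql_id,
       executions,
       round(elapsed_time/1e6, 1) as total_secs,
       round(elapsed_time/1e6/nullif(executions, 0), 1) as secs_per_exec,
       sql_text
from   v$sql
where  elapsed_time/nullif(executions, 0) > 10 * 1e6
order  by secs_per_exec desc;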


Basically ChatGPT has made a lot of things easier for me, especially getting it to write a command or script to do something that I only need to do once. Before, I'd be going back and forth with Google; ChatGPT usually gives me something I can use immediately, and if it isn't quite what I want it can give me something else in the same session.

Monday, September 9, 2024

Using Oracle pluggable databases to make patching easier

We have a client that has multiple applications in one database (I know), each separated into different tablespaces. We inherited this and they want to stick to it.

The problem is that we haven't been able to patch for years: whenever some application vendors say we can, others say not yet.

I thought a solution might be to use pluggable databases. They have an existing V19C SE database which is not a container DB. 

The theory is that we create a Container database and patch that to the latest release, then migrate the applications that can be patched into pluggable databases - we can export / import initially and then use the pluggable features to move the databases.

Eventually we'll have all the applications in their own pluggable database in the container running the latest patch, with the original non-container database now not in use. We drop that, recreate it as a container database and leave it empty.

The next patch comes out, we patch the empty one and then unplug and plug in the application PDBs that are able to be patched. 

Eventually all the application PDBs will be in this updated one, the other one empty and we just rinse and repeat.

There are a couple of issues. First, we need enough space on the host to install another ORACLE_HOME. Second, Oracle allows 3 PDBs in a container DB for free; any more and a license is needed. Note: during testing I discovered that Standard Edition (which the client is running) is limited to 2 PDBs, so I suspect this idea isn't going to fly.

I did wonder about application connectivity, but we should be able to use the same service name, especially since the databases are on the same host.

They use VM snapshots for DR, so that won't be a problem, but if Data Guard was running we'd need to consider that as well: some config changes would probably be needed, or the configuration might need recreating each time. You'd also probably want 2 separate listeners.

On to the testing. I created a new VM (I use Proxmox) with enough space for 2 ORACLE_HOMEs, and installed V19 in each.

I then created a database in each home, one a container and one a non-container, to replicate the existing client database. I then patched the container database to the n-1 release (April 2024).


I created 3 tablespaces, then 3 tables, in the non-container database to simulate the different applications in the client environment.


I tried to create 3 PDBs in the container database, but Standard Edition is limited to 2.

I then exported the data from the non-container DB and imported it into the PDBs.


I needed to create a user in the pdb to allow for the import:

create user impuser identified by <password>;
grant create session to impuser;
grant dba to impuser;

Then imported the data


So there is now a PDB with the data from the source data1 tablespace. I repeated the process for data2.


So there are now 2 PDBs with the data from the source. The applications can now be pointed to this, and the database is on a recent patch.

We can now drop the original source database and recreate it as a container. I just used dbca to both drop and create.

This is where I encountered something strange. I created the data1 pluggable database, then when I came to create the data2 PDB I got this message:

But the message I got when I did this before said I could create 2 in a Standard Edition database, and this would only have been the second:


Turns out this is a bug, and a second PDB can be created using SQL*Plus, so I did that.
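
The SQL*Plus command was along these lines (the file paths are just from my test VM; with OMF you can skip the file_name_convert clause):

create pluggable database data2
  admin user pdbadmin identified by <password>
  file_name_convert = ('/u01/oradata/CDB2/pdbseed/', '/u01/oradata/CDB2/data2/');

alter pluggable database data2 open;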



Now we have 2 container databases, one patched to April 2024 running the live databases, the other the base 19.3 release, both with data1 and data2 pluggable databases.

I then patched the new container database to the July 2024 patch, and will copy the pluggable databases across from the April 2024 patched database.


I'll use a db link to clone the PDBs.

Log in to the source and create a user to clone the databases
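
Something like this, connected to the source CDB as sysdba (the user name is my own choice; the grant that matters is CREATE PLUGGABLE DATABASE):

alter session set container = data1;
create user clone_user identified by <password>;
grant create session, create pluggable database to clone_user;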


Shut down the pluggable database, then open it in read-only mode
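
On the source CDB:

alter pluggable database data1 close immediate;
alter pluggable database data1 open read only;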


Create tnsnames.ora entries on both the source and target hosts
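
The entry on the target just needs to resolve the data1 service on the source; something like this (the host name and alias here are made up):

DATA1_SRC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = data1))
  )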


Check a tnsping


Create a database link in the target database pointing to the data1 PDB
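
Using the alias and the user from the earlier steps, something like:

create database link clone_link
  connect to clone_user identified by <password>
  using 'DATA1_SRC';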


Drop the TARGET pluggable database
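
On the target CDB:

alter pluggable database data1 close immediate;
drop pluggable database data1 including datafiles;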


Create the pluggable database from the source using the database link, then open it
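
With the names used above, the whole move comes down to two statements:

create pluggable database data1 from data1@clone_link;
alter pluggable database data1 open;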


You can now drop the source pluggable database, then clone the other one across when it's ready to move to the latest patched database, and drop that too.

The next patch can be applied to the now empty database and the pluggable databases moved across when ready.

If this patching method is being considered, you'd need to test it in non-prod first with databases of the same size to determine the time it takes. If you are able to get an outage for all the databases in the same ORACLE_HOME at the same time, it may be that patching the home takes less time than a PDB clone, in which case this method may not be of use.

Monday, August 19, 2024

Cloning an Oracle V19C Pluggable database to a different host using RMAN duplicate and database link

We have a requirement to perform some archiving and purging of a database, and we need to take a copy of Production which we can use to test.

One option is to create a PDB clone from the Production database since it is using containers.

I created an OEL7 VM in Proxmox, installed Oracle ASM V19C and the 19C database software, then created a container database (orcl) and a pluggable database (orclpdb). I won't go through this; it's pretty straightforward.

I then cloned the VM to save me having to re-create one from scratch.

When the clone was done, I changed the hostname and IP address, so I ended up with 2 VMs: OEL7ASM1 and OEL7ASM2.

I also dropped the orclpdb pluggable database from the OEL7ASM2 host so that I could create it from the OEL7ASM1 host.



I added the source host oel7asm1 to the oel7asm2 /etc/hosts file and also created a tnsnames.ora entry for the source database. Make sure you use the FQDN.

I couldn't get tnsping or a sqlplus connection to the ORCL_REMOTE database to work, even after enabling port 1521, so I ended up disabling the firewall, and then it worked. Needless to say, don't do this on any client or host that you care about.

On the oel7asm2 host, trying to contact the source host before the firewall was disabled:


Disabling the firewall on the source host:


Trying the connection again from the target host

Before doing the clone, create a staging directory and set the target database REMOTE_RECOVERY_FILE_DEST parameter:

mkdir /tmp/pdbclone
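
Then, on the target CDB, the parameter setting was along these lines:

alter system set remote_recovery_file_dest='/tmp/pdbclone' scope=both;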


You also need to put the source database in archivelog mode and open it
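
If it isn't already, the usual SQL*Plus sequence is:

shutdown immediate
startup mount
alter database archivelog;
alter database open;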




You can now try the pdb clone using RMAN duplicate.

On the target host, connect to RMAN with both a target and an auxiliary connection, and issue the RMAN command to perform the PDB duplicate. Somewhat confusingly, the RMAN 'target' is the source database; the 'auxiliary' is where the database is to be cloned to.
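
The commands were along these lines (connection strings and names are from my test setup; ORCL_REMOTE is the tnsnames alias for the source):

rman target sys@ORCL_REMOTE auxiliary sys@ORCL

RMAN> duplicate pluggable database orclpdb to orcl;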





The pdb should now be there


This method requires a staging area (/tmp/pdbclone). 

We can also use a database link, which doesn't need one, so we'll do that now.

Drop the newly created pdb


Create a user on the source PDB database and give them DBA access and grants to create a pluggable database



Create a new tnsnames.ora entry on the target host pointing to the pdb service on the source


On the target CDB, create a database link to the source as that user, using the tnsnames.ora entry that points to the source pdb:

Run the command to create the target pluggable database, then open it:


You can then drop the database link.