
Securing Oracle scripts with a wallet

It’s that time of year when I try to clean up some of my bad habits from the year before. This time, it’s plain-text passwords in SQL scripts. Here’s a way to make them a little more secure. You can move all your passwords into a secure wallet file stored on your PC. This is not the most secure option available (and you should keep the wallet in a private location) but it is better than nothing. There is a more secure option which ties the wallet to your machine and username, but this complicates matters if you want to use the same wallet from different machines.

You will need to know the location of your SQLNET.ORA and TNSNAMES.ORA files which are used by SQL*Plus (usually defined by the TNS_ADMIN environment variable).
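
If you are unsure where these files live, check the TNS_ADMIN variable first. For example, in a Windows command prompt (use echo $TNS_ADMIN in a Unix shell):

echo %TNS_ADMIN%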

Create a folder to hold your wallet

mkdir D:\SQL\auth

Add the following into your SQLNET.ORA file to point to the Oracle Wallet

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = D:\SQL\auth)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE

Create a wallet – you will be prompted for a password to secure the wallet

orapki wallet create -wallet D:\SQL\auth -auto_login
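
If you want the machine- and user-bound option mentioned earlier, orapki also accepts -auto_login_local in place of -auto_login; the wallet can then only be opened by the OS user and machine that created it, at the cost of portability:

orapki wallet create -wallet D:\SQL\auth -auto_login_local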

Add your database credentials to the wallet, using an alias from your TNSNAMES file. You will be prompted for your database password, which is stored securely in the wallet.

mkstore -wrl D:\SQL\auth -createCredential TNS_ALIAS USERNAME
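
For reference, TNS_ALIAS is just an ordinary entry in your TNSNAMES.ORA, something along these lines (host, port and service name here are placeholders):

TNS_ALIAS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )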

Now, when using SQL*Plus, you can connect using the wallet without having a password in your scripts

sqlplus /@TNS_ALIAS
connect /@TNS_ALIAS
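
This is particularly handy for scheduled jobs, which can now call a script with no credentials at all (myscript.sql is just a placeholder here):

sqlplus -s /@TNS_ALIAS @myscript.sql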

If you need to connect via a proxy

sqlplus [PROXY]/@TNS_ALIAS
connect [PROXY]/@TNS_ALIAS
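
This assumes proxy authentication is already set up, i.e. the wallet user has been granted the right to connect through to the bracketed schema, with something like the following (placeholder names matching the example above):

-- run as a DBA: allow USERNAME (the wallet credential) to proxy into PROXY
ALTER USER PROXY GRANT CONNECT THROUGH USERNAME;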

To list the contents of the wallet

mkstore -wrl D:\SQL\auth -listCredential

To modify the contents of the wallet, e.g. if your password is changed

mkstore -wrl D:\SQL\auth -modifyCredential TNS_ALIAS USERNAME PASSWORD

To delete an entry from the wallet

mkstore -wrl D:\SQL\auth -deleteCredential TNS_ALIAS

More reading

https://docs.oracle.com/database/121/DBSEG/authentication.htm#DBSEG99815

https://blog.pythian.com/securing-oracle-monitoring-backup-scripts/

https://blog.pythian.com/oracle-secure-external-password-stores/

Long-running delete of synopses during table stats gathering

Oracle 11.2.0.4

I recently encountered one long-running session and several blocked sessions, all performing the same delete operation.

--SQL_ID dsf21kcbjqfyk
DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$ WHERE BO# = :B2 AND INTCOL# = :B1;

One session had been running for several hours, and the other sessions were all blocked by a row lock from the first session. Unfortunately, each of these sessions was trying to gather stats on new partitions of tables as part of various ETL processes, so being blocked for several hours was not ideal.

This SYS-owned table holds synopses of column statistics for partitioned tables, which Oracle uses to maintain incremental statistics. For more details on this feature, see here.
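
Incremental statistics are enabled per table through DBMS_STATS preferences; a typical (illustrative) setup, using placeholder owner and table names, looks like this:

exec DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'SALES_FACT', 'INCREMENTAL', 'TRUE');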

The table is a range-hash partitioned table consisting of the following columns :

  • BO# NUMBER (partition key)
  • GROUP# NUMBER (subpartition key)
  • INTCOL# NUMBER
  • HASHVALUE NUMBER

In my case, the table contains around 11 billion rows across 1,700 partitions, and up to 32 subpartitions per partition. The total size of the table on disk is around 500GB, and the table has no indexes. The database itself contains around 1,800 partitioned tables, and over 1 million partitions and subpartitions.
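
To see how big the synopsis table is in your own database, a quick check against DBA_SEGMENTS is enough:

SELECT ROUND(SUM(bytes)/1024/1024/1024) AS size_gb
FROM   dba_segments
WHERE  owner = 'SYS'
AND    segment_name = 'WRI$_OPTSTAT_SYNOPSIS$';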

The real-time SQL monitoring report from each session showed over 2,000 executions of the DELETE, suggesting it was being run from a PL/SQL FORALL statement. With each execution having to scan a whole partition, we were seeing over 1.5 billion buffer gets over a 4-hour period, which works out at around 100k buffer gets per second.
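
For anyone wanting to pull the same report, one way is to generate the text version from the SQL_ID above with DBMS_SQLTUNE:

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => 'dsf21kcbjqfyk', type => 'TEXT') FROM dual;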

The report also showed that all of the sessions had the same values for the bind variables. This explains why all but one were blocked by a row lock held by the one “active” session.

On a hunch, I took these values of the bind variables and matched BO# to the OBJECT_ID in DBA_OBJECTS and INTCOL# to the INTERNAL_COLUMN_ID in DBA_TAB_COLS. Funnily enough, the object ID matched an unrelated application table which, earlier in the day, had had a column dropped, and the column ID was one more than the remaining number of columns in the table. So it seems that every single stats gathering operation was trying to clean out the same “orphaned” synopses, and would not finish gathering stats until this operation was done.
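
The check itself is straightforward; a sketch of the sort of query used, with :b2 and :b1 standing in for the captured bind values, is:

-- a NULL column_name here means the column no longer exists for that object
SELECT o.owner, o.object_name, c.column_name
FROM   dba_objects o
LEFT JOIN dba_tab_cols c
       ON  c.owner = o.owner
       AND c.table_name = o.object_name
       AND c.internal_column_id = :b1
WHERE  o.object_id = :b2;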

A little peek inside the DBMS_STATS_INTERNAL package confirmed my suspicions – when gathering table stats, it fetches a list of all synopses where the table and column IDs are not in the data dictionary, and then runs a FORALL … DELETE.
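
In outline, the logic looks roughly like the following. This is a simplified sketch rather than the actual Oracle code, and the use of WRI$_OPTSTAT_SYNOPSIS_HEAD$ to drive the lookup is my assumption:

DECLARE
  TYPE t_num_tab IS TABLE OF NUMBER;
  l_bo     t_num_tab;
  l_intcol t_num_tab;
BEGIN
  -- find synopses whose table or column no longer exists in the dictionary
  SELECT DISTINCT h.bo#, h.intcol#
  BULK COLLECT INTO l_bo, l_intcol
  FROM   sys.wri$_optstat_synopsis_head$ h
  WHERE  NOT EXISTS (SELECT NULL FROM sys.obj$ o WHERE o.obj# = h.bo#)
  OR     NOT EXISTS (SELECT NULL FROM sys.col$ c
                     WHERE  c.obj# = h.bo# AND c.intcol# = h.intcol#);

  -- one DELETE execution per orphaned (bo#, intcol#) pair,
  -- which is what shows up as :B2 / :B1 in the cursor above
  IF l_bo.COUNT > 0 THEN
    FORALL i IN 1 .. l_bo.COUNT
      DELETE FROM sys.wri$_optstat_synopsis$
      WHERE  bo# = l_bo(i)
      AND    intcol# = l_intcol(i);
  END IF;
END;
/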

Fortunately, Oracle Support was able to quickly supply a patch (19450139) which fixed the issue by bypassing the DELETE operation in DBMS_STATS_INTERNAL. I wonder now whether any other process is going to clean up the orphaned synopses, or whether they will simply remain in the table forever.

In conclusion then, I learnt a few things from this :

  1. The WRI$_OPTSTAT_SYNOPSIS$ table contains table/partition/column-level synopsis data. There is one partition per table, with the table’s partitions distributed by hash across up to 32 subpartitions, and each column identified by an unindexed numeric column. The volume of data in this table is therefore probably a function of the number of tables, partitions and columns.
  2. Without the patch in place, dropping columns on one partitioned table can impact the ability to gather stats on unrelated partitioned tables. Every time GATHER_TABLE_STATS was run on a partitioned table, it tried to delete the same rows, creating significant row-lock contention for other sessions.
  3. The execution counter in the top row of the real-time SQL monitoring report shows the number of times the operation has been called by the FORALL statement.
  4. Having to read through billions of rows is going to take time, even with fast I/O and a 50GB buffer cache.
  5. DBMS_STATS_INTERNAL is surprisingly readable, and it is interesting to see how an Oracle patch can consist of just changing a few lines in a package.

EZCONNECT without a password

After encountering this little annoyance for the nth time, I thought I should write it up once and for all, so it is committed to my electronic memory.

The EZCONNECT naming method is a handy way of connecting to an Oracle database by its service name, which avoids the need for aliases in the TNSNAMES file, or lengthy SQL*Net connection strings :

SQL> connect user/password@hostname:port/service_name
Connected.

However, you would expect that if you don’t want to specify the password on the command line, you could simply omit it and wait for the prompt. But…

SQL> connect user@hostname:port/service_name
ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

Without the password present, the command interpreter parses the string differently, and (according to Oracle Metalink note 267427.1) treats the service_name as the password and the hostname as the service_name. Basically it’s not handling the syntax very well.

The solution is to quote the part of the connection string after the @ symbol. The actual quoting required may vary by environment, but something along the following lines works in Windows.

SQL> connect user@'hostname:port/service_name'
Enter password:
Connected.