ORACLE DATABASE ON AMAZON AWS

Cloud is the trending topic in the data management area. It was “Big Data” before, but that term became almost infamous, mostly because decision makers inside companies (traditional or startups) didn’t know what to do with so many buzz tools, success articles, Google-sized use cases, Gartner magic quadrants, vendor pressure and non-existent business use cases.
Things are clearer now in the “Big Data” space: there is a clear understanding that it will answer some business questions and future needs, and that it has to be fully integrated with the “old” technologies. Integration and “playing together”, big and small data.

Cloud is another beast. It is mainly about agility, easy scaling and cost, and decision makers know exactly what this means for their companies. It is clear as water for them.

Some traditional companies are moving parts of their workloads (the less critical parts) to the cloud, while still keeping a big portion of the workloads in their on-premises data centers. Startups were born in the cloud, so there is no other obvious choice for them.
Oracle Database remains very present in traditional companies, assuring critical OLTP and OLAP workloads, and is consequently present in all stages of development, test and production. Some of these stages and workloads will be forced to move to the cloud. From here on, things become complex if you don’t pick Oracle Cloud and decide (for whatever reason) to go with the IaaS/PaaS leader: Amazon AWS.

First, it is helpful to check Oracle’s position on this, which is stated in “Oracle Database Support for Amazon AWS EC2 (Doc ID 2174134.1)” on MOS:

  • Single instance (no RAC) is supported on Amazon AMIs on top of OEL 6.4 or later (EC2 only)
  • No support for Oracle RAC at all
  • No support for Oracle Multitenant in Oracle 12c
  • No support for Oracle RDS, even though it is single instance only.

While single instance is supported inside EC2, RAC is not, and Oracle has a detailed document about “Third Party Clouds and Oracle RAC”:  http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-support-2843861.pdf

In this document Oracle states that RAC is not supported for two reasons, the lack of shared storage and missing required network capabilities, and it justifies both.

  • Lack of shared storage: Amazon AMI images allow bypassing the EBS limitation on shared storage (concurrent access) by using iSCSI and building a NAS to “emulate” shared storage. As Oracle states, there is of course a performance impact on I/O, as another layer is built to “emulate” shared storage, so Amazon recommends large AMI instances, stating the following here:

“In order to provide high I/O performance, we chose to build the NAS instances on EC2’s i2.8xlarge instance size, which has 6.25 terabytes of local ephemeral solid state disk (SSD). With the i2.8xlarge instances, we can present those SSD volumes via iSCSI to the RAC nodes”

Another side effect of the Amazon AMI images is that they rely on “ephemeral” storage, which literally means: “it is persistent for the life of the instance.” You will have data loss if your NAS instances fail, and Amazon is aware of this, stating the following in the same document:

“The SSD volumes that come with the i2.8xlarge instance are ephemeral disk. That means that upon stop, termination or failure of the instance, the data on those volumes will be lost. This is obviously unacceptable to most customers, so we recommend deploying two or three of these NAS instances, and mirroring the storage between them using ASM Normal or High redundancy”

Oracle, of course, doesn’t find this a very good solution (let’s face it, it is not), as the danger of data loss is real.
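
If you go down this route, the mirroring that Amazon describes would translate into an ASM disk group with one failure group per NAS instance. Below is a minimal sketch; the disk group name and iSCSI device paths are illustrative assumptions, not taken from the AWS tutorial:

-- Sketch only: each NAS instance's LUNs form their own failure group, so ASM
-- NORMAL redundancy keeps a mirrored copy on the surviving NAS if one is lost.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP nas1 DISK '/dev/iscsi/nas1_lun1', '/dev/iscsi/nas1_lun2'
  FAILGROUP nas2 DISK '/dev/iscsi/nas2_lun1', '/dev/iscsi/nas2_lun2';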

  • Required network capabilities: The network on EC2 does not support multicast IP, which RAC needs in order to broadcast packets during cluster configuration and reconfiguration.
    Amazon provided a workaround for this issue: a point-to-point VPN among the RAC nodes using ntop’s N2N (a discontinued product).
    Of course network performance will suffer on top of this solution, and we know how Oracle RAC deals with bad network performance on the interconnect (the popular “gc” wait events).
    That said, Amazon is well aware of this, stating:

    “Performance of network communications via N2N is significantly lower than non-VPN traffic over the same EC2 network. In addition, Ntop, the makers of N2N, are no longer developing the N2N product. These factors may preclude running production-class RAC workloads over N2N. Currently, we are developing new approaches using clusterware-managed dynamic creation of GRE tunnels to serve the cluster’s multicast needs. We will provide details on this approach and revise this tutorial accordingly before August 2016.”

    Still no news at the date of this post (as far as we have investigated).

The conclusion is that these two major workarounds provided by Amazon AWS should be carefully evaluated if you decide to deploy a RAC cluster inside AWS. Things will eventually improve on Amazon if the demand for deploying Oracle RAC is high, but for now please be very careful on this topic.

Articles:
https://aws.amazon.com/articles/7455908317389540

http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-support-2843861.pdf

Article also published here: http://redglue.eu/oracle-database-on-amazon-aws/

SLOB LIO against Oracle on AIX with different page sizes

1 – Introduction

We all know the possible benefits of using Huge Pages (or Large Pages, depending on the platform). It is simply a method to have a larger page size, which is useful when working with very large memory allocations. As you already know how things flow in the Linux kernel, I will just give you a brief overview of how things work in another operating system: AIX.
The idea behind it is exactly the same as on any other operating system that supports different page sizes, but AIX (AIX 6.1 and POWER6+) added one feature called Dynamic Variable Page Size Support (VPSS). As you know, pages are simply fixed-length data blocks in virtual memory, so the AIX VMM can dynamically use larger page sizes based on the “application memory workload”. The idea is very good, as the feature is transparent to applications, reduces the number of hardware address translations done, and the effort to implement it is close to nothing.
However, POWER6+ processors only support dynamically mixing 4 KB and 64 KB page sizes, not 16M or other sizes (16G is also available).
To know the different page sizes available for your AIX system:

$ pagesize -a
4096 (4K)
65536 (64K)
16777216 (16M)
17179869184 (16G)

2 – The basics

To exemplify this let’s pick a POWER6 CPU running AIX 6.1 and Oracle 11.2.0.2 (RAC):

$ lsattr -El proc0
frequency   4400000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 2              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER6 Processor type        False

The first thing to make sure of is what page sizes are currently in use. vmstat will output 2 different page sizes in use: 4K and 64K.

$ vmstat -P all

System configuration: mem=49152MB
pgsz            memory                           page
----- -------------------------- ------------------------------------
           siz      avm      fre    re    pi    po    fr     sr    cy
   4K  3203552  2286288   324060     0     0     0    55    227     0
  64K   586210   605917     1180     0     0     0     0      0     0

Let’s now see what page size is used by Oracle. We will use svmon, which captures and analyzes the virtual memory allocation. The option “-P” will allow us to ask only for the PID associated with the SMON background process.


$ svmon -P $(ps -elf | egrep ora_smon_${ORACLE_SID} | grep -v egrep | awk '{print $4}') | grep shmat
 14c8d31  7000000d work default shmat/mmap           m   4096     0    4    4096
 147cf12  70000011 work default shmat/mmap           m   4096     0  655    4096
 15a1164  70000042 work default shmat/mmap           m   4096     0 1732    4096
 13648df  70000051 work default shmat/mmap           m   4096     0 1452    4096
 145cd1b  70000018 work default shmat/mmap           m   4096     0  880    4096
 1468717  70000006 work default shmat/mmap           m   4096     0  739    4096
 168ffae  70000038 work default shmat/mmap           m   4096     0 1131    4096
 17323cf  70000048 work default shmat/mmap           m   4096     0  601    4096
 15a1b64  70000039 work default shmat/mmap           m   4096     0 1015    4096
 13862ec  7000004f work default shmat/mmap           m   4096     0  836    4096
 1664798  7000001d work default shmat/mmap           m   4096     0 1713    4096
 115ec5a  7000002d work default shmat/mmap           m   4096     0 1474    4096
 13dbafb  70000058 work default shmat/mmap           m   4096     0 1271    4096
 1221a85  70000052 work default shmat/mmap           m   4096     0 1341    4096
 1753fd9  7000003f work default shmat/mmap           m   4096     0  728    4096

The column “m” shows that Oracle is asking for “medium size” pages (64K) only.
A better way of checking this is to use the following svmon options (1306770 is the PID of SMON):

$ svmon -P 1306770 -O sortentity=pgsp,unit=auto,pgsz=on
Unit: auto
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual
 1306770 oracle           27,2G    99,4M    7,81G    28,1G

     PageSize                Inuse        Pin       Pgsp    Virtual
     s    4 KB             275,38M         0K      48,7M      60,2M
     m   64 KB               26,9G      99,4M      7,76G      28,0G
-------------------------------------------------------------------------------

SQL> select sum(BYTES)/1024/1024/1024 as SGA_SIZE from V$SGASTAT;

  SGA_SIZE
----------
28,3113727

As you can see, the value allocated to virtual memory (column Virtual) for the 64K page size is 28,0G, which should be very similar to the SGA allocated to Oracle.
The 64K page size also reduces the TLB miss rate, and it clearly benefits performance compared to the typical 4KB pages. Besides this, no configuration is required at all, as the AIX kernel automatically allocates 64KB pages for shared memory (SGA) regions (and process data and instruction text if requested).
Another thing that you should know, visible in the “Pin” column of the svmon output, is that the 64K (or 4K) pages are not pinned. This is because:
– Your Oracle LOCK_SGA parameter is set to FALSE, so your SGA is not locked into physical memory.
– It is recommended not to pin pages of regular size (4K or 64K), due to the complex nature of pinning, which can cause serious problems. Pin the SGA into physical memory only with the 16M page size (or 16G?); a quick check is sketched below.
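
A minimal sketch of that check, assuming the defaults described above (LOCK_SGA is the standard parameter; changing it requires an instance restart and should only be done together with 16M pages):

SQL> show parameter lock_sga

-- only advisable together with 16M pages (next section); takes effect after restart
SQL> alter system set lock_sga=true scope=spfile;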

3 – The 16M pages question

Every time you use large pages to map virtual memory, the TLB is able to cover more virtual memory with a much lower TLB miss rate. This is why Large Pages (or Huge Pages on Linux) are a common topic and a best practice in both OLTP and DW environments.
There are several documents that say to forget 16M, just use the regular 64K and let VPSS take care of promoting the page size requested by the application (Oracle in this case). So, as you can imagine, the question is: does a 16M page size benefit Logical I/O performance? Let’s SLOB it!!

4 – SLOB for 64K vs 16M page size

To test this, I’ve decided to use a SLOB workload and measure the impact on CPU and LIO of the 64k page size first. SLOB is a very useful utility written by Kevin Closson (http://kevinclosson.net/slob/) that can be used in a variety of scenarios, including PIO and LIO testing as well as CPU analysis (in the end everything is a CPU problem ;).
The hardware lab includes POWER7 processors and AIX 6.1 (6100-08-02-1316) with a CPU_COUNT=24.

4.1 – The setup

1 – Create a new database called SLOB (or use SLOB create_database_kit).

This step includes a few interesting details about the database, mainly the SGA size of 30G and a db_cache_size left at its default, managed by Oracle. That will be very handy to test the impact of allocating different page sizes.

*.sga_max_size=32212254720
*.sga_target=32212254720
*.pga_aggregate_target=4294967296
*.processes=1500

2 – Setup SQL*Net connectivity – Remember that we are running SLOB inside Linux, connecting to an AIX database system.

# Settings for SQL*Net connectivity:
ADMIN_SQLNET_SERVICE=SLOB
SQLNET_SERVICE_BASE=SLOB
#SQLNET_SERVICE_MAX=2
SYSDBA_PASSWD=oracle

3 – Confirming that Oracle is “asking” the AIX kernel for 64k pages. Check column “m”, which stands for medium page size (64k).

$ svmon -P $(ps -elf | egrep ora_smon_SLOB | grep -v egrep | awk '{print $4}') | grep shmat
  ac40ac  7000007c work default shmat/mmap           m   4096     0    0    4096
  8adf8a  7000006a work default shmat/mmap           m   4096     0    0    4096
  8fe14f  70000063 work default shmat/mmap           m   4096     0    0    4096
  b152b1  70000047 work default shmat/mmap           m   4096     0    0    4096

Testcase rules:

– Oracle 11.2.0.2 was used (the version that I really needed to test)
– Buffer Hit should always be 100% in the Instance Efficiency AWR section
– The tests for each page size include waiting 10 minutes between each page size run
– The following order was used for each testcase:

For 64k page size:

0 – Reboot server
1 – Run SLOB: ./runit.sh 20 to populate Oracle buffer cache
2 – Run SLOB: ./runit.sh 20 and save AWR data.
3 – Run SLOB: ./runit.sh 20 and save AWR data.
4 – Run SLOB: ./runit.sh 20 and save AWR data.

For 16M page size:

5 – Reboot server to free contiguous 16M memory chunks
6 – Setup Large Pages (16M) for Oracle in AIX
7 – Run SLOB: ./runit.sh 20 to populate Oracle buffer cache
8 – Run SLOB: ./runit.sh 20 and save AWR data.
9 – Run SLOB: ./runit.sh 20 and save AWR data.
10 – Run SLOB: ./runit.sh 20 and save AWR data.

Testcase #1 – LIO in 64k vs 16M page size with small dataset

The first test intends to compare the use of medium pages (64K) and large pages (16M) and the impact on Logical I/O using a small dataset (approx. 7GB). Please note that the test doesn’t pretend to find the “maximum value” for your Logical I/O, but instead compares 3 “equal” SLOB runs with different page sizes. As this test runs with only 20 active sessions, the CPUs are not totally busy and idle time is also present (CPU_COUNT=24).

Testcase #1 – slob.conf

UPDATE_PCT=0
RUN_TIME=300
WORK_LOOP=0
SCALE=50000
WORK_UNIT=256
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=4
SHARED_DATA_MODULUS=0

Testcase #1 – SLOB Dataset (20 schemas)

$ ./setup.sh IOPS 20
NOTIFY  : 2015.03.26-16:04:43 :
NOTIFY  : 2015.03.26-16:04:43 : Begin SLOB setup. Checking configuration.
...
NOTIFY  : 2015.03.26-16:07:32 : SLOB setup complete (169 seconds).

Testcase #1 – Run it (64k vs 16M)

As a rule for the runs, I’ve decided to populate the buffer cache to get a Buffer Hit of 100%, avoiding interference from Physical I/O. That will only happen after the 2nd run of SLOB, as the first run partly serves as buffer cache warm-up.

$ ./runit.sh 20
NOTIFY : 2015.04.01-14:04:39 :
NOTIFY : 2015.04.01-14:04:39 : Conducting SLOB pre-test checks.
NOTIFY : 2015.04.01-14:04:39 : All SLOB sessions will connect to SLOB via SQL*Net
...
NOTIFY : 2015.04.01-14:10:02 : Terminating background data collectors.
./runit.sh: line 589: 24771 Killed                  ( iostat -xm 3 > iostat.out 2>&1 )
./runit.sh: line 590: 24772 Killed                  ( vmstat 3 > vmstat.out 2>&1 )
./runit.sh: line 590: 24773 Killed                  ( mpstat -P ALL 3 > mpstat.out 2>&1 )
NOTIFY : 2015.04.01-14:10:12 : SLOB test is complete.

After the 64K runs, large pages were configured for the 16M runs (1921 regions of 16 MB, roughly the 30G SGA plus a little headroom):

# vmo -p -o lgpg_regions=1921 -o lgpg_size=16777216
...
Setting lgpg_size to 16777216
Setting lgpg_regions to 1921

$ export ORACLE_SGA_PGSZ=16m
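
As a quick sanity check on the vmo values above, the number of 16M regions can be derived from sga_max_size. This is just illustrative arithmetic; the extra region configured above is headroom:

-- 32212254720 bytes of SGA / 16777216 bytes per 16M page
select ceil(32212254720 / 16777216) as lgpg_regions_needed from dual;
-- returns 1920; the vmo command above configured 1921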

$ svmon -P $(ps -elf | egrep "ora_smon_SLOB" | grep -v egrep | awk '{print $4}') | grep shmat
  8f0a4f  7000005c work default shmat/mmap           L     16    16    0      16
  8e0a4e  70000061 work default shmat/mmap           L     16    16    0      16
  bb077b  7000002e work default shmat/mmap           L     16    16    0      16
  ad072d  7000002b work default shmat/mmap           L     16    16    0      16
  b60836  7000002d work default shmat/mmap           L     16    16    0      16
  8d0a4d  7000005e work default shmat/mmap           L     16    16    0      16
  bf097f  7000000a work default shmat/mmap           L     16    16    0      16
  9d065d  70000029 work default shmat/mmap           L     16    16    0      16
  be097e  70000002 work default shmat/mmap           L     16    16    0      16
  bd097d  70000010 work default shmat/mmap           L     16    16    0      16

The column with value “L” shows that Large Pages of 16M are actually being used by Oracle.

Testcase #1 – The Results

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:  100.00    In-memory Sort %:  100.00
            Library Hit   %:  101.20        Soft Parse %:   94.83
         Execute to Parse %:   99.98         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:    0.00     % Non-Parse CPU:  100.00

[Chart: Testcase #1 – LIO results]

On average over 3 runs with each page size, the 16M page size shows a very small improvement of less than 2% (1,9%).

Testcase #2 – LIO in 64k vs 16M page size with larger SLOB dataset

The same rules apply to this test case; the only difference is the SCALE used in SLOB, which is bigger but still able to fit in the Oracle buffer cache. As every block still comes from the Oracle buffer cache, the OS needs to know the status of every page allocated to the Oracle SGA and perform the so-called address translation. In this test case SLOB ran with 20 sessions, just like in the first test case.

Testcase #2 – The bigger SLOB SCALE run

This will result in about 23G of data across 20 different schemas.

UPDATE_PCT=0
RUN_TIME=300
WORK_LOOP=0
SCALE=150000
WORK_UNIT=256
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=4
SHARED_DATA_MODULUS=0

$ ./runit.sh 20
NOTIFY : 2015.04.01-14:04:39 :
NOTIFY : 2015.04.01-14:04:39 : Conducting SLOB pre-test checks.
NOTIFY : 2015.04.01-14:04:39 : All SLOB sessions will connect to SLOB via SQL*Net
...
NOTIFY : 2015.04.01-14:10:02 : Terminating background data collectors.
./runit.sh: line 589: 24771 Killed                  ( iostat -xm 3 > iostat.out 2>&1 )
./runit.sh: line 590: 24772 Killed                  ( vmstat 3 > vmstat.out 2>&1 )
./runit.sh: line 590: 24773 Killed                  ( mpstat -P ALL 3 > mpstat.out 2>&1 )
NOTIFY : 2015.04.01-14:10:12 : SLOB test is complete.

Testcase #2 – The Results

[Chart: Testcase #2 – LIO results]

On average over 3 runs, there is no improvement from using the 16M page size, with a difference of less than 1% between the two page sizes. For this SLOB workload, we can conclude that the 64k and 16M page sizes showed the same results.

Testcase #3 – LIO in 64k vs 16M page size with CPU pressure

The same rules apply to this test case as to the other two, as the Buffer Hit should be 100%. But this time we run under CPU starvation, with 40 concurrent sessions on a CPU_COUNT=24. To make sure that all blocks come from the Oracle buffer cache, the SLOB SCALE was reduced to 80000.

Also, the overhead of managing large page tables shows up mostly as CPU usage. This increase in kernel-mode CPU usage will eventually hurt your Logical I/O numbers: CPU is wasted managing page table translations instead of being given to Oracle to process your workload. That CPU overhead, plus the page faults that sometimes happen, will lead to less-than-good Oracle performance. The bottom line here is simple: in theory, the 16M page size should provide better results under CPU pressure.

Testcase #3 – CPU pressure and starvation

$ ./runit.sh 40
NOTIFY : 2015.04.02-16:44:34 :
NOTIFY : 2015.04.02-16:44:34 : Conducting SLOB pre-test checks.
NOTIFY : 2015.04.02-16:44:34 : All SLOB sessions will connect to SLOB via SQL*Net
NOTIFY:
UPDATE_PCT == 0
RUN_TIME == 300
WORK_LOOP == 0
SCALE == 80000
WORK_UNIT == 256
ADMIN_SQLNET_SERVICE == "SLOB"
SQLNET_SERVICE_MAX == "0"
...

Testcase #3 – The Results

[Chart: Testcase #3 – AWR results]

Looks good! A difference, on average over 3 runs, of more than 11% in favor of the 16M page size. This shows that under CPU pressure and possible starvation the benefits outweigh the work required to set up the 16M page size on AIX 6/7.

To make sure that these results were OK to publish, I’ve done numerous SLOB runs with 64k and 16M page sizes and the results were consistent: benefits between 9% and 12% with the CPU under a lot of pressure.

Conclusion

– The 16M page size on AIX (and probably on other operating systems) will provide better Logical I/O performance when the CPUs are under pressure. The benefits ranged between 9% and 12% when using the 16M page size.
– These results may differ from your own conclusions or tests, because your workload is different and the results will inevitably be different.

Profiling DB Resource Manager – VKRM process

It seems that VKRM is a deeply unknown background process. I did a little investigation that will help to better understand the mechanics of profiling Oracle (thank you, Frits Hoogland) and a little more about one of the most underestimated features of Oracle: Resource Manager.

VKRM manages the CPU scheduling for all Oracle processes, including the CPU scheduling for the Database Resource Manager. Your active DBRM plan (parameter resource_manager_plan) is subject to VKRM’s job of ensuring that all your plan directives are fulfilled.
VKRM is a special background process, because it simply goes away when it is not needed (at least in 11gR2), and every time your Resource Manager CPU scheduling kicks in, the DBRM process will spawn VKRM again. Please note that DBRM is the “main” process for all Resource Manager tasks; VKRM is only for CPU scheduling.

There is no documentation explaining how VKRM works in detail, so what is left for us is to try some profiling and reach some (basic?) conclusions.

The first thing about VKRM is that you simply can’t control its behavior…except that a hidden parameter called _vkrm_schedule_interval exists, which is basically the VKRM schedule interval (surprise, surprise) and is set to 10 milliseconds by default:

SQL> @phidden _vkrm

KSPPINM 					   KSPPSTVL
-------------------------------------------------- --------------------------------------------------
_vkrm_schedule_interval 			   10
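
For reference, a phidden-style script is roughly equivalent to the following query against the underlying x$ tables (run as SYS). This is a sketch of the typical hidden parameter lookup, not necessarily the exact script used above:

select i.ksppinm name, v.ksppstvl value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '\_vkrm%' escape '\';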

This is easily confirmed by running strace on the PID corresponding to the VKRM background process:

[oracle@baco scripts]$ ps -ef | grep ora_vkrm
oracle    2566     1  0 Nov01 ?        00:00:25 ora_vkrm_bacodb1
oracle    8965  7296  0 01:02 pts/3    00:00:00 grep ora_vkrm

[root@baco scripts]# strace -p 2566 -o ora_vkrm_strace.out
Process 2566 attached - interrupt to quit
^CProcess 2566 detached

The result is a bunch of nanosleep() calls into the Linux kernel; nanosleep() suspends the execution of the calling thread until at least the specified time (10000000 nanoseconds) has elapsed. In simple words, it sleeps for 10 milliseconds at a time. On a successful sleep, nanosleep() returns 0.

nanosleep({0, 10000000}, 0x7fff271b1160) = 0
nanosleep({0, 10000000}, 0x7fff271b1160) = 0
nanosleep({0, 10000000}, 0x7fff271b1160) = 0

A small change of _vkrm_schedule_interval to 5000 milliseconds results in a different argument to the nanosleep() call and a different period (every 5 seconds).
This will probably change the behavior of VKRM and of the CPU scheduling: the greater the value, the less precise your scheduling will be. As you can see in the strace output, it is possible to change _vkrm_schedule_interval while the database is running (scope=memory) and it takes immediate effect on the scheduling behavior:

SQL> alter system set "_vkrm_schedule_interval"=5000 scope=memory;
SQL> alter system set "_vkrm_schedule_interval"=6000 scope=memory;
nanosleep({0, 10000000}, 0x7fff271b1160) = 0
nanosleep({0, 10000000}, 0x7fff271b1160) = 0
nanosleep({5, 0}, 0x7fff271b1160)       = 0
nanosleep({5, 0}, 0x7fff271b1160)       = 0
nanosleep({5, 0}, 0x7fff271b1160)       = 0
nanosleep({6, 0}, 0x7fff271b1160)       = 0

Trace files will also reveal your change:

*** 2014-11-02 04:02:43.992
kskvkrmschedintmod: setting VKRM scheduling interval from (6000)ms to [(10)ms (10000)us]
*** 2014-11-02 04:11:53.078
kskvkrmschedintmod: setting VKRM scheduling interval from (10)ms to [(5000)ms (5000000)us]
kskvkrmschedintmod: setting VKRM scheduling interval from (5000)ms to [(10)ms (10000)us]

Another chapter in profiling the VKRM process is to use perf on Linux to see if we can find more interesting stuff. Below is the result of a perf report against the VKRM process. The top 3 are three different kernel-mode functions: __do_softirq, finish_task_switch and _raw_spin_unlock_irqrestore.
Most of the work is done in kernel mode, with Linux kernel software interrupts (softirq) and scheduler functions (finish_task_switch) allowing the high-precision CPU scheduling made by VKRM.
Another thing worth mentioning is the user-mode Oracle function kskvkrmmain, representing only 3.03% of all work done by VKRM.

[root@baco outputs]# perf record -g -p 2542 -e cpu-clock
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 0.451 MB perf.data (~19697 samples) ]

[oracle@baco outputs]$ perf report
[vdso] with build id 553f611ad979d16f78a66945dca52ba113827329 not found, continuing without symbols
...
 39.24%  ora_vkrm_bacodb  [kernel.kallsyms]   [k] __do_softirq
                 -- 99.05%-- do_nanosleep
...
34.31%  ora_vkrm_bacodb  [kernel.kallsyms]   [k] finish_task_switch
...
14.22%  ora_vkrm_bacodb  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore
...
3.03%  ora_vkrm_bacodb  oracle              [.] kskvkrmmain
            |
            --- kskvkrmmain
                ksbrdp
    ...

1.25%  ora_vkrm_bacodb  oracle              [.] sltrusleep
            |
            --- sltrusleep
                kskvkrmmain
   ...

Another approach is oradebug, to understand what kind of events happen related to VKRM:

SQL> oradebug setospid 2542
Oracle pid: 10, Unix process pid: 2542, image: oracle@baco (VKRM)
SQL>  oradebug unlimit
Statement processed.
SQL> oradebug event 10046 trace name context forever, level 8;
Statement processed.

*** 2014-11-09 14:06:38.559
WAIT #0: nam='latch free' ela= 21980 address=2722482696 number=467 tries=0 obj#=-1 tim=6866775549

*** 2014-11-09 14:09:41.598
WAIT #0: nam='latch free' ela= 31774 address=2722482696 number=467 tries=0 obj#=-1 tim=7049814301

The only event happening in this trace is the latch free wait event. It is possible to identify which latch is related to this latch free wait with a simple query (see below). The latch is obviously related to Resource Manager CPU scheduling.

SQL> select latch#, name from v$latchname where latch# = 467;

    LATCH# NAME
---------- ----------------------------------------------------------------
       467 resmgr:resource group CPU method

This post has no great conclusions; it is just a pure exercise to understand a little more about a deeply unknown Oracle background process.

Resource Manager – CPU allocation math – Part 3

This is the last post of this mini-series regarding CPU allocation in Resource Manager. The idea behind this last post is very simple: trace the same test case we’ve used before and analyze the trace files. This will let us understand how Oracle instrumentation works when DBRM is active and managing the CPU.
Please note that we are going to trace only one service, which is perfectly enough for our testing.

Changing our cpu_alloc_burn.sql to enable tracing with the 10046 event, using the prefix ‘DBRM_TRACE’ for our trace files:

SET TERMOUT OFF
alter session set tracefile_identifier='DBRM_TRACE';
alter session set events '10046 trace name context forever, level 12';
select distinct t1.N2 from t1, t2
where t1.N1<>t2.N2
and t1.N3<>t2.N1
and t1.N2 <> t2.N1
and t2.N2 is not null;

[oracle@phoenix resource_manager]$  ./run_adhoc.sh
Starting 20 new executions for S_ADHOC service with tracing...

Now we have 20 new sessions connected to the service name S_ADHOC and consumer group ADHOC_QUERYS. The first thing that we will notice before digging into trace files is the wait event resmgr:cpu quantum:


      SID STATUS   RESOURCE_CONSUMER_GROUP	     SERVICE_NA EVENT
---------- -------- -------------------------------- ---------- ------------------------------
	22 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	24 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	26 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	28 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	29 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	32 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	34 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	35 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
	38 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       134 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       136 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum

       SID STATUS   RESOURCE_CONSUMER_GROUP	     SERVICE_NA EVENT
---------- -------- -------------------------------- ---------- ------------------------------
       143 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       148 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       150 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       151 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       152 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       156 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       157 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       159 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum
       162 ACTIVE   ADHOC_QUERYS		     S_ADHOC	resmgr:cpu quantum

This wait event basically states that a session exists and is waiting for the allocation of a quantum of CPU. It is basically DBRM doing its job, throttling CPU allocation until it conforms to the plan directives we have defined. It is then obvious that if you want to reduce the prevalence of this wait event (AWR will help you check that), you have to increase your CPU allocation (your plan directives) to avoid waiting so much on it.
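
A quick instance-wide check of how much time has been spent on this event so far can be done with a simple query (a sketch against v$system_event):

select event, total_waits, time_waited_micro / 1000000 as seconds_waited
  from v$system_event
 where event = 'resmgr:cpu quantum';
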
Another way (and the best one, since it gives you a lot of information) is to check the trace files that we generated before:

*** 2014-06-13 17:06:39.844
WAIT #140096016814088: nam='resmgr:cpu quantum' ela= 807849 location=2 consumer group id=88620  =0 obj#=88623 tim=1402675599844408
WAIT #140096016814088: nam='Disk file operations I/O' ela= 5589 FileOperation=2 fileno=0 filetype=15 obj#=88623 tim=1402675599854817

*** 2014-06-13 17:06:40.778
WAIT #140096016814088: nam='resmgr:cpu quantum' ela= 821271 location=3 consumer group id=88620  =0 obj#=88623 tim=1402675600778500

*** 2014-06-13 17:06:41.736
WAIT #140096016814088: nam='resmgr:cpu quantum' ela= 917063 location=3 consumer group id=88620  =0 obj#=88623 tim=1402675601736754

*** 2014-06-13 17:06:42.605
WAIT #140096016814088: nam='resmgr:cpu quantum' ela= 859088 location=3 consumer group id=88620  =0 obj#=88623 tim=1402675602605611

*** 2014-06-13 17:06:43.612
WAIT #140096016814088: nam='resmgr:cpu quantum' ela= 905964 location=3 consumer group id=88620  =0 obj#=88623 tim=1402675603612339
WAIT #140096016814088: nam='direct path read' ela= 1332 file number=4 first dba=16130 block cnt=62 obj#=88623 tim=1402675603682243

Some interesting info here:

ela – Amount of time in microseconds that the session spent waiting for a CPU quantum allocation. If we sum everything (all the microseconds) we get the total time that the session was “out of CPU”;
consumer group id – The consumer group id; maps to the DBA_RSRC_CONSUMER_GROUPS view;
obj# – The object that is part of the wait itself. In our case it is a table. Maps directly to the DBA_OBJECTS view. (A quick lookup sketch follows below.)
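
To translate those identifiers into names, a couple of dictionary lookups are enough. This is only a sketch: 88620 and 88623 are the values from the trace above, and it assumes v$rsrc_consumer_group exposes the group id in its ID column:

select name from v$rsrc_consumer_group where id = 88620;
select owner, object_name, object_type from dba_objects where object_id = 88623;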

Of course, if we use tkprof to help us, we can get a broader picture, showing that one of our 20 sessions waited 391,34 seconds during its lifetime and waited a maximum of 1,10 seconds for a single CPU quantum allocation.

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
  cursor: pin S wait on X                         1        0.14          0.14
  resmgr:cpu quantum                            511        1.10        391.34
  Disk file operations I/O                        4        0.00          0.01
  direct path read                              105        0.30          0.96

Conclusions:

– Use math to correctly define your CPU allocation in DBRM plans, and be careful with over- and under-allocations, as they impact your database performance.
– Always try to test your DBRM implementation before going live. Complex plans can be tricky to test, and if you can’t measure the impact you can be in trouble. Trial and error is not a problem when you are not live.
– Understand how DBRM works! DBRM is a complex beast and I hope that this mini-series can help with that.

Resource Manager – CPU allocation math – Part 2

As said in Part 1, Part 2 focuses on measuring how Oracle effectively redistributes the CPU allocation defined in the previous part. This is an important step: while testing Resource Manager, it is very important to test your plans to ensure that Oracle’s behavior matches what you are expecting.

Measuring CPU allocation is not an easy task; fortunately Oracle provides us with some views related to Resource Manager to help with it.
First things first: after creating the consumer groups, usernames, roles, plan and plan directives, it is mandatory to tell Oracle which plan we are going to use. For that, use the parameter resource_manager_plan:

SQL> alter system set resource_manager_plan='DW_PLAN' scope=both;
System altered.

To measure the CPU allocation, it is necessary to create some heavy CPU-loading tasks that use specific database services (matching the consumer group mappings we defined). I’ve created 3 simple scripts that burn CPU for each service. The statement is heavily CPU-oriented, as you can see below:

[oracle@phoenix resource_manager]$  cat cpu_alloc_burn.sql
SET TERMOUT OFF
select distinct t1.N2 from t1, t2
where t1.N1<>t2.N2
and t1.N3<>t2.N1
and t1.N2 <> t2.N1
and t2.N2 is not null;

To fire up the 20 sessions for each service, a small script is used. Below is an example for the database service S_DAILY_LOAD:

echo "Starting 20 new executions in S_DAILY_LOAD service"
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &
sqlplus -s dw_user/dw_user@S_DAILY_LOAD @cpu_alloc_burn.sql &

[oracle@phoenix resource_manager]$  ./run_adhoc.sh
Starting 20 new executions for S_ADHOC service...
[oracle@phoenix resource_manager]$  ./run_daily_load.sh
Starting 20 new executions in S_DAILY_LOAD service
[oracle@phoenix resource_manager]$  ./run_reporting.sh
Starting 20 new executions for S_REPORTING service...

Now that we are burning our CPU heavily, let’s check an Oracle view to ensure that the sessions are in the correct resource consumer group.


SQL> select distinct username, resource_consumer_group, service_name from gv$session
where resource_consumer_group in ('ADHOC_QUERYS', 'DAILY_LOAD', 'REPORTING')
and status= 'ACTIVE'
order by resource_consumer_group;

USERNAME		       RESOURCE_CONSUMER_GROUP		SERVICE_NAME
------------------------------ -------------------------------- ----------------------------------------------------------------
DW_USER 		       ADHOC_QUERYS			S_ADHOC
DW_USER 		       DAILY_LOAD			S_DAILY_LOAD
DW_USER 		       REPORTING			S_REPORTING

Everything looks good; now it is time to measure CPU activity based on the Resource Manager view v$rsrc_consumer_group:

SQL> SELECT name,
       active_sessions, execution_waiters, requests,
       cpu_wait_time, cpu_waits, consumed_cpu_time, yields
  FROM v$rsrc_consumer_group
ORDER BY cpu_wait_time;

[Screenshot: v$rsrc_consumer_group query results]

Now it is time for math. The result of the previous query was taken after about 5 minutes of running, and the math shows us that Resource Manager is not yet “respecting” our CPU allocation. As said before, we are probably unable to get a perfect match, only a close one.

Desired scenario:

[Figure: Resource Manager Real CPU allocation]

Real world scenario:

[Figure: Resource Manager CPU math]

To try to get better numbers, I let the sessions run under DBRM for several more minutes. The numbers are a little better, but still not a perfect match with what we defined in the first part.
As you can easily see, DAILY_LOAD is consuming 85% of our CPU time according to Oracle, versus the 65% we had specified. The same happens with REPORTING and ADHOC_QUERYS, at 12,75% and 11,83%.
Our conclusion is that for a perfect match you need every consumer group to fully utilize its allocation, which will probably be very difficult. Please also note that for much more complex plans (with sub-plans, for example) this task will be harder.
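
To redo the math directly in SQL, the consumed CPU time can be turned into percentages per group; a small sketch on top of the same view:

select name,
       consumed_cpu_time,
       round(ratio_to_report(consumed_cpu_time) over () * 100, 2) as pct_cpu
  from v$rsrc_consumer_group
 where name in ('DAILY_LOAD', 'REPORTING', 'ADHOC_QUERYS')
 order by pct_cpu desc;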

Update:

I’ve let the load scripts run for about 1 hour to check if the results are more in line with what we expect from our Resource Manager CPU allocation. Below are the results (in a SQL screenshot); they are much better than the previous 5-minute attempt:

[Figure: Resource Manager CPU allocation 1 hour]

The next part (Part 3) will focus on which wait events are present in sessions when Oracle is managing your CPU allocation.

Resource Manager – CPU allocation math – Part 1

Resource Manager is an often underestimated feature of Oracle Databases (that changed a little bit with Exadata). It is a very powerful feature that allows you to manage different workloads within a database. As you know, hardware resources are limited and a proper allocation of resources to different tasks is often necessary, so it is Resource Manager’s job to handle these common problems.
This post will cover only CPU allocation (in a simple way) to different tasks or users. Understanding the basics of CPU allocation from the Resource Manager point of view will help you define a better plan for a proper allocation of your CPU resources.

Resource Manager is made of three components: consumer groups, plan directives and resource plans.
Basically, a consumer group aggregates sessions that share a common priority and scheduling. As an example, in a data warehousing environment a “Reporting” group will share the same business priority.
A plan directive, in turn, is the link between a consumer group and a resource plan. It allows you to define the resource allocation. It is a one-to-one relationship and is a list of dictionary key-value attributes.
Finally, a resource plan is a collection of directives that determines how resources are allocated. Only one resource plan can be active per instance.

Please note, before continuing, that this post is not intended to explain Resource Manager in detail.

1 – The setup and example

We will set up a fairly simple use case. Let’s pretend that we have a data warehousing system with the following business rules, which have different CPU allocation priorities based on business requirements:

DAILY_LOAD: Daily data load from several OLTP databases;
REPORTING: Reporting tasks and services;
ADHOC_QUERY: Ad hoc queries issued by users.

The following PL/SQL will create three consumer groups that will allow us to respect the business priorities. The CPU priority will be defined at plan directive creation.

BEGIN
  dbms_resource_manager.clear_pending_area();
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'DAILY_LOAD',
    comment        => 'Consumer group for critical OLTP applications');
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'REPORTING',
    comment        => 'Consumer group for long-running reports');
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'ADHOC_QUERYS',
    comment        => 'Consumer group for adhoc querys');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;

Apart from this, let’s create three different services and one particular database user to ensure the following mapping.

BEGIN
  dbms_resource_manager.clear_pending_area();
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'S_DAILY_LOAD',
    consumer_group => 'DAILY_LOAD');
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'S_ADHOC',
    consumer_group => 'ADHOC_QUERYS');
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'S_REPORTING',
    consumer_group => 'REPORTING');
  dbms_resource_manager.submit_pending_area();
END;

BEGIN
  dbms_resource_manager_privs.grant_switch_consumer_group(
    GRANTEE_NAME   => 'ROLE_DW',
    CONSUMER_GROUP => 'DAILY_LOAD',
    GRANT_OPTION   =>  FALSE);
  dbms_resource_manager_privs.grant_switch_consumer_group(
    GRANTEE_NAME   => 'ROLE_DW',
    CONSUMER_GROUP => 'ADHOC_QUERYS',
    GRANT_OPTION   =>  FALSE);
  dbms_resource_manager_privs.grant_switch_consumer_group(
    GRANTEE_NAME   => 'ROLE_DW',
    CONSUMER_GROUP => 'REPORTING',
    GRANT_OPTION   =>  FALSE);

END;

The mapping defined is the following:

– Users that connect to the services S_DAILY_LOAD, S_ADHOC or S_REPORTING will be switched to the corresponding consumer group. Please note that all users that connect to the application have the role ROLE_DW, avoiding specifying each individual database username.

– The last PL/SQL block grants the role (and therefore its users) the privilege to switch to the consumer groups. This is mandatory for DBRM to be able to automatically switch your session. Even if your mapping is based on service_name, module_name, client_os_user etc., the switch privilege needs to be granted at the username (or role) level. I had a little discussion on this topic with Martin Bach during OUGN14 and so far this is the only way to do it. (A quick verification query is sketched below.)
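
To double-check what the blocks above created, the mappings and switch privileges can be queried from the data dictionary; a minimal sketch using the standard Resource Manager dictionary views:

select attribute, value, consumer_group from dba_rsrc_group_mappings;
select grantee, granted_group from dba_rsrc_consumer_group_privs;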

The next step is to create the plan and the plan directives. As this is a simple example, it is basically one plan and one directive for each consumer group.

BEGIN
 dbms_resource_manager.clear_pending_area();
 dbms_resource_manager.create_pending_area();
 dbms_resource_manager.create_plan(
   plan    => 'DW_PLAN',
   comment => 'Resource plan for normal business hours');

 dbms_resource_manager.create_plan_directive(
   plan             => 'DW_PLAN',
   group_or_subplan => 'DAILY_LOAD',
   comment          => 'DW Daily load from OLTP',
   mgmt_p1          => 65);
 dbms_resource_manager.create_plan_directive(
   plan             => 'DW_PLAN',
   group_or_subplan => 'REPORTING',
   comment          => 'Reporting services and tasks - Lower priority',
   mgmt_p2          => 50);
 dbms_resource_manager.create_plan_directive(
   plan             => 'DW_PLAN',
   group_or_subplan => 'ADHOC_QUERYS',
   comment          => 'Adhoc Querys by users',
   mgmt_p2          => 40);
 dbms_resource_manager.create_plan_directive(
   plan             => 'DW_PLAN',
   group_or_subplan => 'OTHER_GROUPS',
   comment          => 'All other groups',
   mgmt_p3          => 100);
 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
END;

2 – The formula for CPU plan allocation

In the previous setup, we decided to specify the following CPU percentages for each task. As you can see, you can’t simply sum all the values, as they are at different levels (mgmt_pN) and the sum of all values is over 100% of CPU allocation.

[Figure: Resource Manager CPU allocation]

The formula for calculating the CPU allocation of a group at level N is the following:

Level N allocation = (100% – SUM(Level 1)) x (100% – SUM(Level 2)) x … x (100% – SUM(Level N-1)) x mgmt_pN

Please note that for Level 1 (mgmt_p1) no formula is needed; it is just the value you set in mgmt_p1.

In our case, the calculations are:

Level 1 (DAILY_LOAD) = 65%
Level 2 (REPORTING) = (100% – 65%) x 50% = 17,5%
Level 2 (ADHOC_QUERY) = (100% – 65%) x 40% = 14%
Level 3 (OTHER_GROUPS) = (100% – 65%) x (100% – (50% + 40%)) x 100% = 3,5%

Below is the summary of the real CPU allocation, with a total of 100%. That means that all CPU resources will be distributed across your priorities and preferences. Of course, getting a perfect match between the CPU allocation that you set up and the real-case scenario can be fairly difficult, but a very approximate value is expected. Part 2 of this article will focus on some tests trying to measure a real-case scenario.

[Figure: Resource Manager Real CPU allocation]

Have a nice weekend.

SQL Patch and RESULT_CACHE hint

Oracle provides us a lot of “cool” features to meet some more “hidden” needs; one example is SQL Patch. The problem with SQL Patch is that it is not documented and it is basically an internal function from the dbms_sqldiag_internal package. On the other hand, you have the well-known, well-documented SQL Plan Baselines that can also fit your needs.

In this particular case, I’m only using SQL Patch in conjunction with RESULT CACHE. The idea behind it is to create a SQL Patch for a particular statement so that it uses the result_cache hint, without modifying any code at all.

The first step is to create a table and run a simple query on it.

SQL> create table t1 as select dbms_random.value(0,10) N1, dbms_random.value(0,20) N2 from dual connect by level < 100000;
SQL> select count(*) from t1;

-------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Cost (%CPU)| Time	  |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1 |   189   (0)| 00:00:03 |
|   1 |  SORT AGGREGATE    |	  |	1 |	       |	  |
|   2 |   TABLE ACCESS FULL| T1   | 88330 |   189   (0)| 00:00:03 |
-------------------------------------------------------------------

Now run the same query, but using the RESULT_CACHE hint, and check if Oracle respects your will:


SQL> select /*+ RESULT_CACHE */ count(*) from t1;

------------------------------------------------------------------------------------------
| Id  | Operation	    | Name			 | Rows  | Cost (%CPU)| Time	 |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |				 |     1 |   189   (0)| 00:00:03 |
|   1 |  RESULT CACHE	    | a10q47t7wrdhh6npx1qkqzugk0 |	 |	      | 	 |
|   2 |   SORT AGGREGATE    |				 |     1 |	      | 	 |
|   3 |    TABLE ACCESS FULL| T1			 | 88330 |   189   (0)| 00:00:03 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------
   1 - column-count=1; dependencies=(LCMARQUES.T1); attributes=(single-row); name="select /*+ result_cache*/ count(*) from t1"

As you can see, everything worked as expected. Let’s now add the SQL Patch instead of changing our SQL to include the RESULT_CACHE hint. The idea is to affect the CBO decision before execution:

SQL> begin
  2  SYS.dbms_sqldiag_internal.i_create_patch(sql_text => 'select count(*) from t1',
  3  hint_text => 'RESULT_CACHE',
  4  name => 'result_cache_patch');
  5  end;
  6  /
PL/SQL procedure successfully completed.
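
Before looking at the plan, the patch itself can be verified in the data dictionary; a small sketch, assuming the usual columns of the DBA_SQL_PATCHES view:

SQL> select name, status, created from dba_sql_patches where name = 'result_cache_patch';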

The SQL Patch is now created; let’s see the query plan for the statement that we used:

SQL> select count(*) from t1;

-------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Cost (%CPU)| Time	  |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1 |   189   (0)| 00:00:03 |
|   1 |  SORT AGGREGATE    |	  |	1 |	       |	  |
|   2 |   TABLE ACCESS FULL| T1   | 88330 |   189   (0)| 00:00:03 |
-------------------------------------------------------------------

Note

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
-----
   - dynamic sampling used for this statement (level=2)
   - SQL patch "result_cache_patch" used for this statement

As you can see, the last line of the explain plan explicitly indicates that “result_cache_patch” will be used for this statement. That is a little bit weird, because the plan itself doesn’t contain any reference to RESULT_CACHE.
A 10046 trace file will show that, even though the explain plan indicates that “result_cache_patch” will be used, it is ignored by the optimizer. The following traces show a query with the RESULT_CACHE hint and the same query with only the “result_cache_patch” in place:

* Code changed to use RESULT_CACHE hint: select /*+ result_cache*/ count(*) from t1;

WAIT #139796640671976: nam='db file scattered read' ela= 817 file#=4 block#=1326 blocks=7 obj#=88605 tim=1399684414120606
FETCH #139796640671976:c=28996,e=221184,p=420,cr=678,cu=0,mis=0,r=1,dep=0,og=1,plh=3724264953,tim=1399684414121561
STAT #139796640671976 id=1 cnt=1 pid=0 pos=1 obj=0 op='RESULT CACHE  a10q47t7wrdhh6npx1qkqzugk0 (cr=678 pr=420 pw=0 time=221168 us)'
STAT #139796640671976 id=2 cnt=1 pid=1 pos=1 obj=0 op='SORT AGGREGATE (cr=678 pr=420 pw=0 time=221074 us)'

* Code without the hint but with the SQL Patch in place:

WAIT #140032170080016: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=209 tim=1399684491153455
FETCH #140032170080016:c=13998,e=15248,p=0,cr=678,cu=0,mis=0,r=1,dep=0,og=1,plh=3724264953,tim=1399684491168743
STAT #140032170080016 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=678 pr=0 pw=0 time=15240 us)'
STAT #140032170080016 id=2 cnt=99999 pid=1 pos=1 obj=88605 op='TABLE ACCESS FULL T1 (cr=678 pr=0 pw=0 time=478162 us cost=189 size=0 card=88330)'

As seen, no RESULT CACHE was used (also easily seen by the time taken to count the rows), even with the SQL Patch in place. This is actually the result of a bug: Bug 16974854 : RESULT CACHE HINT DOES NOT WORK WITH SQL PATCH. Oracle promised a fix (included in some bundle patches for 11.2.0.3/4), and according to the bug description it will eventually also be fixed in Oracle 12.2.x.

As a side note, the same behavior applies when using SQL Profiles (also not well documented for this particular use case): it doesn’t work at all either.

SQL> exec dbms_sqldiag.drop_sql_patch('result_cache_patch');

PL/SQL procedure successfully completed.

BEGIN
DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
SQL_TEXT => 'select count(*) from t1',
PROFILE => SQLPROF_ATTR('RESULT_CACHE'),
NAME => 'PROFILE_RESULTC_T1',
REPLACE => TRUE,
FORCE_MATCH => TRUE);
END;

SQL> select count(*) from t1;
-------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Cost (%CPU)| Time	  |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1 |   189   (0)| 00:00:03 |
|   1 |  SORT AGGREGATE    |	  |	1 |	       |	  |
|   2 |   TABLE ACCESS FULL| T1   | 88330 |   189   (0)| 00:00:03 |
-------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
-----
   - dynamic sampling used for this statement (level=2)
   - SQL profile "PROFILE_RESULTC_T1" used for this statement

Exactly the same symptoms and exactly the same behavior and outcome. I hope this can save you some time in the future.