ASM spfile operations

While playing with ASM in Oracle 12c I ended up looking at some details of the internal package DBMS_DISKGROUP. This package is an internal ASM library used by the asmcmd implementation. For a list of the procedures it exposes (12c in this particular case), use the following:

[oracle@baco grid]$ find $ORACLE_HOME -name asmcmd* | xargs grep dbms_diskgroup

dbms_diskgroup.getfileattr(:fileName, :fileType, :fileSz, :blkSz);
dbms_diskgroup.mapextent(:fname, :extnum, :mapcount, :extsize, ...);
dbms_diskgroup.mapau(:gnum, :disk, :au, :file, :extent, :xsn);
dbms_diskgroup.dropdir('$dir');
dbms_diskgroup.createclientcluster (:clname, :direct_access);
dbms_diskgroup.getfileattr(:fileName, :fileType, :fileSz, :blkSz);
dbms_diskgroup.copy('', '', '', :src_path, :src_ftyp, :src_blksz,...);
dbms_diskgroup.getfileattr(:src_path, :fileType, :fileSz, :blkSz);
dbms_diskgroup.asmcopy(:src_path, :dst_name, :spfile_number,...);
dbms_diskgroup.gpnpsetds(:ds_path,0);
dbms_diskgroup.remap($gnum, $fnum, $vxn);

Some time ago I wrote about tracing the asmcmd cp command using SQL*Net tracing, but today what got my attention was dbms_diskgroup.asmcopy. This procedure is used in asmcmdsys.pm ($GRID_HOME/lib/) and implements three asmcmd commands: spcopy, spmove and spbackup. These commands handle only ASM SPFILE operations; all other copy operations in asmcmd go through the dbms_diskgroup.copy procedure. A deeper look into asmcmdsys.pm shows how asmcopy is called and its parameters, including the spfile_number(2) parameters that specify the ASM file number.

 my ($spfile_number) = 253;
...
 $sth = $dbh->prepare(q{
     begin
       dbms_diskgroup.asmcopy(:src_path, :dst_name, :spfile_number,
                              :fileType, :blkSz, :spfile_number2,
                              :spfile_type, :client_mode);
     exception when others then
       raise;
     end;
     });

...
  $sth->bind_param( ":spfile_number", $spfile_number);
...
  $sth->bind_param( ":spfile_number2", $spfile_number);
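
For reference, spcopy is one of the asmcmd commands that exercises this code path; a typical invocation looks like this (the paths are illustrative, not taken from this system):

ASMCMD> spcopy +DATA/BACODB/spfilebacodb.ora /tmp/spfileBACODB.ora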

The ASM file number 253 seen in the top perl declaration represents the ASM spfile (that is why it is hardcoded). It is easily seen using my amap script:

SQL> @amap metadata DATA
DISKNUMBER FILENUMBER FILE EXTENT NUMBER METADATA DESC                  EXTENT MIRRORING RELATIVE AU POSITION
---------- ---------- ------------------ ------------------------------ ---------------- --------------------
         0          1                  0 FILE DIRECTORY                 PRIMARY EXT                         2
         1          1                  1 FILE DIRECTORY                 PRIMARY EXT                        31

         1          9                  0 ATTRIBUTE DIRECTORY            PRIMARY EXT                        25
         1        253                  0 ASM SPFILE                     PRIMARY EXT                        30
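
As a cross-check, v$asm_file reports the same file number (a quick sketch; the spfile should show up with TYPE = PARAMETERFILE):

SQL> select group_number, file_number, type from v$asm_file where file_number = 253;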

There is nothing extraordinary about this post, just one more way to debug and understand asmcmd commands when they don't return what you expect :)

Obsolete or deprecated parameters (ORA-32004)

Today I needed to update my home lab to Oracle 11.2.0.4. While doing it, I noticed that one of my instances started fine but showed ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance. The error is clear: some of my parameters are obsolete for this particular Oracle version.

SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area  242208768 bytes
Fixed Size		    2227176 bytes
Variable Size		  184550424 bytes
Database Buffers	   50331648 bytes
Redo Buffers		    5099520 bytes
Database mounted.

To easily find out which parameters are obsolete, just generate a pfile from memory and use grep to identify them:

SQL> create pfile='/home/oracle/pfileFENIX.ora' from memory;
File created.
[oracle@phoenix ~]$ grep -i deprecate pfileFENIX.ora
background_dump_dest='/u01/app/oracle/diag/rdbms/fenix/fenix/trace' #Deprecate parameter
user_dump_dest='/u01/app/oracle/diag/rdbms/fenix/fenix/trace' #Deprecate parameter

Adjust your pfile, start the instance with it, generate an spfile, and start the instance again.
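
A quick sketch of that sequence (the pfile path comes from the example above):

SQL> shutdown immediate
SQL> -- edit /home/oracle/pfileFENIX.ora and remove the deprecated parameters
SQL> startup pfile='/home/oracle/pfileFENIX.ora'
SQL> create spfile from pfile='/home/oracle/pfileFENIX.ora';
SQL> shutdown immediate
SQL> startup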

PS: This is more a mental note for me than a real post :-)

Transparent Huge Pages – OEL 6.5

This is a very short post to show that THP (or Transparent Huge Pages) was indeed removed from OEL 6.5 (UEK3 kernel – version 3.8.13-16.2.2.el6uek.x86_64) just like Martin Bach noticed in this post.

Transparent Huge Pages have been discussed in the Oracle community since their introduction (kernel 2.6.37), mainly because of several performance issues and node reboots. My Oracle Support has a good note alerting DBAs to disable THP to avoid problems.
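
For kernels where THP is present, one common way to disable it at runtime is shown below (a sketch; RHEL-compatible kernels may expose it under /sys/kernel/mm/redhat_transparent_hugepage instead, and the MOS note has the authoritative steps):

[root@baco ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@baco ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag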

In the past I showed, using a very simple bash script, how to check whether any Oracle background process is allocating THP:

#!/bin/bash

# For each Oracle background process (ora_*), sum the AnonHugePages
# entries in /proc/PID/smaps to get its total THP allocation in KB.
for i in $(ps -eo pid,cmd | grep ora_ | grep -v grep | awk '{split($0,a," "); print a[1]"|"a[2]}')
do
   PID=$(echo $i | awk '{split($0,a,"|"); print a[1]}')
   PNAME=$(echo $i | awk '{split($0,a,"|"); print a[2]}')
   THP=$(grep AnonHugePages /proc/$PID/smaps | awk '{sum+=$2} END {print sum}')

   echo "PID "$PID "(" $PNAME ") is using " $THP "Kb of THP"
done

After updating to OEL 6.5 (code name Santiago ;) and the UEK3 kernel (3.8.13-16.2.2.el6uek.x86_64), the result is that THP is indeed gone. I used Oracle 12c (12.1.0.1.0) for this little essay.

[root@baco scripts]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

[root@baco scripts]# uname -a
Linux baco 3.8.13-16.2.2.el6uek.x86_64 #1 SMP Tue Nov 26 08:41:44 PST 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@baco scripts]# grep -i AnonHugePages /proc/meminfo | wc -l
0
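
Another quick check is whether the THP sysfs entry exists at all (path assumed from mainline kernels; RHEL-compatible kernels use redhat_transparent_hugepage):

[root@baco scripts]# ls /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo "no THP sysfs entry"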

Now you don't have to worry about the not-so-good-for-Oracle THP. Next step: installing DTrace :-)

In-Database Archiving: Partitioning (Part I)

Today we come across a big challenge: storing vast quantities of data at the lowest possible cost. The problem with all that data is that it will eventually become historical. That historical data may be subject to archiving tiering, partitioning, compression, etc., but the application can still see and query old data in all tiers and all partitions, even when it is not needed for the current scope of the application.

In-Database Archiving: How to enable it

Oracle 12c tries to solve this issue with a new feature called "In-Database Archiving", which allows historical data to be marked and hidden based on a session setting. Under the hood it is a simple column called ORA_ARCHIVE_STATE, added to the table when you enable "ROW ARCHIVAL" for it.

A brief demonstration:

Create a table, enable "ROW ARCHIVAL" and query dba_tab_cols to check the new column. Please note the hidden column SYS_NC00002$, which is used to track which rows are subject to the ILM row archival policy:


SQL> create table inarch_t2 as select round(dbms_random.value(1,1),0) N1 from dual connect by level < 10;

SQL> alter table inarch_t2 row archival; 

SQL> select column_name, virtual_column, segment_column_id, internal_column_id from dba_tab_cols where table_name = UPPER('inarch_t2');

COLUMN_NAME        VIR SEGMENT_COLUMN_ID INTERNAL_COLUMN_ID
------------------ --- ----------------- ------------------
ORA_ARCHIVE_STATE  NO                  3                  3
SYS_NC00002$       NO                  2                  2
N1                 NO                  1                  1

SQL> UPDATE inarch_t2 SET ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1) WHERE N1=1 and rownum < 5;

SQL> select ORA_ARCHIVE_STATE, SYS_NC00002$, N1 from inarch_t2;

OR SYS_NC0000         N1
-- ---------- ----------
1  01                  1
1  01                  1
1  01                  1
1  01                  1
0                      1
0                      1
0                      1
0                      1
0                      1

As you can see, it is very simple to enable "ROW ARCHIVAL" on a table and to set some rows to "archived" using DBMS_ILM.ARCHIVESTATENAME().

In-Database Archiving: Partitioning based on ORA_ARCHIVE_STATE

With that in place, the first thing that comes to my mind is list partitioning on ORA_ARCHIVE_STATE. The idea is to create two partitions based on the ORA_ARCHIVE_STATE value. That allows us to "move" data that we consider historical to another partition (or, if you want, another tablespace) and still decide, from a session perspective, whether the application can query the partition with historical data.

First we create a table containing 51 rows of random Euro Millions results, kindly generated by Oracle 12c for us:

SQL> create table euromillions (N1 number, N2 number, N3 number, N4 number, N5 number, E1 number, E2 number) 
		partition by LIST (ORA_ARCHIVE_STATE) 
			(PARTITION fresh VALUES (0), 
				PARTITION old VALUES(1)) 
		ENABLE ROW MOVEMENT ROW ARCHIVAL;


Table created.


SQL> select COLUMN_NAME, COLUMN_POSITION from DBA_PART_KEY_COLUMNS where owner = 'LCMARQUES' and NAME='EUROMILLIONS';

COLUMN_NAME	     COLUMN_POSITION
-------------------- ---------------
ORA_ARCHIVE_STATE		   1


SQL> insert into euromillions
  2  select round(dbms_random.value(0,50),0), round(dbms_random.value(0,50),0), round(dbms_random.value(0,50),0),
  3  round(dbms_random.value(0,50),0), round(dbms_random.value(0,50),0), round(dbms_random.value(0,10),0), round(dbms_random.value(0,10),0) from dual connect by level < 52;

51 rows created.

We now have a table called EUROMILLIONS with 51 rows and 7 columns, plus ORA_ARCHIVE_STATE, which was used to create two partitions called fresh and old. As you would expect, after the INSERT only the fresh partition contains data:


SQL> select count(1) from euromillions partition (old);

  COUNT(1)
----------
	 0

SQL> select ORA_ARCHIVE_STATE, count(1) from euromillions group by ORA_ARCHIVE_STATE;

ORA_A	COUNT(1)
----- ----------
0	      51


As expected, there is no data in the old partition: all rows are non-archived, meaning the In-Database Archiving column ORA_ARCHIVE_STATE is 0 for all of them. Let's now archive some random Euro Millions results (those where the first number drawn is between 2 and 9, per the predicate below) using the ILM call for In-Database Archiving:


SQL> UPDATE euromillions SET ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1) WHERE N1>1 and N1<10;

7 rows updated.


Seven results are now marked as archived, and we expect the archived data to move to the "old" partition as a result of the ORA_ARCHIVE_STATE update (this is why the table was created with ENABLE ROW MOVEMENT). But remember: by default, archived data (ORA_ARCHIVE_STATE=1) is not visible, so you need to set visibility at session level to actually count the rows in the partition:


SQL> select count(1) from euromillions partition (old);

  COUNT(1)
----------
	 0

SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

Session altered.

SQL> select count(1) from euromillions partition (old);

  COUNT(1)
----------
	 7

We now have the 7 archived rows in the old partition; they can only be seen and queried if you set "ROW ARCHIVAL VISIBILITY = ALL" at session level, since the default is "ACTIVE".
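
To return to the default behaviour and hide the archived rows again:

SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;

Session altered.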

You can now use this technique to handle your historical data in a smarter way. In the next post I will write about setting ILM policies and ADO for the In-Database Archiving feature.

Writings on flashback log size

As you know, Flashback Database uses its own type of log files, usually called flashback log files. These files are created when Oracle loads before-images of modified data blocks into the flashback buffer; the RVWR (Recovery Writer) process then has the job of writing the flashback buffer to flashback log files on disk. It is an asynchronous process and you have very little control over it. Still, you can investigate how things work.

One of the things a DBA can't control _directly_ is the size of these flashback log files. By default they are almost the size of your redo log files, which makes sense given the similar nature of redo logs and flashback logs. So what is forcing Oracle to create files the same size as the redo log files? The answer is the hidden parameter "_flashback_size_based_on_redo", which defaults to TRUE.
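
To check the current value of this hidden parameter you can use the usual x$ksppi/x$ksppcv join (a sketch; these are undocumented internals and must be queried as SYS):

SQL> select i.ksppinm name, v.ksppstvl value
  2  from x$ksppi i, x$ksppcv v
  3  where i.indx = v.indx
  4  and i.ksppinm = '_flashback_size_based_on_redo';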

Let's confirm it, after generating some DML that modifies blocks:


SQL> select group#, bytes/1024 KB from v$log;

    GROUP#	   KB
---------- ----------
	 1	51200
	 2	51200
	 3	51200

SQL> select log#, bytes/1024 as KB from V$FLASHBACK_DATABASE_LOGFILE;

      LOG#	   KB
---------- ----------
	 1	51200
	 2	51200
	 3	51200

That is the easy way to check the flashback log size, but you can also dump the flashback log file and check how many blocks were allocated to it:


SQL> alter system dump flashback logfile 1 logical;

DUMP OF FLASHBACK LOG FILE (1) +DATA/BACODB/FLASHBACK/log_1.278.826239255
 FILE HEADER:
        Compatibility Vsn = 202375168=0xc100000
        Db ID=2672868488=0x9f50bc88, Db Name='BACODB'
        Activation ID=2672897670=0x9f512e86
        Control Seq=2099=0x833, File size=6400=0x1900
        File Number=1, Blksiz=8192, File Type=8 FLASH BACK

Since the database block size is 8 KB and the file size is 6400 blocks, the math gives 6400 * 8 KB = 51200 KB, exactly what v$flashback_database_logfile reports.
Back to the hidden parameter _flashback_size_based_on_redo, which ties flashback log files to exactly the size of your redo log files. So let's turn it off and see what happens, because we can :-)

SQL> alter system set "_flashback_size_based_on_redo"=FALSE scope=spfile;
System altered.

Shut down and start up the instance, generate some more heavy DML, and check V$FLASHBACK_DATABASE_LOGFILE again.

SQL> select log#, bytes, bytes/1024 as KB from V$FLASHBACK_DATABASE_LOGFILE;

      LOG#	BYTES	      KB
---------- ---------- ----------
	 1   52428800	   51200
	 2   52428800	   51200
	 3   52428800	   51200
	 4   26214400	   25600

We now have log file number 4, with a very different size from the previous files. Something caught my attention while the database was starting up; a line in the alert.log shows:

Allocated 2784840 bytes in shared pool for flashback generation buffer

A quick query after database startup shows exactly the same:

SQL> SELECT bytes FROM v$sgastat WHERE pool = 'shared pool' AND name = 'flashback generation buff';

     BYTES
----------
   2784840

The flashback log size and the flashback generation buffer size look related. The size of the flashback generation buffer is affected by the shared pool size, so by changing the shared pool we can perhaps generate flashback logs of a different size.

SQL> alter system set shared_pool_size=10M scope=both;
System altered.

SQL> SELECT bytes FROM v$sgastat WHERE pool = 'shared pool' AND name = 'flashback generation buff';

     BYTES
----------
   4194304

We now have a very different value for the flashback generation buffer, so it is time to run more DML and generate flashback log number 5. But first, back to hidden parameters: another one, "_flashback_generation_buffer_size", sets the default size of the flashback generation buffer, with a default value of 4194304 bytes (the 4MB granule size).

SQL> select log#, bytes, bytes/1024 as KB from V$FLASHBACK_DATABASE_LOGFILE;

     LOG#	BYTES	      KB
---------- ---------- ----------
	 1   52428800	   51200
	 2   52428800	   51200
	 3   52428800	   51200
	 4   26214400	   25600
	 5   16777216	   16384
	 6   16777216	   16384
	 7   16777216	   16384


Dumping file 6 shows exactly the same (file size = 2048 blocks * 8 KB = 16384 KB):

 FILE HEADER:
        Compatibility Vsn = 202375168=0xc100000
        Db ID=2672868488=0x9f50bc88, Db Name='BACODB'
        Activation ID=2672897670=0x9f512e86
        Control Seq=2248=0x8c8, File size=2048=0x800
        File Number=6, Blksiz=8192, File Type=8 FLASH BACK

We now have flashback files 5, 6 and 7 with 16777216 bytes each, 4 times bigger than the current flashback generation buffer. Maybe, just maybe, Oracle is writing delayed chunks of 16MB, generating files with a 16MB minimum each time (_flashback_format_chunk_mb_dwrite is 16MB).
The last test sets another hidden parameter, "_flashback_max_log_size". The chosen value is 10MB.

SQL> alter system set "_flashback_max_log_size"=10485760 scope=spfile;

After one more round of DML, files 8, 9, 10 and 11 are 15941632 bytes each, more than the maximum log size we set.

SQL> select log#, bytes, bytes/1024 as KB from V$FLASHBACK_DATABASE_LOGFILE;

      LOG#	BYTES	      KB
---------- ---------- ----------
	 1   52428800	   51200
	 2   52428800	   51200
	 3   52428800	   51200
	 4   26214400	   25600
	 5   16777216	   16384
	 6   16777216	   16384
	 7   16777216	   16384
	 8   15941632	   15568
	 9   15941632	   15568
	10   15941632	   15568
	11   15941632	   15568

Conclusions:

1) By default, your flashback log size is the same as your redo log size.
2) Otherwise, size is determined by the flashback generation buffer in your shared pool, if it is larger than the default (set by the parameter _flashback_generation_buffer_size).
3) Oracle decides whether it will write 4MB or 16MB chunks to the file, ignoring the minimum set by _flashback_generation_buffer_size.
4) The hidden parameter _flashback_max_log_size was set, but appears to be ignored.
5) db_flashback_retention_target will not affect the flashback log size, only the retention.

Notes:
1) alter system dump flashback logfile ... logical skips the contents of the blocks in the dump file, avoiding a very big dump.
2) Tests were conducted on Oracle 12.1.0.1.0 64-bit.

ORE: Oracle R Enterprise installation – Part I

R is a very powerful programming language for statistical computing and graphics; it is almost the standard software for data analysis. Oracle recognized the importance of R in data analysis and visualization and created Oracle R Enterprise.

What is it?

Oracle R Enterprise is a layer. That layer allows the direct mapping of R data types to Oracle database objects and the generation of SQL from R expressions. That means you can interact with your database directly using the R language.
Oracle R Enterprise is also a statistics engine, with several functions and procedures executed inside the Oracle database.
The architecture behind Oracle R Enterprise is a client/server model, where the client contains a collection of R packages to connect to and interact directly with the database, and the server a collection of PL/SQL procedures and functions to support client requests.

Install on OEL6 – Prerequisites

I'm going to use OEL 6.4 to install the server and client software of ORE (Oracle R Enterprise). Besides checking that it is a supported operating system, you will need to install R first. The installation of R is required on both server and clients. Installing it on OEL 6 is very simple if you let yum do it for you (make sure that ol6_addons and ol6_latest are enabled in the public-yum-ol6.repo file):


[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1

[ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1

After this, you just need to fire up yum and wait (for me, the install size of R and all dependencies was 270MB, so be patient):

[root@phoenix ~]# yum install R.x86_64

To verify the R installation, just run R on the command line. I will use the oracle user for testing purposes:

[oracle@phoenix ~]$  R

Oracle Distribution of R version 2.15.2  (--) -- "Trick or Treat"
Copyright (C)  The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)
...
> demo()
> q()
Save workspace image? [y/n/c]: n

After installing R (or before, if you wish) you need to verify Oracle Database compatibility. ORE requires Oracle Enterprise Edition 11.2 (11.2.0.3, or you will need to install the patch for bug 11678127; check MOS). Apart from requiring a patched 11.2.0.1/2 database or 11.2.0.3, you also need to configure extproc, which ORE needs to call external procedures written in another language, R in this particular case.
On newer versions of Oracle you don't have to edit listener.ora or tnsnames.ora; you only define the environment variables to be used by external procedures in extproc.ora, located in $ORACLE_HOME/hs/admin.
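
As an illustration only, extproc.ora entries use the SET directive; the value below is a generic example, and the ORE installation guide lists the exact variables your version needs:

SET EXTPROC_DLLS=ANY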

Install on OEL6 – ORE Server Installation

First you need to download the client and server packages here: http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html
Set your environment variables, adding $ORACLE_HOME/lib to LD_LIBRARY_PATH and $ORACLE_HOME/bin to PATH. Also make sure that ORACLE_SID points to the correct instance.
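
For example (ORACLE_HOME and ORACLE_SID values taken from this install):

[oracle@phoenix ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@phoenix ~]$ export ORACLE_SID=fenix
[oracle@phoenix ~]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
[oracle@phoenix ~]$ export PATH=$ORACLE_HOME/bin:$PATH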

[oracle@phoenix ~]$  unzip ore-server-linux-x86-64-1.3.1.zip
[oracle@phoenix ~]$  cd server/
[oracle@phoenix server]$  ./install.sh

Oracle R Enterprise 1.3.1 Server Installation.

Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.

Checking R ................... Pass
Checking R libraries ......... Pass
Checking ORACLE_HOME ......... Pass
Checking ORACLE_SID .......... Pass
Checking sqlplus ............. Pass
Checking ORACLE instance ..... Pass
Checking ORE ................. Pass

Current configuration
  R_HOME               = /usr/lib64/R
  R_LIBS_USER          = /u01/app/oracle/product/11.2.0/dbhome_1/R/library
  ORACLE_HOME          = /u01/app/oracle/product/11.2.0/dbhome_1
  ORACLE_SID           = fenix

Do you wish to install ORE? [yes] yes

Choosing RQSYS tablespaces
  PERMANENT tablespace to use for RQSYS [SYSAUX]:
  TEMPORARY tablespace to use for RQSYS [TEMP]:

Tablespaces summary
  PERMANENT tablespace = SYSAUX
  TEMPORARY tablespace = TEMP

Installing libraries ......... Pass
Installing RQSYS data ........ Pass
Installing RQSYS code ........ Pass
Installing ORE packages ...... Pass
Creating ORE script .......... Pass

NOTE: ORE has been enabled for all database users. Next, install the
      supporting packages.

Now you have an RQSYS schema with all the necessary objects. RQSYS is a locked account.
The next step is to install the supporting packages:

[oracle@phoenix ~]$  unzip ore-supporting-linux-x86-64-1.3.1.zip
[oracle@phoenix ~]$  cd supporting/
[oracle@phoenix supporting]$  ORE CMD INSTALL DBI_0.2-5_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library ‘/u01/app/oracle/product/11.2.0/dbhome_1/R/library’
* installing *binary* package ‘DBI’ ...
* DONE (DBI)
[oracle@phoenix supporting]$  ORE CMD INSTALL ROracle_1.1-9_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library ‘/u01/app/oracle/product/11.2.0/dbhome_1/R/library’
* installing *binary* package ‘ROracle’ ...
* DONE (ROracle)
[oracle@phoenix supporting]$  ORE CMD INSTALL png_0.1-4_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library ‘/u01/app/oracle/product/11.2.0/dbhome_1/R/library’
* installing *binary* package ‘png’ ...
* DONE (png)

Now, to finish the server installation, we need to create and grant a user to use in the future. Inside your server directory run demo_user.sh. It will create an RQUSER that you can use when requesting calls to R; the RQADMIN role granted afterwards allows users to create or drop R scripts that use the database embedded R engine.

[oracle@phoenix server]$  ./demo_user.sh

Oracle R Enterprise 1.3.1 Server User Creation.

Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.

Checking ORACLE_HOME ......... Pass
Checking ORACLE_SID .......... Pass
Checking sqlplus ............. Pass
Checking ORACLE instance ..... Pass
Checking ORE ................. Pass

Current configuration
  ORACLE_HOME          = /u01/app/oracle/product/11.2.0/dbhome_1
  ORACLE_SID           = fenix

Do you wish to create an ORE user? [yes]

Choosing tablespaces
  PERMANENT tablespace to use [USERS]:
  TEMPORARY tablespace to use [TEMP]:

Choosing user
  ORE user to use [rquser]:
  Password to use for user rquser:

Tablespaces and user summary
  PERMANENT tablespace = USERS
  TEMPORARY tablespace = TEMP
  ORE user             = rquser

Creating ORE user ............ Pass
Done
[oracle@phoenix server]$  sqlplus / as SYSDBA

SQL> GRANT RQADMIN to RQUSER;
Grant succeeded.

The last and important step of the server installation is to verify that everything works as expected. Fire up R and call .libPaths() to specify the library trees that R knows about and, hence, uses when looking for packages; it expects $ORACLE_HOME/R/library. If no error is returned, everything is fine on the server.

[oracle@phoenix server]$  R
Oracle Distribution of R version 2.15.2  (--) -- "Trick or Treat"
Copyright (C)  The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)
...
> .libPaths('/u01/app/oracle/product/11.2.0/dbhome_1/R/library')
> q()
Save workspace image? [y/n/c]: n

Done with the server part. Let's move on to the client side.

Install on OEL6 – ORE Client Installation

Things are a little bit easier with the client. Assuming you have downloaded the client file, just unzip it and install using R CMD INSTALL as follows:

[oracle@phoenix ~]$  unzip ore-client-linux-x86-64-1.3.1.zip

[root@phoenix client]# R CMD INSTALL ORE_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library ‘/usr/lib64/R/library’
* installing *binary* package ‘ORE’ ...

* DONE (ORE)
Making packages.html  ... done
[root@phoenix client]# R CMD INSTALL OREbase_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL OREeda_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL OREdm_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL OREgraphics_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL OREpredict_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL OREstats_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz
[root@phoenix client]# R CMD INSTALL ORExml_1.3.1_R_x86_64-unknown-linux-gnu.tar.gz

Please be aware that I'm installing client and server on the same machine. If you are going to install them on different machines, make sure you install the Oracle Instant Client (http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html) before you run R CMD INSTALL.

Install on OEL6 – ORE Post Installation

On the server:

[oracle@phoenix server]$  R
> .libPaths('/u01/app/oracle/product/11.2.0/dbhome_1/R/library')

On the client, I will connect to the local database with the lcmarques username. I will also run a test on the R layer and on embedded R, using demos already created by Oracle. Please note that the embedded test will only work if your user has the RQADMIN role.

[oracle@phoenix client]$  R
> library(ORE)
Loading required package: OREbase

Attaching package: ‘OREbase’

The following object(s) are masked from ‘package:base’:

    cbind, data.frame, eval, interaction, order, paste, pmax, pmin,
    rbind, table

Loading required package: OREstats
Loading required package: MASS
Loading required package: OREgraphics
Loading required package: OREeda
Loading required package: OREdm
Loading required package: lattice
Loading required package: OREpredict
Loading required package: ORExml

> ore.connect("lcmarques", password="lcmarques", conn_string="", all=TRUE)
Loading required package: ROracle
Loading required package: DBI

> demo("aggregate", package = "ORE")
	demo(aggregate)
	---- ~~~~~~~~~
> #
> #     O R A C L E  R  E N T E R P R I S E  S A M P L E   L I B R A R Y
> #
> #     Name: aggregate.R
> #     Description: Demonstrates aggregations
> #     See also summary.R

> demo("row_apply", package = "ORE")
	demo(row_apply)
	---- ~~~~~~~~~

That is it. I will now investigate ORE to see what it is capable of :-) Feel free to comment.

Transparent HugePages notes for DBAs

Intro

Linux 2.6.37 brought us a new feature called Transparent Hugepage Support, which works by reducing TLB (Translation Lookaside Buffer) misses; the TLB is basically a cache for virtual address translation. The main idea behind transparent hugepages is to handle large pages just like normal pages, rather than as another parallel memory management subsystem: if a large page is available for an application the system provides it, otherwise the standard 4k page is used.
This "connects" the large page and normal page memory subsystems, making it possible, for example, for large pages to be swapped. This sounds even better if you think about the problems of having only small pages or only large pages. A small page size causes large management overhead (a 17GB SGA plus 1000 connections means around 4.4 million pages and a page table size of 16GB, since each process keeps its own page table entries), while large page sizes (for example 2MB or 4MB) suffer from fragmentation: allocating huge pages becomes more difficult as memory fragments and contiguous memory is unavailable. THP tries to address these issues by regularly scanning areas marked as huge page candidates and replacing bunches of small pages with huge pages, as well as directly allocating huge pages when possible.

Oracle SGA and THP

The current Transparent HugePages implementation only maps anonymous memory regions and therefore only affects the heap; your SGA (probably the largest part of your memory allocation) needs shared memory, so THP is useless for it until a shared memory implementation is supported. In short: THP only supports anonymous memory regions, while classic HugePages only work with shared memory. Besides being limited to anonymous pages, THP also handles only one huge page size (2MB). Classic HugePages are not much friendlier: you need to preallocate them, Oracle must map them explicitly, and once pinned the pages are not swappable. That is the main problem with HugePages: it requires a lot of effort from a system administrator / DBA point of view and the application must support the method, which is why HugePages are mainly used in large RDBMS systems.
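
A quick way to see the two mechanisms side by side (both counters are standard /proc/meminfo fields):

[oracle@obox ~]$ grep ^HugePages /proc/meminfo    # classic, pinned HugePages (shared memory / SGA)
[oracle@obox ~]$ grep AnonHugePages /proc/meminfo # THP (anonymous memory only)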

Oracle PGA and THP

As you know, the PGA is heap, so you can't use traditional huge pages for it (and you don't count it in HugePages calculations), but it may benefit from THP. The amount of THP used by the system is available with this command:

[oracle@obox ~]$ grep -i AnonHugePages /proc/meminfo
AnonHugePages: 253952 kB

Apart from this, we can check whether any Oracle process is using THP. Just read /proc/PID/smaps and count the AnonHugePages of each mapping, replacing PID with the PID of the dedicated server process handling your database connection. I found no AnonHugePages in use, meaning that my PGA is probably not using THP.

[oracle@obox ~]$ cat /proc/25894/smaps | grep AnonHugePages
AnonHugePages: 0 kB
AnonHugePages: 0 kB
AnonHugePages: 0 kB
AnonHugePages: 0 kB
AnonHugePages: 0 kB
AnonHugePages: 0 kB
...

Since THP is enabled, /proc/vmstat has some counters to monitor how successfully the system is providing huge pages:

[oracle@obox ~]$ cat /proc/vmstat | grep thp_
thp_fault_alloc 368
thp_fault_fallback 264
thp_collapse_alloc 301
thp_collapse_alloc_failed 2642
thp_split 222

thp_fault_alloc is incremented every time a huge page is allocated to handle a page fault; since huge pages were allocated 368 times here, we are actually using THP. thp_fault_fallback counts the times the kernel failed to allocate a huge page and fell back to the normal page size. Since vmstat reports THP use and this machine runs only an Oracle database, Oracle must be using THP somewhere.
This made me drill down into every Oracle process, searching for any use of THP. For that I wrote a shell script that walks all the PIDs of the Oracle processes:

#!/bin/bash

# Background processes (ora_*)
for i in $(pgrep -f ora_)
do
   echo "PID:"$i
   grep AnonHugePages /proc/$i/smaps | sort | uniq
done

# All dedicated server processes (instance name here starts with TN)
for i in $(pgrep -f 'oracleTN')
do
   echo "PID:"$i
   grep AnonHugePages /proc/$i/smaps | sort | uniq
done

The result was a little surprise for me. Let's analyse:

PID:58223
AnonHugePages:         0 kB
AnonHugePages:      4096 kB
PID:58363
AnonHugePages:         0 kB
AnonHugePages:      2048 kB
PID:58365
AnonHugePages:         0 kB
AnonHugePages:      2048 kB

[oracle@obox ~]$ ps -ef | grep 58223
oracle   36939 18818  0 16:30 pts/0    00:00:00 grep 58223
oracle   58223     1  0 May06 ?        00:35:34 ora_dbw0_TNMSAM
[oracle@obox ~]$ ps -ef | grep 58363
oracle   36943 18818  0 16:31 pts/0    00:00:00 grep 58363
oracle   58363     1  0 May06 ?        00:05:08 ora_arc0_TNMSAM
[oracle@obox ~]$ ps -ef | grep 58365
oracle   36946 18818  0 16:31 pts/0    00:00:00 grep 58365
oracle   58365     1  0 May06 ?        00:05:05 ora_arc1_TNMSAM

According to the results, the DBWR and ARCH Oracle processes are using THP.

Update: I improved the bash script that detects THP usage by Oracle processes. Script and output below:

#!/bin/bash

# For each Oracle background process (ora_*), sum the AnonHugePages
# entries in /proc/PID/smaps to get its total THP allocation in KB.
for i in $(ps -eo pid,cmd | grep ora_ | grep -v grep | awk '{split($0,a," "); print a[1]"|"a[2]}')
do
   PID=$(echo $i | awk '{split($0,a,"|"); print a[1]}')
   PNAME=$(echo $i | awk '{split($0,a,"|"); print a[2]}')
   THP=$(grep AnonHugePages /proc/$PID/smaps | awk '{sum+=$2} END {print sum}')

   echo "PID "$PID "(" $PNAME ") is using " $THP "Kb of THP"
done

Output: [screenshot: usage of THP by Oracle processes on Linux]