Ubuntu 14.04.1 Oracle GI 12c ASM Flex Cluster on LXC Containers

This blog page documents building an Oracle ASM 12c (12.1.0.2.0) Flex Cluster on Ubuntu 14.04.1 Desktop (64-bit) using two Oracle Enterprise Linux 6.5 LXC Linux Containers. The build is complete, and screenshots of the installed system running on Ubuntu appear at the end of the post. I will be filling in details of the build, such as the LXC config files for lxcora02 and lxcora03, other key configuration files, and how storage was presented to the nodes, so if you are interested in this topic, check back often or contact me directly. If you want to run Oracle Enterprise software products on Ubuntu Linux, LXC Containers are probably the best way. Note that running Oracle Database on Ubuntu is not officially supported by Oracle, nor is running Oracle Database in Linux Containers, so strictly speaking this build must be considered for personal and test use only.

My To Do List for This Build

  1. Work out details of 11gR2 integration/compatibility with 12c ASM Flex Clustering

  2. Redistribute the LUNs across the multiple targets so that an array outage can be simulated (one LUN from each diskgroup on each target)

  3. Document the work creating the reboot, start, and shutdown aliases for managing the LXC containers (a sketch follows this list)

  4. Document the work using "chkconfig" to move "scst" on the KVM SAN so that it runs AFTER "ssh", so that the KVM guest can ssh to the vmem1 host and create the LUNs at laptop bootup

  5. Related to this, document the files created in the "/etc/sudoers.d" directory to enable passwordless sudo from the KVM SAN to vmem1 at bootup to create the LUNs (a sketch follows this list)

  6. Fix external name resolution in the LXC containers, which currently works only by IP: "nslookup" and "dig" work, but "ping" and "ssh" cannot resolve by name.
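For to-do item 3, here is a minimal sketch of what such aliases might look like in the Ubuntu host's ~/.bash_aliases (the alias names are hypothetical; lxc-start -d and lxc-stop are the stock LXC commands on 14.04):

alias oracle-lxc-start='sudo lxc-start -n lxcora02 -d && sudo lxc-start -n lxcora03 -d'
alias oracle-lxc-stop='sudo lxc-stop -n lxcora03 && sudo lxc-stop -n lxcora02'

For to-do item 5, here is a minimal sketch of the kind of /etc/sudoers.d file meant, placed on vmem1 (the filename and command paths are assumptions; check paths with "which iscsiadm", keep the file mode 0440, and validate it with "visudo -cf <file>" before relying on it):

# /etc/sudoers.d/gstanden-luns (hypothetical name)
gstanden ALL=(root) NOPASSWD: /usr/bin/iscsiadm, /usr/sbin/service multipath-tools restart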

The ssh call from to-do item 4 was added to /etc/init.d/scst as shown in the next section.

Configure SCST to Start after SSH

On the KVM guest that hosts the SCST iSCSI Linux SAN, the scst service is configured to start after ssh so that the SCST SAN can be autostarted at bootup; as soon as SCST has started, the init script executes a remote command on the Ubuntu host to log in to the iSCSI initiators and obtain the LUNs.

[root@oracle651 ~]# grep -A2 -B5 gstanden /etc/init.d/scst

case "$1" in

start)

## Start the service.

echo -n "Loading and configuring SCST"

start_scst

ssh -t gstanden@vmem1 "/home/gstanden/create_asm_luns.sh"

rc=$?

;;

[root@oracle651 ~]#

And chkconfig was used to move /etc/init.d/scst on the oracle651 KVM guest so that it runs AFTER ssh, which makes the ssh call above work. The point of all this was that the KVM guest could be set to autostart on boot and, once ssh and scst were up, would send an ssh command to the Ubuntu host to log in to the SCST Linux SAN on the guest and load up the iSCSI Oracle ASM LUNs.

[root@oracle651 ~]# grep chkconfig /etc/init.d/scst

# chkconfig: 2345 56 87 <-- Start priority set to "56" so that scst starts after ssh is already up

[root@oracle651 ~]# grep chkconfig /etc/init.d/ssh*

# chkconfig: 2345 55 25 <-- The ssh service starts at priority "55", so it now starts before scst

[root@oracle651 ~]# grep chkconfig /etc/init.d/scst.2014.11.01.1552.bak

# chkconfig: 2345 13 87 <-- The original values for scst, which started very early at priority "13"

[root@oracle651 ~]#
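After editing the chkconfig header, the init script has to be re-registered so the rc?.d symlinks are rebuilt from the new priorities. A sketch of the standard sequence (assuming stock chkconfig on OEL 6):

chkconfig --del scst     # remove the old S13*/K87* symlinks
chkconfig --add scst     # recreate them from the edited header (S56*/K87*)
chkconfig --list scst    # verify the runlevel configuration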

The remote script on the Ubuntu host that is executed by the scst init script on the KVM guest is shown below. The initiators log in, log out, and log in again as a workaround: most of the LUNs were created in the Linux 5 style, as multipath device nodes under /dev/mapper, but occasionally one or two would be created in the Linux 6 style, as soft links pointing to /dev/dm-* devices. For this work with Linux LXC containers, all of the LUNs need to be created in the Linux 5 style (no soft links), because so far no way has been found to preserve those links when delivering the storage to the LXC containers. In other words, /dev/mapper can be presented to each LXC container, but when links are involved it is problematic to preserve them and somehow "promote" the /dev/dm-* devices into the container. Furthermore, the "lxc-device" command was found not to work for Oracle storage: a dm-* device can certainly be presented, and Oracle will even accept it as an ASM disk, but when it attempts I/O to that device there are errors. Mounting /dev/mapper in the LXC container provides storage that so far seems to work in all respects with no problems; a sketch of this kind of container configuration follows the script.

gstanden@vmem1:~$ cat create_asm_luns.sh
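# Pass 1: log in to all target/portal combinations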

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.41.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.41.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.41.74 --login

ls -lrt /dev/mapper
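# Log out again; together with the multipath-tools restart below, this works around
# LUNs sometimes coming up as /dev/dm-* soft links instead of /dev/mapper device nodes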

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.40.74 --logout

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.41.74 --logout

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.40.74 --logout

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.41.74 --logout

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.40.74 --logout

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.41.74 --logout

sudo service multipath-tools restart
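# Pass 2: log back in; all LUNs should now appear as /dev/mapper device nodes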

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-08.org.vmem:oracle651.san.asm.luns --portal 10.207.41.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns --portal 10.207.41.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.40.74 --login

sudo iscsiadm --mode node --targetname iqn.2014-10.org.vmem1:oracle651.san.asm.luns2 --portal 10.207.41.74 --login

ls -lrt /dev/mapper

gstanden@vmem1:~$
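As discussed above, /dev/mapper is what gets presented to each container. For reference, a minimal sketch of the relevant lines in a container config, assuming the LXC 1.0 syntax shipped with Ubuntu 14.04 (the device-mapper block major number, 252 below, varies by host; check with "ls -l /dev/dm-*"):

# In /var/lib/lxc/lxcora02/config
lxc.cgroup.devices.allow = b 252:* rwm
lxc.mount.entry = /dev/mapper dev/mapper none bind,create=dir 0 0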

My To Do List for The Next Build - Insights

  1. Use enumerated usernames on each node, such as "oracle1" and "dba1" on lxcora01, "oracle2" and "dba2" on lxcora02, and so on.

  2. This will help manageability, since all of the accounts would exist on each node.

  3. The users could also be created on the Ubuntu host so that the names display in "ps -ef" output rather than just UID and GID numbers.

  4. A longer-term project would be to package the whole setup in deb file(s) so that it is installable via apt/dpkg.

  5. Oracle software would have to be installed separately due to license and export restrictions, but the whole testbed could be deb-ized, I would think.

GNS integrity was verified with cluvfy before the Grid Infrastructure install, as shown below.

[grid@lxcora02 grid]$ ./runcluvfy.sh comp gns -precrsinst -domain vmem.org -vip 10.207.39.3 -verbose -n lxcora02,lxcora03

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...

The GNS subdomain name "vmem.org" is a valid domain name

Checking if the GNS VIP is a valid address...

GNS VIP "10.207.39.3" resolves to a valid IP address

Checking the status of GNS VIP...

GNS integrity check passed

Verification of GNS integrity was successful.

[grid@lxcora02 grid]$

Edit /etc/security/limits.conf in LXC Nodes

Most of the values are set in /etc/security/limits.conf on the Ubuntu host, but a couple of values are set locally in the container as shown below. Note that the Oracle Grid Infrastructure values are commented out: I have found that if I set them, I lose the ability to log in to the "grid" user (with those set, "su - grid" fails with "could not open session"). I have not yet figured out the fix, so I am not setting them for now. If you understand this problem and how to address it, please leave a comment on the blog for the benefit of all, thanks. A possible diagnostic is sketched after the listing below.

[root@lxcora02 ~]# cat /etc/security/limits.conf

# /etc/security/limits.conf

#

#Each line describes a limit for a user in the form:

#

#<domain> <type> <item> <value>

#

#Where:

#<domain> can be:

# - an user name

# - a group name, with @group syntax

# - the wildcard *, for default entry

# - the wildcard %, can be also used with %group syntax,

# for maxlogin limit

#

#<type> can have the two values:

# - "soft" for enforcing the soft limits

# - "hard" for enforcing hard limits

#

#<item> can be one of the following:

# - core - limits the core file size (KB)

# - data - max data size (KB)

# - fsize - maximum filesize (KB)

# - memlock - max locked-in-memory address space (KB)

# - nofile - max number of open files

# - rss - max resident set size (KB)

# - stack - max stack size (KB)

# - cpu - max CPU time (MIN)

# - nproc - max number of processes

# - as - address space limit (KB)

# - maxlogins - max number of logins for this user

# - maxsyslogins - max number of logins on the system

# - priority - the priority to run user process with

# - locks - max number of file locks the user can hold

# - sigpending - max number of pending signals

# - msgqueue - max memory used by POSIX message queues (bytes)

# - nice - max nice priority allowed to raise to values: [-20, 19]

# - rtprio - max realtime priority

#

#<domain> <type> <item> <value>

#

#* soft core 0

#* hard rss 10000

#@student hard nproc 20

#@faculty soft nproc 20

#@faculty hard nproc 50

#ftp hard nproc 0

#@student - maxlogins 4

# End of file

# Oracle Grid Infrastructure

# grid hard nofile 65536

# grid soft nproc 2047

# Oracle Database

* soft memlock 9873408

* hard memlock 9873408

[root@lxcora02 ~]#
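A possible diagnostic, under the assumption (mine, not a confirmed fix) that pam_limits fails during "su - grid" because the container was started with lower hard limits than the commented-out grid entries request:

# On the Ubuntu host: hard limits inherited by everything in the container
cat /proc/$(pidof -s lxc-start)/limits | grep -E 'Max (processes|open files)'

# In the container: the hard limits "su - grid" would have to work within
su - grid -c 'ulimit -H -n; ulimit -H -u'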

Add Entries to /etc/sysctl.conf on LXC Nodes for rp_filter

Add the per-interface rp_filter entries shown at the end of the file below. Note that these are required for multicast to work as well.

[root@lxcora03 ~]# cat /etc/sysctl.conf

# Kernel sysctl configuration file for Red Hat Linux

#

# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and

# sysctl.conf(5) for more details.

# Controls IP packet forwarding

net.ipv4.ip_forward = 0

# Controls source route verification

net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing

net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel

kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.

# Useful for debugging multi-threaded applications.

kernel.core_uses_pid = 1

# Controls the use of TCP syncookies

net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.

net.bridge.bridge-nf-call-ip6tables = 0

net.bridge.bridge-nf-call-iptables = 0

net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue

kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes

kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes

kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages

kernel.shmall = 4294967296

# Oracle

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

kernel.panic_on_oops = 1

net.ipv4.conf.eth1.rp_filter = 2

net.ipv4.conf.eth2.rp_filter = 2

net.ipv4.conf.eth3.rp_filter = 2

net.ipv4.conf.eth4.rp_filter = 2

net.ipv4.conf.eth5.rp_filter = 2

net.ipv4.conf.eth6.rp_filter = 2

[root@lxcora03 ~]#

Then ran "sysctl -p" to have new settings take effect, and it worked in the LXC containers.

[root@lxcora03 ~]# sysctl -p

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

error: "net.ipv4.tcp_syncookies" is an unknown key

error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key

error: "net.bridge.bridge-nf-call-iptables" is an unknown key

error: "net.bridge.bridge-nf-call-arptables" is an unknown key

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500

error: "net.core.rmem_default" is an unknown key

error: "net.core.rmem_max" is an unknown key

error: "net.core.wmem_default" is an unknown key

error: "net.core.wmem_max" is an unknown key

kernel.panic_on_oops = 1

net.ipv4.conf.eth1.rp_filter = 2

net.ipv4.conf.eth2.rp_filter = 2

net.ipv4.conf.eth3.rp_filter = 2

net.ipv4.conf.eth4.rp_filter = 2

net.ipv4.conf.eth5.rp_filter = 2

net.ipv4.conf.eth6.rp_filter = 2

[root@lxcora03 ~]#
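The "unknown key" parameters above are not visible inside the container on this kernel; they can be set on the Ubuntu host instead, where they apply globally. A sketch (values taken from the container's sysctl.conf above):

# On the Ubuntu host
sudo sysctl -w net.core.rmem_max=4194304
sudo sysctl -w net.core.wmem_max=1048576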

Install Oracle 12.1.0.2.0 ASM Flex Clusterware

Putting in the screenshots now.

Note that on the next installer page, the prerequisite checks, the errors shown are acceptable for the LXC container install.

Execute root.sh Scripts on Each LXC Node

Execute the first script, orainstRoot.sh, on lxcora02 and then on lxcora03; the lxcora03 run is shown below.

[root@lxcora03 ~]# /u00/app/12.1.0/oraInventory/orainstRoot.sh

Changing permissions of /u00/app/12.1.0/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u00/app/12.1.0/oraInventory to oinstall.

The execution of the script is complete.

[root@lxcora03 ~]#

Run the second root script as shown below on lxcora02.

[root@lxcora02 ~]# /u00/app/grid/product/12.1.0/grid/root.sh

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u00/app/grid/product/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u00/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

2014/10/29 19:46:56 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

TFA-00012: Oracle Trace File Analyzer (TFA) requires BASH shell. Please install bash and try again.

2014/10/29 19:47:05 CLSRSC-4004: Failed to install Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.

2014/10/29 19:47:12 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

2014/10/29 19:49:29 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2672: Attempting to start 'ora.evmd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.mdnsd' on 'lxcora02'

CRS-2676: Start of 'ora.mdnsd' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.evmd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'lxcora02'

CRS-2676: Start of 'ora.gpnpd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lxcora02'

CRS-2672: Attempting to start 'ora.gipcd' on 'lxcora02'

CRS-2676: Start of 'ora.cssdmonitor' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.gipcd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.diskmon' on 'lxcora02'

CRS-2676: Start of 'ora.diskmon' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.cssd' on 'lxcora02' succeeded

ASM created and started successfully.

Disk Group SYSTEMDG created successfully.

CRS-2672: Attempting to start 'ora.crf' on 'lxcora02'

CRS-2672: Attempting to start 'ora.storage' on 'lxcora02'

CRS-2676: Start of 'ora.crf' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.storage' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'lxcora02'

CRS-2676: Start of 'ora.crsd' on 'lxcora02' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 4cf21b2b34044fcabfdfedfe2aea7644.

Successful addition of voting disk 36febb54b8bc4fa2bfee0509b3f4ab1e.

Successful addition of voting disk 1a7d8c36835a4fa4bf27dd1bf0edc23d.

Successfully replaced voting disk group with +SYSTEMDG.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 4cf21b2b34044fcabfdfedfe2aea7644 (/dev/mapper/asm_disk_lxc1_systemdg_01) [SYSTEMDG]

2. ONLINE 36febb54b8bc4fa2bfee0509b3f4ab1e (/dev/mapper/asm_disk_lxc1_systemdg_02) [SYSTEMDG]

3. ONLINE 1a7d8c36835a4fa4bf27dd1bf0edc23d (/dev/mapper/asm_disk_lxc1_systemdg_03) [SYSTEMDG]

Located 3 voting disk(s).

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lxcora02'

CRS-2673: Attempting to stop 'ora.crsd' on 'lxcora02'

CRS-2677: Stop of 'ora.crsd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'lxcora02'

CRS-2677: Stop of 'ora.ctssd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.evmd' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.storage' on 'lxcora02'

CRS-2677: Stop of 'ora.gpnpd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.storage' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'lxcora02'

CRS-2677: Stop of 'ora.evmd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.asm' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lxcora02'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'lxcora02'

CRS-2677: Stop of 'ora.cssd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'lxcora02'

CRS-2677: Stop of 'ora.crf' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'lxcora02'

CRS-2677: Stop of 'ora.gipcd' on 'lxcora02' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lxcora02' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.mdnsd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.evmd' on 'lxcora02'

CRS-2676: Start of 'ora.mdnsd' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.evmd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'lxcora02'

CRS-2676: Start of 'ora.gpnpd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'lxcora02'

CRS-2676: Start of 'ora.gipcd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lxcora02'

CRS-2676: Start of 'ora.cssdmonitor' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.diskmon' on 'lxcora02'

CRS-2676: Start of 'ora.diskmon' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.cssd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lxcora02'

CRS-2672: Attempting to start 'ora.ctssd' on 'lxcora02'

CRS-2676: Start of 'ora.ctssd' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'lxcora02'

CRS-2676: Start of 'ora.asm' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.storage' on 'lxcora02'

CRS-2676: Start of 'ora.storage' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.crf' on 'lxcora02'

CRS-2676: Start of 'ora.crf' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'lxcora02'

CRS-2676: Start of 'ora.crsd' on 'lxcora02' succeeded

CRS-6023: Starting Oracle Cluster Ready Services-managed resources

CRS-6017: Processing resource auto-start for servers: lxcora02

CRS-6016: Resource auto-start has completed for server lxcora02

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2014/10/29 20:00:45 CLSRSC-343: Successfully started Oracle Clusterware stack

CRS-2672: Attempting to start 'ora.net1.network' on 'lxcora02'

CRS-2676: Start of 'ora.net1.network' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gns.vip' on 'lxcora02'

CRS-2676: Start of 'ora.gns.vip' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gns' on 'lxcora02'

CRS-2676: Start of 'ora.gns' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'lxcora02'

CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'lxcora02'

CRS-2676: Start of 'ora.asm' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.SYSTEMDG.dg' on 'lxcora02'

CRS-2676: Start of 'ora.SYSTEMDG.dg' on 'lxcora02' succeeded

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lxcora02'

CRS-2673: Attempting to stop 'ora.crsd' on 'lxcora02'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lxcora02'

CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.oc4j' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.lxcora02.vip' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.cvu' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.gns' on 'lxcora02'

CRS-2677: Stop of 'ora.cvu' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'lxcora02'

CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.scan2.vip' on 'lxcora02'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.scan3.vip' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'lxcora02'

CRS-2677: Stop of 'ora.asm' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'lxcora02'

CRS-2677: Stop of 'ora.lxcora02.vip' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.scan3.vip' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.scan2.vip' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.gns' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.gns.vip' on 'lxcora02'

CRS-2677: Stop of 'ora.oc4j' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.scan1.vip' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.gns.vip' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'lxcora02'

CRS-2677: Stop of 'ora.ons' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'lxcora02'

CRS-2677: Stop of 'ora.net1.network' on 'lxcora02' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lxcora02' has completed

CRS-2677: Stop of 'ora.crsd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.storage' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'lxcora02'

CRS-2677: Stop of 'ora.storage' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'lxcora02'

CRS-2677: Stop of 'ora.gpnpd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.asm' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lxcora02'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'lxcora02'

CRS-2673: Attempting to stop 'ora.evmd' on 'lxcora02'

CRS-2677: Stop of 'ora.ctssd' on 'lxcora02' succeeded

CRS-2677: Stop of 'ora.evmd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'lxcora02'

CRS-2677: Stop of 'ora.cssd' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'lxcora02'

CRS-2677: Stop of 'ora.crf' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'lxcora02'

CRS-2677: Stop of 'ora.gipcd' on 'lxcora02' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lxcora02' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.mdnsd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.evmd' on 'lxcora02'

CRS-2676: Start of 'ora.mdnsd' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.evmd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'lxcora02'

CRS-2676: Start of 'ora.gpnpd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'lxcora02'

CRS-2676: Start of 'ora.gipcd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lxcora02'

CRS-2676: Start of 'ora.cssdmonitor' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'lxcora02'

CRS-2672: Attempting to start 'ora.diskmon' on 'lxcora02'

CRS-2676: Start of 'ora.diskmon' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.cssd' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lxcora02'

CRS-2672: Attempting to start 'ora.ctssd' on 'lxcora02'

CRS-2676: Start of 'ora.ctssd' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'lxcora02'

CRS-2676: Start of 'ora.asm' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.storage' on 'lxcora02'

CRS-2676: Start of 'ora.storage' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.crf' on 'lxcora02'

CRS-2676: Start of 'ora.crf' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'lxcora02'

CRS-2676: Start of 'ora.crsd' on 'lxcora02' succeeded

CRS-6023: Starting Oracle Cluster Ready Services-managed resources

CRS-2664: Resource 'ora.SYSTEMDG.dg' is already running on 'lxcora02'

CRS-6017: Processing resource auto-start for servers: lxcora02

CRS-2672: Attempting to start 'ora.oc4j' on 'lxcora02'

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02'

CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'lxcora02'

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'lxcora02' succeeded

CRS-2676: Start of 'ora.oc4j' on 'lxcora02' succeeded

CRS-6016: Resource auto-start has completed for server lxcora02

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2014/10/29 20:06:31 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@lxcora02 ~]#

Run the second root script on the second LXC node, lxcora03, as shown below.

[root@lxcora03 ~]# /u00/app/grid/product/12.1.0/grid/root.sh

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u00/app/grid/product/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u00/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

2014/10/29 20:09:22 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

TFA-00012: Oracle Trace File Analyzer (TFA) requires BASH shell. Please install bash and try again.

2014/10/29 20:09:31 CLSRSC-4004: Failed to install Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.

2014/10/29 20:09:32 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful

2014/10/29 20:10:41 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.mdnsd' on 'lxcora03'

CRS-2672: Attempting to start 'ora.evmd' on 'lxcora03'

CRS-2676: Start of 'ora.mdnsd' on 'lxcora03' succeeded

CRS-2676: Start of 'ora.evmd' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'lxcora03'

CRS-2676: Start of 'ora.gpnpd' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'lxcora03'

CRS-2676: Start of 'ora.gipcd' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lxcora03'

CRS-2676: Start of 'ora.cssdmonitor' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'lxcora03'

CRS-2672: Attempting to start 'ora.diskmon' on 'lxcora03'

CRS-2676: Start of 'ora.diskmon' on 'lxcora03' succeeded

CRS-2676: Start of 'ora.cssd' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lxcora03'

CRS-2672: Attempting to start 'ora.ctssd' on 'lxcora03'

CRS-2676: Start of 'ora.ctssd' on 'lxcora03' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'lxcora03'

CRS-2676: Start of 'ora.asm' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.storage' on 'lxcora03'

CRS-2676: Start of 'ora.storage' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.crf' on 'lxcora03'

CRS-2676: Start of 'ora.crf' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'lxcora03'

CRS-2676: Start of 'ora.crsd' on 'lxcora03' succeeded

CRS-6017: Processing resource auto-start for servers: lxcora03

CRS-2672: Attempting to start 'ora.net1.network' on 'lxcora03'

CRS-2676: Start of 'ora.net1.network' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.ons' on 'lxcora03'

CRS-2676: Start of 'ora.ons' on 'lxcora03' succeeded

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'lxcora02' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'lxcora02'

CRS-2677: Stop of 'ora.scan1.vip' on 'lxcora02' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'lxcora03'

CRS-2676: Start of 'ora.scan1.vip' on 'lxcora03' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'lxcora03'

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'lxcora03' succeeded

CRS-6016: Resource auto-start has completed for server lxcora03

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2014/10/29 20:14:58 CLSRSC-343: Successfully started Oracle Clusterware stack

2014/10/29 20:15:10 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@lxcora03 ~]#

Some Details About the Installed ASM Flex Cluster on LXC Nodes

Some details about the installed system are shown below.

[grid@lxcora02 ~]$ srvctl config scan

SCAN name: lxc1-scan.gns1.vmem.org, Network: 1

Subnet IPv4: 10.207.39.0/255.255.255.0/eth0, dhcp

Subnet IPv6:

SCAN 0 IPv4 VIP: -/scan1-vip/10.207.39.100

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

SCAN 1 IPv4 VIP: -/scan2-vip/10.207.39.101

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

SCAN 2 IPv4 VIP: -/scan3-vip/10.207.39.102

SCAN VIP is enabled.

SCAN VIP is individually enabled on nodes:

SCAN VIP is individually disabled on nodes:

[grid@lxcora02 ~]$ nslookup lxc1-scan

Server: 10.207.39.3

Address: 10.207.39.3#53

Name: lxc1-scan.gns1.vmem.org

Address: 10.207.39.100

Name: lxc1-scan.gns1.vmem.org

Address: 10.207.39.101

Name: lxc1-scan.gns1.vmem.org

Address: 10.207.39.102

[grid@lxcora02 ~]$ srvctl config gns

GNS is enabled.

GNS VIP addresses: 10.207.39.3

Domain served by GNS: gns1.vmem.org

[grid@lxcora02 ~]$ ps -ef | grep pmon

grid 8652 8465 0 20:40 lxc/console 00:00:00 grep pmon

grid 21404 1 0 20:05 ? 00:00:00 asm_pmon_+ASM1

grid 29026 1 0 20:22 ? 00:00:00 mdb_pmon_-MGMTDB

[grid@lxcora02 ~]$ ssh root@lxcora03

root@lxcora03's password:

Last login: Wed Oct 29 20:36:59 2014

[root@lxcora03 ~]# ps -ef | grep pmon

grid 15383 1 0 20:14 ? 00:00:00 asm_pmon_+ASM2

root 25811 25799 0 20:40 pts/0 00:00:00 grep pmon

[root@lxcora03 ~]# crsctl stat res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE lxcora02 STABLE

OFFLINE OFFLINE lxcora03 STABLE

ora.LISTENER.lsnr

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.SYSTEMDG.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.net1.network

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.ons

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE lxcora03 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE lxcora02 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE lxcora02 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE lxcora02 169.254.40.120 192.2

10.39.10 192.211.39.

10,STABLE

ora.asm

1 ONLINE ONLINE lxcora02 Started,STABLE

2 ONLINE ONLINE lxcora03 Started,STABLE

3 OFFLINE OFFLINE STABLE

ora.cvu

1 ONLINE ONLINE lxcora02 STABLE

ora.gns

1 ONLINE ONLINE lxcora02 STABLE

ora.gns.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora02.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora03.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.mgmtdb

1 ONLINE ONLINE lxcora02 Open,STABLE

ora.oc4j

1 ONLINE ONLINE lxcora02 STABLE

ora.scan1.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.scan2.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.scan3.vip

1 ONLINE ONLINE lxcora02 STABLE

--------------------------------------------------------------------------------

[root@lxcora03 ~]#

Completed System

Picture of the completed system: Oracle Enterprise Linux 6.5 LXC Linux Containers running on Ubuntu 14.04.1.

Some details of the system from the Ubuntu 14.04.1 host are shown below. Notice that the Oracle Grid Infrastructure processes are visible from the base OS even though they run inside the LXC containers.

gstanden@vmem1:~$ uname -a

Linux vmem1.vmem.org 3.13.11.6 #1 SMP Mon Sep 15 11:54:55 CDT 2014 x86_64 x86_64 x86_64 GNU/Linux

gstanden@vmem1:~$

gstanden@vmem1:~$ cat /etc/lsb-release

DISTRIB_ID=Ubuntu

DISTRIB_RELEASE=14.04

DISTRIB_CODENAME=trusty

DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

gstanden@vmem1:~$

gstanden@vmem1:~$ ps -ef | grep pmon

1098 14706 12394 0 22:47 ? 00:00:00 asm_pmon_+ASM1

1098 15499 12394 0 22:48 ? 00:00:00 mdb_pmon_-MGMTDB

1098 30057 28706 0 22:53 ? 00:00:00 asm_pmon_+ASM2

gstanden 31975 30291 0 23:04 pts/36 00:00:00 grep --color=auto pmon

gstanden@vmem1:~$ sudo lxc-ls -f

[sudo] password for gstanden:

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

lxcora01 STOPPED - - - NO

lxcora02 RUNNING 10.207.39.101, 10.207.39.102, 10.207.39.3, 10.207.39.87, 10.207.39.97, 169.254.133.43, 169.254.254.234, 169.254.56.148, 169.254.88.208, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora03 RUNNING 10.207.39.100, 10.207.39.88, 10.207.39.93, 169.254.122.71, 169.254.141.10, 169.254.223.146, 169.254.28.2, 172.220.40.11, 172.221.40.11, 192.210.39.11, 192.211.39.11, 192.212.39.11, 192.213.39.11 - - NO

gstanden@vmem1:~$

Install RAC Database

Disable Transparent Huge Pages

On the Ubuntu host it is necessary to disable Transparent Huge Pages when running OEL 6 containers on Ubuntu's 3.x kernels, as described here at Oracle-Base: Configuring HugePages for Oracle on Linux (x86-64).

There is a tool on Ubuntu 14.04 that can be used to manage hugepages as shown below.

gstanden@vmem1:~$ hugeadm

The program 'hugeadm' is currently not installed. You can install it by typing:

sudo apt-get install hugepages

gstanden@vmem1:~$ sudo apt-get install hugepages

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following package was automatically installed and is no longer required:

linux-image-extra-3.13.0-32-generic

Use 'apt-get autoremove' to remove it.

The following extra packages will be installed:

libhugetlbfs0

Suggested packages:

libhugetlbfs-tests

The following NEW packages will be installed:

hugepages libhugetlbfs0

0 upgraded, 2 newly installed, 0 to remove and 6 not upgraded.

Need to get 104 kB of archives.

After this operation, 348 kB of additional disk space will be used.

Do you want to continue? [Y/n] y

Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/universe libhugetlbfs0 amd64 2.17-0ubuntu2 [55.4 kB]

Get:2 http://us.archive.ubuntu.com/ubuntu/ trusty/universe hugepages amd64 2.17-0ubuntu2 [48.4 kB]

Fetched 104 kB in 0s (223 kB/s)

Selecting previously unselected package libhugetlbfs0.

(Reading database ... 374320 files and directories currently installed.)

Preparing to unpack .../libhugetlbfs0_2.17-0ubuntu2_amd64.deb ...

Unpacking libhugetlbfs0 (2.17-0ubuntu2) ...

Selecting previously unselected package hugepages.

Preparing to unpack .../hugepages_2.17-0ubuntu2_amd64.deb ...

Unpacking hugepages (2.17-0ubuntu2) ...

Processing triggers for man-db (2.6.7.1-1ubuntu1) ...

Setting up libhugetlbfs0 (2.17-0ubuntu2) ...

Setting up hugepages (2.17-0ubuntu2) ...

Processing triggers for libc-bin (2.19-0ubuntu6.3) ...

gstanden@vmem1:~$

Here are some details about the usage of the "hugeadm" tool; its manpage is documented here.

gstanden@vmem1:~$ hugeadm

hugeadm [options]

options:

--list-all-mounts List all current hugetlbfs mount points

--pool-list List all pools

--hard specified with --pool-pages-min to make

multiple attempts at adjusting the pool size to the

specified count on failure

--pool-pages-min <size|DEFAULT>:[+|-]<pagecount|memsize<G|M|K>>

Adjust pool 'size' lower bound

--obey-mempolicy Obey the NUMA memory policy when

adjusting the pool 'size' lower bound

--thp-always Enable transparent huge pages always

--thp-madvise Enable transparent huge pages with madvise

--thp-never Disable transparent huge pages

--thp-khugepaged-pages <pages to scan> Number of pages that khugepaged

should scan on each pass

--thp-khugepaged-scan-sleep <milliseconds> Time in ms to sleep between

khugepaged passes

--thp-khugepages-alloc-sleep <milliseconds> Time in ms for khugepaged

to wait if there was a huge page allocation failure

--pool-pages-max <size|DEFAULT>:[+|-]<pagecount|memsize<G|M|K>>

Adjust pool 'size' upper bound

--set-recommended-min_free_kbytes

Sets min_free_kbytes to a recommended value to improve availability of

huge pages at runtime

--set-recommended-shmmax Sets shmmax to a recommended value to

maximise the size possible for shared memory pools

--set-shm-group <gid|groupname> Sets hugetlb_shm_group to the

specified group, which has permission to use hugetlb shared memory pools

--add-temp-swap[=count] Specified with --pool-pages-min to create

temporary swap space for the duration of the pool resize. Default swap

size is 5 huge pages. Optional arg sets size to 'count' huge pages

--add-ramdisk-swap Specified with --pool-pages-min to create

swap space on ramdisks. By default, swap is removed after the resize.

--persist Specified with --add-temp-swap or --add-ramdisk-swap

options to make swap space persist after the resize.

--enable-zone-movable Use ZONE_MOVABLE for huge pages

--disable-zone-movable Do not use ZONE_MOVABLE for huge pages

--create-mounts Creates a mount point for each available

huge page size on this system under /var/lib/hugetlbfs

--create-user-mounts <user>

Creates a mount point for each available huge

page size under /var/lib/hugetlbfs/<user>

usable by user <user>

--create-group-mounts <group>

Creates a mount point for each available huge

page size under /var/lib/hugetlbfs/<group>

usable by group <group>

--create-global-mounts

Creates a mount point for each available huge

page size under /var/lib/hugetlbfs/global

usable by anyone

--max-size <size<G|M|K>> Limit the filesystem size of a new mount point

--max-inodes <number> Limit the number of inodes on a new mount point

--page-sizes Display page sizes that a configured pool

--page-sizes-all Display page sizes support by the hardware

--dry-run Print the equivalent shell commands for what

the specified options would have done without

taking any action

--explain Gives a overview of the status of the system

with respect to huge page availability

--verbose <level>, -v Increases/sets tracing levels

--help, -h Prints this message

gstanden@vmem1:~$

Disabling transparent huge pages is accomplished with this tool as shown below; note the persistence caveat after the listing.

gstanden@vmem1:~$ cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never

gstanden@vmem1:~$ sudo hugeadm --thp-never

gstanden@vmem1:~$ cat /sys/kernel/mm/transparent_hugepage/enabled

always madvise [never]

gstanden@vmem1:~$
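Note that hugeadm --thp-never writes to /sys and does not survive a reboot. One way to make it persistent (an assumption on my part, not part of the original setup; check the binary path with "which hugeadm") is to re-issue it at boot from /etc/rc.local:

# In /etc/rc.local on the Ubuntu host, above the final 'exit 0' line
/usr/bin/hugeadm --thp-never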

Ensure the SCST Linux iSCSI SAN Has Sufficient Space

Ran into an issue where the KVM SCST Linux SAN ran out of space on the LVM volume where the file-backed LUNs for SCST for this database are stored. This resulted in one of the DATA LUNs having a status of CLOSED ONLINE MEMBER while showing 0 size, among other oddities, and the diskgroup was basically unusable although it would mount.

It turned out there was space on the VG that had somehow not been allocated, so that was fixed. Then it was necessary to add the LUN back to the diskgroup. The post referenced here, at the website IT with Coffee, was very helpful in getting the LUN added back into the normal-redundancy DATA ASM diskgroup. The command was simply as shown below.

alter diskgroup DATA add disk '/dev/mapper/asm_disk_lxc1_data_01' force;

Also, the work getting the free space in vg_scst added to lv_vmem1 is worth noting for future reference. vgs showed 40.00 GB available on physical volume /dev/vdg (which is 100 GB in size, but only 60 GB was in use, with 40 GB still free). The commands used to add the remaining space to lv_vmem1 are shown below; for some reason it had to be done stepwise, but in the end all available free space on the PV was added to lv_vmem1. After that, the RAC database was created again, this time admin-managed rather than policy-managed ("Pools").

lvextend -L+39G /dev/mapper/vg_scst-lv_vmem1       # grow the LV by 39G first

resize2fs /dev/mapper/vg_scst-lv_vmem1             # grow the ext filesystem to match

lvextend -l+100%FREE /dev/mapper/vg_scst-lv_vmem1  # then take the remaining free extents

resize2fs /dev/mapper/vg_scst-lv_vmem1             # and grow the filesystem again

gstanden@vmem1:~$ ssh -Y -C oracle@lxcora02

oracle@lxcora02's password:

Last login: Thu Oct 30 00:03:20 2014 from 10.207.39.1

[oracle@lxcora02 ~]$ cd database

[oracle@lxcora02 database]$ ./runInstaller

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 324014 MB Passed

Checking swap space: must be greater than 150 MB. Actual 29991 MB Passed

Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-10-30_11-04-43AM. Please wait ...[oracle@lxcora02 database]$

During the install of the RAC DB, it was found that there was insufficient space for the RAC database in the DATA diskgroup. Here are the steps for resizing the LUNs in this particular case; the general procedure is documented here at this blog.

Edit the original file used to create the LUNs to use a new "seek" value. With bs=512 and count=0, dd writes no data; the seek value simply sets the (sparse) file size in 512-byte sectors. In this case a bit over 2x the space is desired, so change the value from "20M" to "45M" as shown below.

[root@oracle651 scripts]# cat dd-create-DATA-luns.sh

dd if=/dev/zero of=/scst_vmem1/AsmDat_01.img bs=512 count=0 seek=20M

dd if=/dev/zero of=/scst_vmem1/AsmDat_02.img bs=512 count=0 seek=20M

dd if=/dev/zero of=/scst_vmem1/AsmDat_03.img bs=512 count=0 seek=20M

[root@oracle651 scripts]# cp -p dd-create-DATA-luns.sh dd-extend-DATA-luns.sh

[root@oracle651 scripts]# vi dd-extend-DATA-luns.sh

[root@oracle651 scripts]# cat dd-extend-DATA-luns.sh

dd if=/dev/zero of=/scst_vmem1/AsmDat_01.img bs=512 count=0 seek=45M

dd if=/dev/zero of=/scst_vmem1/AsmDat_02.img bs=512 count=0 seek=45M

dd if=/dev/zero of=/scst_vmem1/AsmDat_03.img bs=512 count=0 seek=45M

Now extend the LUNs using the script as shown below.

[root@oracle651 scripts]# ./dd-extend-DATA-luns.sh

0+0 records in

0+0 records out

0 bytes (0 B) copied, 0.00144059 s, 0.0 kB/s

0+0 records in

0+0 records out

0 bytes (0 B) copied, 0.000682248 s, 0.0 kB/s

0+0 records in

0+0 records out

0 bytes (0 B) copied, 0.00102967 s, 0.0 kB/s

Verify that the LUNs have been resized as shown below.

[root@oracle651 scst_vmem1]# ls -lrt /scst_vmem1/AsmDat*

-rw-r--r-- 1 root root 24159191040 Oct 30 13:50 /scst_vmem1/AsmDat_03.img

-rw-r--r-- 1 root root 24159191040 Oct 30 13:50 /scst_vmem1/AsmDat_02.img

-rw-r--r-- 1 root root 24159191040 Oct 30 13:50 /scst_vmem1/AsmDat_01.img

[root@oracle651 scst_vmem1]#

Resync the new sizes with scstadmin, then verify the status of the group and target after the resync operation, as shown below.

[root@oracle651 scst_vmem1]# scstadmin -resync_dev AsmDat_01

Collecting current configuration: done.

-> Making requested changes.

-> Setting device attribute 'resync_size' to value '1' for device 'AsmDat_01': done.

-> Done.

All done.

[root@oracle651 scst_vmem1]# scstadmin -resync_dev AsmDat_02

Collecting current configuration: done.

-> Making requested changes.

-> Setting device attribute 'resync_size' to value '1' for device 'AsmDat_02': done.

-> Done.

All done.

[root@oracle651 scst_vmem1]# scstadmin -resync_dev AsmDat_03

Collecting current configuration: done.

-> Making requested changes.

-> Setting device attribute 'resync_size' to value '1' for device 'AsmDat_03': done.

-> Done.

All done.

[root@oracle651 scst_vmem1]# scstadmin -list_group vmem1 -driver iscsi -target iqn.2014-10.org.vmem1:oracle651.san.asm.luns2

Collecting current configuration: done.

Assigned LUNs:

LUN Device

--------------

1 AsmFra_01

2 AsmFra_02

3 AsmFra_03

Assigned Initiators:

Initiator

-------------------------------------

iqn.2014-09.org.vmem1:oracle.asm.luns

All done.

[root@oracle651 scst_vmem1]#

Update the iSCSI client (vmem1) with the new sizes using multipathd's interactive resize command, for example: multipathd -k"resize map asm_disk_lxc1_data_01". As shown below, this fails here.

gstanden@vmem1:~$ sudo multipathd -k"resize map asm_disk_lxc1_data_01"

fail

gstanden@vmem1:~$

Trying again while actually logged in as root also fails, as shown below (I stubbornly tried a couple of times, with the same result of course!).

root@vmem1:~# multipathd -k

multipathd> resize map asm_disk_lxc1_data_01

fail

multipathd> resize map "asm_disk_lxc1_data_01"

fail

multipathd> exit

root@vmem1:~#

Try a different method, described here at sig-io, as shown below.

root@vmem1:~# blockdev --rereadpt /dev/sdt

root@vmem1:~# blockdev --rereadpt /dev/sdn

root@vmem1:~# blockdev --getsz /dev/sdn

47185920

root@vmem1:~# blockdev --getsz /dev/sdt

47185920

root@vmem1:

Dump and edit the mapping table as shown below. The second field of the table is the length of the map in 512-byte sectors: 20971520 sectors is the old 10 GiB size, and it is edited to 47185920, the new size reported by blockdev --getsz above.

root@vmem1:~# dmsetup table asm_disk_lxc1_data_01 | tee asm_disk_lxc1_data_01_mapping.bak asm_disk_lxc1_data_01_mapping.cur

0 20971520 multipath 0 0 2 1 round-robin 0 1 1 65:48 1000 round-robin 0 1 1 8:208 1000

root@vmem1:~# ls -lrt

total 24

drwxr-xr-x 2 root root 4096 Aug 16 18:11 Desktop

drwxr-xr-x 2 root root 4096 Aug 16 18:21 R-Studio

-rw-r--r-- 1 root root 2125 Sep 4 10:27 xamarin.gpg

drwxr-xr-x 19 root root 4096 Sep 15 11:14 scst

-rw-r--r-- 1 root root 88 Oct 30 14:27 asm_disk_lxc1_data_01_mapping.cur

-rw-r--r-- 1 root root 88 Oct 30 14:27 asm_disk_lxc1_data_01_mapping.bak

root@vmem1:~# more asm*

::::::::::::::

asm_disk_lxc1_data_01_mapping.bak

::::::::::::::

0 20971520 multipath 0 0 2 1 round-robin 0 1 1 65:48 1000 round-robin 0 1 1 8:208 1000

::::::::::::::

asm_disk_lxc1_data_01_mapping.cur

::::::::::::::

0 20971520 multipath 0 0 2 1 round-robin 0 1 1 65:48 1000 round-robin 0 1 1 8:208 1000

root@vmem1:~# vi asm_disk_lxc1_data_01_mapping.cur

root@vmem1:~# cat asm_disk_lxc1_data_01_mapping.cur

0 47185920 multipath 0 0 2 1 round-robin 0 1 1 65:48 1000 round-robin 0 1 1 8:208 1000

root@vmem1:~#

Apply the changes by reloading the edited map as shown below.

root@vmem1:~# dmsetup suspend asm_disk_lxc1_data_01; dmsetup reload asm_disk_lxc1_data_01 asm_disk_lxc1_data_01_mapping.cur; dmsetup resume asm_disk_lxc1_data_01

Verify that the procedure has worked. It has!

root@vmem1:~# multipath -ll -v2 asm_disk_lxc1_data_01

asm_disk_lxc1_data_01 (2626237633731622d) dm-11 SCST_FIO,AsmDat_01

size=22G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 15:0:0:1 sdt 65:48 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

`- 14:0:0:1 sdn 8:208 active ready running

root@vmem1:~# multipath -ll -v2 asm_disk_lxc1_data_02

asm_disk_lxc1_data_02 (23366353036663832) dm-6 SCST_FIO,AsmDat_02

size=10G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 15:0:0:2 sdu 65:64 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

`- 14:0:0:2 sdo 8:224 active ready running

root@vmem1:~#

Resize all of the other LUNs as needed using the same procedure. The final results are shown below.

root@vmem1:~# multipath -ll -v2 asm_disk_lxc1_data_01

asm_disk_lxc1_data_01 (2626237633731622d) dm-11 SCST_FIO,AsmDat_01

size=22G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 15:0:0:1 sdt 65:48 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

`- 14:0:0:1 sdn 8:208 active ready running

root@vmem1:~# multipath -ll -v2 asm_disk_lxc1_data_02

asm_disk_lxc1_data_02 (23366353036663832) dm-6 SCST_FIO,AsmDat_02

size=22G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 15:0:0:2 sdu 65:64 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

`- 14:0:0:2 sdo 8:224 active ready running

root@vmem1:~# multipath -ll -v2 asm_disk_lxc1_data_03

asm_disk_lxc1_data_03 (23263663266376635) dm-8 SCST_FIO,AsmDat_03

size=22G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 15:0:0:3 sdv 65:80 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

`- 14:0:0:3 sdp 8:240 active ready running

root@vmem1:~#

Finally, resize the disks in ASM as shown below.

[grid@lxcora02 ~]$ sqlplus "/ as sysasm"

SQL*Plus: Release 12.1.0.2.0 Production on Thu Oct 30 15:59:03 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

SQL> set linesize 200

SQL> column path format a40

SQL> select name, label, path, total_mb, header_status, mount_status, mode_status from v$asm_disk where name like '%DATA%';

NAME LABEL PATH TOTAL_MB HEADER_STATU MOUNT_S MODE_ST

------------------------------ ------------------------------- ---------------------------------------- ---------- ------------ ------- -------

DATA_0000 /dev/mapper/asm_disk_lxc1_data_01 10240 MEMBER CACHED ONLINE

DATA_0002 /dev/mapper/asm_disk_lxc1_data_03 10240 MEMBER CACHED ONLINE

DATA_0001 /dev/mapper/asm_disk_lxc1_data_02 10240 MEMBER CACHED ONLINE

3 rows selected.

SQL> alter diskgroup DATA resize all;

Diskgroup altered.

SQL> select name, label, path, total_mb, header_status, mount_status, mode_status from v$asm_disk where name like '%DATA%';

NAME LABEL PATH TOTAL_MB HEADER_STATU MOUNT_S MODE_ST

------------------------------ ------------------------------- ---------------------------------------- ---------- ------------ ------- -------

DATA_0000 /dev/mapper/asm_disk_lxc1_data_01 23040 MEMBER CACHED ONLINE

DATA_0002 /dev/mapper/asm_disk_lxc1_data_03 23040 MEMBER CACHED ONLINE

DATA_0001 /dev/mapper/asm_disk_lxc1_data_02 23040 MEMBER CACHED ONLINE

SQL>

Done.

Continue with the install, using DBCA this time since this is a re-do after the fixes.

[grid@lxcora03 grid]$ ./runcluvfy.sh comp scan -verbose

Verifying SCAN

Checking Single Client Access Name (SCAN)...

SCAN Name Node Running? ListenerName Port Running?

---------------- ------------ ------------ ------------ ------------ ------------

lxc1-scan.gns1.vmem.org lxcora03 true LISTENER_SCAN1 1521 true

lxc1-scan.gns1.vmem.org lxcora02 true LISTENER_SCAN2 1521 true

lxc1-scan.gns1.vmem.org lxcora02 true LISTENER_SCAN3 1521 true

Checking TCP connectivity to SCAN listeners...

Node ListenerName TCP connectivity?

------------ ------------------------ ------------------------

lxcora02 LISTENER_SCAN1 yes

lxcora02 LISTENER_SCAN2 yes

lxcora02 LISTENER_SCAN3 yes

TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "lxc1-scan.gns1.vmem.org"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

PRVG-2058 : "hosts" entry in the existing "/etc/nsswitch.conf" files is inconsistent

"hosts" entry was found as "hosts: dns files" on nodes: lxcora03

"hosts" entry was found as "hosts: dns files nisplus nis db" on nodes: lxcora02

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" failed

Checking SCAN IP addresses...

Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup failed

Verification of SCAN was unsuccessful on all the specified nodes.

[grid@lxcora03 grid]$

Cleaned up the /etc/nsswitch.conf files (removed the commented-out #hosts entries, which cluvfy apparently picks up) so that the "hosts" entry is consistent across both nodes, then re-ran the SCAN verification as shown below; it is successful this time.
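For reference, a sketch of the matching entry (per the failed check above, lxcora03 already reported this form; lxcora02 was made to match):

# "hosts" line in /etc/nsswitch.conf, identical on both nodes
hosts: dns files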

[grid@lxcora03 grid]$ ./runcluvfy.sh comp scan -verbose

Verifying SCAN

Checking Single Client Access Name (SCAN)...

SCAN Name Node Running? ListenerName Port Running?

---------------- ------------ ------------ ------------ ------------ ------------

lxc1-scan.gns1.vmem.org lxcora03 true LISTENER_SCAN1 1521 true

lxc1-scan.gns1.vmem.org lxcora02 true LISTENER_SCAN2 1521 true

lxc1-scan.gns1.vmem.org lxcora02 true LISTENER_SCAN3 1521 true

Checking TCP connectivity to SCAN listeners...

Node ListenerName TCP connectivity?

------------ ------------------------ ------------------------

lxcora02 LISTENER_SCAN1 yes

lxcora02 LISTENER_SCAN2 yes

lxcora02 LISTENER_SCAN3 yes

TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "lxc1-scan.gns1.vmem.org"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

SCAN Name IP Address Status Comment

------------ ------------------------ ------------------------ ----------

lxc1-scan.gns1.vmem.org 10.207.39.102 passed

lxc1-scan.gns1.vmem.org 10.207.39.101 passed

lxc1-scan.gns1.vmem.org 10.207.39.100 passed

Checking SCAN IP addresses...

Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup passed

Verification of SCAN was successful.

[grid@lxcora03 grid]$

Now re-run DBCA installer verification checks.

Run the root.sh scripts at the appropriate time, as shown below.

[root@lxcora02 ~]# /u00/app/oracle/product/12.1.0/dbhome_1/root.sh

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u00/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

[root@lxcora02 ~]# ssh root@lxcora03

root@lxcora03's password:

Last login: Thu Oct 30 16:47:36 2014 from 10.207.39.87

[root@lxcora03 ~]# /u00/app/oracle/product/12.1.0/dbhome_1/root.sh

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u00/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

[root@lxcora03 ~]#

Cluster Startup Failure on LXC Node 2 (lxcora03)

Ran into an issue where one LXC node would always fail to start HAS, and the cluster intermittently failed to start cluster services. This turned out to be a time synchronization problem with the Cluster Time Synchronization Service (CTSS), so NTP is used instead, as discussed below. For reference, the document consulted was Oracle Support document Troubleshoot Grid Infrastructure Startup Issues (Doc ID 1050908.1), which turned out NOT to describe the issue; none of the material in that document was needed, but it is a potentially useful document, so it is referenced here. The main fix is to configure NTP for the LXC nodes as described in the next section.

Configure NTP on Ubuntu Host

Install NTP as described here at eHowStuff (skip the "Malaysia" part unless you happen to live in Malaysia). For my system, these were the commands used as shown below.

gstanden@vmem1:~$ sudo apt-get install ntp -y

Then be sure that the Ubuntu host is set to use "Internet" for time synchronization by opening the "System Settings" GUI and checking "Automatically from the internet" as shown below. If using Ubuntu Server instead of Desktop, there will be a flat file to configure instead, as sketched below.
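On Ubuntu Server, the flat-file equivalent is a small sketch like the following (the ntp package reads /etc/ntp.conf, and its service is named "ntp"):

grep '^server' /etc/ntp.conf    # review the configured upstream time servers
sudo service ntp restart        # apply any edits to /etc/ntp.conf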

Finally, reboot the Ubuntu host and verify that NTP service starts automatically at boot as shown below.

gstanden@vmem1:~$ ps -ef | grep ntp

ntp 7995 1 0 15:27 ? 00:00:00 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 128:142

gstanden 8422 7914 0 15:36 pts/9 00:00:00 grep --color=auto ntp

gstanden@vmem1:~$ ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

50.7.0.147 .INIT. 16 u - 64 0 0.000 0.000 0.000

host2.kingrst.c .INIT. 16 u - 64 0 0.000 0.000 0.000

bindcat.fhsu.ed .INIT. 16 u - 64 0 0.000 0.000 0.000

70.35.113.43 .INIT. 16 u - 64 0 0.000 0.000 0.000

golem.canonical .INIT. 16 u - 64 0 0.000 0.000 0.000

10.207.39.255 .BCST. 16 u - 64 0 0.000 0.000 0.000

gstanden@vmem1:~$

Configure NTP on LXC Nodes

As noted above, the ntp service is needed for time synchronization on the LXC RAC nodes: it was found that relying on the Cluster Time Synchronization Service (CTSS) results in a problem where only one of the RAC nodes will start HAS successfully. HAS on lxcora02 will start but HAS on lxcora03 will not, or vice versa, but never both.

However, when NTP is used, both nodes start normally. Clearly, one solution would be to debug why CTSS causes this problem, but here NTP is simply used to work around the issue.

The next problem is that if NTP is installed using yum on the LXC RAC nodes and "chkconfig ntpd on" is run, the NTP service immediately dies when started. There is an error message associated with this problem in /var/log/messages, as shown below.

Mar 23 10:44:56 ntpd[10401]: cap_set_proc() failed to drop root privileges: Operation not permitted

After searching on this issue, one hit (in French) was found here, which was used to solve it. The poster at that link found that simply starting ntpd manually works. Therefore, here is the /etc/rc.local being used on the LXC RAC nodes to start ntpd (successfully) at boot.

gstanden@vmem1:~$ ssh root@lxcora02

root@lxcora02's password:

Last login: Sat Nov 1 15:48:28 2014 from 10.207.39.88

[root@lxcora02 ~]# cat /etc/rc.local

#!/bin/sh

#

# This script will be executed *after* all the other init scripts.

# You can put your own initialization stuff in here if you don't

# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# Oracle ASM Storage

chown grid:asmadmin /dev/mapper/asm_disk*

chmod 0660 /dev/mapper/asm_disk*

# NTP

ntpd

[root@lxcora02 ~]#
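A slightly more defensive variant of the ntpd line in rc.local is sketched below (an assumption, not what is deployed above); it guards against a double start if the init script ever does come up:

# only start ntpd if it is not already running
pgrep -x ntpd >/dev/null || /usr/sbin/ntpd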

Also, since the Ubuntu host runs NTP, it has been configured in the LXC RAC nodes as the local time source, and the remote NTP servers have been commented out, so that the LXC nodes get time from the Ubuntu laptop and all three (the laptop and both LXC RAC nodes) keep the same time. The modified entries in the /etc/ntp.conf file on each LXC RAC node are shown below.

[root@lxcora02 ~]# cat /etc/ntp.conf

# For more information about this file, see the man pages

# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not

# permit the source to query or modify the service on this system.

restrict default kod nomodify notrap nopeer noquery

restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface. This could

# be tightened as well, but to do so would effect some of

# the administrative functions.

restrict 127.0.0.1

restrict -6 ::1

# Hosts on local network are less restricted.

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.

# Please consider joining the pool (http://www.pool.ntp.org/join.html).

server 10.207.39.1 prefer

# server 0.rhel.pool.ntp.org iburst

# server 1.rhel.pool.ntp.org iburst

# server 2.rhel.pool.ntp.org iburst

# server 3.rhel.pool.ntp.org iburst

#broadcast 192.168.1.255 autokey # broadcast server

#broadcastclient # broadcast client

#broadcast 224.0.1.1 autokey # multicast server

#multicastclient 224.0.1.1 # multicast client

#manycastserver 239.255.254.254 # manycast server

#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.

#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating

# with symmetric key cryptography.

keys /etc/ntp/keys

# Specify the key identifiers which are trusted.

#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.

#requestkey 8

# Specify the key identifier to use with the ntpq utility.

#controlkey 8

# Enable writing of statistics records.

#statistics clockstats cryptostats loopstats peerstats

[root@lxcora02 ~]#

Once these changes are made, reboot and verify NTP functionality as shown below.

[root@lxcora02 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

vmem1.vmem.org .INIT. 16 u - 64 0 0.000 0.000 0.000

[root@lxcora02 ~]#
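With NTP active on the nodes, Grid Infrastructure's Cluster Time Synchronization Service should drop to observer mode, which can be confirmed once the stack is up (a sketch; the exact message wording varies by version):

crsctl check ctss    # should report that CTSS is running in observer mode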

Optionally, one could simply point NTP on each of the LXC nodes at public time servers directly, rather than synchronizing to the Ubuntu host.

Automate Creation of SCST LUNs on Ubuntu LXC Host

The KVM guest "oracle652" can be autostarted using the "virsh autostart oracle652" command, but the storage LUNs must also be created on the Ubuntu host server; that is what is being worked on now. However, some code that was put into KVM guest oracle652 caused it to stop booting (it was waiting for a sudo password). To fix this, "guestfish" was used. Since this is an amazingly useful program for fixing these kinds of problems, it is documented here as shown below. (The operation performed below was to edit the /etc/init.d/scst file.)

gstanden@vmem1:~$ sudo guestfish -d oracle652

Welcome to guestfish, the guest filesystem shell for

editing virtual machine filesystems and disk images.

Type: 'help' for help on commands

'man' to read the manual

'quit' to quit the shell

><fs> run

><fs> list-filesystems

/dev/sda1: ext4

/dev/sdh1: iso9660

/dev/sdh2: vfat

/dev/vg_oracle651/lv_root: ext4

/dev/vg_oracle651/lv_swap: swap

/dev/vg_scst/lv_oracle631: ext4

/dev/vg_scst/lv_oracle632: ext4

/dev/vg_scst/lv_vmem1: ext4

><fs> mount /dev/vg_oracle651/lv_root /

><fs> vi /etc/init.d/scst

><fs>
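For a single-file fix like this one, the libguestfs tools also provide a one-command wrapper (a sketch, assuming the libguestfs-tools package is installed and the guest is shut down):

sudo virt-edit -d oracle652 /etc/init.d/scst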

GNS Configuration Notes

Just finished working all day, pulling hair out, plus cursing a bit (no one but me at home), and finally have it working. These are just some rough notes to capture the configuration files that are working so I have them. There were some very useful posts out on the net which I used, although in the end the Oracle Install Guide turned out to be fairly accurate. The page here at Oracle RAC with GNS was very helpful (the only caveat I have to this reference is that the author named the VIP "oralab-scan", which could be a bit misleading because this VIP has nothing to do per se with the SCAN name except to point to GNS, which resolves it). Still, the author of that page put in a wealth of detail that was absolutely critical to getting this done. Another good reference is the page at Martin Bach's blog here. Martin Bach also has another good post here.

The official Oracle documentation was quite useful as well: both the Oracle® Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux (E48914-12) and the related Oracle® Clusterware Administration and Deployment Guide 12c Release 1 (12.1) (E48819-07) are helpful.

Here is /etc/resolv.conf as shown below for the Ubuntu laptop host.

gstanden@vmem1:~$ cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

options attempts:1

options timeout:1

search vmem.org gns1.vmem.org

nameserver 10.207.39.1

nameserver 10.207.39.3

gstanden@vmem1:~$

Begin Update #1 2014 November 4

Found that the above /etc/resolv.conf does not work as well for my purposes as the following one, so the following /etc/resolv.conf is now the one in use, as shown below. Also shown is the /etc/network/interfaces Ubuntu configuration file which generates this /etc/resolv.conf file at bootup. The dns-* entries are the ones responsible for generating the required /etc/resolv.conf file.

gstanden@vmem1:~$ cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

nameserver 127.0.0.1

search vmem.org gns1.vmem.org

gstanden@vmem1:~$ cat /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)

auto lo

iface lo inet loopback

dns-domain vmem.org

dns-search gns1.vmem.org

dns-nameserver 127.0.0.1

gstanden@vmem1:~$
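Note that a reboot is not strictly required after editing /etc/network/interfaces; the resolvconf framework that owns /etc/resolv.conf can be asked to regenerate the file (a sketch):

sudo ifdown lo && sudo ifup lo    # re-register the dns-* hints with resolvconf
cat /etc/resolv.conf              # confirm the regenerated file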

Why is this better? Because it results in several resolution improvements, as listed below. As for why from a DNS engineering perspective, that is still being worked out; the empirical results with this modification are excellent, however, and it meets all resolution requirements, so it is in use now. A quick spot-check is sketched after the list.

  1. Both "lxc1-scan" and "lxc1-scan.gns1.vmem.org" resolve equally well and instantly.

  2. With the previous /etc/resolv.conf, "lxc1-scan" did not resolve at all, and resolution of the FQDN version was delayed by a few seconds.

  3. Internet Firefox page loads on the Ubuntu host are very fast with this new /etc/resolv.conf.
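These improvements can be spot-checked from the host (a quick sketch; getent exercises the nsswitch/search-domain path, while dig queries the local resolver directly):

getent hosts lxc1-scan                # the short name now resolves
dig +short lxc1-scan.gns1.vmem.org    # the FQDN resolves without delay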

End Update #1 2014 November 4

Here is /etc/nsswitch.conf for the Ubuntu laptop host

gstanden@vmem1:~$ cat /etc/nsswitch.conf

# /etc/nsswitch.conf

#

# Example configuration of GNU Name Service Switch functionality.

# If you have the `glibc-doc-reference' and `info' packages installed, try:

# `info libc "Name Service Switch"' for information about this file.

passwd: compat

group: compat

shadow: compat

#hosts: dns files mdns4_minimal [NOTFOUND=return]

hosts: files dns nis

networks: files

protocols: db files

services: db files

ethers: db files

rpc: db files

netgroup: nis

gstanden@vmem1:~$

Here is the /etc/resolv.conf for lxcora02 as shown below.

[root@lxcora02 ~]# cat /etc/resolv.conf

options attempts:2 timeout:1

; generated by /sbin/dhclient-script

search vmem.org gns1.vmem.org

nameserver 10.207.39.3

nameserver 10.207.39.1

[root@lxcora02 ~]#

And here is /etc/resolv.conf for lxcora03 as shown below.

[root@lxcora03 ~]# cat /etc/resolv.conf

options attempts:2 timeout:1

; generated by /sbin/dhclient-script

search vmem.org gns1.vmem.org

nameserver 10.207.39.3

nameserver 10.207.39.1

[root@lxcora03 ~]#

The forward zone file for the domain of the GNS ASM Flex Cluster is shown below. The lines added for GNS are the "lxc1-gns-vip" A record and the "$ORIGIN gns1.vmem.org." delegation at the end.

root@vmem1:/var/lib/bind# named-checkzone vmem.org fwd.vmem.org

zone vmem.org/IN: loaded serial 1411021420

OK

root@vmem1:/var/lib/bind# pwd

/var/lib/bind

root@vmem1:/var/lib/bind# ls -l fwd.vmem.org

-rw-r--r-- 1 bind bind 1096 Nov 2 14:20 fwd.vmem.org

root@vmem1:/var/lib/bind# named-checkzone vmem.org fwd.vmem.org

zone vmem.org/IN: loaded serial 1411021420

OK

root@vmem1:/var/lib/bind# cat fwd.vmem.org

$ORIGIN .

$TTL 86400 ; 1 day

vmem.org IN SOA vmem1.vmem.org. postmaster.vmem.org. (

1411021420 ; serial

60 ; refresh (1 minute)

1800 ; retry (30 minutes)

604800 ; expire (1 week)

86400 ; minimum (1 day)

)

NS vmem1.vmem.org.

$ORIGIN vmem.org.

_sflow._udp TXT "txtvers=1" "polling=20" "sampling=512"

SRV 0 0 6343 vmem1

$TTL 3600 ; 1 hour

lxcora01 A 10.207.39.85

TXT "001060b0cef034d0d8b3ec569ea6615637"

lxcora02 A 10.207.39.87

TXT "001d6eb17bf9f9081897f98548b528bfee"

lxcora03 A 10.207.39.88

TXT "0084a3923f9bbe6559acbb98034af0c10a"

$TTL 86400 ; 1 day

lxcora1 A 10.207.39.79

$TTL 3600 ; 1 hour

lxcora3 A 10.207.39.83

TXT "007e7554cced9024a1bac46cc4370293c6"

lxcora5 A 10.207.39.84

TXT "0092ff209913ebc9ca832c943e48eff86f"

$TTL 86400 ; 1 day

oracle631 A 10.207.39.72

oracle632 A 10.207.39.76

oracle652 A 10.207.39.74

vmem1 A 10.207.39.1

lxc1-gns-vip.vmem.org. A 10.207.39.3

$TTL 3600 ; 1 hour

vmem2 A 10.207.39.81

TXT "3118a6c1688c312b822454df8baa10bfff"

$ORIGIN gns1.vmem.org.

@ IN NS lxc1-gns-vip.vmem.org.

root@vmem1:/var/lib/bind#
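After hand-editing a zone file like this, remember to bump the serial and reload; with the bind9 packaging on Ubuntu that can be done as sketched below (working rndc configuration is assumed):

named-checkzone vmem.org /var/lib/bind/fwd.vmem.org    # validate before reloading
sudo rndc reload vmem.org                              # reload just this zone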

Note that "lxc1-gns-vip" is a COMPLETELY ARBITRARY choice of name (although the domain is not arbitrary). If we look at a table showing the choices that were made at install time for GNS it can be seen that that DNS name does not appear anywhere in the installed configuration. It is just literally a VIP. It could just as easily have been called "lxc1" (i.e. FQDN "lxc1-gns-vip"). For example, here is the table of relevant values for this GNS setup. The first row is this system just built. The second row are the values Martin Bach used, to help see the way the setup goes. I've constructed this table from the references that I used to help figure this out (the last column "Source" gives the link to the referenced webpage work). The table should help to understand how these diffrent authors completed the same task successfully to configure GNS.

There are some differences worth noting, and possibly worth trying to resolve: how these configurations differ slightly, and which, if any, is "better", or whether they are all equivalently "good" (i.e., all of these configurations work, so all roads lead to Rome, as they say).

One slight difference between my configuration and Martin Bach's is in the DNS forward lookup zone file. Here are my entries in that file (showing just the GNS lines, condensed for clarity).

lxc1-gns-vip.vmem.org. A 10.207.39.3

$ORIGIN gns1.vmem.org.

@ IN NS lxc1-gns-vip.vmem.org.

Here is what Martin Bach used, as shown below. Note that Martin's version has an "IN" on the "A" record line; some additional research is needed. I went without the "IN" on the "A" record because the checkzone program initially did not seem to like the record with it in there (but see further below).

$ORIGIN rac.localdomain.

@ IN NS gns.rac.localdomain.

gns.rac.localdomain. IN A 192.168.99.150

Tried checkzone on fwd.vmem.org both with and without the "IN" on the "A" record line, and both versions passed, so it appears this is optional. (In fact, "IN" here is the DNS record class, meaning "Internet"; the class defaults to IN when omitted, which is why both forms pass checkzone. The "A" is the record type, mapping a name to an IPv4 address.)

root@vmem1:/var/lib/bind# named-checkzone vmem.org fwd.vmem.org

zone vmem.org/IN: loaded serial 1411021420

OK

root@vmem1:/var/lib/bind# vi fwd.vmem.org

root@vmem1:/var/lib/bind# named-checkzone vmem.org fwd.vmem.org

zone vmem.org/IN: loaded serial 1411021420

OK

Recall that column 5, counting from the left, is set at install time as shown below. For example, "lxc1-scan" was used here. The non-advanced install will always set the SCAN name to "clustername-scan", but in the advanced install, as shown below, it looks possible to have a SCAN name that does not include the cluster name (but I would not recommend doing this).

Similar to "Mystery Author" the dig lookups can be run to verify correct DNS function as shown below.

gstanden@vmem1:~$ dig lxc1-gns-vip.vmem.org

; <<>> DiG 9.9.5-3-Ubuntu <<>> lxc1-gns-vip.vmem.org

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62496

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:

; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:

;lxc1-gns-vip.vmem.org. IN A

;; ANSWER SECTION:

lxc1-gns-vip.vmem.org. 86400 IN A 10.207.39.3

;; AUTHORITY SECTION:

vmem.org. 86400 IN NS vmem1.vmem.org.

;; ADDITIONAL SECTION:

vmem1.vmem.org. 86400 IN A 10.207.39.1

;; Query time: 0 msec

;; SERVER: 10.207.39.1#53(10.207.39.1)

;; WHEN: Sun Nov 02 18:35:57 CST 2014

;; MSG SIZE rcvd: 102

And for the reverse lookup as shown below.

gstanden@vmem1:~$ dig -x 10.207.39.3

; <<>> DiG 9.9.5-3-Ubuntu <<>> -x 10.207.39.3

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53494

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:

; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:

;3.39.207.10.in-addr.arpa. IN PTR

;; ANSWER SECTION:

3.39.207.10.in-addr.arpa. 86400 IN PTR lxc1-gns-vip.vmem.org.

;; AUTHORITY SECTION:

39.207.10.in-addr.arpa. 86400 IN NS vmem1.vmem.org.

;; ADDITIONAL SECTION:

vmem1.vmem.org. 86400 IN A 10.207.39.1

;; Query time: 0 msec

;; SERVER: 10.207.39.1#53(10.207.39.1)

;; WHEN: Sun Nov 02 18:36:12 CST 2014

;; MSG SIZE rcvd: 124

gstanden@vmem1:~$
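To check the delegation end-to-end, the GNS VIP itself can also be queried directly for a name in the delegated subdomain (a sketch; per the srvctl output further below, GNS listens for DNS requests on port 53 at the VIP):

dig @10.207.39.3 lxc1-scan.gns1.vmem.org +short    # ask GNS directly for the SCAN name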

These values can be checked using some utilities after installation as shown below (here run from lxcora02).

[grid@lxcora02 grid]$ ./runcluvfy.sh comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...

The GNS subdomain name "gns1.vmem.org" is a valid domain name

Checking if the GNS VIP belongs to same subnet as the public network...

Public network subnets "10.207.39.0" match the GNS VIP "10.207.39.3"

Checking if the GNS VIP is a valid address...

GNS VIP "10.207.39.3" resolves to a valid IP address

Checking the status of GNS VIP...

Checking if FDQN names for domain "gns1.vmem.org" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

Checking status of GNS resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

lxcora02 no yes

lxcora03 yes yes

GNS resource configuration check passed

Checking status of GNS VIP resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

lxcora02 no yes

lxcora03 yes yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Verification of GNS integrity was successful.

[grid@lxcora02 grid]$

Here is the same check run from lxcora03 as shown below.

[grid@lxcora03 grid]$ ./runcluvfy.sh comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...

The GNS subdomain name "gns1.vmem.org" is a valid domain name

Checking if the GNS VIP belongs to same subnet as the public network...

Public network subnets "10.207.39.0" match the GNS VIP "10.207.39.3"

Checking if the GNS VIP is a valid address...

GNS VIP "10.207.39.3" resolves to a valid IP address

Checking the status of GNS VIP...

Checking if FDQN names for domain "gns1.vmem.org" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

Checking status of GNS resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

lxcora02 no yes

lxcora03 yes yes

GNS resource configuration check passed

Checking status of GNS VIP resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

lxcora02 no yes

lxcora03 yes yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Verification of GNS integrity was successful.

[grid@lxcora03 grid]$

Also, there is an srvctl command that can be used as shown below from node lxcora02.

[grid@lxcora02 grid]$ srvctl config gns -a

GNS is enabled.

GNS is listening for DNS server requests on port 53

GNS is using port 5353 to connect to mDNS

GNS status: OK

Domain served by GNS: gns1.vmem.org

GNS version: 12.1.0.2.0

Globally unique identifier of the cluster where GNS is running: 381c21634d685f94ffd391a8528e606a

Name of the cluster where GNS is running: lxc1

Cluster type: server.

GNS log level: 1.

GNS listening addresses: tcp://10.207.39.3:11244.

GNS is individually enabled on nodes:

GNS is individually disabled on nodes:

[grid@lxcora02 grid]$

The output of "crsctl stat res -t" also shows GNS status as shown below in bold.

[grid@lxcora03 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.DATA.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.FRA.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.IOPS.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.LISTENER.lsnr

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.SYSTEMDG.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.UNDO1.dg

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.net1.network

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ora.ons

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE lxcora02 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE lxcora03 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE lxcora03 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE lxcora03 169.254.28.2 192.210

.39.11 192.211.39.11

,STABLE

ora.asm

1 ONLINE ONLINE lxcora02 Started,STABLE

2 ONLINE ONLINE lxcora03 Started,STABLE

3 OFFLINE OFFLINE STABLE

ora.cvu

1 ONLINE ONLINE lxcora03 STABLE

ora.gns

1 ONLINE ONLINE lxcora03 STABLE

ora.gns.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.lxcora02.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora03.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.mgmtdb

1 ONLINE ONLINE lxcora03 Open,STABLE

ora.oc4j

1 ONLINE ONLINE lxcora03 STABLE

ora.scan1.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.scan2.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.scan3.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.vmem1.db

1 ONLINE ONLINE lxcora02 Open,STABLE

2 ONLINE ONLINE lxcora03 Open,STABLE

--------------------------------------------------------------------------------

[grid@lxcora03 ~]$

There are a lot of other options for the srvctl command, as shown below, for GNS verification and info.

[grid@lxcora03 ~]$ srvctl config gns -help

Displays the configuration information for the GNS daemon.

Usage: srvctl config gns [-detail] [-subdomain] [-multicastport] [-node <node_name>] [-port] [-status] [-version] [-query <name>] [-list] [-clusterguid] [-clustername] [-clustertype] [-loglevel] [-network] [-verbose]

-detail Print detailed configuration information

-subdomain Display subdomain served by GNS

-multicastport Display the port on which the GNS daemon is listening for multicast requests

-node <node_name> Display the configuration information for GNS on the specified node.

-port Display the port which the GNS daemon uses to communicate with the DNS server.

-network Display network on which GNS is listening

-status Display the status of GNS

-version Display the version of GNS

-query <name> Query GNS for the records belonging to a name.

-list List all records in GNS.

-clusterguid Display the globally unique identifier of the cluster where GNS is running

-clustername Display the name of the cluster where GNS is running

-clustertype Display the type of configuration of GNS on this cluster

-loglevel Print the log level of the GNS

-verbose Verbose output

-help Print usage

[grid@lxcora03 ~]$ srvctl config gns -clustertype
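For example, a couple of the more useful variations from the help text above, as they could be run (a sketch; output omitted):

srvctl config gns -subdomain          # show the subdomain served by GNS (gns1.vmem.org here)
srvctl config gns -query lxc1-scan    # list the GNS records behind the SCAN name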

Connecting to the database via the SCAN name (run here from node lxcora02).

[oracle@lxcora02 ~]$ sqlplus system/Violin#1@//lxc1-scan.gns1.vmem.org:1521/VMEM1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Nov 3 00:29:48 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sun Nov 02 2014 21:28:06 -05:00

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> select name, open_mode from v$database;

NAME OPEN_MODE

--------- --------------------

VMEM1 READ WRITE

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

[oracle@lxcora02 ~]$

Connect to DB from Ubuntu Host

The database is accessible from the Ubuntu host using the Instant Client software from Oracle (free). Instructions on how to obtain and install the software are here. Connecting to the database using Oracle Instant Client is shown below.
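A minimal environment sketch for Instant Client on the Ubuntu host (the unzip location /opt/instantclient_12_1 is an assumption; adjust it to wherever the basic and sqlplus packages were extracted):

export PATH=/opt/instantclient_12_1:$PATH
export LD_LIBRARY_PATH=/opt/instantclient_12_1:${LD_LIBRARY_PATH:-}
sqlplus system/Violin#1@//lxc1-scan.gns1.vmem.org:1521/VMEM1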

gstanden@vmem1:~$ sqlplus system/Violin#1@//lxc1-scan.gns1.vmem.org:1521/VMEM1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Nov 3 12:05:55 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Nov 03 2014 09:41:48 -06:00

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

gstanden@vmem1:~$ sqlplus system/Violin#1@//lxc1-scan.gns1.vmem.org:1521/VMEM1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Nov 3 14:27:21 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Nov 03 2014 12:05:57 -06:00

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> select name, open_mode from v$database;

NAME OPEN_MODE

--------- --------------------

VMEM1 READ WRITE

SQL> select instance_name from v$instance;

INSTANCE_NAME

----------------

VMEM11

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

gstanden@vmem1:~$