Oracle RAC 6-node 12c GNS ASM Flex Cluster Ubuntu 15.04 Automated

Update: 2019-04-15

The latest release of Orabuntu-LXC includes full support for Oracle Grid Infrastructure 18c and for the RAC database on Ubuntu Linux 18.10 (other earlier versions are being tested for support). Simply download Orabuntu-LXC, deploy it, download Oracle GI and DB 18c software, and install it in the LXC containers. Orabuntu-LXC takes care of all pre-requisites required to run Oracle Grid Infrastructure 18c and Oracle Database 18c in Oracle Linux 7 LXC on Ubuntu Linux 18.10. A guide for deploying Oracle GI and DB 18c on Ubuntu Linux 18.10 will be available soon at the Orabuntu-LXC documentation site.

GENERAL NOTES

There is another, more succinct version of this blog page here which you can use for the install if you prefer. I will eventually retire one or the other, but at this point I felt there was a need for a page that is more step-by-step and to the point, without my opinions on various topics. I suggest reading through both this page and the other before doing the work.

What is this page? It is a guide that provides a set of scripts to prepare an Ubuntu 15.04 64-bit desktop to easily run Oracle Enterprise software products (such as Oracle Enterprise Edition 12c RAC ASM database), notably with NO HYPERVISOR needed, at bare-metal speed, using a straightforward and robust solution built on LXC Linux Containers. This is, I would argue, the easiest and most straightforward way to run Oracle Enterprise products on Ubuntu. Note that Ubuntu Linux is not an officially supported platform for Oracle Database software, so this should be considered for training and education purposes only. This approach to containerizing the Oracle database in LXC should work on any Linux platform that supports LXC and OpenvSwitch, so it could also be done on RedHat and its variants. Whether Oracle Corporation will support Oracle Database products in, say, OEL LXC containers running on an OEL base OS is a question that may not yet have been answered; I am not aware of Oracle's position statement on it.

NOTE BE SURE TO USE THE MOST RECENT VERSION OF THE SCRIPT BUNDLE AT THE END OF THIS BLOG PAGE.

This has been a work in progress. The first working bundle was the "v8" bundle, and later bundles work even better. As of this edit the latest is the "v11" bundle.

On Ubuntu, the file will download by default to /home/username/Downloads. Gunzip and untar the file, then untar scst-files.tar. Now you have all the files ready to go.

Google Sites only allows comments from authorized collaborators, so if you have any questions or comments, send to me at "gilstanden@hotmail.com". Thank you.

Why would anyone want to run Oracle Enterprise Products on Ubuntu? Well, why did Sir Edmund Hillary climb Mount Everest? In Sir Edmund's own words, "Because it was there." Not compelling enough for you? Ok, then consider this: Ubuntu is a non-trivial case of this general approach to running Oracle in LXC OEL containers on ANY Linux distribution that supports LXC and OpenvSwitch (i.e. RedHat), and the containerization of Oracle database instances achieves high density and elasticity without paying the "hypervisor performance penalty" - the LXC containers all run at "bare-metal speed".

I would also like to recognize and thank the oft-maligned Oracle Corporation which still, to this day, makes their Enterprise Database Software available as a free download to ALL (subject to the export and use requirements, of course) for educational and self-study purposes. Thank you.

This page is an automated install of Oracle 12c on Ubuntu 15.04 using Oracle Enterprise Linux 6.5 LXC containers, driven by an automated script install.

The networking is OpenvSwitch, which means you have a true Layer 2 production-grade switch as your networking solution.

It's worth noting that the setup includes a DNS-DHCP configuration that automatically assigns IPs to new LXC containers added to the OpenvSwitch sw1 10.207.39.x network, so you don't have to add them to DNS manually - it's all taken care of automatically. The DNS-DHCP setup is mostly due to the great blog at Big Dino, but that link seems to be broken, so I guess it's a good thing I scripted it all up and included it with this package!

There are so many people who, by blogging and posting information, helped to make this work possible. If you go through all my various posts at this google site you will find references to most if not all of them, and there are many. However, I'd like to particularly thank Jean-Jacques Sarton who posted a great bit about linux bridge networking techniques which I adapted to OpenvSwitch networking for this project.

Download the latest version of the tar.gz bundle to a FRESH INSTALL of UBUNTU 64-BIT 15.04 DESKTOP VERSION. I do not recommend installing this on anything other than a fresh install of 15.04, because the scripts have not yet been engineered to protect various existing configurations in all cases.

The bundle will be saved by default to the ~/Downloads directory off of the user home directory. The scripts are all run as the user, but most of the commands inside the scripts run in the usual Ubuntu way with use of the "sudo" command. First gunzip the bundle, then "tar -xvf" it, and then just follow the README.
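The gunzip-then-untar steps can be wrapped in a small helper, sketched below. This is just an illustration of the extraction logic, not a script shipped in the bundle, and the function name is my own invention:

```shell
# extract_bundle: gunzip a .tar.gz bundle and unpack the resulting .tar
# next to it. Usage: extract_bundle /path/to/bundle.tar.gz
extract_bundle() {
    local bundle="$1"
    gunzip "$bundle"                                 # leaves bundle.tar behind
    tar -xvf "${bundle%.gz}" -C "$(dirname "$bundle")"
}
```

Running it against the downloaded bundle in ~/Downloads leaves the ubuntu-services-* scripts and scst-files.tar ready to use.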

The ubuntu-services-* files in the ~/Downloads directory will create Oracle RAC-ready containers ready for Oracle 12c ASM Flex Cluster installation with GNS (Grid Naming Service). In one of the scripts, you choose how many RAC nodes are desired, and then the scripts clone the number of RAC-ready LXC containers specified for you. I am running a W520 Lenovo Mobile Workstation with 32 Gb of RAM, and I easily run 6 RAC nodes on this system. If you do not give a parameter to the script for number of nodes, it defaults to "2" nodes.

Once those scripts have all completed, cd to ~/Downloads/scst-files, read the README, and run all the create-scst-*.sh files. Be sure to read the post at github by Chris Weiss here before you run the create-scst-*.sh files, because those scripts are an automated version of Chris' instructions there, and reading his post first will help you understand the steps. When those scripts are all done, there will be 3 LUNs ready in /dev/mapper, which will also be available in /dev/mapper of all of the LXC containers when they are started.

A quick pointer about the LUNs for GI in /dev/mapper on the Ubuntu host: Most of the time the LUNs will be created in the "RHEL5" style, i.e. they will be actual devices, not symlinks. For example, they will look like this when "ls -l" check is done:

oracle3@W523:~/Downloads$ ls -lrt /dev/mapper

total 0

crw------- 1 root root 10, 236 Sep 7 19:51 control

brw-rw---- 1 root disk 252, 1 Sep 7 20:02 asm_systemdg_01

brw-rw---- 1 root disk 252, 0 Sep 7 20:02 asm_systemdg_00

brw-rw---- 1 root disk 252, 2 Sep 7 20:02 asm_systemdg_02

oracle3@W523:~/Downloads$

However, for reasons I do not yet fully understand, occasionally one or more devices may get created in the "RHEL6" style of symlinks, i.e.

oracle3@W523:~/Downloads$ ls -lrt /dev/mapper

total 0

crw------- 1 root root 10, 236 Sep 7 19:51 control

lrwxrwxrwx 1 root disk 252, 1 Sep 7 20:02 asm_systemdg_01 -> /dev/dm-3

brw-rw---- 1 root disk 252, 0 Sep 7 20:02 asm_systemdg_00

brw-rw---- 1 root disk 252, 2 Sep 7 20:02 asm_systemdg_02

oracle3@W523:~/Downloads$

Note that LXC CONTAINERS AS USED IN THIS GUIDE CANNOT UTILIZE SYMLINKED RHEL6-STYLE DEVICES because the LXC container as configured for this project has no way to follow a symlink to /dev/ on the host.
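A quick way to check for this condition before starting the containers is to look for symlinks in /dev/mapper. The helper below is a hypothetical convenience of mine, not part of the bundle:

```shell
# Print any symlinked (RHEL6-style) entries under a device-mapper directory.
# If this prints nothing, all LUNs are plain device nodes (RHEL5 style)
# and the containers can use them directly.
find_symlinked_luns() {
    local dir="${1:-/dev/mapper}"
    find "$dir" -maxdepth 1 -type l
}
```

If `find_symlinked_luns` prints anything, apply the SCST SAN logout/login workaround described below before starting the containers.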

The workaround is, before you start the LXC containers, to log out of the SCST SAN and then log back in; this seems to make the problem go away (i.e. a logout/re-login sets all the devices back to the RHEL5 style).

sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.41.1 --logout

sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.40.1 --logout

sudo multipath -F

sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.41.1 --login

sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.40.1 --login

Doing those above steps will get the LUNs to the desired RHEL5-style non-symlinked device nodes format.
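Since the same logout/login pair is needed for each portal, the sequence can be generated with a small loop. This is a sketch of my own, not a bundle script; it prints the commands rather than running them, so you can review them first and pipe the output to "sh" when satisfied:

```shell
# Print the iscsiadm re-login sequence for one target across a list of
# portals, with a multipath flush in between. Pipe to "sh" to execute.
gen_relogin_cmds() {
    local target="$1"; shift
    local p
    for p in "$@"; do
        echo "sudo iscsiadm --mode node --targetname $target --portal $p --logout"
    done
    echo "sudo multipath -F"
    for p in "$@"; do
        echo "sudo iscsiadm --mode node --targetname $target --portal $p --login"
    done
}

gen_relogin_cmds iqn.2015-08.org.vmem:w520.san.asm.luns 10.207.41.1 10.207.40.1
```

The target IQN and portal addresses shown are the ones used in this build; yours will differ if you changed the SCST configuration.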

Once the install is done, simply download Oracle 12c (12.1.0.2.0) Grid Infrastructure from the Oracle Software public download at http://www.oracle.com and scp the install media into the lxcora01 container into /home/grid and follow the usual install. Due to export restrictions, it's not possible to bundle the Oracle Enterprise Edition softwares with the script bundle.

I've published the screenshots of the install in this blog. You should accept the option to automatically run the root.sh scripts; the password is "root" for the username "root". On the final checks screen there will be several failed checks. As long as the only failed checks are the ones shown in the screenshots below, they are acceptable and you should choose "ignore prerequisites" - but ONLY if the only errors are the ones shown. All other errors should be resolved before starting the install. Occasionally NTP errors will be caught by the verification program; be sure to resolve all NTP errors before starting the install (sometimes you just have to restart NTP). The other errors shown in the screenshot below are expected, are normal, and can safely be ignored. Notably, the cluster verification check at the end of the 12c Grid Infrastructure install is completely 100% successful.

Once the install is complete, there are a few extra things. You should edit the .bashrc file for the "grid" user in /home/grid and add the ORACLE_BASE, ORACLE_HOME and PATH variables. These could be made part of lxcora01 before the clone step (an easy change I will incorporate next), but for the "v8" bundle you have to add them after the 12c GI install is complete. Edit: these are included in the "v11" bundle, so there is no need to add them manually - they get propagated at the "clone container" step. You will still need to manually add the ORACLE_SID values "+ASM1", "+ASM2", etc. to the grid .bashrc on the appropriate nodes, but only AFTER the system is built and up and running; there is no need to add ORACLE_SID to the grid .bashrc prior to the install.
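For illustration, the block appended to the grid user's ~/.bashrc might look like the following. The ORACLE_BASE and ORACLE_HOME paths shown are typical 12.1.0.2 Grid defaults and are assumptions on my part - use whatever locations you chose during the install:

```shell
# Grid Infrastructure environment for the "grid" user (node 1 shown).
# Paths are the conventional OFA defaults; adjust to your install.
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+ASM1          # +ASM2 on node 2, etc. - set after the build
export PATH=$ORACLE_HOME/bin:$PATH
```

Only the ORACLE_SID line differs per node, which is why it is the one piece that still has to be set by hand after cloning.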

The other thing is that when you run "crsctl stat res -t" after the install, you will see only 3 ASM instances are up over the 6 total RAC nodes. This is the magic of ASM Flex Clustering. If you want to have all ASM instances up, just run the following command as the "grid" user:

srvctl modify asm -count 6

Or whatever your total cluster node count is (in my case I have 6 RAC node containers). This will turn on all the ASM instances and give "perfect" output for the "crsctl stat res -t" command, so I like to do that post-install step.

The other thing is that you can install Oracle Instant Client on the Ubuntu host according to the instructions here, so that you can sqlplus into the RAC cluster from the Ubuntu host.
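Once the Instant Client is on the host, connecting is just an EZConnect string against the cluster SCAN. The SCAN name and service name below are hypothetical placeholders - substitute whatever your GNS-delegated domain and database actually use:

```shell
# Build an EZConnect descriptor for sqlplus from the host.
# Both names below are placeholders, not values from this build.
SCAN_NAME=gns-cluster-scan.example.com   # your GNS-resolved SCAN address
SERVICE=orcl                             # your database service name
CONNECT_STRING="//${SCAN_NAME}:1521/${SERVICE}"
echo "sqlplus system@${CONNECT_STRING}"
```

Because GNS hands out the SCAN VIPs via the cluster's own DNS-DHCP setup, the host only needs to resolve the SCAN name; no per-node tnsnames.ora entries are required.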

The bundles have been tested on Ubuntu 15.04. They almost completely automate the creation of a 12c Flex GI cluster on OEL 6.5 containers, but there are a few manual steps, such as accepting some default settings during the build of the SCST kernel. During the SCST build some defaults such as "(NEW)" must be accepted, and you have to be sure to edit only the "scst" config file when prompted during the kernel build steps.

If the automatic root configuration option is chosen just before installation of 12c Grid Infrastructure, an additional warning regarding "DHCP" will be generated in the configuration check summary. This warning can be ignored per the Oracle documentation here.

2.21.6.1 Bug 19156657

The Oracle Clusterware installation may list the prerequisite check Task DHCP Configuration check with a warning when a user selects the root automation option and later decides against it.

Workaround: This check failure can be ignored in this situation.

Run the following files in the "v11" tar.gz archive on a FRESH INSTALL of Ubuntu 15.04 64-bit desktop edition. It would probably work on the 64-bit 15.04 server version, but that has not yet been tested. This is an early version that has not been bulletproofed for deployment on a "been-running-for-a-while-customized" 15.04 desktop, so it is safer to run it on a freshly-installed 15.04 64-bit desktop that has been updated to the latest packages and kernel.

Use the "v11" version below. Some scripts will automatically reboot the Ubuntu host. Run the scripts in sort order.

1. First do the ubuntu-services-* from the ~/Downloads directory.

2. After those are all run, then do the create-scst-* files from the ~/Downloads/scst-files directory.

If all goes as it went on my reference 15.04 64-bit desktop in several successful tests, you will have the same results: LXC containers that are Oracle 12c GNS RAC ASM Flex-Cluster-ready, an SCST SAN built and ready with LUNs for the Oracle Grid Infrastructure install, and finally an Oracle 12c n-node RAC ASM Flex Cluster running.

Here's what the cluster looks like running on Ubuntu 15.04. As you can see, on my Lenovo W520 top-of-the-line mobile workstation with 32Gb of RAM, I can easily run a 6-node Oracle RAC ASM Flex Cluster, as shown below. Note that ALL RAC nodes in this build share the SAME kernel, namely an Ubuntu 15.04 Linux 3.19.0-26-scst #28-Ubuntu SMP kernel, and that the SCST storage SAN ALSO SHARES THE SAME KERNEL. EVERYTHING running in this desktop 6-node RAC lab shares the SAME kernel. There is NO HYPERVISOR here. The performance for a laptop or desktop setup, relative to a hypervisor-based desktop setup, is, in a word, PHENOMENAL. I will be adding some SLOB data to give actual performance numbers so you can see how well this type of setup performs. Oracle RAC in LINUX CONTAINERS gives performance that heretofore was undreamed of on a desktop or laptop, but with no loss of the elasticity and density benefits of a typical hypervisor-based setup. Indeed, the density with this LXC setup for RAC is at least 10X what would be possible with, say, VirtualBox or VMware. There would be no way, for example, with a typical hypervisor like VirtualBox, to run a 6-node RAC on a laptop, even on a high-end 32Gb laptop like the Lenovo W520, but with LXC Linux Containers a 6-node RAC on a laptop is easily possible.

[root@lxcora01 ~]# crsctl stat res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.LISTENER.lsnr

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.SYSTEMDG.dg

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.net1.network

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.ons

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE lxcora04 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE lxcora02 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE lxcora03 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE lxcora01 169.254.19.64 192.210.39.10 192.211.39.10,STABLE

ora.asm

1 ONLINE ONLINE lxcora01 Started,STABLE

2 ONLINE ONLINE lxcora04 Started,STABLE

3 ONLINE ONLINE lxcora02 Started,STABLE

4 ONLINE ONLINE lxcora06 Started,STABLE

5 ONLINE ONLINE lxcora03 Started,STABLE

6 ONLINE ONLINE lxcora05 Started,STABLE

ora.cvu

1 ONLINE ONLINE lxcora01 STABLE

ora.gns

1 ONLINE ONLINE lxcora05 STABLE

ora.gns.vip

1 ONLINE ONLINE lxcora05 STABLE

ora.lxcora01.vip

1 ONLINE ONLINE lxcora01 STABLE

ora.lxcora02.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora03.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.lxcora04.vip

1 ONLINE ONLINE lxcora04 STABLE

ora.lxcora05.vip

1 ONLINE ONLINE lxcora05 STABLE

ora.lxcora06.vip

1 ONLINE ONLINE lxcora06 STABLE

ora.mgmtdb

1 ONLINE ONLINE lxcora01 Open,STABLE

ora.oc4j

1 ONLINE ONLINE lxcora01 STABLE

ora.scan1.vip

1 ONLINE ONLINE lxcora04 STABLE

ora.scan2.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.scan3.vip

1 ONLINE ONLINE lxcora03 STABLE

--------------------------------------------------------------------------------

[root@lxcora01 ~]# uname -a

Linux lxcora01 3.19.0-26-scst #28 SMP Wed Sep 2 00:35:46 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@lxcora01 ~]#

Note that everything on this system shares the same kernel - all the RAC nodes and the SCST SAN too. That is the power of LXC containers. The performance is quite remarkable for a laptop compared to hypervisors, of course, because all these container RAC nodes run at "bare metal" speed. I will upload SLOB results to demonstrate.

Gilbert Standen

Yonkers, NY

September 2, 2015

INSTALL INSTRUCTIONS START HERE

Download the latest tar.gz bundle from nandydandyoracle here (that's this webpage, incidentally). The latest bundle is the "v11" as of this edit.

Now open a terminal session on the Ubuntu desktop and go to the ~/Downloads subdirectory of your user account. For example, on my desktop this directory will be:

/home/gstanden/Downloads

and then gunzip and untar the bundle as shown below.


oracle@W521:~$ cd Downloads

oracle@W521:~/Downloads$ ls -lrt

total 52

-rw-rw-r-- 1 oracle oracle 51614 Sep 3 21:54 ubuntu-lxc-oracle.v8.tar.gz

oracle@W521:~/Downloads$ gunzip ubuntu-lxc-oracle.v8.tar.gz

oracle@W521:~/Downloads$ tar -xvf ubuntu-lxc-oracle.v8.tar

ubuntu-host.lst

ubuntu-host.tar

lxc-config.lst

lxc-config.tar

lxc-lxcora01.lst

lxc-lxcora01.tar

lxc-lxcora0x.lst

lxc-lxcora0x.tar

scst-files.tar

ubuntu-services-1.sh

ubuntu-services-2a.sh

ubuntu-services-2b.sh

ubuntu-services-3a.sh

ubuntu-services-3b.sh

ubuntu-services-3c.sh

ubuntu-services-3d.sh

ubuntu-host-backup.sh

rc.local

dhclient.conf

lxc-services.sh

install_grid.sh

edit_bashrc

crt_links_v2.sh

ubuntu-lxc-oracle.lst.copy

README

oracle@W521:~/Downloads$

Run the scripts in this order as shown below. Some scripts will reboot the host when finished.

oracle@W521:~/Downloads$ ls ubuntu-services* | sort

ubuntu-services-1.sh

ubuntu-services-2a.sh

ubuntu-services-2b.sh

ubuntu-services-3a.sh

ubuntu-services-3b.sh

ubuntu-services-3c.sh

ubuntu-services-3d.sh

oracle@W521:~/Downloads$

If you have not previously run any sudo commands, the ubuntu-services-1.sh script will stop to prompt for your user password so that sudo commands can be run as shown below.

oracle2@W522:~/Downloads$ ./ubuntu-services-1.sh

============================================

Verify network up....

============================================

PING google.com (74.125.196.113) 56(84) bytes of data.

64 bytes from yk-in-f113.1e100.net (74.125.196.113): icmp_seq=1 ttl=41 time=31.8 ms

--- google.com ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 31.882/31.882/31.882/0.000 ms

[sudo] password for oracle2:

In this example, the default user account created at install of the OS was "oracle2". Yours might be /home/jsmith/Downloads if your name was Jim Smith and you selected "jsmith" as your username when you installed the Ubuntu base OS on your laptop or desktop.

This script needs to have a parameter passed in (the number of RAC nodes you want). I have a Lenovo W520 top-of-the-line mobile workstation with a quad-core processor and 32 Gb of RAM, so I choose "6" for "six RAC node LXC containers". If you give no parameter at all, it defaults to "2" so if you have an "ordinary" desktop with 4Gb of RAM or maybe 8Gb of RAM, just pass in "2" as the parameter or take the default which is also "2".
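The default-to-2 behavior described above boils down to a one-line shell idiom. This is a sketch of the logic, not the script's exact code, and the function name is my own:

```shell
# Number of RAC node containers to clone: the first argument if given,
# otherwise default to 2 (sensible for a 4-8Gb RAM desktop).
node_count() {
    echo "${1:-2}"
}
```

So `./ubuntu-services-3c.sh 6` clones six containers, while running it with no argument clones two.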

================================================

Now run ubuntu-services-3c.sh X

Note that ubuntu-services-3c.sh takes an input

variable X which is the number of LXC RAC nodes

you wish to create. If X is not entered, the

build defaults to a 2-node RAC cluster. If X is

set to 6 it will create a 6-node RAC

================================================

oracle@W521:~/Downloads$ ./ubuntu-services-3c.sh 6

When you get this message, in most cases you will answer "N", because you should be installing on a fresh copy of Ubuntu 15.04 64-bit desktop and there should be no previously-created containers. This code exists in the script because I'm working on handling the case where containers of the same name already exist. Usually, just answer "N", especially if, as shown below, no containers match the naming criteria and the output shows an empty set { }.

!!! WARNING !!!

===========================================

Destruction of cloned containers ( Y / N )

===========================================

ClonedContainersExist =

Existing containers in the set { } have been found.

These containers match the names of containers that are about to be created.

Please answer Y to destroy the existing containers or N to keep them

!!! WARNING: ANSWERING Y WILL DESTROY EXISTING CONTAINERS !!!

Destroy existing containers? [ Y | N ]:

N

Final output shows all Oracle-ready lxcora0x containers are running and ready for Oracle RAC Grid Infrastructure install as shown below.

================================================

Starting LXC clone containers for Oracle

================================================

starting a container...

lxcora01

next command will be: sudo lxc-start -n lxcora01

sleeping 20 seconds...

starting a container...

lxcora02

next command will be: sudo lxc-start -n lxcora02

sleeping 20 seconds...

starting a container...

lxcora03

next command will be: sudo lxc-start -n lxcora03

sleeping 20 seconds...

starting a container...

lxcora04

next command will be: sudo lxc-start -n lxcora04

sleeping 20 seconds...

starting a container...

lxcora05

next command will be: sudo lxc-start -n lxcora05

sleeping 20 seconds...

starting a container...

lxcora06

next command will be: sudo lxc-start -n lxcora06

sleeping 20 seconds...

================================================

Waiting for final container initialization...

================================================

================================================

LXC containers for Oracle started.

================================================

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

--------------------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 10.207.39.10, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 RUNNING 10.207.39.11, 172.220.40.11, 172.221.40.11, 192.210.39.11, 192.211.39.11, 192.212.39.11, 192.213.39.11 - - NO

lxcora03 RUNNING 10.207.39.12, 172.220.40.12, 172.221.40.12, 192.210.39.12, 192.211.39.12, 192.212.39.12, 192.213.39.12 - - NO

lxcora04 RUNNING 10.207.39.13, 172.220.40.13, 172.221.40.13, 192.210.39.13, 192.211.39.13, 192.212.39.13, 192.213.39.13 - - NO

lxcora05 RUNNING 10.207.39.14, 172.220.40.14, 172.221.40.14, 192.210.39.14, 192.211.39.14, 192.212.39.14, 192.213.39.14 - - NO

lxcora06 RUNNING 10.207.39.15, 172.220.40.15, 172.221.40.15, 192.210.39.15, 192.211.39.15, 192.212.39.15, 192.213.39.15 - - NO

oracle@W521:~/Downloads$

Next untar the scst-files.tar file and cd to scst-files to create the SCST Linux SAN which will provide the storage LUNs for the Oracle 12c Grid Infrastructure install as shown below.

[ SIDE NOTE: You do not have to use the SCST Linux SAN for this. You could just as easily use the Linux TGT SAN. I am using SCST because it is one of the few Linux SAN products that can present 4K logical / 4K physical LUNs. If you don't need that feature, TGT SAN is simpler and does not require compiling a custom kernel. More on TGT SAN here. ]

oracle@W521:~/Downloads$ ls -lrt scst*

-rw-rw-r-- 1 oracle oracle 40960 Sep 2 21:57 scst-files.tar

oracle@W521:~/Downloads$ tar -xvf scst-files.tar

scst-files/

scst-files/create-scst.sh

scst-files/create-scst-1b.sh

scst-files/create-scst-1d.sh

scst-files/create-scst-2a.sh

scst-files/create-scst-4b.sh

scst-files/create-scst-4a.sh

scst-files/README

scst-files/create-scst-2b.sh

scst-files/create-scst-5a.sh

scst-files/create-scst-1c.sh

scst-files/NOTES

scst-files/create-scst-1a.sh

scst-files/create-scst-3.sh

scst-files/create-scst-5b.sh

oracle@W521:~/Downloads$ cd scst-files

oracle@W521:~/Downloads/scst-files$ ls create-scst* | sort

create-scst-1a.sh

create-scst-1b.sh

create-scst-1c.sh

create-scst-1d.sh

create-scst-2a.sh

create-scst-2b.sh

create-scst-3.sh

create-scst-4a.sh

create-scst-4b.sh

create-scst-5a.sh

create-scst-5b.sh

create-scst.sh

oracle@W521:~/Downloads/scst-files$

Run the create-scst-*.sh files in the order shown above. When you run the create-scst-1d.sh script, you will get prompts of the form "(NEW)"; just hit the <ENTER> key at each one to accept the default value. See the bold section at the end of the following blockcode quote where the first "(NEW)" prompt appears. You will get many of these prompts; accept them all by hitting <ENTER>.

oracle@W521:~/Downloads/scst-files$ ./create-scst-1d.sh

for i in debian.master/d-i/kernel-versions.in debian.master/control.stub.in; do \

new=`echo $i | sed 's/\.in$//'`; \

cat $i | sed -e 's/PKGVER/3.19.0/g' \

-e 's/ABINUM/26/g' \

-e 's/SRCPKGNAME/linux/g' \

-e 's/=HUMAN=/64 bit x86/g' \

> $new; \

done

flavours="debian.master/control.d/vars.generic debian.master/control.d/vars.generic-lpae debian.master/control.d/vars.lowlatency debian.master/control.d/vars.powerpc-e500mc debian.master/control.d/vars.powerpc-smp debian.master/control.d/vars.powerpc64-emb debian.master/control.d/vars.powerpc64-smp debian.master/control.d/vars.scst";\

for i in $flavours; do \

/bin/bash -e debian/scripts/control-create $i | \

sed -e 's/PKGVER/3.19.0/g' \

-e 's/ABINUM/26/g' \

-e 's/SRCPKGNAME/linux/g' \

-e 's/=HUMAN=/64 bit x86/g' \

>> debian.master/control.stub; \

done

cp debian.master/control.stub debian.master/control

rm -rf /home/oracle/Downloads/linux-3.19.0/debian/build/modules /home/oracle/Downloads/linux-3.19.0/debian/build/firmware \

/home/oracle/Downloads/linux-3.19.0/debian/build/kernel-versions /home/oracle/Downloads/linux-3.19.0/debian/build/package-list \

/home/oracle/Downloads/linux-3.19.0/debian/build/debian.master

mkdir -p /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/

cp debian.master/d-i/modules/* /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/

mkdir -p /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/

cp debian.master/d-i/firmware/* /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/

cp debian.master/d-i/package-list debian.master/d-i/kernel-versions /home/oracle/Downloads/linux-3.19.0/debian/build/

touch /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/kernel-image

# kernel-wedge needs to poke around in debian.master/

ln -nsf /home/oracle/Downloads/linux-3.19.0/debian /home/oracle/Downloads/linux-3.19.0/debian/build/debian

# Some files may need to differ between architectures

if [ -d debian.master/d-i/modules-amd64 ]; then \

cp debian.master/d-i/modules-amd64/* \

/home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/; \

fi

if [ -d debian.master/d-i/firmware-amd64 ]; then \

cp debian.master/d-i/firmware-amd64/* \

/home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/; \

fi

# Remove unwanted stuff for this architecture

if [ -r "debian.master/d-i/exclude-modules.amd64" ]; then \

(cat debian.master/d-i/exclude-modules.amd64; \

ls /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/) | sort | uniq -d | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/; xargs rm -f); \

fi

if [ -r "debian.master/d-i/exclude-firmware.amd64" ]; then \

(cat debian.master/d-i/exclude-firmware.amd64; \

ls /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/) | sort | uniq -d | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/; xargs rm -f); \

fi

# Per flavour module lists

flavour_modules=`ls debian.master/d-i/modules.amd64-* 2>/dev/null` \

|| true; \

if [ "$flavour_modules" != "" ]; then \

for flav in $flavour_modules; do \

name=`echo $flav | sed 's/.*\/modules.amd64-//'`; \

mkdir /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name; \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/modules/; tar cf - `cat ../$flav`) | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name/; tar xf -); \

touch /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name/kernel-image; \

done; \

fi

flavour_firmware=`ls debian.master/d-i/firmware.amd64-* 2>/dev/null` \

|| true; \

if [ "$flavour_firmware" != "" ]; then \

for flav in $flavour_firmware; do \

name=`echo $flav | sed 's/.*\/firmware.amd64-//'`; \

mkdir /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name; \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/; tar cf - `cat ../$flav`) | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name/; tar xf -);\

touch /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name/kernel-image; \

done; \

fi

# Some files may need to differ between flavours

flavour_module_dirs=`ls -d debian.master/d-i/modules-amd64-* 2>/dev/null`\

|| true; \

if [ "$flavour_module_dirs" ]; then \

for flav in $flavour_module_dirs; do \

name=`echo $flav | sed 's/.*\/modules-amd64-//'`; \

[ -d /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name ] || \

cp -a /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64 \

modules/amd64-$name; \

cp $flav/* /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name/; \

done; \

fi

flavour_firmware_dirs=`ls -d debian.master/d-i/firmware-amd64-* 2>/dev/null`\

|| true; \

if [ "$flavour_firmware_dirs" ]; then \

for flav in $flavour_firmware_dirs; do \

name=`echo $flav | sed 's/.*\/firmware-amd64-//'`; \

[ -d /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name ] || \

cp -a /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64 \

firmware/amd64-$name; \

cp $flav/* /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name/; \

done; \

fi

# Remove unwanted stuff for each flavour

flavour_exclude=`ls debian.master/d-i/exclude-modules.amd64-* 2>/dev/null`\

|| true; \

if [ "$flavour_exclude" ]; then \

for flav in $flavour_exclude; do \

name=`echo $flav | sed 's/.*\/exclude-modules.amd64-//'`;\

[ -d /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name ] || \

cp -a /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64 \

/home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name; \

(cat $flav; \

ls /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name) | \

sort | uniq -d | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64-$name/; \

xargs rm -f); \

done; \

fi

flavour_exclude=`ls debian.master/d-i/exclude-firmware.amd64-* 2>/dev/null`\

|| true; \

if [ "$flavour_exclude" ]; then \

for flav in $flavour_exclude; do \

name=`echo $flav | sed 's/.*\/exclude-firmware.amd64-//'`;\

[ -d /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name ] || \

cp -a /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64 \

/home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name; \

(cat $flav; \

ls /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name) | \

sort | uniq -d | \

(cd /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64-$name/; \

xargs rm -f); \

done; \

fi

if [ ! -d /home/oracle/Downloads/linux-3.19.0/debian/build/modules/x86_64 ]; then \

mkdir -p /home/oracle/Downloads/linux-3.19.0/debian/build/modules/x86_64; \

cp /home/oracle/Downloads/linux-3.19.0/debian/build/modules/amd64/* \

/home/oracle/Downloads/linux-3.19.0/debian/build/modules/x86_64; \

fi

if [ ! -d /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/x86_64 ]; then \

mkdir -p /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/x86_64; \

cp /home/oracle/Downloads/linux-3.19.0/debian/build/firmware/amd64/* \

/home/oracle/Downloads/linux-3.19.0/debian/build/firmware/x86_64; \

fi

cp debian.master/control.stub debian/control.stub

cd /home/oracle/Downloads/linux-3.19.0/debian/build && LANG=C kernel-wedge gen-control > /home/oracle/Downloads/linux-3.19.0/debian/control

Use of uninitialized value $builddep in string ne at /usr/share/kernel-wedge/commands/gen-control line 43, <KVERS> line 6.

Use of uninitialized value $builddep in split at /usr/share/kernel-wedge/commands/gen-control line 44, <KVERS> line 6.

Use of uninitialized value $builddep in string ne at /usr/share/kernel-wedge/commands/gen-control line 43, <KVERS> line 8.

Use of uninitialized value $builddep in split at /usr/share/kernel-wedge/commands/gen-control line 44, <KVERS> line 8.

Use of uninitialized value $builddep in string ne at /usr/share/kernel-wedge/commands/gen-control line 43, <KVERS> line 10.

Use of uninitialized value $builddep in split at /usr/share/kernel-wedge/commands/gen-control line 44, <KVERS> line 10.

Use of uninitialized value $builddep in string ne at /usr/share/kernel-wedge/commands/gen-control line 43, <KVERS> line 12.

Use of uninitialized value $builddep in split at /usr/share/kernel-wedge/commands/gen-control line 44, <KVERS> line 12.

dh_testdir

dh_testroot

dh_clean

# d-i stuff

rm -rf debian.master/d-i-amd64

# Generated on the fly.

rm -f debian.master/d-i/firmware/kernel-image

# normal build junk

rm -rf debian.master/abi/3.19.0-26.28

rm -rf /home/oracle/Downloads/linux-3.19.0/debian/build

rm -f /home/oracle/Downloads/linux-3.19.0/debian/stamps/stamp-*

rm -rf debian.master/linux-*

# This gets rid of the d-i packages in control

cp -f debian.master/control.stub debian.master/control

cp debian.master/changelog debian/changelog

# Install the copyright information.

cp debian.master/copyright debian/copyright

# If we have a reconstruct script use it.

[ -f debian.master/reconstruct ] && bash -x debian.master/reconstruct

+ '[' '!' -L ubuntu/vbox/vboxguest/include ']'

+ ln -sf ../include ubuntu/vbox/vboxguest/include

+ '[' '!' -L ubuntu/vbox/vboxguest/r0drv ']'

+ ln -sf ../r0drv ubuntu/vbox/vboxguest/r0drv

+ '[' '!' -L ubuntu/vbox/vboxsf/include ']'

+ ln -sf ../include ubuntu/vbox/vboxsf/include

+ '[' '!' -L ubuntu/vbox/vboxsf/r0drv ']'

+ ln -sf ../r0drv ubuntu/vbox/vboxsf/r0drv

+ '[' '!' -L ubuntu/vbox/vboxvideo/include ']'

+ ln -sf ../include ubuntu/vbox/vboxvideo/include

+ exit 0

dh_testdir;

/bin/bash -e debian/scripts/misc/kernelconfig updateconfigs

* Run silentoldconfig (yes=0) on amd64/config.flavour.generic ...

make[1]: Entering directory '/home/oracle/Downloads/linux-3.19.0'

make[2]: Entering directory '/home/oracle/Downloads/linux-3.19.0/build'

HOSTCC scripts/basic/fixdep

GEN ./Makefile

HOSTCC scripts/kconfig/conf.o

SHIPPED scripts/kconfig/zconf.tab.c

SHIPPED scripts/kconfig/zconf.lex.c

SHIPPED scripts/kconfig/zconf.hash.c

HOSTCC scripts/kconfig/zconf.tab.o

HOSTLD scripts/kconfig/conf

scripts/kconfig/conf --silentoldconfig Kconfig

.config:3896:warning: override: M686 changes choice state

.config:7457:warning: override: TREE_RCU changes choice state

.config:8637:warning: override: TICK_CPU_ACCOUNTING changes choice state

.config:8640:warning: override: USB_DWC2_HOST changes choice state

*

* Restart config...

*

*

* Networking options

*

Packet socket (PACKET) [Y/n/m/?] y

Packet: sockets monitoring interface (PACKET_DIAG) [M/n/y/?] m

Unix domain sockets (UNIX) [Y/n/m/?] y

UNIX: socket monitoring interface (UNIX_DIAG) [M/n/y/?] m

Transformation user configuration interface (XFRM_USER) [M/n/y/?] m

Transformation sub policy support (XFRM_SUB_POLICY) [N/y/?] n

Transformation migrate database (XFRM_MIGRATE) [N/y/?] n

Transformation statistics (XFRM_STATISTICS) [Y/n/?] y

PF_KEY sockets (NET_KEY) [M/n/y/?] m

PF_KEY MIGRATE (NET_KEY_MIGRATE) [N/y/?] n

TCP/IP networking (INET) [Y/n/?] y

TCP/IP zero-copy transfer completion notification (TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION) [N/y/?] (NEW)

After answering many of these "(NEW)" prompts and accepting the default values, you will eventually reach the prompts shown below. Answer "n" to every prompt in this phase EXCEPT the line "amd64/config.flavour.scst? [Y/n]", which must be answered with "Y" as shown below (bolded line). Be sure that ONLY this config is answered with "Y"; all other config prompts should be answered with "n" as shown below.
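If you would rather not press Enter by hand at every "(NEW)" prompt, note that an empty input line accepts the default. The sketch below demonstrates the idea; the `make oldconfig` line is illustrative only (an assumption), since the debian build rules normally drive the config step for you.

```shell
# Each empty input line is an Enter keypress accepting the default;
# 'yes ""' generates an endless stream of them. Finite demo:
printf '\n\n\n' | wc -l     # three simulated Enter keypresses -> 3
# Illustrative only (assumption -- the debian rules normally run this step):
#   cd /home/oracle/Downloads/linux-3.19.0 && yes "" | make oldconfig
```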

check-config: 43/43 checks passed -- exit 0

rm -rf build

dh_testdir;

/bin/bash -e debian/scripts/misc/kernelconfig editconfigs

Do you want to edit config: amd64/config.flavour.generic? [Y/n]

Do you want to edit config: amd64/config.flavour.lowlatency? [Y/n] n

Do you want to edit config: amd64/config.flavour.scst? [Y/n] Y

Answer "n" to the rest of the "config" prompts. There will be an error message about some builds as shown below, but it is expected and can be ignored, because we only care about the amd64 build, which succeeds.

*** ERROR: 12 config-check failures detected

When you choose "Y" for config.flavour.scst, a text-based kernel configuration menu will open automatically so that you can configure the new SCST options. Screenshots are shown below to guide you through choosing the required SCST options for the kernel.

This is the first screen you will see, as shown below.

Use the "down arrow" key on your keyboard to move down to "Networking support --->" as shown below.

Use the down arrow key to move down to "Networking options --->" as shown below.

Use the down arrow key to move down to "TCP/IP zero-copy transfer completion notification" and press the spacebar to put a "*" symbol in the brackets, i.e. [*], which selects the feature, as shown below.

Now use the right-arrow key on the keyboard to move the lower highlighted value from "<Select>" to "<Exit>", and then hit <Enter> to exit this screen, as shown below.

Repeat the same sequence of steps to "<Exit>" the next screen too, as shown below.

Now use the down-arrow key on the keyboard to select "Device Drivers --->" as shown below and hit <Enter> to select.

Now use the down-arrow key on the keyboard to move down to "SCSI target (SCST) support --->" and hit <Enter> to select option.

The "< > SCSI target (SCST) support" option is displayed. Press the spacebar to put an "M" in the brackets, i.e. <M>, which enables the option as a module and reveals the large set of associated sub-options shown in the next screenshot.

Once the "<M>" is set, all the additional sub-options appear. No further changes are needed on this screen. Now use the right arrow key as before to exit from this screen back up to the main screen, as shown in the next screenshot.

The next screenshot shows using the right-arrow key to exit back up to the previous screen.

Again, use the right-arrow key to select "<Exit>" to exit up to the next higher screen as shown below.

Again, use the right-arrow key to select "<Exit>" and move back up to the next higher screen as shown below.

This is the final screen. Just accept the default "<Yes>" to save all the changes you have made, as shown below.
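After saving, the menu choices should be reflected in the scst flavour config file. The path below follows the Ubuntu kernel source `debian.master` layout and is an assumption; the option names are taken from the prompts earlier on this page (CONFIG_SCST per the SCST patchset), so verify both against your own tree.

```shell
# Path per the Ubuntu kernel debian.master layout (assumption -- adjust if needed)
CFG=debian.master/config/amd64/config.flavour.scst
# Uncomment to check once the config has been saved in the source tree:
#   grep -E 'CONFIG_SCST=m|TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION=y' "$CFG"
echo "$CFG"
```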

Now the kernel build will continue on and will run for a relatively long time, typically an hour or more. You can go to sleep if it's late and check it in the morning, or go have a few beers and some tacos and come back in 90 minutes or so to see if the kernel build has finished.

When the kernel build finishes, these are the last few lines where you get back to the prompt as shown below.

flock -w 60 /home/oracle/Downloads/linux-3.19.0/debian/.LOCK dh_gencontrol -plinux-cloud-tools-3.19.0-26-scst

dh_md5sums -plinux-cloud-tools-3.19.0-26-scst

dh_builddeb -plinux-cloud-tools-3.19.0-26-scst

dpkg-deb: building package `linux-cloud-tools-3.19.0-26-scst' in `../linux-cloud-tools-3.19.0-26-scst_3.19.0-26.28_amd64.deb'.

oracle@W521:~/Downloads/scst-files$

Once the kernel build has finished as shown above, continue with the next script. The first few output lines of the next script, create-scst-2a.sh, are shown below.

oracle@W521:~/Downloads/scst-files$ ./create-scst-2a.sh

-rw-r--r-- 1 root root 9499358 Sep 3 23:09 linux-headers-3.19.0-26_3.19.0-26.28_all.deb

-rw-r--r-- 1 root root 873666 Sep 4 00:03 linux-headers-3.19.0-26-scst_3.19.0-26.28_amd64.deb

-rw-r--r-- 1 root root 55869032 Sep 4 00:03 linux-image-3.19.0-26-scst_3.19.0-26.28_amd64.deb

[sudo] password for oracle:

Next run the create-scst-2b.sh script. This builds the SCST kernel modules.

Now run the create-scst-3.sh script.

Next, edit the create-scst-4a.sh script and uncomment the line shown below (if necessary). In any case, be sure the indicated line is uncommented as shown below.

# Create Target and Groups

sudo scstadmin -add_target iqn.2015-08.org.vmem:w520.san.asm.luns -driver iscsi <-- Uncomment this line !!
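The uncommented line above creates the iSCSI target. For context, here is a hedged sketch of the full target/group/LUN wiring a script like create-scst-4a.sh performs; the group, device, and initiator names are taken from the verification output later on this page, and the scstadmin option names should be checked against your SCST version. The privileged commands are shown commented so the sketch is safe to source.

```shell
# Names from this guide's configuration (verify against your own setup):
TARGET="iqn.2015-08.org.vmem:w520.san.asm.luns"
GROUP="lxc1"
INIT="iqn.2014-09.org.vmem1:oracle.asm.luns"
# All of these require root:
# sudo scstadmin -add_target "$TARGET" -driver iscsi
# sudo scstadmin -add_group "$GROUP" -driver iscsi -target "$TARGET"
# sudo scstadmin -add_init "$INIT" -driver iscsi -target "$TARGET" -group "$GROUP"
# for i in 0 1 2; do
#   sudo scstadmin -add_lun $i -driver iscsi -target "$TARGET" \
#                  -group "$GROUP" -device asm_systemdg_0$i
# done
```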

When the create-scst-3.sh script has finished, it will wait for 30 seconds and then present the following output. This is the expected output, showing that your SCST SAN has been successfully configured and is ready with 3 LUNs for the Oracle Grid Infrastructure install.

======================================================

Verify that SCST SAN is fully configured and ready

Sleeping for 30 seconds...

======================================================

Collecting current configuration: done.

Driver: iscsi

Target: iqn.2015-08.org.vmem:w520.san.asm.luns

Driver/target 'iscsi/iqn.2015-08.org.vmem:w520.san.asm.luns' has no associated LUNs.

Group: lxc1

Assigned LUNs:

LUN Device

--------------------

0 asm_systemdg_00

1 asm_systemdg_01

2 asm_systemdg_02

Assigned Initiators:

Initiator

-------------------------------------

iqn.2014-09.org.vmem1:oracle.asm.luns

All done.

Next run the create-scst-4b.sh script. The expected output is shown below. Notice that the LUNs are discovered on many networks, but that we log in to the SCST LUNs only on the two OpenvSwitch networks, sw2 and sw3, which are dedicated LUN storage networks. Notice also that the last step does an "ls -l /dev/mapper" showing that we now have 3 multipath LUNs intended for the SYSTEMDG Oracle ASM diskgroup of the 12c ASM Flex Cluster Grid Infrastructure install.

oracle@W521:~/Downloads/scst-files$ ./create-scst-4b.sh

10.207.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.168.1.7:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.213.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.0.3.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.210.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

172.221.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.41.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

172.220.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.29.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.211.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.212.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.168.122.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.41.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.168.1.7:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.213.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.0.3.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.210.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

172.221.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

172.220.40.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

10.207.29.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.211.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.212.39.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

192.168.122.1:3260,1 iqn.2015-08.org.vmem:w520.san.asm.luns

Logging in to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260] (multiple)

Login to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260] successful.

Logging in to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260] (multiple)

Login to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260] successful.

total 0

crw------- 1 root root 10, 236 Sep 4 00:12 control

brw-rw---- 1 root disk 252, 2 Sep 4 00:24 mpath2

brw-rw---- 1 root disk 252, 0 Sep 4 00:24 mpath1

brw-rw---- 1 root disk 252, 1 Sep 4 00:24 mpath0

oracle@W521:~/Downloads/scst-files$
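The discovery and login sequence above boils down to open-iscsi commands along these lines. This is a sketch: the portal addresses are the two storage-network gateways from the output above, and the real commands need root, so they are shown commented.

```shell
TARGET="iqn.2015-08.org.vmem:w520.san.asm.luns"
for portal in 10.207.40.1 10.207.41.1; do
    : # discover the target, then log in on this storage portal only:
    # sudo iscsiadm -m discovery -t sendtargets -p "${portal}:3260"
    # sudo iscsiadm -m node -T "$TARGET" -p "${portal}:3260" --login
    echo "storage portal: ${portal}:3260"
done
```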


Now run the create-scst-5a.sh script. If it works as designed, this script auto-creates the /etc/multipath.conf file you will need to configure your multipath LUNs in /dev/mapper with friendly names and with other settings designed for the Oracle deployment. The multipath.conf file auto-created for my Lenovo W520 mobile workstation is shown below. You will likely be on a different laptop or desktop with different disks, so your multipath.conf will differ in the "blacklist" and "devices" stanzas, but the script has been designed to generate a multipath.conf appropriate for your system. If you have issues, please email me at "gilstanden@hotmail.com" so that I can improve the script to work for all combinations of machines and disks.

oracle@W521:~/Downloads/scst-files$ ./create-scst-5a.sh

oracle@W521:~/Downloads/scst-files$ more multipath.conf

blacklist {

devnode "sd[a]$"

wwid "350004cf20fba5cfd"

device {

vendor "ATA"

product "ST2000LM003HN-M"

revision "0001"

}

}

defaults {

user_friendly_names yes

}

devices {

device {

vendor "SCST_FIO"

product "asm*"

revision "310"

path_grouping_policy group_by_serial

getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

hardware_handler "0"

features "1 queue_if_no_path"

fast_io_fail_tmo 5

dev_loss_tmo 30

failback immediate

rr_weight uniform

no_path_retry fail

path_checker tur

rr_min_io 4

path_selector "round-robin 0"

}

}

multipaths {

multipath {

wwid 26537623737636234

alias asm_systemdg_00

}

multipath {

wwid 26634313565346333

alias asm_systemdg_01

}

multipath {

wwid 26330663234633561

alias asm_systemdg_02

}

}

oracle@W521:~/Downloads/scst-files$

Now run the create-scst-5b.sh script. See the output of this script below for details. Note that it has assigned the user-friendly names to the mpathX LUNs in /dev/mapper. If the multipath.conf was created correctly for your system, you will see the LUN "friendly names" in /dev/mapper as well. Notice, highlighted in bold, the friendly names and the true multipath (2 paths per LUN) achieved for this SCST SAN for Oracle. Note that the owner and group of the LUNs ("root" and "disk" respectively) do not matter, because when the LXC containers start up they will set the owner, group, and mode for the LUNs to "grid", "asmadmin", and 660. Once the containers are using the storage, you will notice that the owner and group of the LUNs at the OS level change to the UID and GID of the grid user. [Side note: optionally, you can create a "grid" user and "asmadmin" group at the OS level with the same UID and GID as the grid user and group in the containers, so that files accessed by the containers display with the same user and group names at the OS level as in the container.]
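For the optional side note above, a sketch follows. The UID/GID values 1100/1200 are placeholders, not values from this guide; use the actual IDs of the grid user and asmadmin group from /etc/passwd and /etc/group inside one of the containers. The privileged commands are commented.

```shell
GRID_UID=1100      # placeholder -- match the grid UID inside the containers
ASMADMIN_GID=1200  # placeholder -- match the asmadmin GID inside the containers
# Run as root on the Ubuntu host:
# sudo groupadd -g "$ASMADMIN_GID" asmadmin
# sudo useradd -u "$GRID_UID" -g asmadmin -m grid
echo "grid=${GRID_UID} asmadmin=${ASMADMIN_GID}"
```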

oracle@W521:~/Downloads/scst-files$ ./create-scst-5b.sh

Logging out of session [sid: 1, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260]

Logout of [sid: 1, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260] successful.

Logging out of session [sid: 2, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260]

Logout of [sid: 2, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260] successful.

===========================================================================

Login to SCST target...

Verify that login to SCST target is successful...

===========================================================================

Logging in to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260] (multiple)

Login to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.41.1,3260] successful.

Logging in to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260] (multiple)

Login to [iface: default, target: iqn.2015-08.org.vmem:w520.san.asm.luns, portal: 10.207.40.1,3260] successful.

===========================================================================

Verify /dev/mapper output...should be using multipath friendly names...

Sleeping 5 seconds...

===========================================================================

total 0

crw------- 1 root root 10, 236 Sep 4 00:12 control

brw-rw---- 1 root disk 252, 1 Sep 4 00:32 asm_systemdg_02

brw-rw---- 1 root disk 252, 0 Sep 4 00:32 asm_systemdg_01

brw-rw---- 1 root disk 252, 2 Sep 4 00:32 asm_systemdg_00

===========================================================================

Verify multipath -ll -v2 output ...

Sleeping 5 seconds...

===========================================================================

asm_systemdg_02 (26330663234633561) dm-1 SCST_FIO,asm_systemdg_02

size=10G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

|- 9:0:0:2 sdg 8:96 active ready running

`- 8:0:0:2 sdd 8:48 active ready running

asm_systemdg_01 (26634313565346333) dm-0 SCST_FIO,asm_systemdg_01

size=10G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

|- 9:0:0:1 sdf 8:80 active ready running

`- 8:0:0:1 sdc 8:32 active ready running

asm_systemdg_00 (26537623737636234) dm-2 SCST_FIO,asm_systemdg_00

size=10G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

|- 9:0:0:0 sde 8:64 active ready running

`- 8:0:0:0 sdb 8:16 active ready running

oracle@W521:~/Downloads/scst-files$

This concludes the series of "create-scst-*.sh" scripts. If everything has run as intended, the SCST SAN is up, running, and ready. You can run some checks on both the LXC containers and the SCST SAN as shown below.

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 STOPPED - - - NO

lxcora02 STOPPED - - - NO

lxcora03 STOPPED - - - NO

lxcora04 STOPPED - - - NO

lxcora05 STOPPED - - - NO

lxcora06 STOPPED - - - NO

oracle@W521:~/Downloads/scst-files$ sudo scstadmin -list_group

Collecting current configuration: done.

Driver: iscsi

Target: iqn.2015-08.org.vmem:w520.san.asm.luns

Driver/target 'iscsi/iqn.2015-08.org.vmem:w520.san.asm.luns' has no associated LUNs.

Group: lxc1

Assigned LUNs:

LUN Device

--------------------

0 asm_systemdg_00

1 asm_systemdg_01

2 asm_systemdg_02

Assigned Initiators:

Initiator

-------------------------------------

iqn.2014-09.org.vmem1:oracle.asm.luns

All done.

oracle@W521:~/Downloads/scst-files$

Notice that when the Ubuntu host was last rebooted, it booted into the "-scst" kernel that we built, as shown below.

oracle@W521:~/Downloads/scst-files$ uname -a

Linux W521 3.19.0-26-scst #28 SMP Thu Sep 3 23:08:42 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

oracle@W521:~/Downloads/scst-files$

At this point, you might as well kick off the download of the Oracle 12c Grid Infrastructure software from http://www.oracle.com/downloads; while it is downloading, you can start up the LXC RAC node containers as shown below.

Notice that it's important to be sure that, on all containers, the "10.207.39.x" interface comes up and is the first interface listed.
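One way to spot a container whose "10.207.39.x" interface has not come up yet is to filter the `lxc-ls -f` output. This small sketch flags any RUNNING container whose first listed IPv4 address is not on 10.207.39.x; the demo lines are taken from the listings in this guide.

```shell
# Flags RUNNING containers whose first IPv4 (field 3 of lxc-ls -f output)
# is not on the 10.207.39.x network:
check_first_ip() {
    awk '/RUNNING/ && $3 !~ /^10\.207\.39\./ { print $1 }'
}
# Live use (needs root):  sudo lxc-ls -f | check_first_ip
# Demo with two lines of captured output from this guide:
printf '%s\n' \
  'lxcora01 RUNNING 10.207.39.10, 172.220.40.10 - - NO' \
  'lxcora06 RUNNING 172.220.40.15, 172.221.40.15 - - NO' | check_first_ip
# -> prints: lxcora06
```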

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora01

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 STOPPED - - - NO

lxcora03 STOPPED - - - NO

lxcora04 STOPPED - - - NO

lxcora05 STOPPED - - - NO

lxcora06 STOPPED - - - NO

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

--------------------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 10.207.39.10, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 STOPPED - - - NO

lxcora03 STOPPED - - - NO

lxcora04 STOPPED - - - NO

lxcora05 STOPPED - - - NO

lxcora06 STOPPED - - - NO

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora02

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora03

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora04

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora05

oracle@W521:~/Downloads/scst-files$ sudo lxc-start -n lxcora06

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

--------------------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 10.207.39.10, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 RUNNING 10.207.39.11, 172.220.40.11, 172.221.40.11, 192.210.39.11, 192.211.39.11, 192.212.39.11, 192.213.39.11 - - NO

lxcora03 RUNNING 10.207.39.12, 172.220.40.12, 172.221.40.12, 192.210.39.12, 192.211.39.12, 192.212.39.12, 192.213.39.12 - - NO

lxcora04 RUNNING 10.207.39.13, 172.220.40.13, 172.221.40.13, 192.210.39.13, 192.211.39.13, 192.212.39.13, 192.213.39.13 - - NO

lxcora05 RUNNING 10.207.39.14, 172.220.40.14, 172.221.40.14, 192.210.39.14, 192.211.39.14, 192.212.39.14, 192.213.39.14 - - NO

lxcora06 RUNNING 172.220.40.15, 172.221.40.15, 192.210.39.15, 192.211.39.15, 192.212.39.15, 192.213.39.15 - - NO

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

--------------------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 10.207.39.10, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 RUNNING 10.207.39.11, 172.220.40.11, 172.221.40.11, 192.210.39.11, 192.211.39.11, 192.212.39.11, 192.213.39.11 - - NO

lxcora03 RUNNING 10.207.39.12, 172.220.40.12, 172.221.40.12, 192.210.39.12, 192.211.39.12, 192.212.39.12, 192.213.39.12 - - NO

lxcora04 RUNNING 10.207.39.13, 172.220.40.13, 172.221.40.13, 192.210.39.13, 192.211.39.13, 192.212.39.13, 192.213.39.13 - - NO

lxcora05 RUNNING 10.207.39.14, 172.220.40.14, 172.221.40.14, 192.210.39.14, 192.211.39.14, 192.212.39.14, 192.213.39.14 - - NO

lxcora06 RUNNING 172.220.40.15, 172.221.40.15, 192.210.39.15, 192.211.39.15, 192.212.39.15, 192.213.39.15 - - NO

oracle@W521:~/Downloads/scst-files$ sudo lxc-ls -f

NAME STATE IPV4 IPV6 GROUPS AUTOSTART

--------------------------------------------------------------------------------------------------------------------------------------------------

lxcora00 STOPPED - - - NO

lxcora01 RUNNING 10.207.39.10, 172.220.40.10, 172.221.40.10, 192.210.39.10, 192.211.39.10, 192.212.39.10, 192.213.39.10 - - NO

lxcora02 RUNNING 10.207.39.11, 172.220.40.11, 172.221.40.11, 192.210.39.11, 192.211.39.11, 192.212.39.11, 192.213.39.11 - - NO

lxcora03 RUNNING 10.207.39.12, 172.220.40.12, 172.221.40.12, 192.210.39.12, 192.211.39.12, 192.212.39.12, 192.213.39.12 - - NO

lxcora04 RUNNING 10.207.39.13, 172.220.40.13, 172.221.40.13, 192.210.39.13, 192.211.39.13, 192.212.39.13, 192.213.39.13 - - NO

lxcora05 RUNNING 10.207.39.14, 172.220.40.14, 172.221.40.14, 192.210.39.14, 192.211.39.14, 192.212.39.14, 192.213.39.14 - - NO

lxcora06 RUNNING 10.207.39.15, 172.220.40.15, 172.221.40.15, 192.210.39.15, 192.211.39.15, 192.212.39.15, 192.213.39.15 - - NO

oracle@W521:~/Downloads/scst-files$

Once Oracle Grid Infrastructure 12c (12.1.0.2.0) is downloaded, you can scp it to the lxcora01 LXC container as the "grid" user as shown below. (Note that you could just as easily cp the media into "/var/lib/lxc/lxcora01/rootfs/home/grid".) After scp'ing the install media into the lxcora01 container, ssh into it as the "grid" user using the "-Y" and "-C" switches to enable X-windows forwarding for the Oracle installer step, and then unzip the install media as shown in the next section.

oracle@W521:~/Downloads$ ls -lrt linuxam*

-rw-rw-r-- 1 oracle oracle 646972897 Sep 4 00:58 linuxamd64_12102_grid_2of2.zip

-rw-rw-r-- 1 oracle oracle 1747043545 Sep 4 01:04 linuxamd64_12102_grid_1of2.zip

oracle@W521:~/Downloads$ scp linuxamd64_12102_grid_*.zip grid@lxcora01:~/.

grid@lxcora01's password:

linuxamd64_12102_grid_1of2.zip 100% 1666MB 333.2MB/s 00:05

linuxamd64_12102_grid_2of2.zip 100% 617MB 308.5MB/s 00:02

oracle@W521:~/Downloads$ ssh -Y -C grid@lxcora01

grid@lxcora01's password:

/usr/bin/xauth: creating new authority file /home/grid/.Xauthority

[grid@lxcora01 ~]$ ls -lrt

total 2337920

drwxr-xr-x 3 grid oinstall 4096 Sep 3 22:27 grid

-rwxr-xr-x 1 grid oinstall 28 Sep 3 22:27 edit_bashrc

-rw-r--r-- 1 grid oinstall 1747043545 Sep 4 01:05 linuxamd64_12102_grid_1of2.zip

-rw-r--r-- 1 grid oinstall 646972897 Sep 4 01:05 linuxamd64_12102_grid_2of2.zip

[grid@lxcora01 ~]$

When you unzip the first Oracle install media file, you can answer "y" to overwrite the cvuqdisk rpm file as shown below.

[grid@lxcora01 ~]$ unzip linuxamd64_12102_grid_1of2.zip

Archive: linuxamd64_12102_grid_1of2.zip

replace grid/rpm/cvuqdisk-1.0.9-1.rpm? [y]es, [n]o, [A]ll, [N]one, [r]ename: y

Once both of the Oracle Grid install media files have been unzipped, you can cd into the grid directory (/home/grid/grid) and begin the Grid Infrastructure install with "./runInstaller" as shown below.

[grid@lxcora01 ~]$ ls -lrt

total 2337928

-rwxr-xr-x 1 grid oinstall 28 Sep 3 22:27 edit_bashrc

-rw-r--r-- 1 grid oinstall 1747043545 Sep 4 01:05 linuxamd64_12102_grid_1of2.zip

-rw-r--r-- 1 grid oinstall 646972897 Sep 4 01:05 linuxamd64_12102_grid_2of2.zip

drwxr-xr-x 7 grid oinstall 4096 Sep 4 01:11 grid

[grid@lxcora01 ~]$ cd grid

[grid@lxcora01 grid]$ ./runInstaller

The NTP check must PASS in the installer. If it fails, open a terminal, check that the ntpd process is running on every node, verify that "ntpq -p" returns valid output on each, and then retry the installer checks.

oracle@W521:~$ ssh root@lxcora01

Last login: Fri Sep 4 01:16:09 2015 from 10.207.39.1

[root@lxcora01 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

*vmem1.vmem.org 132.163.4.102 2 u 49 64 377 0.077 0.012 0.053

[root@lxcora01 ~]# ssh lxcora02

The authenticity of host 'lxcora02 (10.207.39.11)' can't be established.

RSA key fingerprint is a9:85:d2:96:d8:38:56:fa:ba:0b:64:be:f0:a7:59:c4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'lxcora02,10.207.39.11' (RSA) to the list of known hosts.

root@lxcora02's password:

[root@lxcora02 ~]# ntpq -p

ntpq: read: Connection refused

[root@lxcora02 ~]# ntpd

[root@lxcora02 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

vmem1.vmem.org 132.163.4.102 2 u 2 64 1 0.693 0.097 0.000

[root@lxcora02 ~]# ssh lxcora03

The authenticity of host 'lxcora03 (10.207.39.12)' can't be established.

RSA key fingerprint is a9:85:d2:96:d8:38:56:fa:ba:0b:64:be:f0:a7:59:c4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'lxcora03,10.207.39.12' (RSA) to the list of known hosts.

root@lxcora03's password:

[root@lxcora03 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

*vmem1.vmem.org 132.163.4.102 2 u 38 64 377 0.209 0.046 0.054

[root@lxcora03 ~]# ssh lxcora04

The authenticity of host 'lxcora04 (10.207.39.13)' can't be established.

RSA key fingerprint is a9:85:d2:96:d8:38:56:fa:ba:0b:64:be:f0:a7:59:c4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'lxcora04,10.207.39.13' (RSA) to the list of known hosts.

root@lxcora04's password:

[root@lxcora04 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

*vmem1.vmem.org 132.163.4.102 2 u 60 64 377 0.179 0.029 0.074

[root@lxcora04 ~]# ssh lxcora05

The authenticity of host 'lxcora05 (10.207.39.14)' can't be established.

RSA key fingerprint is a9:85:d2:96:d8:38:56:fa:ba:0b:64:be:f0:a7:59:c4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'lxcora05,10.207.39.14' (RSA) to the list of known hosts.

root@lxcora05's password:

[root@lxcora05 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

*vmem1.vmem.org 132.163.4.102 2 u 62 128 377 0.389 0.052 0.045

[root@lxcora05 ~]# ssh lxcora06

The authenticity of host 'lxcora06 (10.207.39.15)' can't be established.

RSA key fingerprint is a9:85:d2:96:d8:38:56:fa:ba:0b:64:be:f0:a7:59:c4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'lxcora06,10.207.39.15' (RSA) to the list of known hosts.

root@lxcora06's password:

[root@lxcora06 ~]# ntpq -p

remote refid st t when poll reach delay offset jitter

==============================================================================

*vmem1.vmem.org 132.163.4.102 2 u 56 64 377 0.463 0.001 0.090

[root@lxcora06 ~]#
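The node-by-node check above can also be scripted. Below is a minimal sketch, assuming root SSH access into each container and the lxcora01..lxcora06 naming used in this guide; the check_ntp_all function is a hypothetical helper, not part of the install bundle:

```shell
# Sketch only: assumes root SSH into each container and the lxcora0x
# naming used in this guide (check_ntp_all is a hypothetical helper).
nodes=$(seq -f 'lxcora%02g' 1 6)    # lxcora01 .. lxcora06

check_ntp_all() {
  for node in $nodes; do
    echo "=== $node ==="
    # start ntpd if it is not already running, then query the peer list
    ssh "root@$node" 'pgrep ntpd >/dev/null || ntpd; ntpq -p'
  done
}
```

Running "check_ntp_all" should show a peer line on every node; an asterisk in front of the peer (as in the output above) means ntpd has selected a sync source.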

Once the Oracle Grid Infrastructure (GI) installer kicks off, you can monitor the installer actions using "tail -f" on the installer log as shown below. The exact name of your install log file will differ, of course, but it will follow a similar naming format.

[root@lxcora01 logs]# pwd

/u00/app/12.1.0/oraInventory/logs

[root@lxcora01 logs]# ls -lrt

total 1356

-rw------- 1 grid oinstall 124 Sep 4 01:43 oraInstall2015-09-04_01-15-18AM.out

-rw------- 1 grid oinstall 0 Sep 4 01:43 oraInstall2015-09-04_01-15-18AM.err

-rw-r----- 1 grid oinstall 314 Sep 4 01:43 silentInstall2015-09-04_01-15-18AM.log

-rw-r--r-- 1 grid oinstall 1378661 Sep 4 01:43 installActions2015-09-04_01-15-18AM.log

[root@lxcora01 logs]# tail -f installActions2015-09-04_01-15-18AM.log

Note that some of the screenshots near the end of the GI install show the Ubuntu OS screen and the output of the above "tail -f" on the install log. Below are the install screens and options to choose for this installation.

The values for the screen below, such as "lxc1" and "lxc1-scan.gns1.vmem.org", should be typed in exactly as shown below. Be sure to include the "." (period) characters where shown (they are small and easy to miss, so examine the screenshot carefully). All the values must be entered exactly as shown because the install bundle has configured bind9 (aka "named") to resolve these names, so simply use the exact values below.

Next, the additional lxcora0x nodes need to be added to the list of nodes as shown below.

Pressing the "Add" button brings up the dialog shown below. Add each of the additional nodes in the same way. The Node Role is always "HUB" and the "Virtual Hostname" is always "AUTO", so just accept the defaults in each "add cluster node information" dialog box.

The final list of nodes (for 6 nodes) is shown below. If you chose fewer nodes, you will of course add fewer nodes here.

The screen below looks almost identical to the previous one, but note that the SSH connectivity section in the lower part of the screen has been activated by clicking the "SSH connectivity" button. Enter "grid" as the password and then click "Setup" to set up passwordless SSH for the "grid" user across all LXC nodes.

Note: if for any reason passwordless SSH setup fails, just run the tool again. Click "Next" to go to the next screen.

Now that passwordless SSH is set up, the next screen is the interface setup. Use the dropdowns to allocate the interfaces exactly as shown below. The Private interface is a 4-bond HAIP for the interconnect, and the two ASM interfaces are for the ASM Flex Cluster network. These are all OpenvSwitch networks; all the networking for this RAC is OpenvSwitch.

The screen below will not show any "Disk Path" values until you change the "Disk Discovery Path": click "Change Discovery Path" and set it to "/dev/mapper/asm*" as shown in the following screenshot.

Set a password for the accounts as shown below.

Select "Do not use IPMI" option.

Do not register with EM Cloud Control unless you are already running it.

Accept the defaults for the groups as shown below and click next.

Set the "Oracle base" and the "Software location" exactly as shown below.

Accept the default Inventory Directory exactly as shown below.

Type in the password for the root user (which is "root") so that the installer can automatically run the root installation scripts. I have tested this option and it works very reliably. If you monitor the install log with the "tail -f" shown before the screenshots section, you can watch the progress of the root.sh scripts as they run.

The Oracle pre-install Cluster Verification Utility runs as shown below.

The "Network Time Protocol" check should succeed (there should be no warnings for NTP). If you get an NTP warning as shown below, it should be addressed and the cluster verification step rerun.

You can click on the "(more details)" link and get the popup dialog with more details on the Cluster Verification check details for that check, as shown below.

After checking all the LXC containers, one container was found to have NTP not running. NTP was started on that container, the "ntpq -p" command was run there, and then the verification check was rerun using the "Check Again" button. This time there was no error for NTP, which is the desired result.

The check warnings and failures below are the ONLY warnings and failures that should occur. The "OS kernel parameter" warnings for {rmem_default, rmem_max, wmem_default, and wmem_max} can be safely ignored because we have set these values at the host level (in the Ubuntu sysctl.conf file) and since all LXC containers share the same kernel with the host, these are actually set where they need to be. The only reason that check fails is because the check itself does not know how to check the host kernel values.

The "Device Checks for ASM" failure is a known issue about not being able to validate the path on all nodes and can be safely ignored. The same is true of the resolv.conf integrity check, the DHCP check, and the /boot mount check, all of which can be safely ignored.

Therefore, as long as the warnings and failures match the screenshot below exactly, check the "Ignore All" box and then click "Next".

Check the "Ignore All" box in the upper right corner of the screen as shown below. Then click "Next" to continue.

Click on "Yes" to continue with the install and confirm that the warnings are being ignored.

The install summary screen is displayed as shown below.

Save the response file so that a "silent" install could be done on another subsequent install. Click on "Save" to save the response file.

The install begins and the progress bar indicates progress of the install as shown below.

When the installer gets to running the root scripts, it asks for confirmation to run them. Click "Yes" to run the root scripts.

The root scripts can be monitored using the "tail -f" as previously described and as shown below.

The root.sh script executions on the rest of the nodes (other than lxcora01) run in parallel as shown below. You can expect to see elevated top load averages during this phase.

Sometimes the Oracle Cluster Verification step may fail. The Cluster Verification Utility (CVU) can be re-run by clicking "Retry". In this case, as seen in the remaining screenshots, the second run of the Cluster Verification Utility was successful. However, even if CVU is not successful, that usually does not affect the install, and you will often find the install was completely successful anyway. So if after a couple of retries the CVU still shows some checks failed, just open a terminal, log in to an lxcora0x node, run "crsctl stat res -t" as the root user, and inspect whether the status output shows all registered services in their normal state. Note that the CVU check can take a few minutes.

As can be seen from the below screenshot, the "Retry" of CVU was successful.

The install is successful. Close the installer.

Here is the output showing how the initially installed Oracle 12c ASM Flex Cluster will look immediately after the runInstaller session completes. Notice that only 3 ASM instances and associated services are up. This is because the default cardinality of ASM in a Flex Cluster is "3". This can be changed so that ALL services are up using the command:

[root@lxcora01 ~]# srvctl modify asm -count 6

where the count matches the number of nodes in the ASM Flex Cluster (6 in this case).

After running the command to modify ASM, re-running "crsctl stat res -t" gives the output shown below. Now ALL installed and configured services have status ONLINE ONLINE, including ALL 6 ASM instances and their associated ASMNET1LSNR services.

[root@lxcora01 logs]# which srvctl

/u00/app/grid/product/12.1.0/grid/bin/srvctl

[root@lxcora01 logs]# srvctl modify asm -count 6

[root@lxcora01 logs]# crsctl stat res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.LISTENER.lsnr

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.SYSTEMDG.dg

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.net1.network

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

ora.ons

ONLINE ONLINE lxcora01 STABLE

ONLINE ONLINE lxcora02 STABLE

ONLINE ONLINE lxcora03 STABLE

ONLINE ONLINE lxcora04 STABLE

ONLINE ONLINE lxcora05 STABLE

ONLINE ONLINE lxcora06 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE lxcora06 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE lxcora02 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE lxcora03 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE lxcora01 169.254.39.117 192.2

10.39.10 192.211.39.

10,STABLE

ora.asm

1 ONLINE ONLINE lxcora01 Started,STABLE

2 ONLINE ONLINE lxcora06 Started,STABLE

3 ONLINE ONLINE lxcora02 Started,STABLE

4 ONLINE ONLINE lxcora04 Started,STABLE

5 ONLINE ONLINE lxcora05 Started,STABLE

6 ONLINE ONLINE lxcora03 Started,STABLE

ora.cvu

1 ONLINE ONLINE lxcora01 STABLE

ora.gns

1 ONLINE ONLINE lxcora02 STABLE

ora.gns.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora01.vip

1 ONLINE ONLINE lxcora01 STABLE

ora.lxcora02.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.lxcora03.vip

1 ONLINE ONLINE lxcora03 STABLE

ora.lxcora04.vip

1 ONLINE ONLINE lxcora04 STABLE

ora.lxcora05.vip

1 ONLINE ONLINE lxcora05 STABLE

ora.lxcora06.vip

1 ONLINE ONLINE lxcora06 STABLE

ora.mgmtdb

1 ONLINE ONLINE lxcora01 Open,STABLE

ora.oc4j

1 ONLINE ONLINE lxcora01 STABLE

ora.scan1.vip

1 ONLINE ONLINE lxcora06 STABLE

ora.scan2.vip

1 ONLINE ONLINE lxcora02 STABLE

ora.scan3.vip

1 ONLINE ONLINE lxcora03 STABLE

--------------------------------------------------------------------------------

[root@lxcora01 logs]#

The "iotop" package can be installed for monitoring IO across all the instances as shown below. Once installed, run "sudo iotop" to start the monitor.

oracle@W521:~/Downloads$ sudo iotop

[sudo] password for oracle:

sudo: iotop: command not found

oracle@W521:~/Downloads$ sudo apt-get install iotop

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following packages were automatically installed and are no longer required:

linux-headers-3.19.0-15 linux-headers-3.19.0-15-generic linux-image-3.19.0-15-generic linux-image-extra-3.19.0-15-generic

Use 'apt-get autoremove' to remove them.

The following NEW packages will be installed:

iotop

0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.

Need to get 23.8 kB of archives.

After this operation, 127 kB of additional disk space will be used.

Get:1 http://us.archive.ubuntu.com/ubuntu/ vivid/universe iotop amd64 0.6-1 [23.8 kB]

Fetched 23.8 kB in 1s (17.2 kB/s)

Selecting previously unselected package iotop.

(Reading database ... 236524 files and directories currently installed.)

Preparing to unpack .../archives/iotop_0.6-1_amd64.deb ...

Unpacking iotop (0.6-1) ...

Processing triggers for man-db (2.7.0.2-5) ...

Setting up iotop (0.6-1) ...

oracle@W521:~/Downloads$

You can view the OpenvSwitch at the Ubuntu host level using the command "sudo ovs-vsctl show".

To log in from the Ubuntu host to the Oracle instances in the containers, I recommend installing Oracle Instant Client as described here. The Oracle RAC is readily accessible from the Ubuntu host over sqlplus.

To ssh into the containers, just use:

ssh root@lxcora01 (or whichever container you want to ssh to).

To stop the container:

sudo lxc-stop -n lxcora01

To start the container:

sudo lxc-start -n lxcora01

To list all your containers:

sudo lxc-ls -f

Note that when you reboot the desktop, you'll need to log in manually to SCST on 10.207.40.1 and 10.207.41.1 (the storage networks) using the appropriate iscsiadm commands (which can be found in the create-scst-*.sh scripts) to get the LUNs active in /dev/mapper on the Ubuntu host. You'll then need to start the LXC containers, which take care of mapping the storage LUNs into themselves and starting up Oracle GI automatically in the usual way. If you prefer, you can put the iscsiadm login commands for the SCST SAN in the /etc/rc.local file, like this, which will configure the LUNs on boot. Either way, be sure that none of the LUNs are symlinks, as described previously.

oracle3@W523:~/Downloads$ cat /etc/rc.local

#!/bin/sh -e

#

# rc.local

#

# This script is executed at the end of each multiuser runlevel.

# Make sure that the script will "exit 0" on success or any other

# value on error.

#

# In order to enable or disable this script just change the execution

# bits.

#

# By default this script does nothing.

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.41.1 --login

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.40.1 --login

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.41.1 --logout

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.40.1 --logout

# sudo multipath -F

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.41.1 --login

# sudo iscsiadm --mode node --targetname iqn.2015-08.org.vmem:w520.san.asm.luns --portal 10.207.40.1 --login

exit 0

oracle3@W523:~/Downloads$
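The "no symlinks" requirement on the LUNs can be verified with a small helper. This is a sketch only: the check_luns function is hypothetical, and /dev/mapper/asm* is the LUN naming used earlier in this guide.

```shell
# Sketch: warn about any asm* entries under the given directory that are
# symlinks rather than real device nodes (check_luns is a hypothetical
# helper, not part of the install bundle).
check_luns() {
  dir="$1"
  bad=0
  for lun in "$dir"/asm*; do
    [ -e "$lun" ] || continue        # glob matched nothing
    if [ -L "$lun" ]; then
      echo "WARN: $lun is a symlink"
      bad=1
    fi
  done
  return $bad
}

# Usage: check_luns /dev/mapper
```

If the function prints any WARN lines, re-run the iscsiadm logout/login and "multipath -F" sequence shown in the rc.local comments above before starting the containers.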

Also note that it can take up to 15 seconds or so for the DHCP-provided public IP for each RAC node (the 10.207.39.x network) to come up. Be sure that IP comes up. It's best to start the containers one by one, verifying with "sudo lxc-ls -f" that each one has brought up its public IP, before starting the next container.
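That one-at-a-time startup can be sketched as follows. Assumptions: the lxcora0x names and the 10.207.39.x public network from this guide; the 30-second timeout is arbitrary, and both helper functions are hypothetical.

```shell
# Hypothetical helpers for a one-at-a-time container startup.

# Succeeds if the 'lxc-ls -f' output on stdin shows a 10.207.39.x
# public address on the line for the named container.
line_has_public_ip() {
  grep "$1" | grep -q '10\.207\.39\.'
}

# Poll for up to ~30 seconds until the container's public IP appears.
wait_for_public_ip() {
  for t in $(seq 1 30); do
    if sudo lxc-ls -f | line_has_public_ip "$1"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage, one node at a time as recommended above:
#   sudo lxc-start -n lxcora01
#   wait_for_public_ip lxcora01 && echo "lxcora01 public IP is up"
```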