Setting up ORCA

ORCA 5.0 has been released and is the recommended version. This page contains very little information about older versions.

ORCA is available for the Windows, Linux and Mac OS X platforms and is distributed from the main ORCA site. First, one needs to register as a user in the ORCA forum and log in. Once logged in, the ORCA Downloads page becomes available.

The latest version of ORCA, version 5.0 (released July 2021), is the recommended version and can be downloaded from the official website. You should generally download the latest bugfix release (5.0.4 as of this writing).

Information about the release can be found in the ORCA Forum.

ORCA 5.0 is available as compressed archives (containing all executables) for Linux, Windows and Mac OS X. Note that only binaries are available, not the source code; this has the advantage of requiring no compilation.

Extracting the archive reveals a directory containing the ORCA executables. The program then needs to be configured to work correctly from the command line.

Note that ORCA runs completely from the command line; there is no graphical user interface. Various visualization programs that can be used with ORCA (to build molecular geometries and visualize results) can be found under Visualization and printing.

Only 64-bit binaries are available. A few 32-bit binaries are available for older ORCA versions. 


Setting up ORCA for a single computer (serial)

Setting up ORCA on a computer typically involves downloading the binaries and then telling the command-line environment of the operating system where ORCA is located (setting the PATH variable). Note that the path to the ORCA binaries should never contain spaces (i.e. no spaces allowed in directory names).


ORCA_Configuration.pdf

Manual Windows configuration:

For a slideshow that shows how to configure ORCA for a single computer (serial) on Windows, see the ORCA_Configuration.pdf file linked above.


Manual quick and simple Mac OS X configuration:

1. Download the ORCA binaries for Mac. Extract the archive, rename the directory to "orca" and move it to the /Applications folder. Note: ORCA can in principle be located anywhere, but here we choose to put it in /Applications.

2. Open the Terminal program (under /Applications/Utilities).

3. Paste the following text (environment variable setting) into the Terminal window and press Enter:

Mac OS 10.15 (Catalina) and newer:

echo 'export PATH="/Applications/orca:$PATH"; export LD_LIBRARY_PATH="/Applications/orca:$LD_LIBRARY_PATH"'  >> ~/.zshrc; source ~/.zshrc

Older Mac OS versions:

echo 'export PATH="/Applications/orca:$PATH"; export LD_LIBRARY_PATH="/Applications/orca:$LD_LIBRARY_PATH"'  >> ~/.bash_profile; source ~/.bash_profile

No output will appear, but "orca" is now available as a command in the command line.
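To confirm, type the following in the Terminal; it should print the location chosen above (e.g. /Applications/orca/orca):

which orca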


Note: Newer Mac OS versions have a security feature that prevents ORCA and its subprograms from running directly. To override this feature, cd to the ORCA directory in the Terminal and run the following xattr command:

cd /Applications/orca  

xattr -d com.apple.quarantine *
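Alternatively, the quarantine attribute can be removed recursively from the whole directory in a single command (standard xattr options on macOS), which also covers any subdirectories:

xattr -r -d com.apple.quarantine /Applications/orca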


Manual quick and simple Linux configuration:

1. Download the ORCA binaries for Linux. Extract the archive, rename the directory to "orca" and move it to your user home folder (~).

2. Open a new Terminal window. 

3. Paste the following text (environment variable setting) into the Terminal window and press Enter:

echo 'export PATH="$HOME/orca:$PATH"; export LD_LIBRARY_PATH="$HOME/orca:$LD_LIBRARY_PATH"'  >> ~/.bash_profile; source ~/.bash_profile

No output will appear, but "orca" is now available as a command in the command line. Type 'which orca' in the shell to confirm that ORCA is now available in your PATH. If this did not work the first time, do not repeat the command but instead edit the ~/.bash_profile file manually.

Note: Different Linux distributions may use a different bash login file than .bash_profile, e.g. .bashrc or .profile. If you use another shell than bash, you must edit the respective shell configuration file instead (zsh: .zshrc, tcsh: .tcshrc).

Note: Some Linux distributions come with another program named orca installed (Orca the screen reader). This may interfere with ORCA the QM program. Putting the directory of ORCA the QM program at the front (left part) of the PATH variable should avoid this problem:

export PATH=$HOME/orca:$PATH

Ask in the ORCA forum if you have problems setting up ORCA. See the parallel section below on how to set up ORCA for parallel (multi-core) calculations.


Running ORCA from the command line

Once ORCA is set up, the program is run from the command line (cmd program in Windows, Terminal program on Mac/Linux). ORCA does not have a graphical user interface.

Using the command line in Windows

Using the command line in Mac OS X

Using the command line in Linux 

To quickly check if ORCA has been set up correctly, simply type "orca" in the command-line window. If it has, you should receive the following message:


This program requires the name of a parameterfile as argument

For example ORCA TEST.INP

However, if you instead get a message such as "orca: command not found", then ORCA is not in your PATH and has not been set up correctly.
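To diagnose this on Mac/Linux, you can print the directories currently in your PATH (one per line) and check whether the ORCA directory is among them; the grep prints only the lines containing "orca":

echo $PATH | tr ':' '\n' | grep -i orca

On Windows, typing echo %PATH% in the cmd window shows the same information.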

In order to start using ORCA for calculations you should first create a new directory where you want to work. This can be done through the command line (use the mkdir command) or through the regular operating system interface if preferred. You should then change to that directory in the command line window (cd command). ORCA is then generally run like this from the command line:

Mac/Linux: 

orca inputfile.inp > output.out

Windows: 

orca inputfile.txt > output.out 

Here inputfile.inp/inputfile.txt is an ORCA inputfile (it can be named anything) created by the user (see the example file below) that should be present in the working directory. The ">" character redirects the ORCA output to a new file, here called "output.out" (which can also be named anything). The inputfile needs to be in plain-text format and can be created using any text editor that produces plain text. Do not use Microsoft Word! Also note that Windows text editors often automatically append a .txt extension to text files, so on Windows it is easiest to stick to txt as the file extension (typing dir in the command line will show you the actual name and extension of the file).

Text editors to use for creating ORCA inputfiles:

Windows: Use e.g. Notepad. An even better choice is the free Notepad++.

Mac OS X: Use e.g. the TextEdit program, but remember to change to plain-text mode (Format menu -> Make Plain Text). Or use any command-line editor like nano, vi, emacs etc.

Linux: Use e.g. a graphical editor such as gedit (often just called Text Editor). Or use any command-line editor like nano, vi, emacs etc., or the X-windows client nedit.

Note that if you connect to a Linux/Unix computing cluster using SSH on your Windows/Mac/Linux machine, it makes most sense to learn how to use the command-line editors:  nano, vi, emacs etc. or the X-windows client nedit.

Example ORCA inputfile for H2O (copy/paste the following into a new file called "inputfile.inp" (Mac/Linux) or "inputfile.txt" (Windows)):

! B3LYP def2-SVP Opt

# My first ORCA calculation

*xyz 0 1
O        0.000000000      0.000000000      0.000000000
H        0.000000000      0.759337000      0.596043000
H        0.000000000     -0.759337000      0.596043000
*
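Running this example as described above:

orca inputfile.inp > output.out

After a successful optimization, the working directory will contain, besides the main text output in output.out, several files named after the inputfile, e.g. inputfile.xyz (the final optimized geometry) and inputfile.gbw (the wavefunction/orbital file, which can be reused to restart calculations).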

See General input for more information on the format of the input file, and the rest of the website for ORCA keywords in general. Use any text editor to open the output file that contains the results of your calculation. Programs like Chemcraft, Gabedit and Avogadro can additionally open ORCA output files and visualize the results.

To run ORCA in the background (often convenient):

Windows: 

start /b orca inputfile.inp > output.out    

Mac/Linux: 

orca inputfile.inp > output.out &
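Note that a background job started this way is still tied to the terminal session. On Mac/Linux, to let the job keep running after the terminal is closed, the standard nohup utility can be used:

nohup orca inputfile.inp > output.out &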


Setting up ORCA for a parallel environment

Setting up ORCA to run in parallel on multiple cores is only slightly more involved. Below are general guidelines; see the ORCA manual for more information. Ask in the ORCA forum if you run into problems (after you have carefully read all the available information).

You can install OpenMPI in any directory you have access to, e.g. your home directory (you do not need admin access to the computer/cluster).


Windows

See the manual and the release notes.

Mac OS X

1.) Install the correct OpenMPI version (the one required by your ORCA release; see the release notes) from openmpi.org.

2.) Extract the ORCA binaries into some directory (using either the Mac OS X Archive Utility or the command-line programs gunzip, tar -xvf or bunzip2).

3.) Set the PATH and LD_LIBRARY_PATH variables for BOTH OpenMPI and ORCA, so they can be found, by editing the ~/.bashrc file (the same lines as in the sketch shown after the Linux steps below).

4.) Run orca with the full path in the command line:

/full/path/to/orca/orca

Linux

1.) Install the correct OpenMPI version (the one required by your ORCA release; see the release notes) from openmpi.org.

2.) Extract the ORCA binaries into some directory using the command-line programs gunzip, tar -xvf or bunzip2.

3.) Set the PATH and LD_LIBRARY_PATH variables for BOTH OpenMPI and ORCA, so they can be found, by editing the ~/.bashrc file (see the sketch after this list).

4.) Run orca with the full path in the command line:

/full/path/to/orca/orca
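Step 3 might look like the following lines added to ~/.bashrc (a sketch; both installation paths are placeholders that must be adjusted to your own setup):

# OpenMPI paths (adjust to your installation)
export PATH=$HOME/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH

# ORCA paths (adjust to your installation)
export PATH=$HOME/orca:$PATH
export LD_LIBRARY_PATH=$HOME/orca:$LD_LIBRARY_PATH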


Running ORCA in parallel

Note that when running ORCA in parallel, ORCA should NOT be started with mpirun (e.g. mpirun -np 4 orca ...) like many other MPI programs. ORCA takes care of communicating with the OpenMPI interface on its own when needed; one just needs to make sure the OpenMPI binaries and libraries are available through the PATH and LD_LIBRARY_PATH environment variables.

Use the !PalX keyword in the inputfile to tell ORCA to start multiple processes. E.g. to start a 4-process job, the input file might look like this:

! B3LYP def2-SVP  Opt PAL4

or using block input:

! B3LYP def2-SVP Opt

%pal
nprocs 4
end

The inputfile can then be run, calling ORCA with its full path (this is important so that ORCA can determine where all the different ORCA subprograms are located):

/full/path/to/orca/orca test.inp
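If the ORCA directory is already in your PATH, a convenient shell idiom (Mac/Linux) that expands to the full path automatically is:

$(which orca) test.inp > test.out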

Note that parallel jobs are usually not started directly like this, but rather submitted via a job script to a queueing system such as PBS or SLURM on a multi-node computing cluster. See the job-script section below.


Top reasons for ORCA not working in parallel:

If the cluster has multiple network interfaces, OpenMPI may display warning messages like this:


[[6829,1],0]: A high-performance Open MPI point-to-point messaging module

was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)

Host: cluster

Another transport will be used instead, although this may result in

lower performance.


To get rid of the warnings you can set the OMPI_MCA_btl environment variable (typically in a job script, see below), e.g. like this:

export OMPI_MCA_btl=self,tcp,sm

For more information on this, see the OpenMPI FAQ.


Setting up an ORCA job-submit file for a queueing system (PBS, SLURM, SGE)

Job-submit scripts will of course differ according to the queueing system, and each cluster will have different settings depending on how the cluster is run. Below are simple example files for PBS/Torque and SLURM:

job-orca-PBS.sh:

#!/bin/bash

#PBS -l nodes=1:ppn=8

#PBS -q short

# Usage of this script:

# qsub -N jobname job-orca-PBS.sh , where jobname is the name of your ORCA inputfile (jobname.inp) without the .inp extension

# The jobname below is set automatically when using "qsub -N jobname job-orca-PBS.sh". It can alternatively be set manually here. It should be the name of the inputfile without the extension (.inp or whatever).

export job=$PBS_JOBNAME

# Setting OpenMPI paths here:

export PATH=/users/home/user/openmpi/bin:$PATH

export LD_LIBRARY_PATH=/users/home/user/openmpi/lib:$LD_LIBRARY_PATH

# Here giving the path to the ORCA binaries and setting the communication protocol (ssh)

export orcadir=/path/to/orca

export RSH_COMMAND="/usr/bin/ssh -x"

export PATH=$orcadir:$PATH

export LD_LIBRARY_PATH=$orcadir:$LD_LIBRARY_PATH

# Creating local scratch folder for the user on the computing node. /scratch directory must exist. 

if [ ! -d /scratch/$USER ]

then

  mkdir -p /scratch/$USER

fi

tdir=$(mktemp -d /scratch/$USER/orcajob__$PBS_JOBID-XXXX)

# Copy only the necessary files for ORCA from submit directory to scratch directory: inputfile, xyz-files, GBW-file etc.

# Add more here if needed.

cp $PBS_O_WORKDIR/*.inp $tdir/

cp $PBS_O_WORKDIR/*.gbw $tdir/

cp $PBS_O_WORKDIR/*.xyz $tdir/

cp $PBS_O_WORKDIR/*.hess $tdir/

cp $PBS_O_WORKDIR/*.pc $tdir/

# Creating nodefile in scratch

cat ${PBS_NODEFILE} > $tdir/$job.nodes

# cd to scratch

cd $tdir

# Copy job and node info to beginning of outputfile

echo "Job execution start: $(date)" >> $PBS_O_WORKDIR/$job.out

echo "Shared library path: $LD_LIBRARY_PATH" >> $PBS_O_WORKDIR/$job.out

echo "PBS Job ID is: ${PBS_JOBID}" >> $PBS_O_WORKDIR/$job.out

echo "PBS Job name is: ${PBS_JOBNAME}" >> $PBS_O_WORKDIR/$job.out

cat $PBS_NODEFILE >> $PBS_O_WORKDIR/$job.out

# Start ORCA job. ORCA is started using the full path (necessary for parallel execution). The output file is written directly to the submit directory on the front node.

$orcadir/orca $job.inp >> $PBS_O_WORKDIR/$job.out

# ORCA has finished here. Now copy important stuff back (xyz files, GBW files etc.). Add more here if needed.

cp $tdir/*.gbw $PBS_O_WORKDIR

cp $tdir/*.engrad $PBS_O_WORKDIR

cp $tdir/*.xyz $PBS_O_WORKDIR

cp $tdir/*.loc $PBS_O_WORKDIR

cp $tdir/*.qro $PBS_O_WORKDIR

cp $tdir/*.uno $PBS_O_WORKDIR

cp $tdir/*.unso $PBS_O_WORKDIR

cp $tdir/*.uco $PBS_O_WORKDIR

cp $tdir/*.hess $PBS_O_WORKDIR

cp $tdir/*.cis $PBS_O_WORKDIR

cp $tdir/*.dat $PBS_O_WORKDIR

cp $tdir/*.mp2nat $PBS_O_WORKDIR

cp $tdir/*.nat $PBS_O_WORKDIR

cp $tdir/*.scfp_fod $PBS_O_WORKDIR

cp $tdir/*.scfp $PBS_O_WORKDIR

cp $tdir/*.scfr $PBS_O_WORKDIR

cp $tdir/*.nbo $PBS_O_WORKDIR

cp $tdir/FILE.47 $PBS_O_WORKDIR

cp $tdir/*_property.txt $PBS_O_WORKDIR

cp $tdir/*spin* $PBS_O_WORKDIR
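Submitting a job with this script might then look as follows (h2o is a placeholder job name, assuming an input file h2o.inp in the submit directory):

qsub -N h2o job-orca-PBS.sh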


job-orca-SLURM.sh:

#!/bin/bash

#SBATCH -N 1

#SBATCH --tasks-per-node=8

#SBATCH --time=8760:00:00

#SBATCH -p compute

#SBATCH --error="%x.e%j"

#SBATCH --output="%x.o%j"

# Usage of this script:

# sbatch -J jobname job-orca-SLURM.sh , where jobname is the name of your ORCA inputfile (jobname.inp).

# The jobname below is set automatically when submitting like this: sbatch -J jobname job-orca-SLURM.sh

# It can alternatively be set manually below. The job variable should be the name of the inputfile without the extension (.inp).

job=${SLURM_JOB_NAME}

job=$(echo ${job%%.*})

# Setting OpenMPI paths here:

export PATH=/users/home/user/openmpi/bin:$PATH

export LD_LIBRARY_PATH=/users/home/user/openmpi/lib:$LD_LIBRARY_PATH

# Here giving the path to the ORCA binaries and setting the communication protocol (ssh)

# You can alternatively load an ORCA module here.

export orcadir=/path/to/orca

export RSH_COMMAND="/usr/bin/ssh -x"

export PATH=$orcadir:$PATH

export LD_LIBRARY_PATH=$orcadir:$LD_LIBRARY_PATH

# Creating local scratch folder for the user on the computing node. 

#Set the scratchlocation variable to the location of the local scratch, e.g. /scratch or /localscratch 

export scratchlocation=/scratch

if [ ! -d $scratchlocation/$USER ]

then

  mkdir -p $scratchlocation/$USER

fi

tdir=$(mktemp -d $scratchlocation/$USER/orcajob__$SLURM_JOB_ID-XXXX)

# Copy only the necessary stuff in submit directory to scratch directory. Add more here if needed.

cp  $SLURM_SUBMIT_DIR/*.inp $tdir/

cp  $SLURM_SUBMIT_DIR/*.gbw $tdir/

cp  $SLURM_SUBMIT_DIR/*.xyz $tdir/

# Creating nodefile in scratch

echo $SLURM_NODELIST > $tdir/$job.nodes

# cd to scratch

cd $tdir

# Copy job and node info to beginning of outputfile

echo "Job execution start: $(date)" >>  $SLURM_SUBMIT_DIR/$job.out

echo "Shared library path: $LD_LIBRARY_PATH" >>  $SLURM_SUBMIT_DIR/$job.out

echo "Slurm Job ID is: ${SLURM_JOB_ID}" >>  $SLURM_SUBMIT_DIR/$job.out

echo "Slurm Job name is: ${SLURM_JOB_NAME}" >>  $SLURM_SUBMIT_DIR/$job.out

echo $SLURM_NODELIST >> $SLURM_SUBMIT_DIR/$job.out

# Start ORCA job. ORCA is started using the full path (necessary for parallel execution). The output file is written directly to the submit directory on the front node.

$orcadir/orca $job.inp >>  $SLURM_SUBMIT_DIR/$job.out

# ORCA has finished here. Now copy important stuff back (xyz files, GBW files etc.). Add more here if needed.

cp $tdir/*.gbw $SLURM_SUBMIT_DIR

cp $tdir/*.engrad $SLURM_SUBMIT_DIR

cp $tdir/*.xyz $SLURM_SUBMIT_DIR

cp $tdir/*.loc $SLURM_SUBMIT_DIR

cp $tdir/*.qro $SLURM_SUBMIT_DIR

cp $tdir/*.uno $SLURM_SUBMIT_DIR

cp $tdir/*.unso $SLURM_SUBMIT_DIR

cp $tdir/*.uco $SLURM_SUBMIT_DIR

cp $tdir/*.hess $SLURM_SUBMIT_DIR

cp $tdir/*.cis $SLURM_SUBMIT_DIR

cp $tdir/*.dat $SLURM_SUBMIT_DIR

cp $tdir/*.mp2nat $SLURM_SUBMIT_DIR

cp $tdir/*.nat $SLURM_SUBMIT_DIR

cp $tdir/*.scfp_fod $SLURM_SUBMIT_DIR

cp $tdir/*.scfp $SLURM_SUBMIT_DIR

cp $tdir/*.scfr $SLURM_SUBMIT_DIR

cp $tdir/*.nbo $SLURM_SUBMIT_DIR

cp $tdir/FILE.47 $SLURM_SUBMIT_DIR

cp $tdir/*_property.txt $SLURM_SUBMIT_DIR

cp $tdir/*spin* $SLURM_SUBMIT_DIR
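Submitting a job with this script might then look as follows (again with h2o as a placeholder job name and h2o.inp in the submit directory):

sbatch -J h2o job-orca-SLURM.sh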