HPC2N job submission

Make.sbatch
***-------------------------------------------***
#!/bin/bash
## Project account and job name. The name is optional, but useful for
## finding the job in a long queue listing.
#SBATCH -A 5DV152-VT14
#SBATCH -J mpiCountSort

## Names of the output and error files (set further down). You can call them
## what you like. If you do not name them, the output and any errors will go
## to a file called slurm-<jobid>.out.

## Asking for 6 processes. There are 48 cores per node on Abisko.
## If you do not specify the number of processes/cores, you will get the
## smallest allocatable amount, which is one 'socket' (6 cores, as far as
## SLURM is concerned). Runtime (walltime) in this example is 1 hour max.
## If the job completes sooner, it will return at that time.
#SBATCH -n 6
#SBATCH --time=01:00:00
## Spread the 6 tasks evenly across two nodes
#SBATCH --ntasks-per-node=3

## Avoid shell-special characters such as '&' and ':' in file names
#SBATCH --output=pN6_dN700.out
#SBATCH --error=pN6_dN700.err

## Load any modules you need (here OpenMPI built for the GCC compilers)
module add openmpi/gcc/1.6.5
echo "Starting at `date`"
echo "Running on hosts: $SLURM_NODELIST"
echo "Running on $SLURM_NNODES nodes."
echo "Running on $SLURM_NPROCS processors."
echo "Current working directory is `pwd`"

srun -n $SLURM_NPROCS ./mpiCountSort 700
echo "Program finished with exit code $? at: `date`"
***-------------------------------------------***
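Since the output and error file names above are built from the process count and data size, it can be convenient to construct them from variables so that shell-special characters never slip in. A minimal sketch (the variable names `procs` and `datasize` are illustrative, not part of the script above):

```shell
#!/bin/bash
# Build safe output/error file names from job parameters,
# using only letters, digits, and underscores.
procs=6
datasize=700
outfile="pN${procs}_dN${datasize}.out"
errfile="pN${procs}_dN${datasize}.err"
echo "$outfile"   # prints pN6_dN700.out
echo "$errfile"   # prints pN6_dN700.err
```

The same variables could then be passed to the program itself (e.g. `./mpiCountSort $datasize`), keeping the file names and the run parameters in sync.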
Submit the program with:
sbatch <my_jobscript> 
Help link: http://www.hpc2n.umu.se/batchsystem/examples_scripts
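On submission, sbatch prints a line of the form "Submitted batch job <id>". A sketch of capturing that job ID in a script, so it can be used later with commands such as squeue or scancel (the sample line below is assumed output, not from a real submission):

```shell
#!/bin/bash
# Capture the job ID from sbatch's confirmation line.
# In a real script this would be: line=$(sbatch my_jobscript)
line="Submitted batch job 123456"
# Strip everything up to and including the last space, leaving the ID.
jobid=${line##* }
echo "$jobid"   # prints 123456
# The ID could then be used, e.g.: squeue -j "$jobid"  or  scancel "$jobid"
```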