SLiMSuite REST Server



SLiMFarmer V1.10.2

SLiMSuite HPC job farming control program

Module: SLiMFarmer
Description: SLiMSuite HPC job farming control program
Version: 1.10.2
Last Edit: 30/07/20
Citation: Edwards et al. (2020), Methods Mol Biol. 2141:37-72.

Copyright © 2014 Richard J. Edwards - See source code for GNU License Notice


Imported modules: rje rje_hpc rje_obj rje_iridis rje_qsub


See SLiMSuite Blog for further documentation. See rje for general commands.

Function

This module is designed to control and execute parallel processing jobs on an HPC cluster using PBS and QSUB. If qsub=T, it will generate a job file and use qsub to place that job in the queue with the appropriate parameter settings. If slimsuite=T and farm=X gives a recognised program (below), or hpcmode is not fork, the qsub job will call SLiMFarmer with the same commandline options plus qsub=F i=-1 v=-1. If seqbyseq=T, the job will instead be run once per input sequence (see SeqBySeq Mode below).

Otherwise, slimsuite=T indicates that farm=X is a SLiMSuite program, for which the python call and pypath will be added. If this program uses forking, it should parallelise over a single multi-processor node. If farm=X contains a / path separator, this will be added to pypath; otherwise it will be assumed that farm is in tools/.

If slimsuite=F then farm=X should instead be a program call to be queued in the PBS job file. In this case, the contents of jobini=FILE will be appended to the end of the farm call, which makes it possible, for example, to include commands containing double quotes in the farm command.
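
For illustration, two hypothetical invocations of the two modes described above are shown below. The script name slimfarmer.py (assumed to sit in the SLiMSuite tools/ directory), myscript.sh and myjob.ini are placeholders; all options used are documented in the Commandline section.

    python slimfarmer.py qsub=T farm=slimfinder nodes=1 ppn=16 walltime=12 vmem=126 job=slimjob
    python slimfarmer.py qsub=T slimsuite=F farm=myscript.sh jobini=myjob.ini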

Currently recognised SLiMSuite programs for farming: SLiMFinder, QSLiMFinder, SLiMProb, SLiMCore.

Currently recognised SLiMSuite programs for rsh mode only qsub farming: GOPHER, SLiMSearch, UniFake.

NOTE: Any commandline options that need bracketing quotes will need to be placed into an ini file. This can either be the ini file used by SLiMFarmer, or a jobini=FILE that will only be used by the farmed programs. Note that commands in slimfarmer.ini will not be passed on to other SLiMSuite programs unless ini=slimfarmer.ini is given as a commandline argument.
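
As a purely hypothetical example, an option whose value needs bracketing quotes could be placed in a jobini file rather than on the commandline (the option name and value below are placeholders, not real SLiMSuite settings):

    # myjob.ini - passed to the farmed program via jobini=myjob.ini
    someopt="a value with spaces"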

The runid=X setting is important for SLiMSuite job farming, as it is what separates different parameter setting combinations run on the same data and is also used to identify which datasets have already been run. Running several jobs on the same data with the same SLiMSuite program but different parameter settings will therefore cause problems unless each run is given a distinct runid. If runid is not set, it will default to the job=X setting.

The hpcmode=X setting determines the method used for farming out jobs across the nodes. hpcmode=rsh uses rsh to spawn the additional processes out to other nodes, based on a script written for the IRIDIS HPC by Ivan Wolton. hpcmode=fork will restrict analysis to a single node and use Python forking to distribute jobs. This can be used even on a single multi-processor machine to fork out SLiMSuite jobs. basefile=X will set the log, RunID, ResFile, ResDir and Job: RunID and Job will have path stripped; ResFile will have .csv appended.
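
As an illustration of the basefile behaviour described above (the exact file names may differ in practice), a setting such as basefile=runs/myjob would be expected to translate to:

    log=runs/myjob.log  runid=myjob  job=myjob  resfile=runs/myjob.csv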

Initially, it will call other programs but, in time, it is envisaged that other programs will make use of SLiMFarmer and have parallelisation built-in.

SeqBySeq Mode

In SeqBySeq mode, the program assumes that seqin=FILE and basefile=X are given and that farm=X states the Python
program to be run, which should be a SLiMSuite program. (The SLiMSuite subdirectory will also need to be given unless
slimsuite=F, in which case the whole path to the program should be given. pypath=PATH can set an alternative path.)

Each sequence in seqin=FILE will then be worked through in turn and farmed out to the farm program. Outputs given by
OutList are then compiled, along with the Log, into the corresponding basefile=X outputs. In the case of *.csv and
*.tdt files, the header row is copied from the first file and then excluded from all subsequent files. For all other
file extensions, the whole output is copied.
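
The compilation of tabular outputs described above can be sketched in Python. This is an illustrative reimplementation only, not the actual SLiMFarmer code, and the per-sequence file naming pattern is an assumption:

    # Illustrative sketch only: compile per-sequence *.csv outputs into basefile.csv,
    # keeping the header row from the first file and dropping it from the rest.
    import glob

    def compile_tables(basefile, ext="csv"):
        partfiles = sorted(glob.glob("%s.*.%s" % (basefile, ext)))  # assumed per-sequence naming
        with open("%s.%s" % (basefile, ext), "w") as out:
            for (i, part) in enumerate(partfiles):
                with open(part) as infile:
                    lines = infile.readlines()
                out.writelines(lines if i == 0 else lines[1:])  # drop repeated header rows

    compile_tables("myrun", "csv")  # "myrun" is a placeholder basefile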

Commandline

Basic QSub Options

qsub=T/F : Whether to execute QSub PBS job creation and queuing [False]
jobini=FILE : Ini file to pass to the farmed HPC jobs with SLiMFarmer options. Overrides commandline. [None]
slimsuite=T/F : Whether program is an RJE *.py script (adds log processing) [True]
nodes=X : Number of nodes to run on [1]
ppn=X : Processors per node [16]
walltime=X : Walltime for qsub job (hours) [12]
vmem=X : Virtual Memory limit for run (GB) [126]
job=X : Name of job file (.job added) [slimfarmer]
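
For orientation, the basic options above map onto standard PBS resource requests. The directives below are generic PBS syntax for nodes=1 ppn=16 walltime=12 vmem=126 job=slimfarmer, shown for illustration only; the actual job file written by SLiMFarmer may differ:

    #PBS -N slimfarmer
    #PBS -l nodes=1:ppn=16
    #PBS -l walltime=12:00:00
    #PBS -l vmem=126gb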

Advanced QSub Options

hpc=X : Name of HPC system ['katana']
pypath=PATH : Path to python modules [slimsuite home directory]
qpath=PATH : Path to change directory to [current path]
pause=X : Wait X seconds before attempting showstart [5]
jobwait=T/F : Whether to wait for the job to finish before exiting [False]
email=X : Email address to email job stats to at end ['']
mailstart=T/F : Whether to email user at start of run [False]
depend=LIST : List of job ids to wait for before starting job (dependhpc=X added) []
dependhpc=X : Name of HPC system for depend ['kman.restech.unsw.edu.au']
report=T/F : Pull out running job IDs and run showstart [False]
modules=LIST : List of modules to add in job file e.g. blast+/2.2.31,clustalw []
modpurge=T/F : Whether to purge loaded modules in qsub job file prior to loading [True]
precall=LIST : List of additional commands to run between module loading and program call []
daisychain=X : Chain together a set of qsub runs of the same call that depend on the previous job.
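
A hypothetical qsub submission combining some of the advanced options above (the email address, module versions and job ID are placeholders):

    python slimfarmer.py qsub=T farm=slimfinder modules=blast+/2.2.31,clustalw modpurge=T email=user@example.com depend=1234567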

Main SLiMFarmer Options

farm=X : Execute a special SLiMFarm analysis on HPC [batch]
- batch will farm out a batch list of commands read in from subjobs=LIST
- gopher/slimfinder/qslimfinder/slimprob/slimcore/slimsearch/unifake = special SLiMSuite HPC farming (see recognised programs above).
- if seqbyseq=T, farm=X will specify the program to be run (see docs)
- otherwise, farm=X will be executed as a system call in place of SLiMFarmer
hpcmode=X : Mode to be used for farming jobs between nodes (rsh/fork) [fork]
forks=X : Number of forks to be used when hpcmode=fork and qsub=F. [1]
jobini=FILE : Ini file to pass to the farmed SLiMSuite run. (Also used for SLiMFarmer options if qsub=T.) [None]
jobforks=X : Number of forks to pass to farmed out run if >0 [0]
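
For a single multi-processor machine, a hypothetical fork-mode run might look like the call below; slimjob.ini is a placeholder for an ini file carrying the settings for the farmed SLiMFinder runs:

    python slimfarmer.py qsub=F hpcmode=fork forks=8 farm=slimfinder jobini=slimjob.ini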

Standard HPC Options

subsleep=X : Sleep time (seconds) between cycles of subbing out jobs to hosts [1]
subjobs=LIST : List of subjobs to farm out to HPC cluster []
iolimit=X : Limit of number of IOErrors before termination [50]
memfree=X : Min. proportion of node memory to be free before spawning job [0.1]
test=T/F : Whether to produce extra output in "test" mode [False]
keepfree=X : Number of processors to keep free on head node [1]
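
A hypothetical batch-farming call using the options above, assuming subjobs=LIST can be supplied as a file listing one command per line (subjobs.txt is a placeholder):

    python slimfarmer.py qsub=T farm=batch subjobs=subjobs.txt nodes=1 ppn=16 walltime=12 subsleep=1 keepfree=1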

SeqBySeq Options

seqbyseq=T/F : Activate seqbyseq mode - assumes basefile=X option used for output [False]
seqin=FILE : Input sequence file to farm out [None]
basefile=X : Base for output files - compiled from individual run results [None]
outlist=LIST : List of extensions of outputs to add to basefile for output (basefile.*) []
pickhead=X : Header to extract from OutList file and used to populate AccNum to skip []
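
A hypothetical SeqBySeq run with placeholder file names, in which each sequence in proteome.fas is farmed out and the *.csv and *.tdt outputs are compiled into myrun.csv and myrun.tdt:

    python slimfarmer.py seqbyseq=T seqin=proteome.fas basefile=myrun farm=gopher outlist=csv,tdt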

SLiMSuite Farming Options

runid=X : Text identifier for SLiMSuite job farming [job]
resfile=FILE : Main output file for SLiMSuite run [farm.csv]
pickup=T/F : Whether to pickup previous run based on existing results and RunID [True]
sortrun=T/F : Whether to sort input files by size and run big -> small to avoid hang at end [True]
loadbalance=T/F : Whether to split SortRun jobs equally between large & small to avoid memory issues [True]
basefile=X : Set the log, RunID, ResFile, ResDir and Job to X [None].
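
A hypothetical SLiMSuite farming run using the options above (runid and file names are placeholders); a second run on the same data with different settings would need a different runid:

    python slimfarmer.py qsub=T farm=slimprob runid=SLiMProb_Run1 resfile=slimprob.csv pickup=T sortrun=T loadbalance=T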


See also rje.py generic commandline options.

Module Version History

    # 0.0 - Initial Compilation.
    # 1.0 - Functional version using rje_qsub and rje_iridis to fork out SLiMSuite runs.
    # 1.1 - Updated to use rje_hpc.JobFarmer and incorporate main SLiMSuite farming within SLiMFarmer class.
    # 1.2 - Implemented the slimsuite=T/F option and got SLiMFarmer qsub to work with GOPHER forking.
    # 1.3 - Modified default vmem request to 127GB from 64GB.
    # 1.4 - Added modules=LIST : List of modules to add in job file [clustalo,mafft]
    # 1.4.1 - Fixed farm=batch mode for qsub=T.
    # 1.4.2 - Fixed log transfer issues due to new #VIO line. Better handling of crashed runs.
    # 1.4.3 - Added recognition of missing slimsuite programs and switching to slimsuite=F.
    # 1.4.4 - Modified default vmem request to 126GB from 127GB.
    # 1.4.5 - Updated BLAST loading default to 2.2.31
    # 1.5.0 - mailstart=T/F : Whether to email user at start of run [False]
    # 1.6.0 - modpurge=T/F : Whether to purge loaded modules in qsub job file prior to loading [True]
    # 1.7.0 - precall=LIST : List of additional commands to run between module loading and program call []
    # 1.8.0 - jobforks=X : Number of forks to pass to farmed out run if >0 [0]
    # 1.9.0 - daisychain=X : Chain together a set of qsub runs of the same call that depend on the previous job.
    # 1.10.0 - Added appending contents of jobini file to slimsuite=F farm commands.
    # 1.10.1 - Added job resource summary to job stdout.
    # 1.10.2 - Fixed bug when SLiMFarmer batch run being called from another program, e.g. MultiHAQ

SLiMFarmer REST Output formats

Run with &rest=docs for program documentation and options. A plain text version is accessed with &rest=help.
&rest=OUTFMT can be used to retrieve individual parts of the output, matching the tabs in the default
(&rest=format) output. Individual OUTFMT elements can also be parsed from the full (&rest=full) server output,
which is formatted as follows:
###~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~###
# OUTFMT:
... contents for OUTFMT section ...
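
For example, replacing <server> with the root URL of the REST server hosting this program:

    <server>/slimfarmer&rest=docs   (program documentation and options)
    <server>/slimfarmer&rest=help   (plain text version)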

Available REST Outputs

There is currently no specific help available on REST output for this program.

© 2015 RJ Edwards. Contact: richard.edwards@unsw.edu.au.