Friday, January 4th 2013 11:25:01 PM PST
Note: The Triton Resource is now running in full production. The Triton Compute Cluster (TCC) and Petascale Data Analysis Facility (PDAF) are available to anyone with a TAPP account as of October 5, 2009.
The trial account phase of the Triton Resource will be discontinued after January 31, 2013. Please refer to the Research Cyberinfrastructure website for information regarding startup opportunities for research computing at UCSD and SDSC.
TAPP, the Triton Affiliates and Partners Program, is the prescribed mechanism for obtaining and managing access to the system.
Triton staff maintain a discussion list to which all Triton users are encouraged to subscribe. Members can post questions and comments to the Triton Discussion List (firstname.lastname@example.org) for help, support, and community feedback.
To run interactive parallel batch jobs on Triton, use the command:

$ qsub -I

This provides a login on the launch node and creates the PBS_NODEFILE file listing all nodes assigned to the interactive job.
Other qsub options can be used, such as those described by the man qsub command.
As with any job, the interactive job will wait in the queue until the specified number of nodes become available. Requesting fewer nodes and shorter wall clock times may reduce the wait time because the job can more easily backfill among larger jobs.
The showbf command gives information on available time slots:
Partition  Tasks  Nodes      Duration  StartOffset       StartDate
---------  -----  -----  ------------  -----------  --------------
ALL            8      8      INFINITY     00:00:00  13:45:30_04/03
This output gives an estimate of when a job requesting a given number of nodes and duration could start; the actual start time depends on the jobs ahead of it in the queue.
The exit command will end the interactive job.
To run an interactive job with a wall clock limit of 30 minutes using two nodes and two processors per node:
$ qsub -I -V -l walltime=00:30:00 -l nodes=2:ppn=2
qsub: waiting for job 75.triton-42.sdsc.edu to start
qsub: job 75.triton-42.sdsc.edu ready
$ echo $PBS_NODEFILE
$ more /opt/torque/aux/75.triton-42.sdsc.edu
$ mpirun -machinefile /opt/torque/aux/75.triton-42.sdsc.edu -np 4 hostname
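Since PBS exports PBS_NODEFILE inside the job, the machinefile path need not be typed by hand, and the process count can be derived from the file itself. A minimal sketch (not an official Triton script; the mpirun flags match the example above):

```shell
#!/bin/sh
# Sketch: derive the MPI process count from the machinefile instead of
# hard-coding it. PBS writes one line per allocated processor slot, so
# the line count equals the number of available MPI ranks.
NP=$(wc -l < "$PBS_NODEFILE")
mpirun -machinefile "$PBS_NODEFILE" -np "$NP" hostname
```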
To run an interactive job with a wall clock limit of 30 minutes in queue large using two nodes and 32 processors per node:
$ qsub -I -q large -l walltime=00:30:00 -l nodes=2:ppn=32