== GCC 4.9.2 ==
The default GCC on CentOS 6 is 4.4.7, which is what your jobs will use when they run. Please see [http://gryphn.phys.uconn.edu/statswiki/index.php/How_to_Submit_a_Job#GCC_4.9.2 this section] regarding the use of GCC 4.9.2.
 
== C example ==
 
=== The Problem and the Code ===
 
=== The Problem and the Code ===
<pre>
Executable  = calcpi
Requirements = ParallelSchedulingGroup == "stats group"
+AccountingGroup = "group_statistics_testjob.username"
Universe  = vanilla
output    = calcpi$(Process).out
...
Queue 50
</pre>
The last line specifies that 50 instances should be scheduled on the cluster. The description file specifies the executable and the arguments passed to it during execution. (In this case we are requesting that all instances iterate 10e9 times in the program's sampling loop.) The Requirements field insists that the job stay on the Statistics Cluster. (All statistics nodes are labeled with "stats group" in their Condor ClassAds.) The output and error files are the targets of the standard output and standard error streams, respectively. The log file is used by Condor to record, in real time, the progress of job processing. Note that this setup labels output files by process number to prevent one job instance from overwriting files belonging to another. The current values imply that all files are to be found in the same directory as the description file.

Note that this example uses the Accounting Group "group_statistics_testjob" with the user's username appended at the end. If running a default, standard job, do not include this line. For more explanation, please see this page on [http://gryphn.phys.uconn.edu/statswiki/index.php/How_to_Submit_a_Job#Job_policy Job Policy].

The <i>universe</i> variable specifies the Condor runtime environment. For the purposes of these independent jobs, the simplest "vanilla" universe suffices. In a more complicated parallel task (with checkpointing and migration, MPI calls, etc.), more advanced runtime environments are employed, often requiring specialized linking of the binaries. The lines specifying transfer settings are important to avoid any assumptions about accessibility over NFS. They should be included whether or not any output files (aside from standard output and error) are necessary.

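A typical pair of transfer directives looks like the following (should_transfer_files appears in the description files on this page; when_to_transfer_output is its standard Condor companion, shown here as a usual choice rather than this cluster's required setting):

<pre>
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
</pre>
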
=== Job Submission and Management ===

While logged in on stats, the job is submitted with:
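Assuming the description file above was saved as "calcpi.condor" (the filename is an assumption; it is not shown on this page):

<pre>
condor_submit calcpi.condor
</pre>

Condor acknowledges the submission and reports the cluster ID assigned to the job.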
<pre>
...
universe = vanilla
Requirements = ParallelSchedulingGroup == "stats group"
+AccountingGroup = "group_statistics_testjob.username"
should_transfer_files = YES
...
</pre>
== Matlab example ==

=== The Problem and Code ===
Matlab can be run in <b>batch mode (i.e. non-interactive mode)</b> on the cluster. <b>No graphics</b> can be used when running on the cluster. The following simple example saves its output to a file that can be opened later in an interactive Matlab session.

File: Matlab_example.m
 
<pre>
...
universe = vanilla
Requirements = ParallelSchedulingGroup == "stats group"
+AccountingGroup = "group_statistics_testjob.username"
initialdir = /path/to/your/jobs/directory
transfer_input_files = Matlab_example.m, runMatlab
...
</pre>
By this time, only 6 jobs are left on the cluster, all with status 'R' (running). Various statistics are given, including a job ID number. This handle is useful if intervention is required, such as manual removal of frozen job instances from the cluster. The command condor_rm 7.28 would remove just that instance, whereas condor_rm 7 would remove the entire job.
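In summary, the queue-management commands mentioned above (using job ID 7 as in the example):

<pre>
condor_q         # list your jobs and their status
condor_rm 7.28   # remove only instance 28 of job 7
condor_rm 7      # remove every instance of job 7
</pre>
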
== Long job submit files ==

In order to submit a long job on the cluster, the following line needs to be added to the submit file:

<pre>
+AccountingGroup = "group_statistics_longjob.prod"
</pre>

Please note that long jobs have a maximum of 48 hours before they may be killed. The cluster is optimized for many small jobs, not a few long jobs.

== Acknowledgement ==

Examples provided by Igor Senderovich, Alex Barnes, Yang Liu