== Disk Space Organization ==
Below is a list of the different forms of disk space available to users on the cluster for reading and writing. (Unless noted otherwise, there is no backup and thus no guarantee that data stored there will be preserved.)
<br>
{| class="wikitable" style="text-align: left; background: #A9A9A9"
| style="background: #f9f9f9; width: 150px" | /home/<i><b>username</b></i>
| style="background: #f9f9f9; width: 750px" | user home directory with regular backups; 90 MB default quota.
|-
| style="background: #f9f9f9; width: 150px" | /home/<i><b>username</b></i>/jobs
| style="background: #f9f9f9; width: 750px" | link to local space on stats.phys ideal for launching Condor jobs. No backups, no per-user quota; ~150 GB space total
|-
| style="background: #f9f9f9; width: 150px" | /scratch
| style="background: #f9f9f9; width: 75px" | temporary space shared by <b>all</b> users; ~100 GB total; <b>NO BACKUP, DO NOT PERMANENTLY STORE RESULTS HERE</b>
|-
| style="background: #f9f9f9; width: 150px" | /local
| style="background: #f9f9f9; width: 750px" | ~30 GB local space available on every Statistics node that is automatically used for staging by Condor for executables and data.
|}
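Current capacity and usage of any of these areas can be checked with the standard df utility from a node where the area is mounted, for example:
<i>df -h /scratch /local</i>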
Upon logging in, the user lands in his or her home directory: /home/<i><b>username</b></i>. This modest amount of space is intended as private space for developing and testing code. Results may safely be kept here for some time, as this space is backed up. It is not intended for large data files or for the many files generated by Condor jobs.
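Current usage against the home-directory quota can be checked on the login node with the standard quota utility (assuming it is configured to report the home filesystem):
<i>quota -s</i>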
A special directory /home/<b><i>username</i></b>/jobs is a more efficient space for launching jobs. It is actually a link to local space on stats.phys.uconn.edu (and is not accessible from the processing nodes). It is not bound by quotas.
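As an illustration of launching from this directory, a minimal HTCondor submit description might look as follows; the executable and file names here are placeholders, not part of the cluster setup:
<pre>
# ~/jobs/myjob.sub -- minimal HTCondor submit description (placeholder names)
executable = my_analysis
output     = my_analysis.out
error      = my_analysis.err
log        = my_analysis.log
queue
</pre>
Submitting it from this directory with <i>condor_submit myjob.sub</i> keeps the job's output and log files on the local disk of stats.phys rather than in the backed-up home space.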
For this and other purposes a temporary directory, /scratch, is also available from all nodes. This is a larger working space suitable for collaborative work and for launching jobs. Big files that need to be transferred to the cluster can be copied directly to the scratch space by specifying /scratch as the destination in your scp client. For example, a compression-enabled transfer of a large file directly to a working directory under the cluster's scratch space with the console-based scp client looks as follows:
<i>scp -C WarAndPeace.txt lntolstoy@stats.phys.uconn.edu:/scratch/LeosSpace/novel_parse</i>
Being a collaborative space, however, means that it should be kept organized for the sake of all other cluster users (including members of the adjoining Physics and Geophysics clusters). This space may be cleaned up by administrators if any files appear to be abandoned.
The shared spaces discussed so far reside on a file server and are accessible to the stats nodes over the network via NFS. While this is convenient for a shared space used for testing and job distribution, network latency, bandwidth limitations, and congestion may create a bottleneck for data-intensive calculations. To resolve this problem, local space is available on each node in the form of an on-board hard disk, mounted under /local. Note that use of this space requires the job to copy the necessary files there and to clean up at the end, as sketched below.
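A typical pattern is a job wrapper along the following lines; the directory, data, and program names are placeholders to be adapted to the actual job (Condor already stages the declared executable and input files to /local automatically, so explicit staging is mainly useful for additional large data files):
<pre>
#!/bin/sh
# Example job wrapper (placeholder names): stage data to the node's local disk,
# run against the fast local copy, then move results back and clean up.
WORKDIR=/local/$USER/job_$$
mkdir -p $WORKDIR
cp /scratch/LeosSpace/bigdata.dat $WORKDIR/          # stage the large input from shared space
cd $WORKDIR
$HOME/bin/my_analysis bigdata.dat > results.txt      # run against the local copy
cp results.txt /scratch/LeosSpace/                   # copy results back to shared space
cd /
rm -rf $WORKDIR                                      # free the local disk for other jobs
</pre>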