Disk Space Organization Overview
Below is a list of the different forms of disk space available to users on the cluster for reading and writing. (Unless noted otherwise, a space has no backup, and no guarantee is made about the durability of data stored there.)
/home/username | user home directory with regular backups; 90 MB default quota, shared among all users
/home/username/jobs | only available on stats.phys.uconn.edu; ideal for launching Condor jobs; no backups, no per-user quota; ~150 GB of space in total
/scratch | temporary space shared by all users; ~100 GB in total; NO BACKUP, DO NOT PERMANENTLY STORE RESULTS HERE
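How much of each space you are using can be checked with the standard Linux disk-usage tools on the login node. The paths below are placeholders; substitute your own username and directories:

du -sh /home/username        # total size of your home directory (counts against the shared 90 MB quota)
du -sh /home/username/jobs   # total size of your jobs space
df -h /scratch               # space remaining on the shared scratch area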
Results
Files you are no longer using but do not want to delete should be moved to your own personal storage off the cluster. The jobs directory is NOT backed up, so any important results should be copied off the cluster as soon as they are produced. The cluster is where you stage and run your work; it is not a permanent home for your results.
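One way to do this is to pull results down to your own computer with scp, run from your local machine; the username and project paths here are only illustrative:

scp -r -C username@stats.phys.uconn.edu:/home/username/jobs/myproject/results ~/cluster-backups/myproject/

The -r option copies a whole directory tree and -C enables compression over the network.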
/home/username
Upon logging in, the user lands in his or her home directory, /home/username. This modest amount of space is intended as private space for development, testing of code, and the like. Results may safely be kept here for some time, as this space is backed up. It is not intended for large data files or for the many files generated by Condor jobs.
The quota for this directory is shared among all statistics users: if one person fills their home directory, it prevents other users from writing to theirs. For large files, please use the jobs directory.
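For example, a large data file that is crowding the shared home quota can simply be moved into the jobs space (the filename is only an illustration):

mv ~/large_dataset.tar.gz ~/jobs/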
/home/username/jobs
A special directory, /home/username/jobs, is a more efficient space for launching jobs. It is actually a link to local disk space on stats.phys.uconn.edu (and is not accessible from the processing nodes). It is not bound by quotas.
This is the recommended location for storing files that are in use.
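A minimal sketch of the usual workflow in this space is shown below; the project name and submit description file are hypothetical, and the actual submit file contents depend on your job:

mkdir -p ~/jobs/my_analysis
cd ~/jobs/my_analysis
# place the executable, input files, and an HTCondor submit description file (e.g. my_analysis.sub) here
condor_submit my_analysis.sub   # submit from the jobs space so logs and output land here

Submitting from this directory keeps Condor's log, output, and error files out of the quota-limited home directory.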
/scratch
For this and other purposes, a temporary directory, /scratch, is also available from all nodes. This is a larger working space suitable for collaborative work and for launching jobs. Big files that need to be transferred to the cluster can be copied directly into the scratch space by specifying /scratch as the destination in your scp client. For example, a compression-enabled transfer of a large file directly to a working directory under the cluster's scratch space with the console-based scp client looks as follows:
scp -C WarAndPeace.txt lntolstoy@stats.phys.uconn.edu:/scratch/LeosSpace/novel_parse
scp | -C | WarAndPeace.txt | lntolstoy@stats.phys.uconn.edu:/scratch/LeosSpace/novel_parse
secure copy command | enables compression during transfer | local file in the present directory | remote destination on the stats server (user@host:path)
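Because /scratch is shared by all users and fairly small, it is good practice to remove your working area once the results have been copied off the cluster; the path here is the example directory from above:

rm -r /scratch/LeosSpace/novel_parse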