Cluster Overview
Revision as of 14:59, 18 March 2016
Hardware Configuration
A cluster of 32 AMD and 15 Intel computers has been assembled to facilitate parallel computation in the field of Statistics.
Each of the 32 Dell PowerEdge SC1435 nodes features:
- 2 x 4-core AMD Opteron 2350 processors (2 GHz)
- 8 GB of Memory (667 MHz)
- 250 GB hard drive (SATA, 7.2k RPM, 3 Gbps)
Each of the 7 Dell PowerEdge R420 nodes features:
- 2 x 6-core Intel Xeon E5-2430L processors (2 GHz)
- 64 GB of memory (1600 MHz)
- 4 TB hard drive (SATA, 7.2k RPM, 3 Gbps)
Each of the 13 Dell PowerEdge R430 nodes features:
- 2 x 12-core Intel Xeon E5-2650L processors (1.8 GHz)
- 64 GB of memory
- 4 TB hard drive
It is best to think of each core as a separate virtual machine or processing slot capable of running one process. The cores remain isolated because the computer cannot distribute a simple, stand-alone process among its cores without special instructions for doing so within the code itself. We will therefore refer interchangeably to cores, slots, or virtual machines (of which there are 8 x 31 + 12 x 7 + 16 x 8) instead of computers or processors as the independent computing units.
The Statistics Cluster is integrated into the existing computing infrastructure of the Nuclear Physics lab in the Physics Department. The lab provides its computing resources as well as networking, file server, security, and other services. Below is a rough list of the computing resources available for use (as of 2010):
                     | Statistics | Physics       | Geophysics
---------------------|------------|---------------|-----------
Architecture         | 64 bit     | 64 bit/32 bit | 32 bit
Cores                | 460        | 192/72        | 34
Performance (Gflops) | 322        | 260/72        | 34
Note that some software has a license limited to the Statistics Department equipment and does not span the other cluster segments.
Software
The following is a selected list of available software:
- GCC Compiler Package (4.4.7)
- PGI Fortran Compiler (7.2)
- MPICH2 (1.2.1)
- Open MPI (1.5.4)
- LAM-MPI (7.1.14)
- BEST (1.0)
- Condor (8.2.9)
- CERNLIB (2005)
- OpenBUGS
- R (3.x)
- ROOT (5.34)
- The usual Linux tools and scripting languages are also available
Additional software can be requested by contacting the system administrator; please supply the package name and version number in your request. For all packages related to R, please contact Jun Yan (jun.yan at uconn.edu).
Current Status
Currently available resources and their utilization may be monitored using:
- Ganglia - for comprehensive usage statistics
- Cluster load summary - for a simple summary of cluster resource load
Acknowledgement
Purchase of the cluster and related software was partially supported by NSF Scientific Computing Research Environments for the Mathematical Sciences (SCREMS) Program grant 0723557 to M.H. Chen, Z. Chi (PI), D. Dey and O. Harel.