Private:Weekly Group 7/27/2011

= Igor =


 * General TeraGrid/XSEDE features
 * Varied user base, some not very savvy.
 * Conservative scheduling to guarantee time for approved projects. Jim Weichel notes high variance in resource occupancy and a low average occupancy. Conversations continue about opportunistic use; sharing with OSG at this point appears to be one-way, i.e. cycles flowing into OSG
 * Focus on standards (e.g. JSDL for job description) and interoperability
 * Effort in close collaboration with "Resource/Service Providers" to make things work, through whatever technologies/platforms (including commercial ones)
 * Centralized support
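The JSDL standard mentioned above describes jobs as XML documents. As a rough illustration (a minimal sketch only: the element names and namespaces follow the JSDL 1.0 specification, but the executable and arguments are hypothetical placeholders), a simple job definition can be assembled with Python's standard library:

```python
# Minimal JSDL 1.0 job description built with the standard library.
# The executable and arguments below are hypothetical placeholders.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def make_job(executable, args):
    """Return a JSDL JobDefinition document as an XML string."""
    job_def = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job_def, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for a in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = a
    return ET.tostring(job_def, encoding="unicode")

xml_doc = make_job("/bin/echo", ["hello", "grid"])
print(xml_doc)
```

Because the job is plain XML against a published schema, any BES-style endpoint that speaks JSDL can in principle consume it, which is the interoperability point of the standards focus above.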


 * XSEDE Container tutorials
 * Focus on a Linux-file-system-like paradigm for all resources. Examples:
 * FUSE mounting/export of local files makes the grid file system appear as a single tree
 * Basic Execution System (BES, quasi-CE) export, "linking" to queues, etc. Queues and other abstract resources appear the way devices do under /dev or /proc
 * Easily pluggable storage and execution resources - this eases collaboration through the XSEDE software stack whether or not the organization's policy supports scheduling on the added resources
 * Work on metadata caching, including pushing of catalog updates
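The /dev-style paradigm above can be caricatured with a toy in-memory tree (purely illustrative; the path names and queue attributes are made up and are not actual XSEDE container APIs). The point is that storage exports and execution queues share a single namespace:

```python
# Toy illustration of the "everything is a file tree" paradigm:
# execution queues and storage exports appear as entries in one
# namespace, the way devices appear under /dev. All names are made up.

grid_tree = {
    "/grid/home/alice/data.txt": {"type": "file", "site": "local FUSE export"},
    "/grid/queues/cluster-A": {"type": "bes-queue", "slots": 128},
    "/grid/queues/cluster-B": {"type": "bes-queue", "slots": 64},
}

def list_dir(prefix):
    """Return entries directly under `prefix`, like `ls` on the grid tree."""
    names = set()
    for path in grid_tree:
        if path.startswith(prefix + "/"):
            names.add(path[len(prefix) + 1:].split("/")[0])
    return sorted(names)

print(list_dir("/grid"))          # files and queues share one tree
print(list_dir("/grid/queues"))   # abstract resources listed like devices
```

Plugging in a new storage or execution resource then amounts to adding entries under the tree, matching the "easily pluggable" point above.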


 * Globus Tools
 * GridFTP continues to work on optimizations, e.g. "pipelining", so that the next file in the queue begins transferring before the previous file's acknowledgement arrives
 * Globus Online - received with general adulation! The site/service babysits transfers, storing user certificates, favorite storage sites, etc. as part of a user profile. (Parameters for the major facilities are already in the database. Adding personal equipment (e.g. a laptop) is easy with a lightweight, trivial-to-install server.) The site and an ssh-accessible shell provide interfaces for checking on transfers, histories, etc.
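A back-of-the-envelope model shows why pipelining pays off for many small files (illustrative arithmetic only, with assumed numbers, not GridFTP's actual protocol details): serially, each file waits out a full round trip before the next one starts; pipelined, commands for queued files go out without waiting for the previous acknowledgement, so the round-trip cost is paid roughly once.

```python
# Rough latency model for transferring n small files (assumed numbers).
def serial_time(n, rtt, xfer):
    # each file waits for the previous file's acknowledgement
    return n * (rtt + xfer)

def pipelined_time(n, rtt, xfer):
    # commands are queued back-to-back; the round trip overlaps transfers
    return rtt + n * xfer

n, rtt, xfer = 1000, 0.05, 0.01   # 1000 files, 50 ms RTT, 10 ms per file
print(round(serial_time(n, rtt, xfer), 3))     # seconds, serial
print(round(pipelined_time(n, rtt, xfer), 3))  # seconds, pipelined
```

With these assumed numbers the per-file round trip dominates the serial case, which is exactly the regime (many small files over a wide-area link) where pipelining matters.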


 * Info from vendors
 * APPRO exhibited cluster nodes, showing a 1U unit with 2 CPUs and 4 GPUs! Much general buzz about direct InfiniBand-GPU architectures (bypassing the slow motherboard bus). They say (from private collaboration info) that much is going on at Intel regarding GPU integration, but little is ready for public release. Could the bus bottleneck be a matter of waiting ~1 year?
 * HP showed nodes with 2 motherboards per 1U node. The 1U "drawer" slides out of a multi-U crate, which houses a central cooling/power supply (i.e. proprietary hardware). Flexible layout with room for GPU cards, HDs, etc.


 * Other tools
 * Eclipse Parallel Tools Platform (PTP) - a parallel development/debugging system.