New GPU development system

Last week we commissioned a new computer system for GPU development. It consists of three workstations delivered by Puget Systems.

[Image: the installed system, from left to right: uri, schwyz and unterwalden]
  • uri: dual 8-core 2.4 GHz Intel Xeon CPUs + 4 x Titan X
  • schwyz: dual 8-core 2.4 GHz Intel Xeon CPUs + 2 x Tesla K40c + 3 x 6 TB HDD
  • unterwalden: single 8-core 3.0 GHz Intel Xeon CPU + 4 x Titan X

Each machine is equipped with 128 GB of RAM and a 512 GB M.2 SSD.

[Image: Samsung 512 GB M.2 drive]

The M.2 disks interface directly with the PCIe bus on the motherboard and allow for very fast I/O: we measured around 800 MB/s sequential write, as opposed to 150 MB/s on the SATA drives.
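
For anyone wanting to reproduce this kind of number, a minimal sketch of a sequential-write check follows; the file name, chunk size and total volume below are arbitrary placeholders, not the exact test we ran. The idea is simply to write a few GiB in large chunks, fsync, and divide volume by elapsed time.

    // Toy sequential-write timer (host-side C++, builds with nvcc or g++).
    // File name, chunk size and repetition count are placeholders.
    #include <chrono>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        const size_t chunk = 64ul << 20;              // 64 MiB per write() call
        const int reps = 64;                          // 4 GiB total
        std::vector<char> buf(chunk, 0x5a);           // non-zero data, avoids sparse files

        int fd = open("iotest.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < reps; ++i)
            if (write(fd, buf.data(), chunk) != (ssize_t)chunk) { perror("write"); return 1; }
        fsync(fd);                                    // force data out of the page cache
        auto t1 = std::chrono::steady_clock::now();
        close(fd);

        double s = std::chrono::duration<double>(t1 - t0).count();
        double mib = double(chunk) * reps / (1 << 20);
        std::printf("%.0f MiB in %.2f s -> %.0f MiB/s sequential write\n", mib, s, mib / s);
        return 0;
    }

Without the fsync, the OS page cache can make the apparent write speed far higher than what the drive actually sustains.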

We decided to investigate consumer-grade cards (here the Titan X), which have lower double-precision throughput than the scientific-grade Teslas and lack ECC memory, but cost about 3-4x less.
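
To quantify that gap on our own hardware, a microbenchmark along the following lines is sufficient. This is a hedged sketch rather than our actual test code: the kernel shape, iteration count and launch sizes are arbitrary, and a serial FMA chain measures sustained per-thread arithmetic rather than peak FLOPS, but the float/double runtime ratio it reports is the quantity of interest.

    // Toy FMA-chain kernel, identical arithmetic in float and double.
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void fma_loop(T* out, int iters) {
        T x = T(0.999) + T(threadIdx.x) * T(1e-7);
        T y = T(1.0001);
        for (int i = 0; i < iters; ++i)
            x = x * y + T(1e-9);                        // one FMA per iteration
        out[blockIdx.x * blockDim.x + threadIdx.x] = x; // keep the result live
    }

    template <typename T>
    float time_kernel(const char* label) {
        const int blocks = 4096, threads = 256, iters = 100000;
        T* out;
        cudaMalloc(&out, size_t(blocks) * threads * sizeof(T));
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        fma_loop<T><<<blocks, threads>>>(out, iters);   // warm-up launch
        cudaEventRecord(t0);
        fma_loop<T><<<blocks, threads>>>(out, iters);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0;
        cudaEventElapsedTime(&ms, t0, t1);
        std::printf("%-8s %8.1f ms\n", label, ms);
        cudaFree(out);
        return ms;
    }

    int main() {
        float f = time_kernel<float>("float");
        float d = time_kernel<double>("double");
        std::printf("double/float runtime ratio: %.1fx\n", d / f);
        return 0;
    }

Running the same binary on a Titan X and a K40c makes the consumer vs. scientific trade-off concrete.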

[Image: 4 x Titan X and 2 x Tesla K40c]

The system serves several purposes:

  • investigate single vs. double precision performance
  • investigate the importance of CPU speed
  • compare consumer- vs. scientific-grade hardware
  • efficiently develop CUDA code
  • run number-crunching jobs at otherwise idle times

We are also looking forward to the new Nvidia Pascal architecture.

New funding obtained for GPU cluster

The Penn State IceCube group is partnering in a new large GPU-centric cluster, the Cyber-Laboratory for Astronomy, Materials and Physics (CyberLAMP), funded by the NSF MRI program. D. Cowen is a co-PI for CyberLAMP, a $1M-scale compute cluster emphasizing the use of GPUs for scientific applications.

For certain calculations, graphics processing units can provide orders of magnitude more compute power than standard CPUs. The IceCube Collaboration is already taking advantage of GPU power for certain simulations. Working with colleagues in Canada, the Penn State group aims to dramatically expand this effort, with the ultimate goal of quickly running hundreds or thousands of simulations for each neutrino interaction. We will then use the subset of those simulations whose pattern most closely matches the pattern produced by the actual neutrino to reconstruct its properties, namely its energy, direction and flavor.
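
The reconstruction code itself is not part of this post; purely as an illustration of the "closest match" idea, each simulated hit pattern could be scored against the observed one with a Poisson log-likelihood, in parallel across templates, and the best-scoring template kept. Everything in the sketch below (sensor and template counts, the dummy data) is invented for illustration.

    // Toy "closest match" scoring: one thread per simulated template,
    // Poisson log-likelihood of observed hit counts given the template's
    // expected counts. All sizes and data are made up.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <math.h>

    const int SENSORS = 512, TEMPLATES = 4096;

    __global__ void score(const float* sim, const int* obs, float* logl) {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= TEMPLATES) return;
        float ll = 0.f;
        for (int s = 0; s < SENSORS; ++s) {
            float mu = fmaxf(sim[t * SENSORS + s], 1e-6f);   // expected hits
            ll += obs[s] * logf(mu) - mu - lgammaf(obs[s] + 1.f);
        }
        logl[t] = ll;
    }

    int main() {
        float *sim, *logl; int *obs;
        cudaMallocManaged(&sim, sizeof(float) * SENSORS * TEMPLATES);
        cudaMallocManaged(&obs, sizeof(int) * SENSORS);
        cudaMallocManaged(&logl, sizeof(float) * TEMPLATES);
        for (int i = 0; i < SENSORS * TEMPLATES; ++i) sim[i] = 1.f + (i % 7);  // dummy templates
        for (int s = 0; s < SENSORS; ++s) obs[s] = 1 + (s % 5);                // dummy observation

        score<<<TEMPLATES / 256, 256>>>(sim, obs, logl);
        cudaDeviceSynchronize();

        int best = 0;                                   // host-side argmax over templates
        for (int t = 1; t < TEMPLATES; ++t)
            if (logl[t] > logl[best]) best = t;
        std::printf("best-matching template: %d (logL = %.1f)\n", best, logl[best]);
        return 0;
    }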

An individual neutrino interaction can produce millions of photons in the ice at the South Pole. Simulating an individual event involves mimicking the propagation of these photons through this optically complex ice. The process is very time-consuming on traditional CPUs, but much, much faster with the GPU architecture, an architecture originally developed to trace photons to make video games more realistic.
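
To make the GPU fit concrete: photons are independent of one another, so one GPU thread can follow one photon through scattering until it is absorbed. The toy Monte Carlo below is only a sketch; the scattering and absorption lengths, the isotropic scattering model and the fictitious sensor are all invented for illustration and are far simpler than the real, depth-dependent ice optics.

    // Toy photon propagation: one thread per photon, exponential step
    // lengths between scatters, absorption after a random path budget.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __global__ void propagate(unsigned long long seed, int* hits,
                              float scatter_len, float absorb_len) {
        int id = blockIdx.x * blockDim.x + threadIdx.x;
        curandState rng;
        curand_init(seed, id, 0, &rng);

        float x = 0, y = 0, z = 0;
        float dx = 0, dy = 0, dz = 1;                   // initial direction
        float budget = -absorb_len * logf(curand_uniform(&rng));
        float travelled = 0;

        while (travelled < budget) {
            float step = -scatter_len * logf(curand_uniform(&rng));
            x += step * dx; y += step * dy; z += step * dz;
            travelled += step;
            // new isotropic direction after a scatter (toy model)
            float cz = 2 * curand_uniform(&rng) - 1;
            float phi = 6.2831853f * curand_uniform(&rng);
            float s = sqrtf(1 - cz * cz);
            dx = s * cosf(phi); dy = s * sinf(phi); dz = cz;
            // count photons near a fictitious sensor at z = 100 m
            if (fabsf(z - 100.f) < 0.5f && x * x + y * y < 1.f)
                atomicAdd(hits, 1);
        }
    }

    int main() {
        const int n = 1 << 20;                          // ~1M photons
        int* hits; cudaMallocManaged(&hits, sizeof(int));
        *hits = 0;
        propagate<<<n / 256, 256>>>(42ull, hits, 30.f, 100.f);
        cudaDeviceSynchronize();
        std::printf("sensor hits from %d photons: %d\n", n, *hits);
        return 0;
    }

Because each thread carries only a position, a direction and a random-number state, millions of photons fit on a single card, which is exactly where the orders-of-magnitude speedups come from.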

Some proposal details:

  • Title: MRI: Acquisition of High Performance Hybrid Computing Cluster to Advance Cyber-enabled Science and Education at Penn State
  • Number: MRI Proposal 1626251
  • PI: Prof. Yuexing Li (Astronomy and Astrophysics)
  • Co-PIs:
    • Prof. Doug Cowen (Physics, Astronomy and Astrophysics)
    • Prof. Eric Ford (Astronomy and Astrophysics)
    • Prof. Mahmut Kandemir (Computer Science and Engineering)
    • Prof. Adri van Duin (Mechanical and Nuclear Engineering, Materials Science and Engineering)
