GCN: January/February 2016
A partnership between the Pittsburgh Supercomputing Center and the technology industry is making the processing power of high-performance computing available to both traditional and nontraditional HPC users. The center teamed with Hewlett Packard Enterprise and Intel for the Bridges project, a National Science Foundation-funded program that gives approved users seamless desktop access to HPC resources via a portal.

"The name 'Bridges' stems from three computational needs the system will fill for the research community," said Nick Nystrom, the center's director of strategic applications and principal investigator on the project, when it was first announced. "Foremost, Bridges will bring supercomputing to nontraditional users and research communities. Second, its data-intensive architecture will allow high-performance computing to be applied effectively to big data. Third, it will bridge supercomputing to university campuses to ease access and provide burst capability."

Bridges users will upload their data and submit jobs to the HPC resources they've selected, Nystrom said. They don't have to log in or understand File Transfer Protocol, for example. Portal managers will handle granting access to users and allocating resources.

Bill Mannel, vice president and general manager of HPC and big data at Hewlett Packard Enterprise, said Bridges consists of three types of the company's machines:

• Four Integrity Superdome X servers, which let users load data once into 12 terabytes of shared memory and then conduct analyses. The process concentrates memory in one place rather than spreading it across many nodes.
• 42 ProLiant DL580 servers, each of which has 3 terabytes of shared memory and provides virtualization and remote visualization.
• 800 Apollo 2000 nodes, each with 128 gigabytes of shared memory to support capacity workloads.
"All tied together, you've got the number crunchers, which are the Apollo 2000s; the data analytics engines in the Superdomes; and then you have the 580s that provide direct access to the whole system from all the users' workstations or laptops, basically giving them access to the supercomputing resources of the center itself," Mannel said.

The system's composition reflects the workloads Nystrom said he expects Bridges to handle, particularly those involving big data. "What that lets us do is to converge …

[Case study by Stephanie Kanowitz: "Bringing super power to the desktop." The Pittsburgh Supercomputing Center's Bridges project gives seamless desktop access to high-performance computing.]

"The point is to have each compute node have multiple paths to storage to avoid congestion and also to give people the maximum performance at the minimum cost." (Nick Nystrom, Pittsburgh Supercomputing Center)
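To make the three-tier division of labor concrete, the routing decision a scheduler or portal might make among these node classes can be sketched as a simple memory-based dispatch. This is an illustrative sketch only: the partition names, the JobRequest shape, and the choose_partition/build_job_request functions are hypothetical and do not describe PSC's actual software, only the node capacities quoted in the article.

```python
from dataclasses import dataclass


@dataclass
class JobRequest:
    partition: str   # which class of node the job should land on (hypothetical names)
    nodes: int       # number of nodes requested
    command: str     # what the user wants to run


def choose_partition(mem_gb_needed: float) -> str:
    """Map a job's shared-memory footprint onto the three node classes
    described in the article. Thresholds come from the quoted specs:
    128 GB Apollo 2000 nodes, 3 TB DL580s, 12 TB Superdome X servers."""
    if mem_gb_needed <= 128:
        return "apollo-2000"      # capacity workloads, many small nodes
    if mem_gb_needed <= 3_000:
        return "proliant-dl580"   # mid-size shared memory, visualization
    if mem_gb_needed <= 12_000:
        return "superdome-x"      # load data once, analyze in place
    raise ValueError("exceeds the largest shared-memory node (12 TB)")


def build_job_request(mem_gb_needed: float, nodes: int, command: str) -> JobRequest:
    """Package a submission the way a portal might, so the user never
    touches FTP or a login node directly."""
    return JobRequest(choose_partition(mem_gb_needed), nodes, command)
```

The point of the sketch is the design choice the article describes: the portal, not the user, decides which hardware tier a job belongs on, based on how much memory must sit in one place.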