GCN : April 2015
centers will reap significant rewards by exploiting storage virtualization along with automated tiering and data deduplication. "These technologies are robust and maturing rapidly," SSA noted.

Data centers that are adopting storage virtualization should deploy resource management and performance management tools, SSA recommended. "These tools are essential to managing a storage infrastructure supporting high-performance, high-availability workloads with a given staff efficiency," SSA said. They become even more critical when virtualizing and tiering storage.

5. OPTIMIZE THE TRANSFER OF VERY LARGE DATASETS

Many leading-edge data centers are grappling with how to transfer massive datasets of 10 terabytes or more from one location to another. To handle the massive transfers, they are adopting services such as Globus, developed by DOE's Argonne National Laboratory. Globus is a cloud-based data transfer service that supports the sharing of large datasets in a way that carefully manages bandwidth and improves reliability.

"We started as a high-performance secure file transfer service," said Vas Vasiliadis, director of products, communications and development for the Computation Institute at the University of Chicago and Argonne National Laboratory. "If you want to move terabytes or petabytes of data from a national lab back to your campus, we are a service that will act as a third-party mediator or controller to make sure the data transfer completes. We recover from errors automatically and notify you when we're done."

The Massachusetts Green High Performance Computing Center is one user of the service. MGHPCC's Goodhue said the software has an interface that's easy for scientists to use without needing IT support. The benefit of Globus is that it optimizes the way a large file is transmitted across the network. "Globus figures out the speediest way to get the file from here to there," he said. "It has a set of performance monitoring tools to periodically check those paths and make sure nothing is hindering the transfer rate. You can think of it as an overlay on the Internet that is very careful about the paths it chooses and also tests those paths to make sure the transfer rates can be very high."

Goodhue said Globus makes the transfer "simple, fast and transparent for researchers to move big datasets from one place to another."

The Globus service has been available for five years and counts 30 federal laboratories and universities among its customers. Argonne offers other services that take advantage of the transfer technology, including a data publication and discovery service that allows researchers to share their data with others through a cloud-based platform.

"We give them the mechanism to describe their data using metadata and to assemble it and spread it across multiple systems for storage," Vasiliadis said. "We give the data a permanent identifier, which allows the researcher or institution to curate it in a way that makes sense for them."

One of the advantages of Globus is that it allows the end user to manage, move and share very large datasets without involving IT department personnel. The model has uses for enterprise data as well as scientific data, Vasiliadis said. "We're handling the administrative burden and letting our users take advantage of the high-performance storage systems we have in place," Vasiliadis said. "Transferring data is really time-consuming and error-prone, and it shouldn't be that way. We give the user a simple browser tool, and they can move terabytes of files and forget about it. They don't have to babysit the transfer."

6. PROTECT DATA ARCHIVES WHEN THE DATA CENTER IS BEING RENOVATED

The National Energy Research Scientific Computing Center learned the hard way that data centers need to protect their tape-based archival systems when the building is under construction. NERSC has 45 petabytes of scientific data stored in its archival tape system, which dates back 40 years. Unfortunately, the archive suffered from what Hick called a "dusty tape problem" due to regular construction at the center.

"We're frequently doing construction to prepare for new techniques of cooling, or removing walls to get a bigger supercomputer system," Hick said. "That activity is not good for storage, in particular the dust involved in construction. I'm talking about particles down to the submicron level," Hick said, adding that his team learned how to protect its tape archival system from dust.

A few years ago, NERSC had to hire an environmental remediation company to migrate valuable data from dusty tapes onto clean tapes. Now NERSC wraps its tape systems in a bubble to protect them when construction occurs. "We have to build a bubble with a filtration system around the tapes. It's cleaner than normal," Hick said. The center built three of them to avoid putting user data at risk.

"The reason I'm sharing this is that it's an issue most sites won't talk about," Hick said. "But there are solutions. Dusty tapes are not a catastrophe. You just have to be smart about this risk."
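The data deduplication mentioned at the top of this section typically works by splitting data into chunks and storing only one copy of each unique chunk, identified by a cryptographic hash. The sketch below is a generic illustration of that idea, not how any particular product implements it; the fixed chunk size and class names are invented for the example.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; real systems often use variable-size chunks


class DedupStore:
    """Stores each unique chunk exactly once, keyed by its SHA-256 digest."""

    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes (stored once)
        self.files = {}    # filename -> ordered list of chunk digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if we have not seen this digest before.
            self.chunks.setdefault(digest, chunk)
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        # Reassemble the file from its chunk references.
        return b"".join(self.chunks[d] for d in self.files[name])


store = DedupStore()
store.put("a.dat", b"x" * 8192)                    # two identical chunks -> stored once
store.put("b.dat", b"x" * 4096 + b"y" * 4096)      # first chunk already known
print(len(store.chunks))                           # unique chunks actually stored: 2
print(store.get("a.dat") == b"x" * 8192)           # True: files reassemble losslessly
```

The space saving comes from `setdefault`: the three files' six logical chunks occupy only two stored chunks, because duplicates map to the same digest.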
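The error recovery Vasiliadis describes, verifying that a transfer arrived intact and retrying automatically so users "don't have to babysit the transfer," can be sketched in outline. This is a minimal, generic illustration of checksum-verified transfer with retries, not Globus code; the function names and retry limit are made up for the example.

```python
import hashlib
import os
import shutil
import tempfile


def sha256_of(path):
    """Compute a file's SHA-256 incrementally, so large files never load whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()


def transfer(src, dst, max_retries=3):
    """Copy src to dst, verify by checksum, and retry on mismatch or I/O error."""
    expected = sha256_of(src)
    for attempt in range(1, max_retries + 1):
        try:
            shutil.copyfile(src, dst)
            if sha256_of(dst) == expected:
                return attempt  # success: report how many attempts it took
        except OSError:
            pass  # a real service would log the error and back off before retrying
    raise RuntimeError(f"transfer of {src} failed after {max_retries} attempts")


# Demo on a temporary file standing in for a large dataset.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    dst = os.path.join(d, "dst.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(1 << 16))
    print(transfer(src, dst))  # 1: succeeded on the first attempt
```

A production service layers much more on top (parallel streams, path selection, restart from partial transfers, user notification), but the verify-and-retry loop is the core of "we recover from errors automatically."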
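The publication service Vasiliadis describes amounts to minting a stable identifier for a dataset and registering a metadata record alongside its storage locations, so others can discover it later. A toy sketch under stated assumptions: UUIDs stand in for real persistent identifiers such as DOIs or Handles, and all field and function names here are invented for illustration.

```python
import uuid

catalog = {}  # identifier -> metadata record


def publish(title, creator, locations):
    """Mint a permanent identifier and register a metadata record for a dataset."""
    identifier = str(uuid.uuid4())  # a real service would mint a DOI or Handle
    catalog[identifier] = {
        "title": title,
        "creator": creator,
        # The dataset itself may be assembled and spread across multiple systems.
        "locations": list(locations),
    }
    return identifier


def discover(keyword):
    """Return identifiers of datasets whose title mentions the keyword."""
    return [pid for pid, rec in catalog.items()
            if keyword.lower() in rec["title"].lower()]


pid = publish("Climate model outputs 2015", "Example researcher",
              ["tape://archive/ds42", "posix:///scratch/ds42"])
print(discover("climate") == [pid])  # True: the record is findable by keyword
```

The point of the permanent identifier is that the metadata record, not the physical location, is what researchers cite and curate: storage can move without breaking references.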