GCN : January 2015
How SciServer is cutting big data down to size

[BRIEFING] A group of scientific researchers who work with datasets in the terabyte range wants to develop a set of tools for data sharing, analysis and access to address data management challenges across the scientific community. Under the auspices of the National Science Foundation, the group has begun a project called SciServer, whose mission is to "build a long-term, flexible ecosystem" providing access to datasets generated by astronomy and space science projects.

"By building a common infrastructure, we can create data access and analysis tools useful to all areas of science," Alex Szalay of Johns Hopkins University, the leader of the NSF-funded project, told Phys.org.

SciServer grew out of work with the Sloan Digital Sky Survey (SDSS), an ongoing project to map the entire universe. SDSS, begun 15 years ago, now has over 70 terabytes in its database covering 220 million galaxies and 260 million stars.

The SciServer team, which began work on the project in 2013, said it would launch in phases over the next four years. The tactics the team will bring to the project include:

• Bring the analysis to the data. "This means scientists can search and analyze big data without downloading terabytes of data, resulting in much faster processing times," Szalay said in a statement.

• Specify real-world use cases. The SciServer team is collaborating with working scientists to ensure the system will be as helpful to them as possible.

• Develop new tools. To help ease the burden on researchers, the team developed "SciDrive," a cloud data storage system that allows scientists to upload and share data using a Dropbox-like interface.

• Adapt existing tools. Building systems by adapting existing, successful tools is a key factor in ensuring the success of the project.
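The payoff of the "bring the analysis to the data" tactic can be seen with back-of-envelope arithmetic. The 70-terabyte figure is from the article; the 1 Gbps researcher link speed is an illustrative assumption, not something the article states.

```python
# Why moving the analysis to the data beats downloading it:
# time to pull the full SDSS database over a typical fast link.
dataset_bytes = 70e12   # ~70 TB SDSS database (from the article)
link_bps = 1e9          # assumed 1 Gbps connection (illustrative)

download_seconds = dataset_bytes * 8 / link_bps
print(f"Full download: {download_seconds / 86400:.1f} days")  # ≈ 6.5 days
```

A server-side query that returns only the matching rows avoids that multi-day transfer entirely, which is the point Szalay makes about faster processing times.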
"The tools we build will create a fully functional, user-driven system from the beginning, making SciServer an indispensable tool for doing science in the 21st century," Szalay said. As SciServer matures, the team will expand to other areas of science, including genomics and connectomics, which explores cellular connections across the structure of the brain, according to the researchers. •

Argonne sets new marks for high-speed data transfer

Researchers from the Argonne National Laboratory, working with DataDirect Networks (DDN), transferred 65 terabytes of data between storage systems in under 100 minutes, an accomplishment that would have taken two days with a 10 gigabit/sec connection. The demonstration took place over a 100 Gbps wide-area network connection between storage centers in Ottawa, Canada, and New Orleans, La., in November at SC14, a conference for high-performance computing.

The team, with support from networking firms Ciena and Brocade and Internet research group ICAIR, reached data transfer rates above 85 Gbps, with peaks at over 90 Gbps, according to a report from the Argonne lab.

Achieving the record speeds involved combining file and virtual machine features of the DDN storage controller, the wide-area data transfer capabilities of the Globus GridFTP server and an advanced 100G wide-area network.

DDN offers massively scalable storage systems for big data and data-intensive applications such as supercomputing, seismic processing and genomics. The open source Globus GridFTP server uses an extension of the standard File Transfer Protocol for high-speed, secure data transfers.

"Embedding the GridFTP servers in virtual machines on DDN's storage controller eliminates the need for external data transfer nodes and network adapters," said Raj Kettimuthu, principal software development specialist at Argonne.
Kettimuthu pointed out that networking experts often say storage is the bottleneck in end-to-end transfers on high-speed networks, while storage experts claim the network is the stumbling block.

Achieving more than 90 Gbps for memory-to-memory transfers using a tool like iperf, a network performance benchmarking tool, is straightforward and has been demonstrated several times in the past, he added. However, achieving similar rates for disk-to-disk transfers presents a number of challenges, according to Kettimuthu, including choosing a block size that works well for both disk I/O and network I/O and picking the number of parallel storage I/O threads and TCP streams for end-to-end performance.

"The demonstration was aimed at bringing together experts and the latest developments in all aspects of disk-to-disk WAN data movement, including network, storage and data movement tools," said Kettimuthu. •
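The demonstration's headline figures are internally consistent, which a quick sanity check confirms (decimal terabytes and gigabits, as is conventional for network rates):

```python
# Sanity check: 65 TB moved in under 100 minutes implies a sustained
# rate consistent with the reported 85+ Gbps average.
data_bits = 65e12 * 8    # 65 TB expressed in bits
duration_s = 100 * 60    # 100 minutes

avg_gbps = data_bits / duration_s / 1e9
print(f"Average rate: {avg_gbps:.1f} Gbps")  # ≈ 86.7 Gbps

# For comparison, the same transfer at a flat 10 Gbps line rate.
# (Real disk-to-disk transfers sustain well below line rate, which
# is how the article arrives at its two-day estimate.)
print(f"At 10 Gbps line rate: {data_bits / 10e9 / 3600:.1f} hours")
```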
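The parallel-stream idea Kettimuthu describes can be sketched in miniature. This is a hypothetical illustration, not the Globus GridFTP implementation: several workers each copy their own byte range of a file, and the block size and stream count are exactly the kind of tuning knobs the article mentions.

```python
# Illustrative sketch of parallel, block-oriented data movement
# (local files stand in for the network leg; not GridFTP itself).
import os
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4 * 1024 * 1024   # 4 MiB block size -- an arbitrary tuning choice
STREAMS = 4               # number of parallel workers -- also a tuning choice

def copy_range(src, dst, start, length):
    """Copy `length` bytes from src to dst, beginning at offset `start`."""
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        fin.seek(start)
        fout.seek(start)
        remaining = length
        while remaining > 0:
            chunk = fin.read(min(BLOCK, remaining))
            fout.write(chunk)
            remaining -= len(chunk)

def parallel_copy(src, dst):
    """Split one copy across STREAMS workers, one byte range each."""
    size = os.path.getsize(src)
    # Pre-size the destination so every worker can seek within it.
    with open(dst, "wb") as f:
        f.truncate(size)
    per = -(-size // STREAMS)  # ceiling division: bytes per worker
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        for i in range(STREAMS):
            start = i * per
            if start < size:
                pool.submit(copy_range, src, dst, start,
                            min(per, size - start))
```

In a real WAN transfer each worker would feed its own TCP stream; choosing BLOCK and STREAMS so that disk I/O and network I/O stay busy simultaneously is the balancing act the article describes.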