GCN : October 2014
NSF seeds cloud test beds to develop new applications

[BRIEFING] The National Science Foundation recently announced two $10 million projects to create cloud computing test beds, called Chameleon and CloudLab, to help develop novel cloud architectures and new applications. The awards complement private sector efforts to build cloud architectures that can support real-time and safety-critical applications like those used in medical devices, power grids and transportation systems, NSF said.

Chameleon, to be co-located at the University of Chicago and the University of Texas at Austin, will consist of 650 cloud nodes with 5 petabytes of storage. Researchers will be able to configure slices of Chameleon as custom clouds to test the efficiency and usability of different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction. The test bed will allow "bare-metal access," an alternative to the virtualization technologies currently used to share cloud resources.

Chameleon is unique for its support of heterogeneous computer architectures, including low-power processors, graphics processing units and field-programmable gate arrays, NSF said. Researchers can therefore mix and match hardware, software and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, or the Internet of Things.

The CloudLab test bed is a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds. Each site will have unique hardware, architecture and storage features, and will connect to the others via 100 gigabit/sec connections on Internet2's advanced platform. CloudLab will also support OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies.

CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage at its three data centers. Each center will comprise different hardware, facilitating additional experimentation. In that capacity, the team is partnering with HP, Cisco and Dell to provide diverse platforms for research. Like Chameleon, CloudLab will feature bare-metal access. Over its lifetime, CloudLab is expected to run dozens of virtual experiments simultaneously and to support thousands of researchers.

Ultimately, the goal of the NSFCloud program and the two new test beds is to advance the field of cloud computing broadly. The awards will help researchers develop new concepts, methods and technologies to enable infrastructure design and execution.
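To make the OpenFlow mention concrete: OpenFlow lets an external controller program how switches forward packets. Below is a minimal sketch of the kind of experimental forwarding logic a researcher might run on a CloudLab slice. It uses the open-source Ryu framework and a simple flood-everything "hub" policy; both are assumptions of this example, not tools or designs named by NSF or the CloudLab team.

    # Minimal OpenFlow 1.3 controller for the Ryu framework.
    # Illustrative sketch only; Ryu and the hub policy are assumptions,
    # not part of the NSF or CloudLab announcements.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class FloodHub(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            # Packets the switch has no flow entry for arrive here;
            # send each one back out on all ports (the simplest policy).
            msg = ev.msg
            dp = msg.datapath
            parser = dp.ofproto_parser
            actions = [parser.OFPActionOutput(dp.ofproto.OFPP_FLOOD)]
            # Attach the raw bytes only if the switch did not buffer them.
            data = msg.data if msg.buffer_id == dp.ofproto.OFP_NO_BUFFER else None
            out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                      in_port=msg.match['in_port'],
                                      actions=actions, data=data)
            dp.send_msg(out)

Run under ryu-manager against an OpenFlow 1.3 switch, this floods all traffic; an actual experiment would replace the flood action with custom routing or protocol logic.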
Not all clouds created equal

A major bottleneck in scientific discovery is emerging because the amount of data available is outpacing local computing capacity, according to the authors of a new paper published in PLOS ONE. And though cloud computing gives researchers a way to match capacity and power with demand, the authors wondered which cloud configuration would best meet their needs.

According to the paper, the authors benchmarked two cloud services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic data sets and a standard bioinformatic pipeline on a Hadoop-based platform. While not an exact matchup, they found that GCE outperformed EMR in terms of both cost and wall-clock time, though EMR was more consistent, which is an important issue in undedicated cloud computing, they wrote.

The time differences, the authors said, "could be attributed to the hardware used by Google and Amazon for their cloud services. Amazon offers a 2.0 GHz Intel Xeon Sandy Bridge CPU, while Google uses a 2.6 GHz Intel Xeon Sandy Bridge CPU. This clock speed variability is considered the main contributing factor to the difference between the two undedicated platforms," they wrote.

The authors did note that while cloud computing is an "efficient and potentially cost-effective alternative for analysis of large genomic data sets," the initial transfer of the data into the cloud was still a challenge. One option, they suggested, would be for data providers to deposit the information directly with a designated cloud service provider, eliminating the need to handle the data twice.
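The clock-speed argument can be sanity-checked with simple arithmetic. The short Python sketch below uses only the two CPU speeds quoted above; the run length, node count and price are hypothetical placeholders added to make the comparison concrete, not figures from the paper.

    # Back-of-the-envelope check of the paper's clock-speed argument.
    # Only the two clock rates come from the article; the run length,
    # node count and node-hour price are hypothetical placeholders.
    EMR_GHZ = 2.0   # Amazon EMR instances, per the paper
    GCE_GHZ = 2.6   # Google Compute Engine instances, per the paper

    # A purely CPU-bound pipeline would scale inversely with clock speed.
    speedup = GCE_GHZ / EMR_GHZ  # 1.30
    print(f"Clock speed alone predicts a {speedup:.2f}x GCE advantage")

    # Hypothetical 10-hour, 10-node EMR run at a made-up $1.00/node-hour:
    emr_hours, nodes, rate = 10.0, 10, 1.00
    gce_hours = emr_hours / speedup  # about 7.7 hours
    print(f"EMR: {emr_hours:.1f} h, ${emr_hours * nodes * rate:.0f}")
    print(f"GCE: {gce_hours:.1f} h, ${gce_hours * nodes * rate:.0f} at equal prices")

Real pipelines are rarely purely CPU-bound, which is one reason the observed gap need not match the 1.3x that clock speed alone would predict.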