GCN: February 2013
wasn't even part of the original plan.

Shams checked his control panel and saw that all of the AWS programs NASA was using were running in the green. Amazon CloudWatch was a go. CloudFormation was green. Simple Workflow Service was ready when called upon. Simple Storage Service (S3) and Elastic Compute Cloud (EC2) were already being used. Even Route 53 was ready to provide Domain Name System management.

"The Amazon Cloud solution can scale to handle over a terabyte per second but only requires us to provision and pay for exactly how much capacity we need," Shams said. In addition to being a pay-as-you-need system, it was highly redundant, able to suffer the unlikely loss of over a dozen data centers without ever disrupting the flow of video, data and information to its worldwide audience.

As Curiosity entered the Martian atmosphere, Shams noticed that bandwidth usage had grown to over 40 gigabits/sec, according to the Amazon CloudWatch program, which monitors usage within the Amazon Cloud and can send out warnings and messages when certain thresholds are met. CloudWatch never actually sent out any warnings during the Mars landing, however, because the CloudFormation stack was designed to seamlessly add more bandwidth and servers as needed. In fact, Shams was able to click to add a new 25 gigabit/sec stack to the JPL cloud and register those new machines to the project on the fly with the Route 53 program.

Back on Mars, Curiosity was hanging just a few meters above the surface of the planet, suspended in air by its rocket-powered sky crane. It would soon be dropped a few feet to the soil below, whereupon the crane would fly off and land a safe distance away, a move designed to prevent dust from kicking up and possibly damaging the rover. Bandwidth usage worldwide was topping 70 gigabits/sec when an unexpected crisis hit.

"The main JPL website, still running under traditional infrastructure, was crumbling under the load of millions of excited users," Shams said. "We built the Mars site for the mission, but people were coming in through the main JPL site."

Acting quickly, Shams directed the IT staff to change the DNS entries for the JPL main site to dump all of its traffic into the cloud. The change had to be made manually because the main site wasn't part of the system, so Route 53 couldn't be used. Even so, staff members at JPL were able to make the changes within five minutes.

That shift might have caused major bandwidth problems for the cloud, with millions of users being directed into the Amazon system all at once, joining millions of others who were already there and those who were tuning in late. Bandwidth exceeded 100 gigabits/sec, but nobody was cut off. The cloud expanded to accommodate everyone.

Jamie Kinney, AWS solutions architect, said one reason everything went so smoothly for the millions of end users was that the Amazon CloudFront content distribution network ensures everyone is connected to the geographically closest part of the cloud. Amazon has nine regions, each divided into Availability Zones that are independent and isolated from problems in the other areas. One region is on the East Coast and two are on the West Coast. There are also regions in Europe, Singapore, Tokyo, Brazil and Australia.

There is also a dedicated GovCloud, which is used only by the U.S. government and which requires that data be stored in an environment that can be accessed only by authorized U.S. users.
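The sequence Shams describes, watching a CloudWatch bandwidth metric, launching an additional CloudFormation stack for capacity, and pointing DNS at the new machines through Route 53, maps onto a handful of AWS API calls. The sketch below uses Python and boto3 to illustrate that flow in general terms; the alarm name, metric, stack template, hosted zone ID and domain are hypothetical stand-ins, not details from JPL's actual configuration.

```python
# Illustrative sketch only -- every name, threshold and ID below is a
# placeholder, not taken from the mission's real setup.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudformation = boto3.client("cloudformation", region_name="us-east-1")
route53 = boto3.client("route53")

# 1. Alarm when outbound traffic from a fleet crosses a threshold -- the
#    kind of usage monitoring the article attributes to CloudWatch.
cloudwatch.put_metric_alarm(
    AlarmName="mars-video-bandwidth-high",            # hypothetical name
    Namespace="AWS/EC2",
    MetricName="NetworkOut",                          # bytes sent per period
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "mars-video-fleet"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=5e9,                                    # illustrative threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # placeholder ARN
)

# 2. Launch an additional CloudFormation stack to add capacity, analogous
#    to the extra 25 gigabit/sec stack Shams added on the fly.
cloudformation.create_stack(
    StackName="mars-video-stack-2",
    TemplateURL="https://s3.amazonaws.com/example-bucket/video-stack.template",  # placeholder
    Parameters=[{"ParameterKey": "InstanceCount", "ParameterValue": "50"}],
)

# 3. Register the new capacity in DNS through Route 53 -- the step that
#    could not be automated for the main JPL site because that site sat
#    outside the system and had to be repointed by hand.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                         # placeholder zone ID
    ChangeBatch={
        "Comment": "Shift traffic to the new stack",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "video.example.org.",         # placeholder domain
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "mars-video-elb-2.us-east-1.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
```

In a setup like the one described, the CloudFormation template itself would typically declare the Auto Scaling policies that let each stack grow on its own, which is consistent with the article's point that CloudWatch never had to raise an alarm during the landing.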
Shams stressed that NASA's ability to be efficient in distributing data worldwide and in managing the costs of such an ambitious project did not happen by accident. He credited JPL CTO Tom Soderstrom with laying the groundwork for getting the agency ready for the cloud.

"To that end, he set up a cloud computing commodity board that is responsible for setting up evaluations, contracts and demos of the most promising cloud computing capabilities across multiple vendors," Shams said. "This board is meant to streamline the process through which we adopt the most relevant cloud capabilities."

And without the cloud, Shams said, none of what they did in terms of public outreach would have been possible. That night, NASA achieved two major milestones. The agency landed a robotic laboratory safely on a planet 154 million miles away, and it proved that efficient use of cloud computing could accomplish what might have seemed like a miracle just a few years ago. •

CLOUD CASE STUDY
AT-A-GLANCE: NASA'S MISSION TO MARS

GOALS: Capture photographs, scientific readings and other data from Mars, 154 million miles away, then process and distribute the data worldwide to scientists' and others' desktops, tablets and smartphones within seconds of its arrival on Earth.

TACTICS: CTO Tom Soderstrom set up a cloud computing commodity board that worked behind the scenes for years evaluating and reviewing the best cloud-based solutions, so that when the time came, the best tools for the job were in place.

TOOLS: Amazon Web Services provided the backbone on a pay-for-services basis, which gave NASA the bandwidth it needed and a global, highly scalable network for processing and distributing data.