GCN : March 2013
and packet losses. In addition, because voice and video traffic has become more common, the IT staff will need to home in on data detailing latency and jitter, enabling them to check whether even subsecond delays are contributing to unacceptable outcomes for users. Enterprise technology managers also should look at application performance to gauge overall performance issues. On the data side, the easiest way to measure user experience is to document response times, says Eric Bear, director of managed systems at Visual Network Systems. "Look at each web server's response time and the Domain Name System [DNS] resolve time for the website," he explains. "Determine how long it takes to download images and paint them on the screen. Then, when a request comes through to a back-end database server, measure the overall transaction to see if there is a problem and figure out what's contributing to a potentially poor response time." In particular, organizations that rely on voice over IP (VoIP) and video calls will want to gain a user perspective on network performance because jitter and packet loss can easily wreak havoc with this data. Mean opinion scores, or MOS, can help formulate a picture of video and VoIP performance. To gather MOS data, network managers can place a probe somewhere on the network to measure video and voice streams and quantify traffic performance.

3. Be Proactive

Veterans of network monitoring also point to proactive testing as an important technique for avoiding performance problems. The strategy hinges on finding potential bottlenecks before they significantly impact users. Again, because emerging IT initiatives such as unified communications and VoIP are so vulnerable to quality-of-service hiccups, these areas benefit from proactive testing. One way to proactively test network performance levels is by configuring VoIP phones to call each other for a test that allows the network team to accurately measure latency, jitter, packet loss and MOS. These tests are known as "synthetic transactions" and can measure all of the aspects of call quality for key network resources and alert managers to problems before someone picks up a phone to make a call.

4. Develop the Right Tool Portfolio

Devices that support the Simple Network Management Protocol (SNMP) remain a cornerstone of network monitoring, especially because makers have evolved tools over time to collect a wider range of data about network status. NetFlow analyzers, now common in most routers and switches, provide visibility into who is using the network and the amount of bandwidth consumed. A combination of tools for active and passive performance testing offers another important resource for determining ongoing network status. With active testing, network administrators use software agents that mimic the activities of actual users. Administrators can schedule these events to occur at specified times. The value? These tests let network administrators create trend reports. With passive network monitoring, administrators place a data-collection probe along a communication path to capture information about the transactions taking place.

Bear concedes that there are strengths and weaknesses associated with active and passive testing. "Let's suppose someone is using a software-as-a-service [SaaS] application from home or at an airport. The transactions are going across the Internet and not accessing the enterprise network whatsoever," Bear explains. "In that case, it's difficult, if not impossible, to put a probe in place to passively capture information. In that scenario, you need active testing. It's almost the only way to determine if the service is up and running at the level it's supposed to deliver." By contrast, Bear describes a scenario where an organization relies on a private cloud. In a shared-networking environment, potential performance conflicts could arise between the traffic going to and from the private cloud and VoIP transmissions. "In that case, you want to study your critical links and have a view of all the different applications that are being accessed to understand the interactions among them," he says. "In this type of scenario, passive testing is the only way to go because with the active test, you lose the ability to drill down and to see interaction between other traffic. You are isolated to knowing

Network Monitoring and Management

Gathering a lot of data won't improve network performance. Administrators need tools to help them slice and dice the data and, thereby, determine the most effective responses.
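The article describes active "synthetic transaction" tests that yield latency, jitter, and packet loss, and MOS figures derived from them, without showing the arithmetic. The sketch below, in Python, illustrates one common way to do both under stated assumptions: the jitter computation follows the RFC 3550 interarrival-jitter algorithm, and the MOS estimate uses the widely cited simplified E-model (R-factor) approximation. The function names and constants are illustrative, not taken from the article or from any product mentioned in it.

```python
from statistics import mean

def summarize_probe(sent: int, rtts_ms: list[float]) -> dict:
    """Summarize one synthetic-transaction run: `sent` probes were
    transmitted and `rtts_ms` holds round-trip times for the replies
    that came back. (Illustrative helper, not from the article.)"""
    loss_pct = 100.0 * (sent - len(rtts_ms)) / sent
    latency_ms = mean(rtts_ms) if rtts_ms else float("inf")
    # RFC 3550-style interarrival jitter: an exponentially smoothed
    # average of successive timing differences, with gain 1/16.
    jitter_ms = 0.0
    for prev, cur in zip(rtts_ms, rtts_ms[1:]):
        jitter_ms += (abs(cur - prev) - jitter_ms) / 16.0
    return {"latency_ms": latency_ms, "jitter_ms": jitter_ms,
            "loss_pct": loss_pct}

def mos_from_metrics(latency_ms: float, jitter_ms: float,
                     loss_pct: float) -> float:
    """Estimate a mean opinion score from latency, jitter and loss
    using the simplified E-model (R-factor) approximation."""
    # Jitter counts double because it forces extra jitter-buffer delay;
    # the +10 ms allows for codec and processing delay.
    effective_latency = latency_ms + 2.0 * jitter_ms + 10.0
    if effective_latency < 160.0:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r -= 2.5 * loss_pct          # roughly 2.5 R points per 1% loss
    r = max(0.0, r)
    # Standard mapping from R-factor to the 1.0-4.5 MOS scale.
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)
```

A monitoring script could run `mos_from_metrics(**summarize_probe(...))` after each scheduled phone-to-phone test and alert the network team when the score sags, which is the "problems before someone picks up a phone" behavior the article describes.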