GCN, January/February 2016
4 challenges to realistic records management

BY KON LEONG | INDUSTRY INSIGHT

Despite the rapidly advancing processing and algorithmic tools available today, organizations are having a difficult time reaping the insights those tools are supposed to generate. That is largely because data that hasn't been managed and cleaned is of little use for analytics, no matter how slick the user interface is. Unstructured data, in particular, is resistant to uniform management. Nevertheless, that data is critical for daily business productivity and for meeting legal and regulatory requirements, and it has immense potential for business insight.

At the 2015 ARMA Conference for information governance professionals, several common themes were debated, all with big implications for records and information management.

1. Deciding what to delete. More traditional, risk-averse units, such as records management and legal teams, strive to eliminate outdated content as soon as it is legally permissible to do so. More proactive factions, such as marketing and management, want to keep as much data as possible.

Regardless of where an organization falls on the risk-tolerance spectrum, the first step is to decide what (and when) to delete. Even then, the organization essentially needs to "touch" every item of content to decide whether to delete or retain it. The most elegant solution is an environment in which all unstructured content is handled centrally, enterprisewide retention policies can be programmed and executed consistently, and no duplicate copies linger in the shadows.

2. Designating consistent access privileges. Within a single application or silo, it is typically straightforward to assign correct access rights. That falls apart, however, when multiple silos are involved. An individual who has the right level of access in one system could be completely blocked in a related platform or, conversely, might be given far too much access.
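To make the first challenge concrete, a centralized, enterprisewide retention check might look like the following sketch. This is purely illustrative: the record types, retention periods and the `eligible_for_deletion` helper are hypothetical, not part of any specific product.

```python
from datetime import date, timedelta

# Illustrative sketch only: one central policy table that every item of
# content is "touched" against, instead of per-silo retention rules.
# Record types and retention periods below are hypothetical examples.
RETENTION_POLICIES = {
    "invoice": timedelta(days=7 * 365),    # keep roughly 7 years
    "marketing": timedelta(days=2 * 365),  # keep roughly 2 years
    "junk": timedelta(days=0),             # eligible for disposition at once
}

def eligible_for_deletion(record_type, created, today):
    """Decide retain-or-delete for one item from the single policy table."""
    period = RETENTION_POLICIES.get(record_type)
    if period is None:
        # Unknown types are retained until a professional classifies them.
        return False
    return today - created >= period

# An invoice created in 2008 has passed its 7-year retention period by 2016.
eligible_for_deletion("invoice", date(2008, 1, 1), date(2016, 2, 1))
```

Because a single table drives every decision, the policy is executed consistently and there is exactly one answer per item, which is the consistency the column argues for.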
Coordinating permissions across platforms often requires manual updating, which quickly becomes impracticable because of lag times and high rates of human error. To complicate matters, permissions are often based on variables that can change over time, such as timestamps, and the same document might have inconsistent policies across applications.

3. Maintaining complete audit trails. Many compliance and legal uses for data require a comprehensive log of changes, edits and ownership. Those audit trails are crucial for defensibility, but they are nearly impossible to maintain when copies of data reside in multiple silos. The audit trail for a particular item covers only the actions taken in the platform that generated it. To know the full history of an item definitively, there must be one "master" environment in which all relevant items are stored as single copies, even if duplicates exist elsewhere.

4. Scaling up classification. Today's data cannot be effectively classified manually. The ever-growing volume of information, changing policies and legal precedents that deem nearly all data discoverable have redefined what constitutes a record. Even if a company sets defensible policies for the expedited disposition of "junk" content, every single piece of data must still be assessed in some way to determine its status.

Therefore, a blend of automatic and manual classification is currently the only feasible approach. The division of roles between people and machines will depend on an organization's needs, but in general, records professionals should be allowed to do what they do best: assign policies for complicated or ambiguous items. Auto-classification can then filter out the easily identifiable items. For that to happen, there must be a single classification engine through which all data passes.
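The blended approach in the fourth challenge can be sketched as a single engine that auto-classifies clear-cut items and routes ambiguous ones to a human review queue. The rules, labels and confidence threshold below are invented for illustration; a real engine would use far richer signals.

```python
# Illustrative sketch of one classification engine through which all data
# passes. Labels, keyword rules and the 0.85 threshold are hypothetical.

def classify(item):
    """Return a (label, confidence) guess for one piece of content."""
    text = item.lower()
    if "invoice" in text:
        return ("financial-record", 0.95)
    if "lunch" in text or "newsletter" in text:
        return ("junk", 0.90)
    return ("unknown", 0.30)

def route(items):
    """Split items into auto-classified and human-review queues."""
    auto, manual = [], []
    for item in items:
        label, confidence = classify(item)
        # Easily identifiable items are filtered out automatically;
        # complicated or ambiguous ones go to records professionals.
        (auto if confidence >= 0.85 else manual).append((item, label))
    return auto, manual

auto, manual = route(["Invoice #221", "Team lunch signup", "Draft merger memo"])
# The invoice and the lunch note are handled automatically;
# the ambiguous merger memo lands in the manual queue.
```

Because every item flows through the same `route` function, the division of labor between machine and professional is explicit and auditable, rather than differing silo by silo.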
Separate systems with unique classification capabilities cannot produce a consistent result, which leaves irreconcilable holes in defensibility.

Kon Leong is CEO and co-founder of ZL Technologies.