| 19 September 2011
Internet-attached private systems are not always maintained in an up-to-date state and, as such, are vulnerable to exploitation by mischief-makers and, of course, their cybercriminal peers.
With this as a given, it is my contention that most public/private-sector professional security installations enjoy a very healthy state of security and, in most cases, are secure against drive-by, targeted or opportunist hackers, and all the compromises that can result from this type of cyber-criminality. However, as can be seen from the recent Shady RAT analysis from McAfee, it would appear that, despite this high level of resilience, some otherwise well-managed security installations are failing to deliver a 100 per cent security defence strategy.
For those readers unfamiliar with the August Shady RAT (Remote Access Trojan) report, the broad brush strokes are that there have been a series of multi-year, multi-system attacks on at least 72 US and other Western-allied government, contractor and other server systems.
Regardless of which entities are behind the attacks, however, the reality is that conventional IT security defences—when deployed alongside well-planned and executed security strategies—may no longer be considered sufficient to stop a determined and targeted attack. This leads us into the interesting supposition that the majority of previous reports, many of which have been well-researched by industry professionals, may be operating on a rationale that is a little out-of-date, and, as such, may not be adding value in giving the reader a complete overview and explanation of the current state of play.
A second observation is that, whilst some of the report findings focus on the failings of current generations of defensive technologies, they do not account for the root causes of the problems caused by determined and targeted attacks. These causes, I believe, centre on everyday working practices and security configurations, which are not always included in the standard security mission in a typical IT systems environment. There may also be further issues in the areas of security skill sets and a hands-on understanding, appreciation and anticipation of the potential for insecurity that may arise from adhering solely to the standard security mission of a given organisation.
Is this a criticism of the current status quo that exists in most corporate IT security operations? Far from it; my main aim here is to set the scene for my observations. Recently I was fortunate enough to have a meeting with an incumbent IT security manager in a large organisation, and as part of the getting-to-know-you process (on both sides of the table) that all potential new candidates for IT security projects undertake, I was asked a number of key questions. Amongst those asked in connection with a senior IT security role were the following: (a) Do you know what access control is, and (b) Can you explain what “audit and log” means?
You could conclude that these were trick questions, but the reality is that questions of this type suggest that the person drawing up the list may not have been fully conversant with the in-house IT security function (and that is likely a generous assertion). This brings us to one of the biggest challenges of the current age of cybersecurity: Advanced Evasion Techniques (AETs). At their most basic, AETs are a logical means by which attacks can be engineered to exploit a condition by re-engineering a vector of attack, and so circumvent any currently deployed defence or control, with the intention to invade, compromise and/or impact a targeted operational environment.
Developing a typical AET-enabled security attack is no mean feat, but the task is made easier by the fact that there are significant volumes of unintentionally published, but very available, intelligence on various IT platforms that can assist cybercriminals in `footprinting' a potential target and collating information on the system they are considering attacking. This process then allows the hacker to decide what design the attack profile of his/her AET should take, and how a hostile network incursion can be engineered. At this point I'd like to introduce a supposition, namely that all of the above events, skills and knowledge can be used to develop a highly effective data leakage strategy. By its very nature, data leakage is opportunistically invasive and, unless understood, will always be present in the background, trapping, recording, and then, without any malicious intent, making the information available to unauthorised persons. In many organisations, AET-enabled data leakage is a potential disaster just waiting to happen. What many might interpret as mere snippets of information can be leveraged by an experienced cybercriminal to launch a highly effective attack on an organisation.
One of the biggest potential areas for data leakage in my experience lies in the hacker treasure trove that metadata has become. Because metadata is data-about-data, it is often dismissed as mere summary information, when, in fact, the possession of metadata, along with other snippets of information about a given potential target, can allow an experienced cybercriminal to develop one or more attack vectors with the same potential success rate as if they had possession of all the underlying data `summarised' by the metadata.
But before we move on with this analysis, what is metadata? Metadata exists in all types of documents, and is present to assist the application, machine or user in managing those objects, for example by allowing tagging, or by embedding other deeper, hidden detail that may assist with searching or document management. Despite its potential for darkware development, metadata's underlying purpose is entirely above board.
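To make the point concrete, consider how easily this above-board metadata can be read out of a published file. The sketch below (account names and values are invented for illustration) treats a .docx file as what it really is, a ZIP archive, and pulls the author fields from its docProps/core.xml part:

```python
# A minimal sketch of how document metadata can leak information. A .docx
# file is a ZIP archive; its docProps/core.xml entry carries core
# properties such as the creating account's name. All names and property
# values below are hypothetical.
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def core_properties(docx_bytes: bytes) -> dict:
    """Pull selected core properties out of a .docx without opening Word."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    found = {}
    for tag in ("dc:creator", "cp:lastModifiedBy"):
        el = root.find(tag, NS)
        if el is not None and el.text:
            found[tag] = el.text
    return found

# Build a tiny stand-in .docx in memory so the sketch is self-contained.
core_xml = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    "<dc:creator>jsmith-admin</dc:creator>"
    "<cp:lastModifiedBy>CORP\\jsmith</cp:lastModifiedBy>"
    "</cp:coreProperties>"
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml", core_xml)

print(core_properties(buf.getvalue()))
# Two 'harmless' fields have leaked an internal account naming convention
# and the Windows domain name.
```

No exploit is involved here: everything extracted was placed in the file, quite legitimately, by the authoring application.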
Problems start to rear their ugly heads when the security implications of metadata are not fully understood. And it is here that we start to see the opportunities for data leakage creeping out of the security woodwork in a typical organisation, often as a direct result of the many document formats that exist in the modern IT environment: DOC, DOCX, PDF, PPT, PPS, XLS, XLSX, ODT, ODS, ODG, ODP and SVG, along with many others.
The sheer variety of data formats gives cybercriminals the ability to gain legitimate access to published documentation, download it, and then subject the data to analysis in order to locate snippets of information, such as user names. This can then lead to the identification of active user and/or email accounts, internal URLs, printer names, network and user paths, shared folders, and operating systems. And this is before we even begin to talk about NetBIOS names, IP addresses, GPS data and applications, all of which supply `footprint' intelligence to the would-be attacker, who gains a ready insight into your network platform.
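The snippet-hunting step described above needs nothing more sophisticated than pattern matching. As a hedged sketch, assume the published documents have already been reduced to plain text; the sample text, internal host names and account below are all invented:

```python
# Sketch: scan extracted document text for common footprinting snippets.
# The patterns are deliberately simple illustrations, not exhaustive.
import re

PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "unc_path": re.compile(r"\\\\[A-Za-z0-9._-]+\\[A-Za-z0-9$._-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "internal_url": re.compile(r"https?://[A-Za-z0-9-]+\.internal[^\s]*"),
}

def find_leaks(text: str) -> dict:
    """Return every match for each leakage pattern found in the text."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

# Invented fragment of text recovered from a published document.
sample = (
    "Draft saved by j.doe@example.org to \\\\FILESRV01\\finance$ "
    "and printed via http://print.internal/queue at 10.1.2.33."
)
for category, hits in find_leaks(sample).items():
    if hits:
        print(category, hits)
```

One throwaway sentence in a draft has yielded an email account, a file server and share name, an internal print service URL and an internal IP address, exactly the `footprint' intelligence described above.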
Is this a threat? Most certainly yes; if we look at the following analysis of an organisation's Web site – gleaned after 15 hours' research on around 28,600 files accessed from the site/servers – we can see that the amount of information we have gathered is significant and useful.
Is this a realistic assertion? I believe so: with this volume of diverse data at hand, it is a relatively easy task for an attacker to analyse his/her initial points of interest and decide how to leverage the leaked information they have assimilated. This type of footprinting, which may come as a surprise to many network admins, is a very effective method of working out how organisations operate on the inside, for example by identifying an Admin Account used to produce BAU documents, or by locating the use of legacy operating systems and their applications. This is, as any network security professional will attest, a perfect environment in which to craft an AET attack process and, by definition, it represents a clear and present danger to the organisation's IT system that is under the hacker microscope.
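The aggregation step is equally straightforward. As a sketch, assuming per-file metadata has already been harvested as in the earlier examples (every account and application name here is invented), a simple tally reveals which account routinely produces BAU documents and which legacy software is still in use:

```python
# Sketch: tally harvested metadata to surface patterns of interest to an
# attacker. All records below are hypothetical.
from collections import Counter

harvested = [
    {"creator": "svc-admin01", "application": "Microsoft Office Word 2003"},
    {"creator": "a.jones",     "application": "Microsoft Office Word 2010"},
    {"creator": "svc-admin01", "application": "Microsoft Office Word 2003"},
    {"creator": "svc-admin01", "application": "Acrobat Distiller 6.0"},
]

creators = Counter(doc["creator"] for doc in harvested)
apps = Counter(doc["application"] for doc in harvested)

# The dominant creator account and any end-of-life software stand out
# immediately once the counts are sorted.
print("Most active account:", creators.most_common(1))
print("Software in use:", apps.most_common())
```

From a handful of published files, the attacker has a candidate privileged account to target and evidence of unpatched legacy applications, without ever touching the victim's network.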
In some of my own analysis and research, I have found it perfectly possible to gather sufficient intelligence to identify those sensitive assets that can be attacked through the use of externally gathered data. In one instance, this process methodology allowed the identification and extraction of files containing hard-coded user IDs and their associated passwords. In another instance, my approach allowed the identification of some very sensitive servers and associated information assets that were hanging off of a third-party developer's Web site.
As a conclusion, I believe that data leakage has become one of the most misunderstood conditions that engender a potential threat in modern security landscapes. I am also of the opinion that data leakage is one of the primary reasons why organisations are falling easy prey to hacktivists, hackers and cybercriminals. A strong data leak prevention programme, implemented with the proper governance and assurance considerations, is critical; free guidance on how to implement one correctly is available from ISACA at www.isaca.org/dlp. If the associated insecurity of data leakage issues were addressed as a matter of routine cybersecurity housekeeping, our industry would enjoy a noticeable reduction in the success rate of AET-enabled data incursions.
***For the full McAfee report please go to:
About Professor John Walker FBCS CITP CRISC CISM MFSoc ITPC – MD Secure-Bastion LTD
John is the owner and MD of Secure-Bastion Ltd, a specialist Contracting/Consultancy in the arena of IT Security Research, Forensics, and Security Analytics. He is also actively involved in supporting the countering of eCrime, eFraud, and on-line Child Abuse.
John is the originator and publisher of CyberTag, an Alerting and Security Analytics service, which has been proactive in reporting undetected security exposures and vulnerabilities to its subscribed user base.
John is a practicing Expert Witness in the area of IT, and is a Visiting Professor of Science and Technology at the School of Computing and Informatics, Nottingham Trent University. In the academic arena, John is the originator and author of a CPD/MSc Module covering Cyber/Digital Forensics and Investigations.
He is a Fellow of the British Computer Society (FBCS), a Chartered Information Technology Practitioner (CITP (BCS)), a Certified Information Security Manager (CISM (ISACA)), and Certified in Risk & Information Systems Control (CRISC (ISACA)). John holds Certification under the UK Government ITPC Scheme and, based on his Government background working with Agencies such as CESG and GCHQ, provides services to the Public Sector in both Central and Local Government areas.
John is a Member of the ISACA Security Advisory Group, and is the lead of the EURIM Cyber Security Group. In February 2011, John was appointed as Director of Communications for CAMM (Common Assurance Maturity Model) with a focus on Cloud Security in both Private and Public Sectors.
In the past decade, John has delivered around 100 Security Presentations to Global organisations including RSA, Virus Bulletin, BBC World Service, Radio 4, InfoSec (US), InfoSec (UK), ISSA, and ISACA. In 2009 he was selected to present a paper at OWASP in Washington DC on the subject of ‘Obscure Security’. He is the author of over 70 published papers in both the UK and USA, covering topics ranging from Malicious Software, Virtual Machine Security and Cloud Security through to Hacking and Cyber Crime.