Below is a media alert from Trusteer’s Senior Security Strategist, George Tubin, on the Schneier-Ranum Face-Off about Blacklisting, Whitelisting, the malware explosion, and the way forward.
Key points include:
They are both right, but only if they get the context right.
· Blacklisting can work effectively against non-targeted, large-scale attacks where real-time intelligence is available. Because the cost of adapting the control is lower than the cost of adapting the malware, the hackers are at a major disadvantage.
· This isn’t true for targeted attacks in the enterprise world. A single attack on a large enterprise can be developed over a long period of time, using zero-day exploits to evade detection, delivering advanced malware to a few endpoints, and exfiltrating data over encrypted channels. Here, blacklisting technologies can’t provide an effective solution, and the targeted nature of the attack means that timely intelligence is simply not available.
· Whitelisting is a daunting task. Imagine what is required to vet the new application files introduced through employee downloads, installs, and updates. It places severe restrictions on knowledge workers’ productivity that go against current trends in BYOD and IT consumerization.
· Innovation should focus on using a whitelisting approach that can work for large enterprises. Maybe it isn’t necessary to whitelist every single good file in the universe.
· Employees’ endpoints are often compromised by zero-day exploits that deliver malware to the file system and execute it. If we can stop the exploitation of vulnerable internet-facing apps (web browsers, Adobe Reader/Acrobat and Flash, Microsoft Office, and Java) by whitelisting the legitimate ways they can access the file system or other processes, we can protect users when they visit the wrong web sites and open the wrong documents. This reduces the attack surface considerably.
· If users are lured into directly installing malware on the endpoint, the malware must communicate with its C&C server and the attackers to exfiltrate data. What if we could control which applications talk to the internet and how they do it (directly or via other processes) using a tightly managed whitelist? This can be a great way to detect endpoint compromise before the damage is done and evasion tactics are used to fool network controls.
· The innovation cycle in targeted attack protection is accelerating. Solving this security challenge in a way that large enterprises can actually deploy is the Holy Grail of security.
The tradeoff between cost and effectiveness has been debated in security technology for as long as the field has existed. One such debate occurred in the SearchSecurity article Schneier-Ranum Face-Off on whitelisting and blacklisting. Marcus Ranum is the Chief Security Officer of Tenable Network Security and a recognized innovator in firewall and intrusion detection system technologies. He leads this debate with “security effectiveness.” He suggests blacklisting technologies have failed to keep up with the malware explosion; whitelisting addresses this problem, and enterprises should accept the cost of managing the whitelist. Bruce Schneier is Chief Security Technology Officer of BT Global Services and a recognized computer security technology expert, cryptographer, and writer. He leads this debate with “controlling cost and complexity.” He argues that in many implementations, maintaining a small blacklist is easier than maintaining a huge whitelist.
Our view is that both methods can be effective if applied in the right context. So this is really about what you are trying to achieve, not a one-size-fits-all issue.
Blacklisting can work effectively against non-targeted, large-scale attacks where real-time intelligence is available. Let’s take financial malware, for example. It is notorious for bypassing signature-based anti-virus detection. Some solutions, like Trusteer Rapport, use behavioral blacklisting to effectively stop these threats. But malware developers can adjust their software to evade detection. A blacklisting-based control can then use real-time intelligence to detect the change across many endpoints, deploy a countermeasure through the cloud, and break the attack before it can gather any steam. Because the cost of adapting the control is lower than the cost of adapting the malware, the hackers are at a major disadvantage.
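To make the economics concrete, here is a minimal sketch of behavioral blacklisting with cloud-delivered rule updates. Everything in it — the rule names, the behavior labels, and the update mechanism — is an illustrative assumption, not Trusteer Rapport’s actual implementation:

```python
# Hypothetical sketch: behavioral blacklisting where the control adapts by
# shipping a small updated rule list from the cloud, not new endpoint software.
from dataclasses import dataclass


@dataclass(frozen=True)
class BehaviorRule:
    name: str
    behaviors: frozenset  # actions that together indicate this malware family


def matches(rule, observed_actions):
    """A rule fires when every behavior it lists was observed on the endpoint."""
    return rule.behaviors <= observed_actions


def scan(observed_actions, rules):
    """Return the names of all blacklist rules that match the endpoint."""
    return [r.name for r in rules if matches(r, observed_actions)]


# v1 rules miss a new variant; the vendor pushes v2 through the cloud,
# adding one rule — far cheaper than the attacker rewriting the malware.
rules_v1 = [BehaviorRule("banker-A", frozenset({"hook_browser", "inject_form"}))]
rules_v2 = rules_v1 + [
    BehaviorRule("banker-A-variant", frozenset({"hook_browser", "patch_ssl"}))
]

endpoint = frozenset({"hook_browser", "patch_ssl", "read_cookies"})
print(scan(endpoint, rules_v1))  # [] — the variant evades the old rules
print(scan(endpoint, rules_v2))  # ['banker-A-variant'] — caught after the update
```

The asymmetry in the paragraph above is visible here: the defender changes one small data structure, while the attacker must change the malware’s behavior itself.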
This isn’t true for targeted attacks in the enterprise world. A single attack on a large enterprise can be developed over a long period of time, using zero-day exploits to evade detection, delivering advanced malware to a few endpoints, and exfiltrating data over encrypted channels. Here, blacklisting technologies can’t provide an effective solution, and the targeted nature of the attack means that timely intelligence is simply not available.
Whitelisting makes very few assumptions about the nature of the threat because it focuses on the list of known good application files. However, managing this list is a daunting task. Imagine what is required to vet the new application files introduced through employee downloads, installs, and updates. And there’s an ongoing concern that you could accidentally whitelist malware files (yes, this can happen). Beyond additional work for the IT department, whitelisting places severe restrictions on knowledge workers’ productivity that go against current trends in BYOD and IT consumerization.
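A small sketch shows why the vetting burden never ends. This is a generic hash-based whitelist check under assumed names; the demo files stand in for an application binary and are not any vendor’s product:

```python
# Sketch: hash-based application whitelisting, and why routine updates keep
# IT on a re-vetting treadmill. All names here are illustrative assumptions.
import hashlib
import os
import tempfile


def file_sha256(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_allowed(path, whitelist):
    """Permit execution only if the file's hash is on the vetted list."""
    return file_sha256(path) in whitelist


# Demo: IT vets version 1 of an application, then an update ships.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"app v1 bytes")
    app = f.name

whitelist = {file_sha256(app)}               # v1 is vetted and recorded
allowed_before = is_allowed(app, whitelist)  # True: v1 is on the list

with open(app, "wb") as f:
    f.write(b"app v2 bytes")                 # a routine update changes the hash

allowed_after = is_allowed(app, whitelist)   # False: v2 must be vetted again
os.unlink(app)
print(allowed_before, allowed_after)  # True False
```

Every patch, plugin, and employee-installed tool changes a hash somewhere, which is exactly the management cost the paragraph above describes.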
Does this mean that if you truly want to reduce the risk of targeted attacks, you must accept the cost of whitelisting? Innovation should focus on a whitelisting approach that can work for large enterprises. Maybe it isn’t necessary to whitelist every single good file in the universe.
Employees’ endpoints are often compromised by zero-day exploits that deliver malware to the file system and execute it. If we can stop the exploitation of vulnerable internet-facing apps (web browsers, Adobe Reader/Acrobat and Flash, Microsoft Office, and Java) by whitelisting the legitimate ways they can access the file system or other processes, we can protect users when they visit the wrong web sites and open the wrong documents. This reduces the attack surface considerably.
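One way to picture whitelisting behaviors instead of files is a per-application policy of legitimate write locations. The process names and path patterns below are invented for illustration; a real enforcement point would sit in the OS, not in application code:

```python
# Sketch: whitelist the legitimate ways internet-facing apps may touch the
# file system. Policy table, process names, and paths are assumptions.
from fnmatch import fnmatch

# Only these path patterns are legitimate write targets for each app.
# An exploited browser dropping a DLL into a system folder is denied,
# even though the browser itself is a "known good" file.
POLICY = {
    "browser.exe": ["C:/Users/*/Downloads/*", "C:/Users/*/AppData/Local/Cache/*"],
    "reader.exe":  ["C:/Users/*/AppData/Local/Temp/*.pdf"],
}


def write_allowed(process, path):
    """Allow a file write only if it matches the process's whitelisted patterns."""
    return any(fnmatch(path, pattern) for pattern in POLICY.get(process, []))


print(write_allowed("browser.exe", "C:/Users/alice/Downloads/report.pdf"))  # True
print(write_allowed("browser.exe", "C:/Windows/System32/evil.dll"))         # False
```

The whitelist stays tiny — a handful of patterns per internet-facing app — rather than a hash for every good file in the universe.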
If users are lured into directly installing malware on the endpoint, the malware must communicate with its C&C server and the attackers to exfiltrate data. What if we could control which applications talk to the internet and how they do it (directly or via other processes) using a tightly managed whitelist? This can be a great way to detect endpoint compromise before the damage is done and evasion tactics are used to fool network controls.
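The same idea applied to outbound traffic might look like the sketch below: an egress whitelist keyed on which process is connecting and which process launched it. All process names and the policy table are illustrative assumptions:

```python
# Sketch: tightly managed egress whitelist — which processes may open
# internet connections, and via which parent processes. Names are assumptions.
EGRESS_POLICY = {
    # process -> parent processes that legitimately launch it for network use
    "browser.exe": {"explorer.exe"},
    "updater.exe": {"scheduler.exe"},
}


def connection_allowed(process, parent):
    """Allow an outbound connection only for a whitelisted (process, parent) pair.

    Anything off the list is flagged the moment malware phones home to its
    C&C server, before evasion tactics can fool network-level controls.
    """
    return parent in EGRESS_POLICY.get(process, set())


print(connection_allowed("browser.exe", "explorer.exe"))  # True: normal browsing
print(connection_allowed("browser.exe", "reader.exe"))    # False: launched via a document viewer
print(connection_allowed("dropper.exe", "explorer.exe"))  # False: unknown process phoning home
```

Checking the parent process matters because malware often routes traffic through a trusted application; the pair, not the process name alone, is what gets whitelisted.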
The innovation cycle in targeted attack protection is accelerating. Solving this security challenge in a way that large enterprises can actually deploy is the Holy Grail of security.