Crazy Ideas for Combating Zombies and APTs
Randy Franklin Smith - June 11th, 2012
Whenever I think about detecting and defending against today’s sophisticated threats I keep coming back to the same question, “How do you distinguish legitimate activity from malicious?” That is not an easy question to answer.
For instance, read access by an authorized user or by a zombie process running on that user’s computer looks the same in an audit log. As soon as you try to detect anomalies – like alerting on activity at seemingly odd times of the day – you also create a stack of false positives for security analysts to wade through.
One industry in particular that I think is doing a horrible job of malicious behavior detection is the credit card industry. It’s such a hassle to buy anything online today that sometimes I wonder if it isn’t better to just take cash to a store. Anything out of the “ordinary” causes your card to be locked, shipments to be held up and phone calls to customer service at either the merchant or the credit card company – and who has time for that? There must be a better way.
But to find that better way, you have to get imaginative and start with some crazy ideas, so here are a few for starters.
Use CAPTCHAs Internally
Consider CAPTCHAs as an added gate to keep automated malicious tools from accessing sensitive information. OK, nobody likes CAPTCHAs – I understand that. Maybe I’m already starting down the wrong road here, but think about the concept and maybe you’ll come up with a better idea. What does a CAPTCHA do? It helps provide assurance that the person accessing your system is a human and not some kind of bot. So, do you have a sensitive web-based application or a repository of sensitive information like SharePoint? Normal user authentication does not keep out bad-guy programs running on the PC of a duly authorized user. But throwing up a CAPTCHA would stop such malware, at least until the bad guys add CAPTCHA-bypass technology or start passing interactive sessions through zombied computers.
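To make the concept a little more concrete, here’s a minimal sketch of the challenge/verify mechanics behind any CAPTCHA-style gate. A trivial arithmetic question stands in for a real distorted-image challenge, and all names here are hypothetical – the point is only that the server can hand out a challenge plus a signed token, and later verify the human’s answer without storing state:

```python
import hashlib
import hmac
import random
import secrets

# Per-server signing key; in practice this would be managed like any other secret.
SECRET_KEY = secrets.token_bytes(32)

def issue_challenge():
    """Create a human-verification challenge and a token binding its answer.

    A real deployment would serve a distorted-image CAPTCHA; a simple
    arithmetic question stands in for the concept here.
    """
    a, b = random.randint(1, 9), random.randint(1, 9)
    question = f"What is {a} + {b}?"
    nonce = secrets.token_hex(8)
    # Sign nonce + expected answer so the server need not remember the challenge.
    sig = hmac.new(SECRET_KEY, (nonce + str(a + b)).encode(), hashlib.sha256).hexdigest()
    return question, (nonce, sig)

def verify_answer(answer, token):
    """Return True only if `answer` matches the challenge the token was issued for."""
    nonce, sig = token
    candidate = hmac.new(SECRET_KEY, (nonce + str(answer)).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, sig)
```

The gate sits in front of the sensitive application: malware on an authorized user’s PC can replay credentials, but it cannot answer a challenge it cannot see and interpret.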
Split Internet Access
A long time ago I provided some training at a very secure military base. They had two networks (classified and unclassified) going to each PC, with an A/B switch between each PC and the two networks. Each PC had two removable hard drives. To access the Internet, they’d boot on the unclassified drive and select the unclassified network on the A/B switch, and vice-versa for classified access. There were controls in the operating system to prevent a system booted on the classified drive from communicating if the drive and network selection were mismatched. All the drives went into a safe before each worker left.
I’m not talking about doing that now. But there are other ways. Here’s one – and remember, I don’t claim this will be viable for every environment out there, but it will be for some, and more importantly it will help you grasp the core concept, which is to give users access to the Internet while preventing sensitive data and applications from touching the Internet. So here’s the idea: block the majority of your user PCs from accessing the Internet. Notice I didn’t say block the users – just their PCs. How do you do that? Yes, a proxy server is a start, but it’s still not good enough. Instead, deliver their Internet browsing experience to them via thin client. For instance, deliver Internet Explorer to end-users as a RemoteApp running on a Remote Desktop Server. Firewall the server so that end-user PCs can only communicate with it via RDP. Lock down the session configuration to prevent drive sharing, etc. Now users can browse the web, but none of the content they access touches their PC. At most, any malware will infect the RDS system, which is firewalled off from the internal network and can be re-imaged every night.
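As a rough illustration of the firewalling piece, the RDS host’s rules might look something like the following (Windows Firewall via netsh; the rule names and subnets are placeholder examples, not a production policy):

```
rem Allow inbound RDP only from the user LAN (10.1.0.0/16 is a placeholder)
netsh advfirewall firewall add rule name="RDS - allow RDP from user LAN" dir=in action=allow protocol=TCP localport=3389 remoteip=10.1.0.0/16

rem Block outbound traffic from the RDS host to the internal server subnet
netsh advfirewall firewall add rule name="RDS - block internal subnet" dir=out action=block remoteip=10.2.0.0/16

rem Other outbound traffic (the Internet) stays allowed so the browser works
```

The exact rule set depends on your topology; the design choice is simply that the RDS host can talk to the Internet and accept RDP, and nothing else.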
Of course, you’ll have to make exceptions for users who really need to be able to download files from the Internet or run other applications that really do need to open outbound connections. And we haven’t solved the incoming email problem, which is admittedly a key infiltration vector for APTs such as the one that hit RSA a while back. The biggest issue with incoming mail is how to handle attachments. Maybe the mail client (e.g. Outlook) should run on the RDS as well. You could allow copy and paste between RDP sessions and the real desktop without exposing yourself to the majority of malicious content risks encountered today, since most exploits rely on malformed data structures in a file format and only impact the application that directly parses the data. This method would keep malware off the user’s local desktop, keep it out of the internal network and prevent it from impersonating the end-user it targets.
So if you can’t actually implement either of these methods, how can you – or at least software vendors – make something similar possible? When a system receives an access request for a resource from a user with the appropriate permissions, how can it ensure that the request was really initiated by the user and not by a zombie process running on the user’s PC? How can we allow users to access the Internet and internal applications while keeping some kind of boundary in place between the network and storage accessible to Internet applications (browsers, email clients) and those accessible to internal applications? And how do we provide for exceptional applications that must bridge that boundary? Could operating system vendors add a flag to processes that insulates Internet-facing, internal and hybrid applications from each other, similar to the memory protection that already exists between processes or across the kernel/user-mode boundary? And could they build a new IPC (interprocess communication) mechanism that allows data to safely cross this boundary in a format that precludes executable code?
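To sketch what that last idea might look like, here is a minimal data-only boundary crossing, using JSON as the carrier format since JSON can express only plain data, never code. Everything here – the function names, the whitelist – is hypothetical; it illustrates the concept, not a proposed OS API:

```python
import json

# Only plain scalars may cross the boundary directly.
ALLOWED_SCALARS = (str, int, float, bool, type(None))

def _check(value):
    """Recursively ensure a value is pure data: scalars, lists, and
    string-keyed dicts only -- no objects, no callables, no code."""
    if isinstance(value, ALLOWED_SCALARS):
        return
    if isinstance(value, list):
        for item in value:
            _check(item)
        return
    if isinstance(value, dict):
        for key, item in value.items():
            if not isinstance(key, str):
                raise ValueError("non-string dict key crossing boundary")
            _check(item)
        return
    raise ValueError(f"disallowed type crossing boundary: {type(value).__name__}")

def send_across_boundary(message):
    """Serialize a message for the other side of the boundary.

    Because the wire format is JSON, executable objects cannot survive
    the trip even if the sender is compromised.
    """
    _check(message)
    return json.dumps(message)

def receive_across_boundary(wire):
    """Deserialize and re-validate a message arriving from the other side."""
    value = json.loads(wire)
    _check(value)  # belt and suspenders: validate on both sides
    return value
```

A request like `{"user": "alice", "action": "read", "ids": [1, 2]}` passes cleanly, while anything carrying a function, object or other executable payload is rejected before it ever leaves the sending side.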
Or is there a better idea? I hope you have one because we need it!