We were recently contracted to perform a penetration test for an SME (small-to-medium enterprise) with little previous experience of penetration testing. Understandably nervous, they had been asked by a client to undertake an overall company-wide assessment, with specific focus on an in-house CRM solution.
As always, we were careful to explain the scope, the types of issues we'd be looking for, and the pitfalls that can befall even the most carefully planned penetration testing project. What struck me, however, was the very black-and-white pass/fail perception of penetration testing that everyone seemed to have. It really isn't like that, or at least it shouldn't be.
When it comes to bespoke applications, the vulnerabilities will generally be as unique as the application, but they will fall into common categories such as SQL injection, cross-site scripting, and application logic flaws. This is the reality of modern computing: vulnerabilities are identified and corrected, and the process repeats, all helping to manage the overall security posture of the network. It is a continuous cycle of improvement; it is part of the technical security toolkit. You will never have a vulnerability-free environment; that is an unrealistic goal for even the most well-heeled organisation. But knowing what vulnerabilities are present, triaging them on a risk and remediation-cost basis, and dealing with those above the organisation's risk appetite has enormous value in its own right.
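The triage step described above can be sketched as a simple filter: score each finding, keep those above the organisation's risk appetite, and order them so high-risk, cheap-to-fix items surface first. The scoring scale, field names, and threshold here are invented for illustration; a real programme would use something like CVSS plus business context.

```python
# Illustrative sketch of risk-based vulnerability triage.
# Risk scores, fix costs, and the appetite threshold are all
# fabricated for demonstration purposes.

def triage(findings, risk_appetite):
    """Return findings whose risk exceeds the appetite, ordered so
    the highest-risk, cheapest-to-fix items come first."""
    actionable = [f for f in findings if f["risk"] > risk_appetite]
    return sorted(actionable, key=lambda f: (-f["risk"], f["fix_cost"]))

findings = [
    {"name": "SQL injection in CRM search", "risk": 9, "fix_cost": 3},
    {"name": "Verbose server banner",       "risk": 2, "fix_cost": 1},
    {"name": "Reflected XSS in login page", "risk": 7, "fix_cost": 2},
]

for f in triage(findings, risk_appetite=5):
    print(f["name"])
```

The point isn't the code; it's that "pass/fail" never enters into it. The output is a prioritised work queue, not a verdict.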
So, what do we typically find on an organisation’s first penetration test?
Well, this varies massively. But, in general, I’d say:
Poor patch management
Many organisations have yet to adopt a comprehensive patch management effort. Many that do patch only core OS components. This leaves third-party applications, appliances, firmware, and the like increasingly outdated and vulnerable over time. Patch management is not easy, but it is necessary. Legacy systems are a variant of the same problem: the forgotten door access system sitting in the corner running W2K will certainly catch our attention.
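The core of any patch-management check is simple: compare an inventory of installed software, third-party included, against known current versions and flag the laggards. The toy sketch below makes that concrete; the package names and version numbers are fabricated for illustration.

```python
# Toy patch-gap check: flag installed software that lags the
# current release. All package names and versions here are
# invented for illustration only.

def parse_version(v):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed, current):
    """Return names of packages whose installed version is older
    than the known current release."""
    return [name for name, ver in installed.items()
            if parse_version(ver) < parse_version(current.get(name, ver))]

installed = {"java-runtime": "8.0.181", "pdf-reader": "11.0.1", "door-firmware": "2.3.0"}
current   = {"java-runtime": "8.0.392", "pdf-reader": "11.0.1", "door-firmware": "2.3.0"}

print(find_outdated(installed, current))  # flags java-runtime only
```

The hard part in practice is not the comparison but building and maintaining the inventory, which is exactly why third-party software so often falls through the cracks.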
Poor password policy
From weak passwords set by the service desk (W1nter2016, anybody?) to password re-use and shared administrative accounts, we often find the impact of a single system compromise is greatly amplified by poor password practice. Take the local Windows administrator account: if it is the same across all your servers, the compromise of a single system yields local admin access on every one of them. With tools such as mimikatz, that is a situation very likely to yield domain admin (DA) credentials in short order.
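Passwords like W1nter2016 are trivial for an attacker to guess precisely because they follow a predictable template: season, leetspeak substitution, year. A filter for that one pattern can be sketched in a few lines; the pattern list below is illustrative, not a complete policy.

```python
import re

# Sketch of a check for the "Season + year" password pattern
# mentioned above (e.g. W1nter2016), allowing for common
# character substitutions. Illustrative only -- a real policy
# check would cover far more patterns than this.
SEASON_YEAR = re.compile(
    r"(w[i1!]nter|spr[i1!]ng|summ[e3]r|[a4@]utumn|f[a4@]ll)"
    r"(19|20)\d\d[!.]?",
    re.IGNORECASE,
)

def is_season_year(password):
    """True if the password is just a season plus a year."""
    return bool(SEASON_YEAR.fullmatch(password))

print(is_season_year("W1nter2016"))    # True
print(is_season_year("Tr0ub4dor&3"))   # False
```

Attackers run exactly these kinds of templates through their cracking tools first, which is why such passwords fall in seconds regardless of their apparent "complexity".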
Network layer vulnerabilities
From fantastic tools such as Responder to man-in-the-middle attacks against RDP sessions with Cain & Abel, a modern Ethernet network can provide a number of very useful attack paths that will quickly yield privileged credentials.
Lack of hardening
It isn’t a particularly exciting facet of IT, which is possibly why it’s so often overlooked, but hardening hosts before deployment removes the superfluous services and default accounts that so often facilitate unauthorised access. A number of sources of guidance are available, such as IASE.
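One of the simplest hardening checks is to compare what a host actually exposes against what its role requires, and treat everything else as a candidate for removal. The role names, allowlist, and service list below are invented for illustration.

```python
# Sketch of a basic hardening check: compare the services
# listening on a host against a documented allowlist for that
# host's role. Role names and services are illustrative only.

ROLE_ALLOWLIST = {
    "web-server": {"ssh", "https"},
    "file-server": {"ssh", "smb"},
}

def superfluous_services(role, listening):
    """Return services the host exposes that its role does not
    require -- candidates for disabling or removal."""
    allowed = ROLE_ALLOWLIST.get(role, set())
    return sorted(set(listening) - allowed)

print(superfluous_services("web-server", ["ssh", "https", "telnet", "ftp"]))
# -> ['ftp', 'telnet']
```

Published benchmarks essentially industrialise this idea, pairing each check with a rationale and a remediation step, which is why building them into your deployment process pays off so quickly.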
Lack of security awareness
Staff are wonderful. Particularly the type of staff that buzz you back in after a fictitious cigarette break, or log into the new web portal you’ve created under a similar-sounding, freshly registered domain name. To be honest, staff are often the easiest way into almost any system.
This of course isn’t an exhaustive list, but if you’re sharing passwords, running NT4 because it’s ‘too important’ to update, have zero staff awareness, and don’t run any port security on your switches, you know where to set your expectations.