5 Reasons Why Privileged Identity Management Implementations Fail
As veterans of the privileged identity management (PIM) field, my colleagues and I hear some unsettling stories from organizations whose privileged identity management deployments did not provide the expected business value. We’ve also heard from organizations whose purchases led to years of expensive service engagements yet never delivered the agreed scope of work.
At the heart of the problem: many organizations grasp too late that selecting and implementing a privileged identity management solution is too important a process to delegate to a rubber-stamp RFP or a battle of vendor checkboxes.
If handled correctly, your implementation can close critical security loopholes, make staff members accountable for actions that affect IT service and data security, and lower the cost of regulatory compliance.
Yet the wrong choices too often turn into expensive shelf-ware – or worse.
The Simple Truth
The truth is that privileged identity management (or privileged account password management) software is not a commodity and should not be purchased on checkboxes and up-front fees alone. Despite vendor claims to the contrary, not all solutions perform equally well under vastly different deployment conditions, which can include:
- Wide varieties of managed computers – Windows, Linux, UNIX, and mainframes along with numerous network appliance platforms, backup infrastructure, and other hardware to be secured;
- Large numbers of frequently changing target systems that can be separated by slow, unreliable, or expensive WAN links;
- Significant numbers of custom-designed and legacy applications that may be poorly documented, and whose designs can pose serious vulnerabilities if not properly remediated;
- Complex organizational structures demanding solutions with the flexibility to handle overlapping and frequently-changing lines of delegation and control.
If any of these scenarios sounds like your organization, disregard vendors’ claims and instead focus on:
- Conducting trial deployments that encompass, at the very least, a test environment with a realistic sampling of your target systems, applications, and user roles;
- Engaging in in-depth conversations with reference customers whose deployments realistically match the diversity and scale of your own organization, and whose managed applications at least reasonably approximate your own;
- Getting the facts from those reference customers about the true timeframes and back-end costs of vendor deployments, so that you can budget your project accordingly.
As you proceed with your evaluation, be aware that vendor checkboxes often lie. Craftily written marketing pieces can suggest that a product’s capabilities on one target platform, application, or deployment scenario extend to every area where the vendor claims coverage, and salespeople often believe their organization’s own marketing hype.
Ask very explicit questions – of both vendor engineers and reference customers – about how individual target platforms, managed applications, and use case scenarios are configured and deployed. In each case, was the vendor’s capability delivered out-of-the-box, only through custom development, or never at all?
How the Process Should Work