How much time do you (or should you) spend evaluating a piece of commercial off-the-shelf (COTS) software for adherence to "security best practices"? Depending on your role within an organization, or who that organization may be, you'll answer that question differently.
If you work for a company that allows you to do research on their time/dime, perhaps you spend 30-50% of your time pounding away on solutions sold by other companies for fun. This kind of research doesn't bring in any money; in a capitalist economy, it doesn't improve the bottom line and usually doesn't lead to any income either, unless you're in the engineering business, researching the next pharmaceutical drug, or working on the next patent. However, a remarkable finding in the IT security industry does bring with it a bit of fame and notoriety within the community, which is why it may make sense for a company that specializes in evaluating systems for security flaws (also known as penetration testing) to allow this type of research.
Perhaps you're a developer for a company that produces commercial software, in which case I hope your organization has some sort of software development life cycle (SDLC) that incorporates security definitions, designs, requirements, testing, and so on. Given the current state of technology, I believe it to be extremely difficult to produce 100% bug-free code, but we can at least do our due diligence to minimize the bugs we ship. The driving factor here is often getting the product to market so your company can make money; security can wait. My question still applies, though. What if you use a third-party library within your own product? How much testing and evaluation is spent to ensure that library meets the standards you've set for your own product? Your company also has infrastructure to support the needs of the business, and that infrastructure is likely built on COTS products, in which case the following scenario probably applies.
As an "average security Joe" member of a security department (within IT, or outside of it, that's another topic all together), you typically have to juggle many different hats; you're the IDS guy, the firewall gal, the incident response person, or at least typically a member of the team that fulfills those roles, and likely many others. Depending on the maturity of the security program you are also involved in defining the requirements for new systems / software that is utilized within your organization. Then you have to test the implementation to ensure that it's up to specific standards that in the end hopefully don't increase the risk of your organization by any measurable amount. After all, it's your goal to reduce, minimize, and mitigate risk. If you can't do that, you should at least document your known risks, and be on the lookout for exploitation of those risks. I know, easier said than done.
When it comes to COTS, at a minimum we are required to evaluate the software for a proper, secure implementation within the enterprise. Ideally, the software would be installed in a lab, its configuration parameters analyzed, and the results documented for the production rollout. This includes things like establishing proper roles and groups, disabling listeners for unneeded services, enabling SSL communications for various interfaces, and changing default usernames, passwords, and SNMP community strings. The list goes on.
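To make that concrete, here is a minimal sketch of the kind of lab check I have in mind, in Python. The hostname, port list, and what counts as "unneeded" are hypothetical placeholders; a real review would work from the vendor's documentation and your own build standard.

```python
#!/usr/bin/env python3
"""Minimal lab-review sketch: check a COTS appliance for a few common
hardening items before production rollout. Hostname and port list are
hypothetical examples, not a vendor's actual defaults."""

import socket

TARGET = "cots-appliance.lab.example.com"   # hypothetical lab host
UNNEEDED_PORTS = [21, 23, 80]               # FTP, Telnet, plain-HTTP management

def port_open(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in UNNEEDED_PORTS:
        if port_open(TARGET, port):
            print(f"[!] Port {port} is listening; confirm the service is actually needed")
        else:
            print(f"[+] Port {port} appears closed or filtered")
    # Checks for default credentials and non-SSL interfaces would follow the
    # same pattern: attempt a login with the vendor's documented defaults and
    # flag success, and verify the management interface refuses plain HTTP.
```

Nothing fancy, and nothing a port scanner wouldn't tell you, but having it scripted means the same checklist gets run on every product that comes through the lab.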
What do we do with user or administrative interfaces that are web based? A lot of products' interfaces can get quite complex. Should we be doing full-blown web application vulnerability assessments on them? Depending on the size and function of a product, a seasoned web app penetration tester could easily spend two weeks to a month or more on it. The average security shop within a corporation doesn't have the time, money, or compelling business drivers to invest in such an endeavor. We are left to depend upon the software vendor to secure their own product. Perhaps it makes sense to run a quick scan using one or more commercial web app scanners, though they will likely only catch low-hanging fruit. Even then, what do we do with our findings? Open a ticket with the vendor and wait for a patch? (Also another topic.)
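Even without a commercial scanner, some of that low-hanging fruit can be swept for with a few lines of script. The sketch below is only an illustration, not a substitute for an assessment; the base URL and the list of "common" paths are made up, and the checks (unauthenticated admin pages, a chatty Server banner) are just examples of what such a pass might flag.

```python
#!/usr/bin/env python3
"""Quick low-hanging-fruit sweep of a web-based admin interface.
The base URL and paths are hypothetical placeholders."""

import urllib.request
import urllib.error

BASE = "https://cots-appliance.lab.example.com"   # hypothetical lab host
COMMON_PATHS = ["/admin", "/setup", "/backup", "/config.xml"]

def probe(url):
    """Return (status_code, server_banner), or (None, None) if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status, resp.headers.get("Server")
    except urllib.error.HTTPError as err:
        return err.code, err.headers.get("Server")
    except (urllib.error.URLError, OSError):
        return None, None

if __name__ == "__main__":
    for path in COMMON_PATHS:
        status, _ = probe(BASE + path)
        if status == 200:
            print(f"[!] {path} is reachable without authentication ({status})")
        elif status is not None:
            print(f"[+] {path} -> {status}")
    # A verbose Server banner leaks version info worth reporting to the vendor.
    _, banner = probe(BASE + "/")
    if banner:
        print(f"[i] Server banner: {banner}")
```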
So forget about web interfaces for a second; what about applications that are run locally by the user? Adobe Acrobat Reader, Firefox, Internet Explorer? Obviously no one spends time doing binary analysis (since the source isn't available) and fuzzing input parameters to these applications, especially if they aren't freely available; the aforementioned products are probably an exception due to the pervasiveness of their install base. Whose responsibility is it? The vendor certainly doesn't want to deal with it, they need to get the product to market, and the corporation doesn't make any money spending hundreds of hours investigating it. Whose job is it, then? How much time, as end users, should we, the security professionals, spend evaluating software for vulnerabilities? The usual answer applies: it depends. Every situation is unique, and it all boils down to risk. What am I exposing myself to? What will be lost in the event of a compromise? What will it cost me to recover? What can I do to reduce my exposure? How much of my business depends on this product being available to its users?
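If it helps, those questions can be turned into a rough scoring exercise. The factors, weights, and thresholds below are entirely made up for illustration; the only point is that the review time a product gets should track its exposure and business impact.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope sketch for deciding how much review time a product
deserves. Factors, weights, and scale are hypothetical."""

# Score each factor 1 (low) to 5 (high) for a given product.
factors = {
    "exposure": 4,            # internet-facing? reachable by untrusted users?
    "loss_on_compromise": 5,  # data or systems at stake
    "recovery_cost": 3,       # effort to rebuild or restore
    "availability_need": 4,   # how much of the business depends on it
}

weights = {
    "exposure": 0.3,
    "loss_on_compromise": 0.3,
    "recovery_cost": 0.2,
    "availability_need": 0.2,
}

score = sum(factors[name] * weights[name] for name in factors)

# Map the weighted score (1-5) to a rough review-time budget.
if score >= 4:
    budget = "multi-week assessment (or outside help)"
elif score >= 3:
    budget = "a few days of focused testing"
else:
    budget = "configuration review and a quick scanner pass"

print(f"Risk score: {score:.1f} -> suggested effort: {budget}")
```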
How much time do you spend? How do you determine how much time you spend? I'd love to hear your comments.