Apparently there is some confusion among people who are new to web application scanning regarding the "time it takes for a scan to run." That metric is typically perceived as the delta between when the scanner starts running and when it completes. In reality, though, there are different ways of tracking how long a scanner takes, and some are more thorough than others. The two most common ways of thinking about it are as follows:
1. The time between when the scanner is started and when it stops.
2. The time between when the customer begins setting up to be scanned and when results have been delivered to the customer with all false positives removed.
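To make the gap concrete, here is a minimal sketch in Python of both measurements against one hypothetical engagement; every timestamp is an illustrative assumption, not data from a real scan:

```python
from datetime import datetime

# Hypothetical timestamps for a single engagement (all values are assumptions).
setup_started   = datetime(2012, 6, 1, 9, 0)    # customer begins onboarding
scan_started    = datetime(2012, 6, 4, 13, 0)   # scanner actually kicks off
scan_stopped    = datetime(2012, 6, 4, 21, 30)  # scanner finishes running
results_cleaned = datetime(2012, 6, 8, 17, 0)   # false positives removed, report delivered

# Method 1: the naive wall-clock duration of the scan itself.
scan_runtime = scan_stopped - scan_started

# Method 2: the end-to-end time the customer actually experiences.
end_to_end = results_cleaned - setup_started

print(f"Scan runtime: {scan_runtime}")  # 8:30:00
print(f"End to end:   {end_to_end}")    # 7 days, 8:00:00
```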
The disparity between these two options can be massive, both in elapsed time and in people's perception of how long a scan takes.
Unfortunately, there are many variables in this equation that can skew people's mental calculation of how long something has taken (a rough cost model follows this list):
Is this the first time the customer has ever worked with the scanning vendor, meaning early implementation may be messy (sharing credentials, finding the right assets, granting access to the machines, and so on)? The first scan may also require procuring hardware, verifying hardware and operating system requirements, installing software, and licensing.
Does the scanner have to be manually kicked off each time it runs? Not running continuously can add quite a bit of overhead.
Is there only one site to test, or many, meaning a cloud-based scanner that achieves parallelism easily could have a significant advantage over a desktop scanner?
Is the scanning internal, external, or both, meaning a VPN or virtual appliance may be involved in setting up and tearing down the scanner? That adds an unknown amount of time, since credentials can expire and getting login information can be quite an ordeal.
Does the vendor remove false positives for the customer? If not, there is an extra hidden cost for the customer after results have been received.
Does the scanner require manual updating, meaning that between runs or before each run it needs an update to ensure the latest test cases are being used? This can take minutes to hours depending on variables such as how easy the scanner is to patch, whether the customer or a third party handles the update, what level of access is required, how credentials are shared, and so on.
Does the vendor integrate with the internal trouble-ticketing system, or does each vulnerability have to be manually copied and pasted into it before triage can begin?
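One way to account for all of this is to treat each variable above as a time component and sum them. The sketch below does exactly that in Python; every number in it is an assumed, illustrative figure, and several components shrink to near zero on repeat scans or with a continuously running, cloud-based scanner:

```python
from datetime import timedelta

# Illustrative overhead components for one engagement. Every value is an
# assumption chosen for the example, not a benchmark of any real scanner.
overhead = {
    "first_time_setup":      timedelta(days=2),   # credentials, assets, hardware, licensing
    "manual_kickoff":        timedelta(hours=4),  # waiting for someone to start the scan
    "vpn_or_appliance":      timedelta(hours=6),  # internal access setup and teardown
    "signature_updates":     timedelta(hours=1),  # patching before the run
    "scan_runtime":          timedelta(hours=8),  # the only number usually quoted
    "false_positive_review": timedelta(days=1),   # falls on the customer if the vendor skips it
    "ticketing_transfer":    timedelta(hours=3),  # copy/paste into the ticket system
}

time_to_action = sum(overhead.values(), timedelta())
print(f"Quoted 'scan time':    {overhead['scan_runtime']}")  # 8:00:00
print(f"Actual time to action: {time_to_action}")            # 3 days, 22:00:00
```

Even under these modest assumptions, the quoted scan runtime is a small fraction of the time that passes before anyone can act on a vulnerability.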
When timing a scanner, it is important to take into account the actual overhead, and therefore the cost, of running it. Rather than looking at any one abstract metric, a more thorough analysis does a better job of identifying that cost. So perhaps in the future we should ask a more relevant and useful question: instead of "how long does the scanner take to run," we should ask, "how long does it take for us to take action on our actual vulnerabilities using this scanner?"