These Microsoft-prevalence-based analysis reports are supplementary to AV-Comparatives’ already-published main reports of the September 2013 and March 2014 File-Detection Tests. No additional testing has been performed; rather, the existing test results have been re-analysed from a different perspective, to consider what impact the missed samples are likely to have on customers, according to Microsoft’s telemetry data.
We have released a study of data transmission in Internet security products. Many Internet users are concerned about who has access to their personal information and what is done with it. After revelations by Edward Snowden regarding the extent of eavesdropping by the US National Security Agency (NSA), users have become increasingly aware of privacy issues. Computer security software has legitimate grounds for sending its makers some information about the system it runs on; in particular, details of malware found on the machine have to be sent to the manufacturer in order to protect the user effectively. However, this does not mean that a program should have carte blanche to send any and all personal information found on a computer to the manufacturer, other than with the specific knowledge and agreement of the system’s owner. This report gives some insight into data-sending by popular security programs.
Clearly, antivirus manufacturers have to comply with the laws of the countries in which they are established. In the event of, for example, a court order requiring the vendor to provide information about a customer, the company has no choice but to comply. However, this should be the only reason for providing user data to a third party. Some companies do not state that they will only pass on customer information in such circumstances.
This report was initially requested and commissioned by PCgo and PC Magazin Germany.
As announced publicly several times last year, AV-Comparatives has provided its services for a fee since 2008. The fee is NOT for the tests themselves, but for the various services we provide (bug reports, use of our material/logos in reprints and marketing material, sample lists, false-alarm reports after the public tests, internal services, etc.). Vendors are not obliged to pay for the FULL service package; if a vendor prefers not to receive all the services, the fee is lower. All major testers charge for the services they provide; this is common practice. Of course, the fee has no influence at all on the results, and serves only to cover our expenses and the time involved.
For users and magazines, the public results are free of charge, as long as the source is given. If we have time and, for example, a magazine wants us to carry out additional work or tests (e.g. on products not already tested), this costs money, which has to be covered by the magazine (the charge covers only the service provided to the magazine; the vendor of the tested product does not receive any service).
For those who have not already read this information on our website:
Starting in 2008, AV-Comparatives will – like most other testers – no longer provide its services for free, as the expenses for the site and all the work involved are too high. Vendors will not pay for the big tests themselves (and payment has no influence of any kind on the test results or award system); rather, the participation fee covers e.g. the use of the awards achieved for marketing purposes, as well as other services (such as receiving false positives and missed malware after the tests, bug reports, etc.). The number of participants will be limited to about 18 vendors.
Here is an overview of what is coming next at AV-Comparatives:
– The retrospective test (which will be released on 1st December) will be slightly changed and further improved. Some vendors have been suggesting test-method changes for about half a year, and as those methods are a) easier for us to carry out than our current ones, b) seem to be widely accepted by other vendors as well, and c) should not change the results much anyway, these new methods will be used in the upcoming retrospective test. The changes will be described in the report. At least for this retrospective test, the level ranges will likely remain unchanged.
– The summary report for 2007 will be released during December, containing a brief summary of all tests and some notes about the various products.
– In 2008 we will probably start to charge fees for the various services we provide, and we will probably drop some vendors from the regular tests, replacing them with better-known vendors that are frequently requested for inclusion. The number of tested products will probably be limited to 16–18.
– Behavioural testing is becoming more and more important. Discussions are currently under way to find the best (vendor-independent) practice for such tests and to determine how testers can perform them. As such tests are not trivial and require a lot of resources, it may take a while until we carry them out.
– If manpower and resources permit, we would also like to run other tests from time to time, showing how well products protect against malicious actions in various scenarios. But first we will continue to focus on maintaining and further improving the quality and methods of the current tests.
AV-Test.org presented an interesting paper at the Virus Bulletin conference about the current desolate state of the WildList, and made suggestions on how to improve it. Already at the AV Testing Workshop in Reykjavik in 2007, most of the technical staff of the AV vendors admitted that the WildList is well-accepted and loved because tests based on it are easy to pass and because it is good for marketing (100% detection*). So you may ask why, if it is so easy to pass, some vendors still fail to detect all samples from the WildList. The reasons could be errors by the testers or temporary bugs in the software, but more often and more likely it is because a) more criteria than just detecting all samples are needed to pass (e.g. no false positives, in the case of VB100), b) some very old threats that were on the WildList 10 years ago (e.g. boot-sector viruses) are still included, and probably also c) not all vendors receive the WildCore collection, and therefore not all are tested under the same circumstances. So, who wants to keep the WildList alive? Of course (besides marketing** people and certification bodies, which receive a lot of money for tests that are quite easy to run [and, for paying AV vendors, easy to pass]), all those vendors that know their product would not score well in tests using larger test sets.