We compared the built-in anti-phishing protection of several popular web browsers (Opera, Microsoft Internet Explorer, Google Chrome, Apple Safari, Mozilla Firefox). You can find the report here.
As announced publicly several times last year, AV-Comparatives has provided its services for a fee since 2008. The fee is NOT for the tests themselves, but for the various services we provide (bug reports, use of our material/logos in reprints and marketing material, sample lists, false alarms found after the public tests, internal services, etc.). Vendors are not obligated to pay for the FULL service package; if a vendor prefers not to receive all the services, the fee is lower. All major testers get paid for the services they provide; this is common practice. Of course, the fee has no influence at all on the results and only covers our expenses and the time involved.
For users and magazines, the public results are free of charge as long as the source is given. If we have time and a magazine, for example, wants us to do additional work/tests of products that have not already been tested, this costs money, which has to be covered by the magazine (the fee is only for the service provided to the magazine; the vendor of the tested product will not receive any service).
For those who have not already read this info on our website:
Starting in 2008, AV-Comparatives will – like most other testers – no longer provide its services for free, as the expenses for the site and all the work involved are too high. The vendors will not pay for the big tests themselves (and the fee of course has no influence of any kind on the test results or the award system) – the participation fee covers e.g. the use of the achieved award for marketing purposes and other services (such as receiving the false positives and missed malware after the tests, bug reports, etc.). The number of participants will be limited to about 18 vendors.
Here is an overview of what is coming next on AV-Comparatives:
– The retrospective test (which will be released on 1st December) will be slightly changed/improved further. Some vendors have been suggesting test-method changes for about half a year, and as those methods are a) easier for us to carry out than our current ones, b) seem to be widely accepted by other vendors as well, and c) should not change the results much anyway, these new methods will be used in the upcoming retrospective test. The changes will be described in the report. At least for this retrospective test, the level ranges will likely remain unchanged.
– The summary report for 2007 will be released during December, containing a brief summary of all tests and some notes about the various products.
– In 2008 we will probably start to charge fees for the various services we provide, and we will probably drop some vendors from the regular tests and instead include other vendors that are better known and often requested. The number of tested products will probably be limited to 16-18.
– Behavioral testing is becoming more and more important. There are currently discussions to find the best (vendor-independent) practice for such tests and how testers can perform them. As such tests are not trivial and require a lot of resources, it may take a while until we do them.
– If manpower and resources permit, we would also like to do other tests from time to time, showing how well products protect against malicious actions in various scenarios. But first we will continue to focus on maintaining and further improving the quality and methods of the current tests.
AV-Test.org presented an interesting paper at the VirusBulletin conference about the current desolate state of the WildList and made suggestions on how to improve it. Already at the AV Testing Workshop in Reykjavik in 2007, most of the technical staff of the AV vendors admitted that the WildList is well accepted and loved because tests based on the WildList are easy to pass and because it is good for marketing (100% detection*). So you may ask: if the tests are easy to pass, why do some vendors fail to detect all samples from the WildList? The reasons could be errors by the testers or temporary bugs in the software, but more often and more likely it is because a) more criteria than just detecting all samples are needed to pass (e.g. no false positives in the case of VB100), b) sometimes very old threats that were on the WildList 10 years ago (e.g. boot-sector viruses) are still included, and probably also because not all vendors receive the WildCore collection and are therefore not tested under the same circumstances. So, who wants to keep the WildList alive? Of course (besides marketing** people and certification bodies, which get a lot of money for tests that are quite easy to do [and, for paying AV vendors, easy to pass]) all those vendors that know their product would not score well in tests using larger test-sets.
For people who are too lazy to read the information on the website and the reports:
1) The products tested by AV-Comparatives are already a selection of very good scanners. For example, a minimum requirement is a detection rate of at least 85%. Some big vendors (and many small vendors) do not reach this requirement and are therefore not included in our tests. Some (relatively unknown) products do not even detect 20% of the test-set.
2) A STANDARD rating is a good rating. It means that a product also provides a good detection rate for malware which is not on the WildList. As long as a product scores at least STANDARD and regularly passes the tests of VirusBulletin or ICSA, you can feel pretty safe with it. ADVANCED and ADVANCED+ are higher ratings; depending on your surfing habits and needs, you may feel the need to use a scanner with such a rating.
3) Do not look just at the percentages or the ranking order. A product belonging to one category (STANDARD, ADVANCED, ADVANCED+) can be considered as good as the other products in the same category.
4) The detection rate of an anti-virus product is just one factor to consider when choosing an AV. Other important factors include: impact on system performance, support, compatibility, price, GUI, ease of management, other protection features offered by the product, etc. In other words: do not base your decision on detection results alone, and do not let other people decide what is best for you; try the various anti-virus products yourself on your PC by downloading an evaluation version.
5) Do not annoy or bash a vendor if it scores lower than you would have expected in one test (see points 1 and 2). You can be sure that they will do their best to improve their product, and that their first goal is to protect their customers from the malware submitted to them by their customers (which of course has higher priority than samples submitted from other sources). Only if your product has often failed to protect you, or the support you needed did not help you, should you consider changing your AV. If you are happy with your current product and feel comfortable with it, there is probably no need to change it. Remember that AV-Comparatives does not recommend any specific product to you; what you get is just the data, and all the rest remains up to you.
6) Look at various tests, ideally from as many different (professional) testers as you can find, and see how the AVs perform in them over the long term. Some other testing institutions are listed in our links section.
7) There is no AV that offers 100% detection of all malware. So it may be good to check your PC from time to time with online scanners from vendors other than the one you have installed. They may find something that your AV failed to detect – even if it scores e.g. 99.9% in tests (see the small arithmetic sketch after this list).
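To put the detection-rate figures above (the 85% minimum in point 1, the 99.9% in point 7) into perspective, here is a back-of-the-envelope sketch in Python. It is not part of any AV-Comparatives methodology, and the one-million-sample test-set size is a hypothetical value chosen purely for illustration:

    # Hypothetical illustration only (not an AV-Comparatives tool):
    # how many samples a scanner is still expected to miss at a
    # given detection rate.

    def missed_samples(detection_rate: float, test_set_size: int) -> int:
        """Expected number of undetected samples."""
        return round((1.0 - detection_rate) * test_set_size)

    if __name__ == "__main__":
        test_set_size = 1_000_000  # assumed size, for illustration only
        for rate in (0.85, 0.999):
            print(f"{rate:.1%} detection -> "
                  f"{missed_samples(rate, test_set_size):,} samples missed")

Even at a 99.9% detection rate, such a test set would leave about 1,000 samples undetected – which is why an occasional second-opinion scan with another vendor's online scanner can still be worthwhile.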
During the blog migration, some blog posts were lost or removed. We will try to recover at least the most interesting ones.