The Fingerprint Verification Competition (FVC) is an international competition focused on the assessment of fingerprint verification software. A subset of fingerprint impressions acquired with various sensors was provided to registered participants, allowing them to tune the parameters of their algorithms. Participants were required to submit the enroll and match executables of their algorithms; the evaluation was conducted at the organizers' facilities, running the submitted executables on a sequestered database acquired with the same sensors as the training set.

The Organizers of FVC are:

  • Biometric System Laboratory (University of Bologna)
  • Pattern Recognition and Image Processing Laboratory (Michigan State University)
  • Biometric Test Center (San Jose State University)

Each participant may submit at most one algorithm to each of the Open and Light categories.

The first, second and third international competitions on fingerprint verification (FVC2000, FVC2002 and FVC2004) were organized in 2000, 2002 and 2004, respectively. These events received great attention from both the academic and industrial biometric communities. They established a common benchmark, allowing developers to unambiguously compare their algorithms, and provided an overview of the state of the art in fingerprint recognition. Judging by the response of the biometrics community, FVC2000, FVC2002 and FVC2004 were undoubtedly successful initiatives. The interest shown in previous editions has prompted the organizers to schedule a new competition for the year 2006.

In 2006 we had:

  • Four new databases (three real and one synthetic)
  • Two categories (Open Category and Light Category)
  • 53 participants (27 industrial, 13 academic, and 13 independent developers)
  • 70 algorithms submitted (44 in the Open Category and 26 in the Light Category)

Motivations and Objectives

  • Continuous advances in the field of biometric systems and, in particular, in fingerprint-based systems (both in matching techniques and sensing devices) require that performance evaluation of biometric systems be carried out at regular intervals.
  • The aim of FVC2006 is to track recent advances in fingerprint verification, for both academia and industry, and to benchmark the state-of-the-art in fingerprint technology.
  • Further testing, on interoperability and quality-related issues, will be performed in a second stage, after the competition is completed.
  • This competition should not be viewed as an "official" performance certification of biometric systems, since only parts of the system software will be evaluated by using images from sensors not native to each system. Nonetheless, the results of this competition will give a useful overview of the state-of-the-art in this field and will provide guidance to the participants for improving their algorithms.

Categories

Two different sub-competitions (Open category and Light category) will be organized using the same databases.

  • Each participant is allowed to submit only one algorithm to each category.
  • The Open category has no limits on memory requirements and template size. For practical testing reasons, the maximum response time of the algorithms is limited as follows: the maximum time for each enrollment is 5 seconds and the maximum time for each matching is 3 seconds. The tests will be executed under Windows XP Professional on a PC with an Intel Pentium 4 processor (3.20 GHz) and 1 GB of RAM.
  • The Light category is intended for algorithms conceived for light architectures, characterized by low computing power, limited memory usage and small template size. The maximum time for each enrollment is 0.3 seconds and the maximum time for each matching is 0.1 seconds; the tests will be executed on the same platform as the Open category. The maximum memory that can be allocated by the processes is 4 MB, and the maximum template size is 2 KB. A utility will be made available to the participants to check whether their executables comply with the memory requirement.
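As an illustration, the limits above can be written down and checked programmatically. This is only a sketch: the function name, the dictionary layout, and the idea of feeding it measured figures are assumptions for illustration, not part of the official FVC2006 testing protocol (which runs submitted Windows executables at the organizers' facilities).

```python
# Sketch of the FVC2006 category limits (names are illustrative).
# None means "no limit" (the Open category has no memory/template limits).
CATEGORY_LIMITS = {
    "Open":  {"enroll_s": 5.0, "match_s": 3.0, "max_mem_mb": None, "max_template_kb": None},
    "Light": {"enroll_s": 0.3, "match_s": 0.1, "max_mem_mb": 4,    "max_template_kb": 2},
}

def complies(category, enroll_s, match_s, mem_mb=0, template_kb=0):
    """Return True if the measured figures respect the category limits."""
    lim = CATEGORY_LIMITS[category]
    if enroll_s > lim["enroll_s"] or match_s > lim["match_s"]:
        return False
    if lim["max_mem_mb"] is not None and mem_mb > lim["max_mem_mb"]:
        return False
    if lim["max_template_kb"] is not None and template_kb > lim["max_template_kb"]:
        return False
    return True
```

For example, an algorithm enrolling in 0.2 s and matching in 0.05 s with a 3 MB peak and 2 KB templates complies with the Light category, while the same timings with an 8 MB peak do not.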

Databases

One of the most important and time-consuming tasks of any biometric system evaluation is the data collection. We have created a multi-database, containing four disjoint fingerprint databases, each collected with a different sensor/technology.

  • Four distinct databases, provided by the organizers, will constitute the benchmark: DB1, DB2, DB3 and DB4. Each database is 150 fingers wide and 12 samples per finger in depth (i.e., it consists of 1800 fingerprint images). Each database will be partitioned in two disjoint subsets A and B:
    - Subsets DB1-A, DB2-A, DB3-A and DB4-A, which contain the first 140 fingers (1680 images) of DB1, DB2, DB3 and DB4, respectively, will be used for the algorithm performance evaluation.
    - Subsets DB1-B, DB2-B, DB3-B and DB4-B, containing the last 10 fingers (120 images) of DB1, DB2, DB3 and DB4, respectively, will be made available to the participants as a development set to allow parameter tuning before the submission.
  • During performance evaluation, fingerprints belonging to the same database will be matched against each other.
  • The image format is BMP, 256 gray-levels, uncompressed.
  • The image size and resolution vary depending on the database (detailed information will be available to the participants).
  • Data collection in FVC2006 was performed without deliberately introducing difficulties such as exaggerated distortion, large amounts of rotation and displacement, or wet/dry impressions (as was done in previous editions), but the population is more heterogeneous and also includes manual workers and elderly people. The volunteers were simply asked to place their fingers naturally on the acquisition device, and no constraints were enforced to guarantee a minimum quality in the acquired images. The final datasets were selected from a larger database by choosing the most difficult fingers according to a quality index, to make the benchmark sufficiently challenging for a technology evaluation.
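The database layout described above (150 fingers with 12 samples each, split into subsets A and B) can be sketched as follows. The 1-based (finger, sample) indexing is an assumption for illustration; the actual file naming of the distributed databases may differ.

```python
# Illustrative layout of one FVC2006 database: 150 fingers, 12 samples each.
FINGERS, SAMPLES = 150, 12

def partition():
    """Split a database into subset A (fingers 1-140, used for the
    performance evaluation) and subset B (fingers 141-150, released to
    participants as a development set)."""
    subset_a = [(f, s) for f in range(1, 141)
                for s in range(1, SAMPLES + 1)]
    subset_b = [(f, s) for f in range(141, FINGERS + 1)
                for s in range(1, SAMPLES + 1)]
    return subset_a, subset_b
```

Subset A then holds 140 × 12 = 1680 images and subset B 10 × 12 = 120 images, matching the figures above.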

Performance Evaluation

For each database and for each algorithm:

  • Each sample in subset A is matched against the remaining samples of the same finger to compute the False Non-Match Rate (FNMR, also referred to as the False Rejection Rate, FRR). If image g is matched to h, the symmetric match (i.e., h against g) is not executed, to avoid correlation in the scores. The total number of genuine tests (assuming no enrollment rejections) is:
    • ((12 * 11) / 2) * 140 = 9,240
  • The first sample of each finger in subset A is matched against the first sample of the remaining fingers in A to compute the False Match Rate (FMR, also referred to as the False Acceptance Rate, FAR). If image g is matched to h, the symmetric match (i.e., h against g) is not executed, to avoid correlation in the scores. The total number of impostor tests (assuming no enrollment rejections) is:
    • (140 * 139) / 2 = 9,730
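Both counts follow directly from the number of unordered pairs, since symmetric matches are skipped. A minimal check using Python's math.comb:

```python
from math import comb

FINGERS_A, SAMPLES = 140, 12

# Genuine tests: every unordered pair among the 12 samples of a finger,
# summed over the 140 fingers of subset A.
genuine_tests = comb(SAMPLES, 2) * FINGERS_A   # ((12*11)/2) * 140

# Impostor tests: every unordered pair of fingers, first samples only.
impostor_tests = comb(FINGERS_A, 2)            # (140*139)/2
```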

Although it is possible to reject images at enrollment, this is strongly discouraged. In FVC2006, as in FVC2004 and FVC2002, rejection at enrollment is factored into the other error rates for the final ranking; in particular, each rejection at enrollment produces a "ghost" template that yields a matching score of 0 against all the remaining fingerprints.
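The "ghost template" rule can be sketched as follows; the function and parameter names are hypothetical and only illustrate the scoring convention, not the official evaluation code.

```python
def match_score(matcher, rejected_at_enroll, a, b):
    """Score the pair (a, b). A fingerprint rejected at enrollment still
    takes part in all its matches, but every such match scores 0, so the
    rejection directly penalizes the error rates."""
    if a in rejected_at_enroll or b in rejected_at_enroll:
        return 0.0  # "ghost" template: never matches anything
    return matcher(a, b)
```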

For each algorithm and for each database, the following performance indicators are reported:

  • REJENROLL (Number of rejected fingerprints during enrollment)
  • REJNGRA (Number of rejected fingerprints during genuine matches)
  • REJNIRA (Number of rejected fingerprints during impostor matches)
  • Impostor and Genuine score distributions
  • FMR(t)/FNMR(t) curves, where t is the acceptance threshold
  • ROC(t) curve
  • EER (equal-error-rate)
  • EER* (the value that EER would take if the matching failures were excluded from the computation of FMR and FNMR)
  • FMR100 (the lowest FNMR for FMR<=1%)
  • FMR1000 (the lowest FNMR for FMR<=0.1%)
  • ZeroFMR (the lowest FNMR for FMR=0%)
  • ZeroFNMR (the lowest FMR for FNMR=0%)
  • Average enrollment time
  • Average matching time
  • Average and maximum template size
  • Maximum amount of memory allocated
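For illustration, the threshold-based indicators (EER, FMR100, FMR1000, ZeroFMR) can be computed from raw genuine and impostor score lists with a simple threshold sweep. This is a sketch of the definitions, assuming higher scores mean better matches; it is not the official FVC evaluation tool, and the exact EER interpolation used by FVC may differ.

```python
def indicators(genuine, impostor):
    """Estimate EER, FMR100, FMR1000 and ZeroFMR from score lists.

    For each candidate threshold t:
      FMR(t)  = fraction of impostor scores >= t (false matches)
      FNMR(t) = fraction of genuine scores  < t (false non-matches)
    """
    thresholds = sorted(set(genuine) | set(impostor) | {0.0})
    best = {"EER": 1.0, "FMR100": 1.0, "FMR1000": 1.0, "ZeroFMR": 1.0}
    eer_gap = float("inf")
    for t in thresholds:
        fmr = sum(s >= t for s in impostor) / len(impostor)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        # EER: operating point where FMR and FNMR are (closest to) equal.
        if abs(fmr - fnmr) < eer_gap:
            eer_gap = abs(fmr - fnmr)
            best["EER"] = (fmr + fnmr) / 2
        # Lowest FNMR subject to a cap on FMR.
        if fmr <= 0.01:
            best["FMR100"] = min(best["FMR100"], fnmr)
        if fmr <= 0.001:
            best["FMR1000"] = min(best["FMR1000"], fnmr)
        if fmr == 0.0:
            best["ZeroFMR"] = min(best["ZeroFMR"], fnmr)
    return best
```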

The following average performance indicators are reported over the four databases:

  • Average EER
  • Average FMR100
  • Average FMR1000
  • Average ZeroFMR
  • Average REJENROLL (Average number of rejected fingerprints during enrollment)
  • Average REJMATCH (Average number of rejected fingerprints during genuine and impostor matches)
  • Average enrollment time
  • Average matching time
  • Average template size (Calculated on the average template size for each database)
  • Average memory allocated (Calculated on the maximum amount of memory allocated for each database)

Rules for Participation

  • Participants can be from academia, from the industry, or independent developers.
  • Anonymous participation will be accepted: participants will be allowed to decide whether or not to publish their names together with their algorithm's performance. Participants will be confidentially informed about the performance of their algorithm before they are required to make this decision. If a participant decides to remain anonymous, the label "Anonymous organization" will be used, and the real identity will not be revealed.
  • Together with their submissions, participants will be required to provide some general, high-level information about their algorithms (similar to that reported in FVC2004, see [R. Cappelli, D. Maio, D. Maltoni, J.L. Wayman and A.K. Jain, "Performance Evaluation of Fingerprint Verification Systems", IEEE Transactions on Pattern Analysis and Machine Intelligence, January 2006]). While this information is a very high-level description of the approaches and will not disclose industrial secrets, it could be of interest to the entire fingerprint community.
  • Organizers of FVC2006 will not participate in the contest.