Publication | Open Access
A linear-time probabilistic counting algorithm for database applications
Year: 1990 | Citations: 493 | References: 16
Engineering · Data Deduplication · Computational Complexity · Probabilistic Computation · Unique Values · Linear Counting · Information Retrieval · Data Science · Data Mining · Management · Data Integration · Probabilistic Algorithm · Data Management · Statistics · Probabilistic System · Knowledge Discovery · Hash Function · Probability Theory · Computer Science · Database Theory · Query Optimization · Record Linkage · Database Applications · Similarity Search
Counting unique values in databases has traditionally relied on sorting, which takes O(q log q) time, yet key statistics such as column cardinality and join selectivity are essential for query optimization and physical database design. The authors introduce a probabilistic algorithm, called linear counting, that hashes input values into a compact table and estimates the number of distinct elements in linear time O(q), using little memory and achieving user-specified accuracy. A thorough theoretical and experimental analysis shows that a load factor as high as 12 can still yield roughly 1% error, and the technique is demonstrated on two database problems: estimating column cardinality and join selectivity.
We present a probabilistic algorithm for counting the number of unique values in the presence of duplicates. This algorithm has O(q) time complexity, where q is the number of values including duplicates, and produces an estimate with an arbitrary accuracy prespecified by the user using only a small amount of space. Traditionally, accurate counts of unique values were obtained by sorting, which has O(q log q) time complexity. Our technique, called linear counting, is based on hashing. We present a comprehensive theoretical and experimental analysis of linear counting. The analysis reveals an interesting result: a load factor (number of unique values/hash table size) much larger than 1.0 (e.g., 12) can be used for accurate estimation (e.g., 1% error). We present this technique with two important applications to database problems: namely, (1) obtaining the column cardinality (the number of unique values in a column of a relation) and (2) obtaining the join selectivity (the number of unique values in the join column resulting from an unconditional join divided by the number of unique join column values in the relation to be joined). These two parameters are important statistics that are used in relational query optimization and physical database design.
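The core idea described in the abstract can be sketched as follows: hash each value into a bitmap, then estimate the distinct count from the fraction of slots left empty, using the estimator n̂ = −m·ln(Vₙ), where m is the bitmap size and Vₙ is the fraction of empty slots. The sketch below is a minimal illustration of this scheme, not the paper's exact implementation; the hash function, bitmap size, and function names are assumptions for the example.

```python
import hashlib
import math

def linear_count(values, m=1024):
    """Estimate the number of distinct values via linear counting.

    Each value is hashed into a bitmap of m slots; the cardinality
    estimate is n_hat = -m * ln(V_n), where V_n is the fraction of
    slots that remain empty. (Illustrative sketch; the choice of
    hash function and m=1024 are assumptions, not from the paper.)
    """
    bitmap = [False] * m
    for v in values:
        # blake2b gives a well-mixed 64-bit digest for this demo
        h = int(hashlib.blake2b(str(v).encode(), digest_size=8).hexdigest(), 16)
        bitmap[h % m] = True
    empty = bitmap.count(False)
    if empty == 0:
        raise ValueError("bitmap saturated; increase m")
    return -m * math.log(empty / m)
```

For example, feeding in 1,200 values drawn from 300 distinct keys should return an estimate close to 300. Note that the load factor here is the number of unique values divided by m, so the paper's result means m can be chosen far smaller than the true cardinality while still keeping the error near 1%.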