More Randfacts Testing

Feb 4, 2026
7 minute read
randfacts rust python math

I’ve covered this topic before in a previous post, but I’ve recently gotten back into improving randfacts after switching it from poetry to uv. I’m really not sure why, but after improving the unit tests, I had an urge to see how much faster I could make the checkduplicates test.

As a brief historical overview, the checkduplicates test finds any facts that are duplicated in the dataset. This sounds easy at first, but I originally wanted it to work for facts with different phrasings but the same meaning. From this, I stumbled across the Levenshtein Distance algorithm. As of the time I’m writing this, there have been four major iterations of this test, listed below:

1. A pure Python implementation
2. A C++/Python hybrid implementation
3. A Wagner-Fischer implementation in Rust
4. The new Dice-Sørensen implementation in Rust (the subject of this post)

Dice-Sørensen

There were a few main optimizations that I made. Firstly, I completely switched algorithms: I found a formula called the Dice-Sørensen coefficient, which is used for calculating the similarity between two samples. It’s defined as follows:

DSC = \frac{2\left|X\cap Y\right|}{\left|X\right| + \left|Y\right|}

Between two sets, it computes the ratio of twice the number of items in their intersection to the sum of their lengths. For example, if you have the two sets {i, love, programming} and {programming, is, what, i, love}, their intersection is {i, programming, love} of length three, giving us:

\frac{2\cdot 3}{3+5} = 0.75

Once we establish a similarity threshold that works well (0.70 turned out to be best for me), we can use this ratio to compare facts without having to do any recursion or complex edit-distance matrix calculations.

The base algorithm itself ended up being fairly simple: it is given two sorted arrays of u64s (we’ll discuss why/how u64 in a moment), so we’re able to calculate the intersection in one linear pass:

#[inline(always)]
fn dice_sorensen_sorted(set1: &[u64], set2: &[u64]) -> f64 {
    let len1 = set1.len();
    let len2 = set2.len();

    let mut intersect = 0;
    let mut i = 0;
    let mut j = 0;

    // calculate the intersection of the two sorted vecs of tokens:
    while i < len1 && j < len2 {
        let x = set1[i];
        let y = set2[j];

        if x == y {
            intersect += 1;
            i += 1;
            j += 1;
        } else if x < y {
            i += 1;
        } else {
            j += 1;
        }
    }

    // DSC = (2|X intersect Y|)/(|X| + |Y|), scaled to a percentage so it can be
    // compared directly against SIMILARITY_THRESHOLD (e.g. 70.0)
    (2.0 * intersect as f64) / ((len1 + len2) as f64) * 100.0
}
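
To make the mechanics concrete, here’s a small usage sketch of dice_sorensen_sorted with the example sets from earlier. The hash values below are made up purely for illustration; in the real test each value comes from hashing a token, which is covered in the next section.

// Illustrative only: made-up hash values standing in for real token hashes
fn main() {
    // {i, love, programming} -> three hashes, sorted ascending
    let set1: Vec<u64> = vec![11, 42, 97];
    // {programming, is, what, i, love} -> five hashes, sorted ascending
    let set2: Vec<u64> = vec![5, 11, 42, 63, 97];

    // (2 * 3) / (3 + 5) = 0.75, scaled to a percentage -> prints "similarity: 75%"
    let ratio = dice_sorensen_sorted(&set1, &set2);
    println!("similarity: {ratio}%");
}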

Hashing/Tokenizing

I mentioned in the last section that the algorithm is given a list (set) of u64s. This is another reason why it can run so much faster. Since integer comparisons are so much quicker than full string comparisons, why not just tokenize the string (which we were already doing) and hash each word? This allows us to store a vector of hashes and check whether numbers are equal (O(1)) instead of checking whether strings are equal (O(n)). In this same normalization pass, I also added “stop word” removal. This is a term from natural language processing that describes words such as “i”, “was”, “a”, “or”, “at”, and anything else that doesn’t really contribute “meaning” to a sample. By removing all of these words from the set of tokens, we can construct a set that more closely represents the core “subject” of the fact.
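
The actual normalization code in randfacts is a bit more involved, but a minimal sketch of the idea, with a tiny hypothetical stop-word list and Rust’s DefaultHasher standing in for whatever hasher the real test uses, looks something like this:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Tiny illustrative stop-word list; the real one is much larger.
const STOP_WORDS: &[&str] = &["i", "was", "a", "or", "at", "the", "is", "what"];

// Lowercase, split on whitespace, drop stop words, hash each remaining token,
// then sort so the linear-pass intersection above works.
fn tokenize_and_hash(fact: &str) -> Vec<u64> {
    let mut hashes: Vec<u64> = fact
        .to_lowercase()
        .split_whitespace()
        .filter(|word| !STOP_WORDS.contains(word))
        .map(|word| {
            let mut hasher = DefaultHasher::new();
            word.hash(&mut hasher);
            hasher.finish()
        })
        .collect();

    hashes.sort_unstable();
    hashes.dedup(); // treat the tokens as a set
    hashes
}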

Length Difference Ratio Termination

The other huge optimization this approach allows is that we can now calculate the mathematical limit at which two facts could not possibly be considered similar if their lengths differ by some amount x. This was a little bit complex to do, but I’ll try to annotate the math here:

We already know the following formula for calculating the coefficient:

DSC = \frac{2\left|X\cap Y\right|}{\left|X\right| + \left|Y\right|}

Since this is a ratio against the sum of the lengths of the sets, we should be able to derive at what length of Y (given |X|) the ratio can no longer surpass the set threshold. We know that the list is sorted from least to greatest by length, and from that, we know that as we traverse from start to finish (comparing every X against everything after it), |X| ≤ |Y|. This tells us that the maximum possible intersection length between the two sets is |X|. From this, we can construct an inequality representing this maximum case:

\begin{aligned}
\frac{2\cdot |X|}{|X| + |Y|} & \geq \text{threshold} \\
2\cdot |X| & \geq \text{threshold}\cdot\left(|X|+|Y|\right) \\
2\cdot |X| & \geq \text{threshold}\cdot|X| + \text{threshold}\cdot|Y| \\
2\cdot |X| - \text{threshold}\cdot |X| & \geq \text{threshold}\cdot|Y| \\
|X|\left(2 - \text{threshold}\right) & \geq \text{threshold}\cdot|Y| \\
\frac{|X|\left(2 - \text{threshold}\right)}{\text{threshold}} & \geq |Y|
\end{aligned}
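
As a quick sanity check of where this bound lands with my 0.70 threshold:

\frac{|X|\left(2 - 0.7\right)}{0.7} = \frac{1.3}{0.7}\cdot|X| \approx 1.857\cdot|X|

So a fact with 10 meaningful tokens, for example, can never clear the threshold against a fact with 19 or more tokens, and since the facts are sorted by length, everything after the first too-long fact can be skipped entirely.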

In my final Rust implementation, I was able to use this formula to break off processing once we’ve reached this limit, shown here:

// ...
// this would take 70.0 and make it into 0.7
let fractional_ratio = SIMILARITY_THRESHOLD / 100.0;
// use the formula we just calculated
let length_diff_ratio = (2.0 - fractional_ratio) / fractional_ratio;

// Process facts in parallel
facts
    .par_iter()
    .enumerate()
    .progress_with(pb)
    .flat_map(|(i, f1)| {
        let mut matches = vec![];
        // here is the length of X, or |X|
        let len1 = f1.token_hashes.len();

        for f2 in &facts[i + 1..] {
            // here is the length of the fact we're comparing against, or |Y|
            let len2 = f2.token_hashes.len();

            // if lengths are too different to possibly be the same, don't try any of the
            // remaining facts
            if (len2 as f64) > (len1 as f64) * length_diff_ratio {
                break;
            }

            let ratio = dice_sorensen_sorted(&f1.token_hashes, &f2.token_hashes);
            if ratio > SIMILARITY_THRESHOLD {
                matches.push((f1.original.clone(), f2.original.clone(), ratio));
            }
        }

        matches
    })
    .collect()
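
One detail the break above depends on is that the facts are sorted by token count before the comparison loop runs. The exact preprocessing in randfacts is more involved, but a rough sketch of the idea, reusing the hypothetical tokenize_and_hash helper from the earlier section and a simplified Fact struct matching the fields used above, might look like this:

// Hypothetical preprocessing sketch: hash each fact's tokens, then sort the
// whole list ascending by token count so the early `break` above is valid.
struct Fact {
    original: String,
    token_hashes: Vec<u64>,
}

fn prepare_facts(raw_facts: Vec<String>) -> Vec<Fact> {
    let mut facts: Vec<Fact> = raw_facts
        .into_iter()
        .map(|original| Fact {
            token_hashes: tokenize_and_hash(&original),
            original,
        })
        .collect();

    // ascending by token count, so |X| <= |Y| whenever a fact is compared
    // against everything after it
    facts.sort_by_key(|fact| fact.token_hashes.len());
    facts
}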

Speed Comparison and Conclusion

|                               | Python                 | C++/Python            | Wagner-Fischer Rust | Dice-Sørensen Rust |
|-------------------------------|------------------------|-----------------------|---------------------|--------------------|
| Runtime                       | 419.22 seconds (6m59s) | 68.34 seconds (1m08s) | 17.1 seconds        | 0.165 seconds      |
| Approximate iterations/second | ~60,000-70,000         | ~400,000              | ~1,550,000          | ~50,000¹           |
| Source code permalink         | Link                   | Link                  | Link                | Link               |

I found this comparison kind of interesting because the new Rust version is so fast, yet it has by far the lowest number of iterations per second. While that may make it seem slower at first, the speed comes from the test having far fewer comparisons to do than any of the other implementations, thanks to the new ability to only compare facts that could possibly be similar. I’ve actually found this new implementation to have picked up duplicate facts that the Levenshtein and Wagner-Fischer implementations missed, while also having almost no false positives. Overall, this has been a very worthwhile investment and I had a lot of fun learning about this new algorithm.


  1. This is difficult to measure precisely because of how fast the test runs, but calculating it manually gives roughly this figure, and the quick flash of the progress bar also shows something around this. ↩︎