GPU-Accelerated Rule Ranking

Ranker v5.0

Multi-armed bandit ranking with early elimination and memory-mapped file loading, for a 50× speedup over naive rule evaluation.

OpenCL GPU · MAB Algorithm · Memory-Mapped I/O · NumPy Optimized

16K+ Rules Supported
50× Loading Speed
MAB Algorithm
GPU Accelerated
Overview

Ranker v5.0 answers the question: "Which of my 16,000 rules actually cracks the most hashes?" — without running a full Hashcat session for every rule.

The Multi-Armed Bandit algorithm allocates GPU compute intelligently, spending more trials on promising rules and eliminating poor performers early. Combined with memory-mapped file loading, this achieves a 50× speedup over naive approaches.

Core Features
Multi-Armed Bandit + Early Elimination
Allocates GPU compute to top performers; low-performing rules are eliminated once their confidence bound crosses the elimination threshold.
Memory-mapped file loading
OS-level mmap for near-instant rule file loading. 50× faster than standard file I/O for large rule sets.
NumPy-optimized scoring
Vectorized hit counting and confidence interval math using NumPy broadcasting.
All Hashcat modes
Compatible with Hashcat modes 0, 1, 3, 6, and 7, plus custom corpora.
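The memory-mapped loading feature can be sketched in a few lines with Python's standard `mmap` module. This is an illustrative sketch, not the project's actual loader; it assumes newline-delimited `.rule` files where `#` lines are comments:

```python
import mmap

def load_rules_mmap(path):
    """Map a rule file into memory and split it into rules without
    routing the file through buffered Python-level read() calls."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # Slicing the mmap yields raw bytes; splitlines() skips a decode pass.
            return [line for line in mm[:].splitlines()
                    if line and not line.startswith(b"#")]
```

The OS pages the file in lazily, which is where the near-instant load times for large rule sets come from.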
Ranking Workflow
Step 1
mmap rule files
Map all rule files into virtual memory. Load time: milliseconds even for 16K rules.
Step 2
GPU batch initialization
Compile OpenCL kernel, allocate device buffers for rules and hash corpus.
Step 3
MAB exploration phase
Each rule gets initial trials; bandit algorithm tracks win rates with UCB confidence bounds.
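The UCB bookkeeping in the exploration phase vectorizes cleanly in NumPy. A minimal sketch, assuming per-rule `hits` and `trials` counters (the function name and exploration constant `c` are illustrative, not the project's API):

```python
import numpy as np

def ucb_scores(hits, trials, total_trials, c=2.0):
    """Upper confidence bound per rule: empirical hit rate plus an
    exploration bonus that shrinks as a rule accumulates trials."""
    trials_safe = np.maximum(trials, 1)          # avoid division by zero
    mean = hits / trials_safe
    bonus = np.sqrt(c * np.log(max(total_trials, 2)) / trials_safe)
    return mean + bonus
```

Rules with few trials get a large bonus and keep being sampled; well-measured rules are scored close to their observed hit rate.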
Step 4
Early elimination
Rules whose upper confidence bound falls below threshold are dropped, freeing GPU cycles.
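One common way to realize this elimination rule (a sketch under assumed variable names, not the project's exact criterion) is to drop any rule whose upper bound cannot beat the best surviving rule's lower bound:

```python
import numpy as np

def elimination_mask(mean, bonus, active):
    """Keep only rules whose upper bound (mean + bonus) still reaches
    the best surviving rule's lower bound (mean - bonus)."""
    lower = np.where(active, mean - bonus, -np.inf)
    best_lower = lower.max()
    upper = mean + bonus
    return active & (upper >= best_lower)
```

Eliminated rules stop receiving trials, so their GPU budget is redistributed to the survivors.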
Step 5
Final exploitation
Remaining budget spent on top-k rules for precise hit rate estimates.
Step 6
Export sorted ruleset
Output .rule file sorted descending by hit rate.
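The export step above reduces to an `argsort` over the final hit-rate estimates. A minimal sketch (helper name is illustrative), assuming rules are held as raw bytes:

```python
import numpy as np

def export_ruleset(rules, hit_rates, path):
    """Write surviving rules to a .rule file, one rule per line,
    sorted descending by estimated hit rate."""
    order = np.argsort(hit_rates)[::-1]  # best-performing rules first
    with open(path, "wb") as f:
        for i in order:
            f.write(rules[i] + b"\n")
```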