Snappy: Efficient Fuzzing with Adaptive and Mutable Snapshots

Abstract

Modern coverage-oriented fuzzers play a crucial role in vulnerability finding. While much research focuses on improving the core fuzzing techniques, some fundamental speed bottlenecks remain, such as the redundant computations incurred by re-executing the target for every input. Prior solutions mitigate the impact of redundant computations by instead fuzzing a program snapshot, such as the one placed by a fork server at the program entry point, or generalizations thereof for annotated APIs, drivers, networked servers, and similar targets. Such snapshots are static and therefore cannot adapt to the characteristics of the target and the input, missing opportunities to further reduce redundancy and improve fuzzing speed.
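For background, the sketch below is a minimal, hypothetical illustration of the fork-server idea referenced above (it is not taken from Snappy or any specific fuzzer, and the helper names are assumptions): the target is snapshotted at one fixed point by keeping a parent process alive there, and each fuzzing iteration restores that snapshot by fork()ing a fresh child, so the work before the snapshot runs only once.

```c
/* Minimal fork-server-style snapshot loop (illustrative sketch only). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-ins for the target's phases. */
static void expensive_startup(void) { /* parse config, load data, ... */ }
static int  process_input(const char *buf, size_t len) {
    /* code under test; crashes here are what fuzzing looks for */
    return len > 0 && buf[0] == 'A';
}

int main(void) {
    expensive_startup();              /* runs once: the static snapshot point */

    char input[4096];
    ssize_t n;
    /* Each chunk read from stdin stands in for one fuzzer-generated input. */
    while ((n = read(STDIN_FILENO, input, sizeof(input))) > 0) {
        pid_t pid = fork();           /* restore the snapshot by forking */
        if (pid == 0) {
            process_input(input, (size_t)n);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);     /* the parent (snapshot) survives crashes */
        if (WIFSIGNALED(status))
            fprintf(stderr, "input of %zd bytes crashed the child\n", n);
    }
    return 0;
}
```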

In this paper, we present Snappy, a new approach to speed up fuzzing by aggressively pruning redundant computations with adaptive and mutable snapshots. The key ideas are to: (i) push the snapshot as deep into the target execution as possible and also end its execution as early as possible, according to how the target processes the relevant input data (adaptive placement); (ii) for each identified placement, cache snapshots across different inputs by patching the snapshot just-in-time with the relevant input data (mutable restore). We propose a generic design applicable to both branch-agnostic and branch-guided input mutation operators and demonstrate Snappy on top of Angora (which supports both classes of operators). Our evaluation shows that, despite its generality, Snappy achieves gains even compared to a fork server with hand-optimized static snapshot placement such as in FuzzBench, for instance obtaining up to ≈1.8x speedups across benchmarks.
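As a rough intuition only (the helper names, the single fixed-size buffer, and the byte-flip mutation loop are assumptions for illustration, not Snappy's actual design), the sketch below combines the two ideas: the snapshot is pushed past the input-independent prelude, and each restore is made "mutable" by patching only the mutated bytes into the already-loaded input buffer of the forked child, so neither the prelude nor the input loading is re-executed per input.

```c
/* Illustrative sketch: deep snapshot plus just-in-time patching of the input. */
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define INPUT_LEN 64

static char input_buf[INPUT_LEN];        /* buffer the target parses */

static void input_independent_prelude(void) { /* work that ignores input_buf */ }
static void parse(const char *buf)       { (void)buf; /* code the fuzzer targets */ }

int main(void) {
    memset(input_buf, 'a', INPUT_LEN);   /* seed input, loaded once */
    input_independent_prelude();         /* executed once, inside the snapshot */

    /* Hypothetical mutation loop: each iteration flips one byte, which is the
     * only state that needs patching after restoring the snapshot. */
    for (int pos = 0; pos < INPUT_LEN; pos++) {
        pid_t pid = fork();              /* restore the deep snapshot */
        if (pid == 0) {
            input_buf[pos] ^= 0xFF;      /* just-in-time patch of the mutated byte
                                            (copy-on-write keeps the parent pristine) */
            parse(input_buf);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
    }
    return 0;
}
```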

Publication
ACM Annual Computer Security Applications Conference (ACSAC)