Merge branch 'main' of github.com:AFLplusplus/LibAFL into main

This commit is contained in:
Andrea Fioraldi 2022-01-07 11:53:54 +01:00
commit e6f2f2d0b2
151 changed files with 3058 additions and 2579 deletions

View File

@ -2,7 +2,7 @@
- [ ] Objective-Specific Corpuses (named per objective)
- [ ] Good documentation
- [ ] More informative outpus, deeper introspection (monitor, what mutation did x, etc.)
- [ ] More informative outputs, deeper introspection (monitor, what mutation did x, etc.)
- [ ] Timeout handling for llmp clients (no ping for n seconds -> treat as disconnected)
- [ ] Heap for signal handling (bumpalo or llmp directly?)
- [x] Frida support for Windows

View File

@ -14,3 +14,44 @@ In Rust, we bind this concept to the [`Executor`](https://docs.rs/libafl/0/libaf
By default, we implement some commonly used Executors, such as the [`InProcessExecutor`](https://docs.rs/libafl/0/libafl/executors/inprocess/struct.InProcessExecutor.html), in which the target is a harness function providing in-process crash detection. Another Executor is the [`ForkserverExecutor`](https://docs.rs/libafl/0/libafl/executors/forkserver/struct.ForkserverExecutor.html), which implements an AFL-like mechanism that spawns child processes to fuzz.
A common pattern when creating an Executor is wrapping an existing one; for instance, [`TimeoutExecutor`](https://docs.rs/libafl/0.6.1/libafl/executors/timeout/struct.TimeoutExecutor.html) wraps an executor and installs a timeout callback before calling the run function of the wrapped executor.
## InProcessExecutor
Let's begin with the base case; `InProcessExecutor`.
This executor uses [_SanitizerCoverage_](https://clang.llvm.org/docs/SanitizerCoverage.html) as its backend; you can find the related code in `libafl_targets/src/sancov_pcguards`. Here we allocate a map called `EDGES_MAP`, and our compiler wrapper compiles the harness so that it writes its coverage into this map.
When you want to execute the harness as fast as possible, you will most probably want to use this `InProcessExecutor`.
One thing to note: if your harness is likely to have heap corruption bugs, you should use a different allocator so that a corrupted heap does not affect the fuzzer itself (for example, we adopt MiMalloc in some of our fuzzers). Alternatively, you can compile your harness with AddressSanitizer to make sure you catch these heap bugs.
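Swapping the fuzzer's own allocator is a one-liner in Rust; the following sketch mirrors the `mimalloc` setup used by the example fuzzers elsewhere in this commit:

```rust,ignore
// Route the fuzzer's own allocations through MiMalloc, so heap
// corruption caused by the harness is less likely to clobber the
// fuzzer's allocator state.
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
```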
## ForkserverExecutor
Next, we'll take a look at the `ForkserverExecutor`. In this case, it is `afl-cc` (from AFLplusplus/AFLplusplus) that compiles the harness code, and therefore we can't use `EDGES_MAP` anymore. Fortunately, we have [_a way_](https://github.com/AFLplusplus/AFLplusplus/blob/2e15661f184c77ac1fbb6f868c894e946cbb7f17/instrumentation/afl-compiler-rt.o.c#L270) to tell the forkserver which map to record the coverage in.
As you can see from the forkserver example:
```rust,ignore
//Coverage map shared between observer and executor
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAP_SIZE).unwrap();
//let the forkserver know the shmid
shmem.write_to_env("__AFL_SHM_ID").unwrap();
let mut shmem_map = shmem.map_mut();
```
Here we create a shared memory region, `shmem`, and write its ID to the environment variable `__AFL_SHM_ID`. The instrumented binary (run by the forkserver) then finds this shared memory region via that env var and records its coverage there. On the fuzzer side, you can pass this shmem map to your `Observer`, combined with any `Feedback`, to obtain coverage feedback.
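You can then wrap the map in an observer, as the `forkserver_simple` diff later in this commit does:

```rust,ignore
// Observe the coverage map backing the shared memory region;
// HitcountsMapObserver classifies raw counts into AFL-style buckets.
let edges_observer = HitcountsMapObserver::new(ConstMapObserver::<_, MAP_SIZE>::new(
    "shared_mem",
    shmem_map,
));
```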
Another `ForkserverExecutor` feature worth mentioning is shared memory testcases. Normally, the mutated input is passed between the forkserver and the instrumented binary via the `.cur_input` file. You can improve your forkserver fuzzer's performance by passing the input through shared memory instead.
See AFL++'s [_documentation_](https://github.com/AFLplusplus/AFLplusplus/blob/stable/instrumentation/README.persistent_mode.md#5-shared-memory-fuzzing) or the fuzzer example in `forkserver_simple/src/program.c` for reference.
It is very simple: call `ForkserverExecutor::new()` with `use_shmem_testcase` set to true, and the `ForkserverExecutor` sets everything up; your harness can then fetch the input from `__AFL_FUZZ_TESTCASE_BUF`.
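As a sketch (hedged: the constructor's exact argument list varies between LibAFL versions, and the target path and arguments below are placeholders, not names from this commit):

```rust,ignore
// Hypothetical setup for shared-memory testcase passing. With
// `use_shmem_testcase` set to true, the executor maps a shared input
// buffer and exports its ID; the instrumented target then reads inputs
// from __AFL_FUZZ_TESTCASE_BUF instead of the .cur_input file.
let mut executor = ForkserverExecutor::new(
    "./target_program".to_string(),
    &["@@".to_string()],
    true, // use_shmem_testcase
    tuple_list!(edges_observer),
)?;
```

On the target side, AFL++'s persistent-mode macros (see `forkserver_simple/src/program.c`) take care of reading from the buffer.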
## InProcessForkExecutor
Finally, we'll talk about the `InProcessForkExecutor`.
`InProcessForkExecutor` differs from `InProcessExecutor` in just one way: it forks before running the harness.
But why would we want that? Well, under some circumstances you may find your harness pretty unstable, or it may wreak havoc on global state. In that case, you want to fork before each execution; the harness then runs in the child process, so it cannot break the fuzzer's state.
However, we have to take care of the coverage map: it is the child process that runs the harness and writes coverage into the map, so the map must be shared between the parent and the child. We therefore use shared memory again. You should compile your harness with the `pointer_maps` feature (of `libafl_targets`) enabled; this way we get a pointer, `EDGES_MAP_PTR`, that can point to any coverage map.
On your fuzzer side, you can allocate a shared memory region and make the `EDGES_MAP_PTR` point to your shared memory.
```rust,ignore
// Coverage map shared between the parent (fuzzer) and the forked child
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAX_EDGES_NUM).unwrap();
let shmem_map = shmem.map_mut();
// Writing to the static mut pointer requires unsafe
unsafe {
    EDGES_MAP_PTR = shmem_map.as_mut_ptr();
}
```
Again, you can pass this shmem map to your `Observer` and `Feedback` to obtain coverage feedback.
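For instance, borrowing the `new_from_ptr` pattern that appears later in this commit's `LLVMFuzzerRunDriver` diff:

```rust,ignore
// Observe the shared edges map through the raw pointer; this is unsafe
// because the pointer and length must describe a valid, live map.
let edges_observer = unsafe {
    HitcountsMapObserver::new(StdMapObserver::new_from_ptr(
        "edges",
        EDGES_MAP_PTR,
        MAX_EDGES_NUM,
    ))
};
```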

View File

@ -1,44 +0,0 @@
# ForkserverExecutor and InprocessForkExecutor
## Introduction
We have `ForkserverExecutor` and `InprocessForkExecutor` in the libafl crate.
On this page, we'll quickly explain how they work and see how they compare to the normal `InProcessExecutor`.
## InProcessExecutor
Let's begin with the base case; `InProcessExecutor`.
This executor uses [_SanitizerCoverage_](https://clang.llvm.org/docs/SanitizerCoverage.html) as its backend; you can find the related code in `libafl_targets/src/sancov_pcguards`. Here we allocate a map called `EDGES_MAP`, and our compiler wrapper compiles the harness so that it writes its coverage into this map.
## ForkserverExecutor
Next, we'll look at the `ForkserverExecutor`. In this case, it is `afl-cc` (from AFLplusplus/AFLplusplus) that compiles the harness code, and therefore we can't use `EDGES_MAP` anymore. Fortunately, we have [_a way_](https://github.com/AFLplusplus/AFLplusplus/blob/2e15661f184c77ac1fbb6f868c894e946cbb7f17/instrumentation/afl-compiler-rt.o.c#L270) to tell the forkserver which map to record the coverage in.
As you can see from the forkserver example:
```rust,ignore
//Coverage map shared between observer and executor
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAP_SIZE).unwrap();
//let the forkserver know the shmid
shmem.write_to_env("__AFL_SHM_ID").unwrap();
let mut shmem_map = shmem.map_mut();
```
Here we create a shared memory region, `shmem`, and write its ID to the environment variable `__AFL_SHM_ID`. The instrumented binary (run by the forkserver) then finds this shared memory region via that env var and records its coverage there. On the fuzzer side, you can pass this shmem map to your `Observer`, combined with any `Feedback`, to obtain coverage feedback.
Another feature of the `ForkserverExecutor` to mention is the shared memory testcases. In normal cases, the mutated input is passed between the forkserver and the instrumented binary via `.cur_input` file. You can improve your forkserver fuzzer's performance by passing the input with shared memory.
See AFL++'s [_documentation_](https://github.com/AFLplusplus/AFLplusplus/blob/stable/instrumentation/README.persistent_mode.md#5-shared-memory-fuzzing) or the fuzzer example in `forkserver_simple/src/program.c` for reference.
It is very simple: call `ForkserverExecutor::new()` with `use_shmem_testcase` set to true, and the `ForkserverExecutor` sets everything up; your harness can then fetch the input from `__AFL_FUZZ_TESTCASE_BUF`.
## InProcessForkExecutor
Finally, we'll talk about the `InProcessForkExecutor`.
`InProcessForkExecutor` differs from `InProcessExecutor` in just one way: it forks before running the harness.
But why would we want that? Well, under some circumstances you may find your harness pretty unstable, or it may wreak havoc on global state. In that case, you want to fork before each execution; the harness then runs in the child process, so it cannot break the fuzzer's state.
However, we have to take care of the coverage map: it is the child process that runs the harness and writes coverage into the map, so the map must be shared between the parent and the child. We therefore use shared memory again. You should compile your harness with the `pointer_maps` feature (of `libafl_targets`) enabled; this way we get a pointer, `EDGES_MAP_PTR`, that can point to any coverage map.
On your fuzzer side, you can allocate a shared memory region and make the `EDGES_MAP_PTR` point to your shared memory.
```rust,ignore
// Coverage map shared between the parent (fuzzer) and the forked child
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAX_EDGES_NUM).unwrap();
let shmem_map = shmem.map_mut();
// Writing to the static mut pointer requires unsafe
unsafe {
    EDGES_MAP_PTR = shmem_map.as_mut_ptr();
}
```
Again, you can pass this shmem map to your `Observer` and `Feedback` to obtain coverage feedback.

View File

@ -11,7 +11,7 @@ extern crate serde;
use libafl::SerdeAny;
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, SerdeAny)]
#[derive(Debug, Serialize, Deserialize, SerdeAny)]
pub struct MyMetadata {
//...
}

View File

@ -17,4 +17,4 @@ opt-level = 3
[dependencies]
libafl = { path = "../../libafl/" }
clap = { version = "3.0.0-rc.4", features = ["default"] }
clap = { version = "3.0", features = ["default"] }

View File

@ -65,12 +65,12 @@ pub fn main() {
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAP_SIZE).unwrap();
//let the forkserver know the shmid
shmem.write_to_env("__AFL_SHM_ID").unwrap();
let mut shmem_map = shmem.map_mut();
let shmem_map = shmem.map_mut();
// Create an observation channel using the signals map
let edges_observer = HitcountsMapObserver::new(ConstMapObserver::<_, MAP_SIZE>::new(
"shared_mem",
&mut shmem_map,
shmem_map,
));
// Create an observation channel to keep track of the execution time

View File

@ -35,11 +35,12 @@ libafl_frida = { path = "../../libafl_frida", features = ["cmplog"] }
libafl_targets = { path = "../../libafl_targets", features = ["sancov_cmplog"] }
lazy_static = "1.4.0"
libc = "0.2"
libloading = "0.7.0"
libloading = "0.7"
num-traits = "0.2.14"
rangemap = "0.1.10"
structopt = "0.3.25"
rangemap = "0.1"
clap = { version = "3.0", features = ["derive"] }
serde = "1.0"
mimalloc = { version = "*", default-features = false }
backtrace = "0.3"
color-backtrace = "0.5"

View File

@ -1,14 +1,16 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libpng.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{self, StructOpt};
use frida_gum::Gum;
use std::{
env,
net::SocketAddr,
path::{Path, PathBuf},
time::Duration,
};
use structopt::StructOpt;
use libafl::{
bolts::{
@ -51,12 +53,9 @@ use libafl_targets::cmplog::{CmpLogObserver, CMPLOG_MAP};
#[cfg(unix)]
use libafl_frida::asan::errors::{AsanErrorsFeedback, AsanErrorsObserver, ASAN_ERRORS};
fn timeout_from_millis_str(time: &str) -> Result<Duration, Error> {
Ok(Duration::from_millis(time.parse()?))
}
#[derive(Debug, StructOpt)]
#[structopt(
#[clap(
name = "libafl_frida",
version = "0.1.0",
about = "A frida-based binary-only libfuzzer-style fuzzer with llmp-multithreading support",
@ -64,7 +63,7 @@ fn timeout_from_millis_str(time: &str) -> Result<Duration, Error> {
Dongjia Zhang <toka@aflplus.plus>, Andrea Fioraldi <andreafioraldi@gmail.com>, Dominik Maier <domenukk@gmail.com>"
)]
struct Opt {
#[structopt(
#[clap(
short,
long,
parse(try_from_str = Cores::from_cmdline),
@ -73,8 +72,8 @@ struct Opt {
)]
cores: Cores,
#[structopt(
short = "p",
#[clap(
short = 'p',
long,
help = "Choose the broker TCP port, default is 1337",
name = "PORT",
@ -82,16 +81,16 @@ struct Opt {
)]
broker_port: u16,
#[structopt(
#[clap(
parse(try_from_str),
short = "a",
short = 'a',
long,
help = "Specify a remote broker",
name = "REMOTE"
)]
remote_broker_addr: Option<SocketAddr>,
#[structopt(
#[clap(
parse(try_from_str),
short,
long,
@ -100,7 +99,7 @@ struct Opt {
)]
input: Vec<PathBuf>,
#[structopt(
#[clap(
short,
long,
parse(try_from_str),
@ -110,27 +109,7 @@ struct Opt {
)]
output: PathBuf,
#[structopt(
parse(try_from_str = timeout_from_millis_str),
short,
long,
help = "Set the exeucution timeout in milliseconds, default is 1000",
name = "TIMEOUT",
default_value = "1000"
)]
timeout: Duration,
#[structopt(
parse(from_os_str),
short = "x",
long,
help = "Feed the fuzzer with an user-specified list of tokens (often called \"dictionary\"",
name = "TOKENS",
multiple = true
)]
tokens: Vec<PathBuf>,
#[structopt(
#[clap(
long,
help = "The configuration this fuzzer runs with, for multiprocessing",
name = "CONF",
@ -138,19 +117,19 @@ struct Opt {
)]
configuration: String,
#[structopt(
#[clap(
long,
help = "The file to redirect stdout input to (/dev/null if unset)"
)]
stdout_file: Option<String>,
#[structopt(help = "The harness")]
#[clap(help = "The harness")]
harness: String,
#[structopt(help = "The symbol name to look up and hook")]
#[clap(help = "The symbol name to look up and hook")]
symbol: String,
#[structopt(help = "The modules to instrument, separated by colons")]
#[clap(help = "The modules to instrument, separated by colons")]
modules_to_instrument: String,
}
@ -160,7 +139,7 @@ pub fn main() {
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let opt = Opt::from_args();
let opt = Opt::parse();
color_backtrace::install();
println!(

View File

@ -24,8 +24,9 @@ libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_hitcounts", "sancov_cmplog", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
clap = { version = "3.0.0-rc.4", features = ["default"] }
nix = "0.23.0"
clap = { version = "3.0", features = ["default"] }
nix = "0.23"
mimalloc = { version = "*", default-features = false }
[lib]
name = "fuzzbench"

View File

@ -1,4 +1,7 @@
//! A singlethreaded libfuzzer-like fuzzer that can auto-restart.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{App, Arg};
use core::{cell::RefCell, time::Duration};

View File

@ -14,5 +14,5 @@ debug = true
[dependencies]
libafl = { path = "../../libafl/" }
libafl_qemu = { path = "../../libafl_qemu/", features = ["x86_64"] }
clap = { version = "3.0.0-rc.4", features = ["default"] }
nix = "0.23.0"
clap = { version = "3.0", features = ["default"] }
nix = "0.23"

View File

@ -24,7 +24,8 @@ libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_hitcounts", "sancov_cmplog", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
structopt = "0.3.25"
clap = { version = "3.0", features = ["derive"] }
mimalloc = { version = "*", default-features = false }
[lib]
name = "generic_inmemory"

View File

@ -1,9 +1,12 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The `launcher` will spawn new processes for each cpu core.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{self, StructOpt};
use core::time::Duration;
use std::{env, net::SocketAddr, path::PathBuf};
use structopt::StructOpt;
use libafl::{
bolts::{
@ -47,13 +50,13 @@ fn timeout_from_millis_str(time: &str) -> Result<Duration, Error> {
}
#[derive(Debug, StructOpt)]
#[structopt(
#[clap(
name = "generic_inmemory",
about = "A generic libfuzzer-like fuzzer with llmp-multithreading support",
author = "Andrea Fioraldi <andreafioraldi@gmail.com>, Dominik Maier <domenukk@gmail.com>"
)]
struct Opt {
#[structopt(
#[clap(
short,
long,
parse(try_from_str = Cores::from_cmdline),
@ -62,24 +65,24 @@ struct Opt {
)]
cores: Cores,
#[structopt(
short = "p",
#[clap(
short = 'p',
long,
help = "Choose the broker TCP port, default is 1337",
name = "PORT"
)]
broker_port: u16,
#[structopt(
#[clap(
parse(try_from_str),
short = "a",
short = 'a',
long,
help = "Specify a remote broker",
name = "REMOTE"
)]
remote_broker_addr: Option<SocketAddr>,
#[structopt(
#[clap(
parse(try_from_str),
short,
long,
@ -88,7 +91,7 @@ struct Opt {
)]
input: Vec<PathBuf>,
#[structopt(
#[clap(
short,
long,
parse(try_from_str),
@ -98,7 +101,7 @@ struct Opt {
)]
output: PathBuf,
#[structopt(
#[clap(
parse(try_from_str = timeout_from_millis_str),
short,
long,
@ -108,13 +111,13 @@ struct Opt {
)]
timeout: Duration,
#[structopt(
#[clap(
parse(from_os_str),
short = "x",
short = 'x',
long,
help = "Feed the fuzzer with a user-specified list of tokens (often called \"dictionary\")",
name = "TOKENS",
multiple = true
multiple_occurrences = true
)]
tokens: Vec<PathBuf>,
}
@ -128,7 +131,7 @@ pub fn libafl_main() {
let workdir = env::current_dir().unwrap();
let opt = Opt::from_args();
let opt = Opt::parse();
let cores = opt.cores;
let broker_port = opt.broker_port;

View File

@ -22,7 +22,7 @@ num_cpus = "1.0"
[dependencies]
libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["pointer_maps", "sancov_cmplog", "libfuzzer"] }
clap = { version = "3.0.0-beta.4", features = ["default", "yaml"] }
clap = { version = "3.0", features = ["default"] }
[lib]
name = "afl_atheris"

View File

@ -216,8 +216,13 @@ pub fn LLVMFuzzerRunDriver(
let mut run_client = |state: Option<StdState<_, _, _, _, _>>, mut mgr, _core_id| {
// Create an observation channel using the coverage map
let edges = unsafe { slice::from_raw_parts_mut(EDGES_MAP_PTR, MAX_EDGES_NUM) };
let edges_observer = HitcountsMapObserver::new(StdMapObserver::new("edges", edges));
let edges_observer = unsafe {
HitcountsMapObserver::new(StdMapObserver::new_from_ptr(
"edges",
EDGES_MAP_PTR,
MAX_EDGES_NUM,
))
};
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");

View File

@ -19,6 +19,7 @@ libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_edges", "sancov_value_profile", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
mimalloc = { version = "*", default-features = false }
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }

View File

@ -1,5 +1,8 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libmozjpeg.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use std::{env, path::PathBuf};

View File

@ -25,6 +25,7 @@ libafl = { path = "../../libafl/", features = ["default"] }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_hitcounts", "libfuzzer", "sancov_cmplog"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
mimalloc = { version = "*", default-features = false }
[lib]
name = "libfuzzer_libpng"

View File

@ -1,5 +1,8 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libpng.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use core::time::Duration;
use std::{env, path::PathBuf};

View File

@ -20,11 +20,12 @@ which = { version = "4.0.2" }
num_cpus = "1.0"
[dependencies]
libafl = { path = "../../libafl/", features = ["std", "anymap_debug", "derive", "llmp_compression", "introspection"] }
libafl = { path = "../../libafl/", features = ["std", "derive", "llmp_compression", "introspection"] }
libafl_targets = { path = "../../libafl_targets/", features = ["libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
structopt = "0.3.25"
clap = { version = "3.0", features = ["derive"] }
mimalloc = { version = "*", default-features = false }
[lib]
name = "libfuzzer_libpng"

View File

@ -2,10 +2,13 @@
//! The example harness is built for libpng.
//! In this example, you will see the use of the `launcher` feature.
//! The `launcher` will spawn new processes for each cpu core.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{self, StructOpt};
use core::time::Duration;
use std::{env, net::SocketAddr, path::PathBuf};
use structopt::StructOpt;
use libafl::{
bolts::{
@ -44,13 +47,13 @@ fn timeout_from_millis_str(time: &str) -> Result<Duration, Error> {
}
#[derive(Debug, StructOpt)]
#[structopt(
#[clap(
name = "libfuzzer_libpng_ctx",
about = "A clone of libfuzzer using LibAFL for a libpng harness",
author = "Andrea Fioraldi <andreafioraldi@gmail.com>, Dominik Maier <domenukk@gmail.com>"
)]
struct Opt {
#[structopt(
#[clap(
short,
long,
parse(try_from_str = Cores::from_cmdline),
@ -59,8 +62,8 @@ struct Opt {
)]
cores: Cores,
#[structopt(
short = "p",
#[clap(
short = 'p',
long,
help = "Choose the broker TCP port, default is 1337",
name = "PORT",
@ -68,16 +71,16 @@ struct Opt {
)]
broker_port: u16,
#[structopt(
#[clap(
parse(try_from_str),
short = "a",
short = 'a',
long,
help = "Specify a remote broker",
name = "REMOTE"
)]
remote_broker_addr: Option<SocketAddr>,
#[structopt(
#[clap(
parse(try_from_str),
short,
long,
@ -86,7 +89,7 @@ struct Opt {
)]
input: Vec<PathBuf>,
#[structopt(
#[clap(
short,
long,
parse(try_from_str),
@ -96,7 +99,7 @@ struct Opt {
)]
output: PathBuf,
#[structopt(
#[clap(
short,
long,
parse(try_from_str = timeout_from_millis_str),
@ -107,7 +110,7 @@ struct Opt {
timeout: Duration,
/*
// The tokens are hardcoded in this example.
#[structopt(
#[clap(
parse(from_os_str),
short = "x",
long,
@ -124,7 +127,7 @@ pub fn libafl_main() {
// Registry the metadata types used in this fuzzer
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let opt = Opt::from_args();
let opt = Opt::parse();
let broker_port = opt.broker_port;
@ -141,8 +144,9 @@ pub fn libafl_main() {
let mut run_client = |state: Option<StdState<_, _, _, _, _>>, mut restarting_mgr, _core_id| {
// Create an observation channel using the coverage map
let edges = edges_map_from_ptr();
let edges_observer = HitcountsMapObserver::new(StdMapObserver::new("edges", edges));
let edges = unsafe { edges_map_from_ptr() };
let edges_observer =
HitcountsMapObserver::new(StdMapObserver::new_from_ownedref("edges", edges));
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");

View File

@ -20,11 +20,12 @@ which = { version = "4.0.2" }
num_cpus = "1.0"
[dependencies]
libafl = { path = "../../libafl/", features = ["std", "anymap_debug", "derive", "llmp_compression", "introspection"] }
libafl = { path = "../../libafl/", features = ["std", "derive", "llmp_compression", "introspection"] }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_hitcounts", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
structopt = "0.3.25"
clap = { version = "3.0", features = ["derive"] }
mimalloc = { version = "*", default-features = false }
[lib]
name = "libfuzzer_libpng"

View File

@ -2,10 +2,13 @@
//! The example harness is built for libpng.
//! In this example, you will see the use of the `launcher` feature.
//! The `launcher` will spawn new processes for each cpu core.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{self, StructOpt};
use core::time::Duration;
use std::{env, net::SocketAddr, path::PathBuf};
use structopt::StructOpt;
use libafl::{
bolts::{
@ -44,13 +47,13 @@ fn timeout_from_millis_str(time: &str) -> Result<Duration, Error> {
/// The commandline args this fuzzer accepts
#[derive(Debug, StructOpt)]
#[structopt(
#[clap(
name = "libfuzzer_libpng_launcher",
about = "A libfuzzer-like fuzzer for libpng with llmp-multithreading support and a launcher",
author = "Andrea Fioraldi <andreafioraldi@gmail.com>, Dominik Maier <domenukk@gmail.com>"
)]
struct Opt {
#[structopt(
#[clap(
short,
long,
parse(try_from_str = Cores::from_cmdline),
@ -59,8 +62,8 @@ struct Opt {
)]
cores: Cores,
#[structopt(
short = "p",
#[clap(
short = 'p',
long,
help = "Choose the broker TCP port, default is 1337",
name = "PORT",
@ -68,16 +71,16 @@ struct Opt {
)]
broker_port: u16,
#[structopt(
#[clap(
parse(try_from_str),
short = "a",
short = 'a',
long,
help = "Specify a remote broker",
name = "REMOTE"
)]
remote_broker_addr: Option<SocketAddr>,
#[structopt(
#[clap(
parse(try_from_str),
short,
long,
@ -86,7 +89,7 @@ struct Opt {
)]
input: Vec<PathBuf>,
#[structopt(
#[clap(
short,
long,
parse(try_from_str),
@ -96,7 +99,7 @@ struct Opt {
)]
output: PathBuf,
#[structopt(
#[clap(
parse(try_from_str = timeout_from_millis_str),
short,
long,
@ -107,7 +110,7 @@ struct Opt {
timeout: Duration,
/*
/// This fuzzer has hard-coded tokens
#[structopt(
#[clap(
parse(from_os_str),
short = "x",
long,
@ -125,7 +128,7 @@ pub fn libafl_main() {
// Registry the metadata types used in this fuzzer
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let opt = Opt::from_args();
let opt = Opt::parse();
let broker_port = opt.broker_port;
let cores = opt.cores;

View File

@ -24,6 +24,7 @@ libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_hitcounts", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
mimalloc = { version = "*", default-features = false }
[lib]
name = "libfuzzer_libpng"

View File

@ -1,5 +1,8 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libpng.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use std::{env, path::PathBuf};

View File

@ -18,6 +18,7 @@ debug = true
[dependencies]
libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_edges", "sancov_cmplog", "libfuzzer"] }
mimalloc = { version = "*", default-features = false }
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }

View File

@ -1,5 +1,8 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for `stb_image`.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use std::{env, path::PathBuf};

View File

@ -18,7 +18,8 @@ debug = true
[dependencies]
libafl = { path = "../../../libafl/", features = ["concolic_mutation"] }
libafl_targets = { path = "../../../libafl_targets/", features = ["sancov_pcguard_edges", "sancov_cmplog", "libfuzzer"] }
structopt = "0.3.21"
clap = { version = "3.0", features = ["derive"]}
mimalloc = { version = "*", default-features = false }
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }

View File

@ -1,6 +1,10 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for `stb_image`.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use clap::{self, StructOpt};
use std::{env, path::PathBuf};
use libafl::{
@ -22,6 +26,7 @@ use libafl::{
feedbacks::{CrashFeedback, MapFeedbackState, MaxMapFeedback, TimeFeedback},
fuzzer::{Fuzzer, StdFuzzer},
inputs::{BytesInput, HasTargetBytes, Input},
monitors::MultiMonitor,
mutators::{
scheduled::{havoc_mutations, StdScheduledMutator},
token_mutations::I2SRandReplace,
@ -38,7 +43,6 @@ use libafl::{
StdMutationalStage, TracingStage,
},
state::{HasCorpus, StdState},
monitors::MultiMonitor,
Error,
};
@ -47,12 +51,10 @@ use libafl_targets::{
MAX_EDGES_NUM,
};
use structopt::StructOpt;
#[derive(Debug, StructOpt)]
struct Opt {
/// This node should do concolic tracing + solving instead of traditional fuzzing
#[structopt(short, long)]
#[clap(short, long)]
concolic: bool,
}
@ -61,7 +63,7 @@ pub fn main() {
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let opt = Opt::from_args();
let opt = Opt::parse();
println!(
"Workdir: {:?}",

View File

@ -19,6 +19,7 @@ debug = true
libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["sancov_pcguard_edges", "sancov_cmplog", "libfuzzer"] }
libafl_sugar = { path = "../../libafl_sugar/" }
mimalloc = { version = "*", default-features = false }
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }

View File

@ -1,5 +1,8 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for `stb_image`.
use mimalloc::MiMalloc;
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
use std::{env, path::PathBuf};

View File

@ -41,7 +41,7 @@ pub fn main() {
let feedback_state = MapFeedbackState::with_observer(&observer);
// Feedback to rate the interestingness of an input
let feedback = MaxMapFeedback::<_, BytesInput, _, _, _>::new(&feedback_state, &observer);
let feedback = MaxMapFeedback::<BytesInput, _, _, _>::new(&feedback_state, &observer);
// A feedback to choose if an input is a solution or not
let objective = CrashFeedback::new();

View File

@ -61,7 +61,7 @@ impl HasTargetBytes for PacketData {
fn target_bytes(&self) -> OwnedSlice<u8> {
let mut serialized_data = Vec::with_capacity(self.serialized_size());
self.binary_serialize::<_, LittleEndian>(&mut serialized_data);
OwnedSlice::Owned(serialized_data)
OwnedSlice::from(serialized_data)
}
}

View File

@ -13,7 +13,7 @@ use crate::input::PacketData;
use serde::{Deserialize, Serialize};
#[derive(SerdeAny, Serialize, Deserialize)]
#[derive(Debug, SerdeAny, Serialize, Deserialize)]
pub struct PacketLenMetadata {
pub length: u64,
}
@ -29,8 +29,8 @@ impl FavFactor<PacketData> for PacketLenFavFactor {
}
}
pub type PacketLenMinimizerCorpusScheduler<C, CS, R, S> =
MinimizerCorpusScheduler<C, CS, PacketLenFavFactor, PacketData, MapIndexesMetadata, R, S>;
pub type PacketLenMinimizerCorpusScheduler<CS, S> =
MinimizerCorpusScheduler<CS, PacketLenFavFactor, PacketData, MapIndexesMetadata, S>;
#[derive(Serialize, Deserialize, Default, Clone, Debug)]
pub struct PacketLenFeedback {

View File

@ -10,23 +10,13 @@ use libafl::{
use crate::input::PacketData;
use core::marker::PhantomData;
use lain::traits::Mutatable;
pub struct LainMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
pub struct LainMutator {
inner: lain::mutator::Mutator<StdRand>,
phantom: PhantomData<*const (R, S)>,
}
impl<R, S> Mutator<PacketData, S> for LainMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<PacketData, S> for LainMutator {
fn mutate(
&mut self,
state: &mut S,
@ -40,35 +30,22 @@ where
}
}
impl<R, S> Named for LainMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for LainMutator {
fn name(&self) -> &str {
"LainMutator"
}
}
impl<R, S> LainMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl LainMutator {
#[must_use]
pub fn new() -> Self {
Self {
inner: lain::mutator::Mutator::new(StdRand::with_seed(0)),
phantom: PhantomData,
}
}
}
impl<R, S> Default for LainMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Default for LainMutator {
#[must_use]
fn default() -> Self {
Self::new()

View File

@ -12,9 +12,8 @@ edition = "2021"
build = "build.rs"
[features]
default = ["std", "anymap_debug", "derive", "llmp_compression", "rand_trait", "fork"]
default = ["std", "derive", "llmp_compression", "rand_trait", "fork"]
std = ["serde_json", "serde_json/std", "hostname", "core_affinity", "nix", "serde/std", "bincode", "wait-timeout", "regex", "build_id", "uuid"] # print, env, launcher ... support
anymap_debug = ["serde_json"] # uses serde_json to Debug the anymap trait. Disable for smaller footprint.
derive = ["libafl_derive"] # provide derive(SerdeAny) macro.
fork = [] # uses the fork() syscall to spawn children, instead of launching a new command, if supported by the OS (has no effect on Windows, no_std).
rand_trait = ["rand_core"] # If set, libafl's rand implementations will implement `rand::Rng`
@ -39,7 +38,7 @@ criterion = "0.3" # Benchmarking
ahash = "0.7" # another hash
fxhash = "0.2.1" # yet another hash
xxhash-rust = { version = "0.8.2", features = ["xxh3"] } # xxh3 hashing for rust
serde_json = "1.0.60"
serde_json = "1.0"
num_cpus = "1.0" # cpu count, for llmp example
serial_test = "0.5"
@ -54,18 +53,18 @@ postcard = { version = "0.7", features = ["alloc"] } # no_std compatible serde s
bincode = {version = "1.3", optional = true }
static_assertions = "1.1.0"
ctor = "0.1.20"
num_enum = { version = "0.5.1", default-features = false }
num_enum = { version = "0.5.4", default-features = false }
typed-builder = "0.9.1" # Implement the builder pattern at compiletime
ahash = { version = "0.7", default-features=false, features=["compile-time-rng"] } # The hash function already used in hashbrown
intervaltree = { version = "0.2.7", default-features = false, features = ["serde"] }
libafl_derive = { version = "0.7.0", optional = true, path = "../libafl_derive" }
serde_json = { version = "1.0", optional = true, default-features = false, features = ["alloc"] } # an easy way to debug print SerdeAnyMap
miniz_oxide = { version = "0.4.4", optional = true}
serde_json = { version = "1.0", optional = true, default-features = false, features = ["alloc"] }
miniz_oxide = { version = "0.5", optional = true}
core_affinity = { version = "0.5", git = "https://github.com/s1341/core_affinity_rs", rev = "6648a7a", optional = true }
hostname = { version = "^0.3", optional = true } # Is there really no gethostname in the stdlib?
rand_core = { version = "0.5.1", optional = true } # This dependency allows us to export our RomuRand as rand::Rng.
nix = { version = "0.23.0", optional = true }
rand_core = { version = "0.5.1", optional = true } # This dependency allows us to export our RomuRand as rand::Rng. We cannot update to the latest version because it breaks compatibility with microsoft lain.
nix = { version = "0.23", optional = true }
regex = { version = "1", optional = true }
build_id = { version = "0.2.1", git = "https://github.com/domenukk/build_id", rev = "6a61943", optional = true }
uuid = { version = "0.8.2", optional = true, features = ["serde", "v4"] }
@ -86,10 +85,10 @@ regex = "1.4.5"
backtrace = "0.3"
[target.'cfg(windows)'.dependencies]
windows = { version = "0.28.0", features = ["std", "Win32_Foundation", "Win32_System_Threading", "Win32_System_Diagnostics_Debug", "Win32_System_Kernel", "Win32_System_Memory", "Win32_Security"] }
windows = { version = "0.29.0", features = ["std", "Win32_Foundation", "Win32_System_Threading", "Win32_System_Diagnostics_Debug", "Win32_System_Kernel", "Win32_System_Memory", "Win32_Security"] }
[target.'cfg(windows)'.build-dependencies]
windows = "0.28.0"
windows = "0.29.0"
[[bench]]
name = "rand_speeds"

View File

@ -24,6 +24,7 @@ use crate::{
Error,
};
use core::fmt::{self, Debug, Formatter};
#[cfg(feature = "std")]
use core::marker::PhantomData;
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
@ -44,7 +45,7 @@ const _AFL_LAUNCHER_CLIENT: &str = "AFL_LAUNCHER_CLIENT";
/// Provides a Launcher, which can be used to launch a fuzzing run on a specified list of cores
#[cfg(feature = "std")]
#[derive(TypedBuilder)]
#[allow(clippy::type_complexity)]
#[allow(clippy::type_complexity, missing_debug_implementations)]
pub struct Launcher<'a, CF, I, MT, OT, S, SP>
where
CF: FnOnce(Option<S>, LlmpRestartingEventManager<I, OT, S, SP>, usize) -> Result<(), Error>,
@ -85,12 +86,33 @@ where
phantom_data: PhantomData<(&'a I, &'a OT, &'a S, &'a SP)>,
}
impl<'a, CF, I, MT, OT, S, SP> Debug for Launcher<'_, CF, I, MT, OT, S, SP>
where
CF: FnOnce(Option<S>, LlmpRestartingEventManager<I, OT, S, SP>, usize) -> Result<(), Error>,
I: Input,
OT: ObserversTuple<I, S> + DeserializeOwned,
MT: Monitor + Clone,
SP: ShMemProvider + 'static,
S: DeserializeOwned,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("Launcher")
.field("configuration", &self.configuration)
.field("broker_port", &self.broker_port)
.field("core", &self.cores)
.field("spawn_broker", &self.spawn_broker)
.field("remote_broker_addr", &self.remote_broker_addr)
.field("stdout_file", &self.stdout_file)
.finish_non_exhaustive()
}
}
#[cfg(feature = "std")]
impl<'a, CF, I, MT, OT, S, SP> Launcher<'a, CF, I, MT, OT, S, SP>
where
CF: FnOnce(Option<S>, LlmpRestartingEventManager<I, OT, S, SP>, usize) -> Result<(), Error>,
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
MT: Monitor + Clone,
SP: ShMemProvider + 'static,
S: DeserializeOwned,
@ -112,11 +134,9 @@ where
println!("spawning on cores: {:?}", self.cores);
#[cfg(feature = "std")]
let stdout_file = if let Some(filename) = self.stdout_file {
Some(File::create(filename).unwrap())
} else {
None
};
let stdout_file = self
.stdout_file
.map(|filename| File::create(filename).unwrap());
// Spawn clients
let mut index = 0_u64;
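
The `stdout_file` change above replaces an explicit `if let`/`else` with `Option::map`. The two forms are equivalent, as this standalone sketch (using a string length instead of `File::create`) shows:

```rust
fn main() {
    let stdout_file: Option<&str> = Some("fuzzer.log");

    // Before: manually match on the Option and rewrap the result.
    let verbose = if let Some(name) = stdout_file {
        Some(name.len())
    } else {
        None
    };

    // After: Option::map applies the closure only in the Some case,
    // and passes None through unchanged.
    let concise = stdout_file.map(|name| name.len());

    assert_eq!(verbose, concise);
}
```
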

View File

@ -63,9 +63,10 @@ use alloc::{string::String, vec::Vec};
use core::{
cmp::max,
fmt::Debug,
hint,
mem::size_of,
ptr, slice,
sync::atomic::{compiler_fence, Ordering},
sync::atomic::{fence, AtomicU16, AtomicU64, Ordering},
time::Duration,
};
use serde::{Deserialize, Serialize};
@ -192,7 +193,7 @@ pub enum TcpRequest {
}
impl TryFrom<&Vec<u8>> for TcpRequest {
type Error = crate::Error;
type Error = Error;
fn try_from(bytes: &Vec<u8>) -> Result<Self, Error> {
Ok(postcard::from_bytes(bytes)?)
@ -213,7 +214,7 @@ pub struct TcpRemoteNewMessage {
}
impl TryFrom<&Vec<u8>> for TcpRemoteNewMessage {
type Error = crate::Error;
type Error = Error;
fn try_from(bytes: &Vec<u8>) -> Result<Self, Error> {
Ok(postcard::from_bytes(bytes)?)
@ -249,7 +250,7 @@ pub enum TcpResponse {
}
impl TryFrom<&Vec<u8>> for TcpResponse {
type Error = crate::Error;
type Error = Error;
fn try_from(bytes: &Vec<u8>) -> Result<Self, Error> {
Ok(postcard::from_bytes(bytes)?)
@ -258,6 +259,7 @@ impl TryFrom<&Vec<u8>> for TcpResponse {
/// Abstraction for listeners
#[cfg(feature = "std")]
#[derive(Debug)]
pub enum Listener {
/// Listener listening on `tcp`.
Tcp(TcpListener),
@ -265,6 +267,7 @@ pub enum Listener {
/// A listener stream abstraction
#[cfg(feature = "std")]
#[derive(Debug)]
pub enum ListenerStream {
/// Listener listening on `tcp`.
Tcp(TcpStream, SocketAddr),
@ -279,7 +282,7 @@ impl Listener {
Listener::Tcp(inner) => match inner.accept() {
Ok(res) => ListenerStream::Tcp(res.0, res.1),
Err(err) => {
dbg!("Ignoring failed accept", err);
println!("Ignoring failed accept: {:?}", err);
ListenerStream::Empty()
}
},
@ -389,11 +392,11 @@ fn recv_tcp_msg(stream: &mut TcpStream) -> Result<Vec<u8>, Error> {
stream.read_timeout().unwrap_or(None)
);
let mut size_bytes = [0u8; 4];
let mut size_bytes = [0_u8; 4];
stream.read_exact(&mut size_bytes)?;
let size = u32::from_be_bytes(size_bytes);
let mut bytes = vec![];
bytes.resize(size as usize, 0u8);
bytes.resize(size as usize, 0_u8);
#[cfg(feature = "llmp_debug")]
println!("LLMP TCP: Receiving payload of size {}", size);
@ -420,11 +423,11 @@ fn new_map_size(max_alloc: usize) -> usize {
/// `llmp_page->messages`
unsafe fn _llmp_page_init<SHM: ShMem>(shmem: &mut SHM, sender: u32, allow_reinit: bool) {
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("_llmp_page_init: shmem {}", &shmem);
println!("_llmp_page_init: shmem {:?}", &shmem);
let map_size = shmem.len();
let page = shmem2page_mut(shmem);
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("_llmp_page_init: page {}", *page);
println!("_llmp_page_init: page {:?}", &(*page));
if !allow_reinit {
assert!(
@ -437,15 +440,15 @@ unsafe fn _llmp_page_init<SHM: ShMem>(shmem: &mut SHM, sender: u32, allow_reinit
(*page).magic = PAGE_INITIALIZED_MAGIC;
(*page).sender = sender;
ptr::write_volatile(ptr::addr_of_mut!((*page).current_msg_id), 0);
(*page).current_msg_id.store(0, Ordering::Relaxed);
(*page).max_alloc_size = 0;
// Don't forget to subtract our own header size
(*page).size_total = map_size - LLMP_PAGE_HEADER_LEN;
(*page).size_used = 0;
(*(*page).messages.as_mut_ptr()).message_id = 0;
(*(*page).messages.as_mut_ptr()).tag = LLMP_TAG_UNSET;
ptr::write_volatile(ptr::addr_of_mut!((*page).safe_to_unmap), 0);
ptr::write_volatile(ptr::addr_of_mut!((*page).sender_dead), 0);
(*page).safe_to_unmap.store(0, Ordering::Relaxed);
(*page).sender_dead.store(0, Ordering::Relaxed);
assert!((*page).size_total != 0);
}
@ -556,8 +559,7 @@ impl LlmpMsg {
let map_size = map.shmem.map().len();
let buf_ptr = self.buf.as_ptr();
if buf_ptr > (map.page_mut() as *const u8).add(size_of::<LlmpPage>())
&& buf_ptr
<= (map.page_mut() as *const u8).add(map_size - size_of::<LlmpMsg>() as usize)
&& buf_ptr <= (map.page_mut() as *const u8).add(map_size - size_of::<LlmpMsg>())
{
// The message header is in the page. Continue with checking the body.
let len = self.buf_len_padded as usize + size_of::<LlmpMsg>();
@ -597,7 +599,7 @@ where
match tcp_bind(port) {
Ok(listener) => {
// We got the port. We are the broker! :)
dbg!("We're the broker");
println!("We're the broker");
let mut broker = LlmpBroker::new(shmem_provider)?;
let _listener_thread = broker.launch_listener(Listener::Tcp(listener))?;
@ -669,7 +671,7 @@ where
}
/// Contents of the shared mem pages, used by llmp internally
#[derive(Copy, Clone, Debug)]
#[derive(Debug)]
#[repr(C)]
pub struct LlmpPage {
/// to check if this page got initialized properly
@ -679,11 +681,11 @@ pub struct LlmpPage {
/// Set to != 1 by the receiver, once it got mapped
/// It's not safe for the sender to unmap this page before
/// (The os may have tidied up the memory when the receiver starts to map)
pub safe_to_unmap: u16,
pub safe_to_unmap: AtomicU16,
/// Not used at the moment (would indicate that the sender is no longer there)
pub sender_dead: u16,
pub sender_dead: AtomicU16,
/// The current message ID
pub current_msg_id: u64,
pub current_msg_id: AtomicU64,
/// How much space is available on this page in bytes
pub size_total: usize,
/// How much space is used on this page in bytes
@ -815,6 +817,7 @@ where
if self.safe_to_unmap() {
return;
}
hint::spin_loop();
// We log that we're looping -> see when we're blocking.
#[cfg(feature = "std")]
{
@ -830,9 +833,11 @@ where
pub fn safe_to_unmap(&self) -> bool {
let current_out_map = self.out_maps.last().unwrap();
unsafe {
compiler_fence(Ordering::SeqCst);
// println!("Reading safe_to_unmap from {:?}", current_out_map.page() as *const _);
ptr::read_volatile(ptr::addr_of!((*current_out_map.page()).safe_to_unmap)) != 0
(*current_out_map.page())
.safe_to_unmap
.load(Ordering::Relaxed)
!= 0
}
}
@ -840,8 +845,9 @@ where
/// # Safety
/// If this method is called, the page may be unmapped before it is read by any receiver.
pub unsafe fn mark_safe_to_unmap(&mut self) {
// No need to do this volatile, as we should be the same thread in this scenario.
(*self.out_maps.last_mut().unwrap().page_mut()).safe_to_unmap = 1;
(*self.out_maps.last_mut().unwrap().page_mut())
.safe_to_unmap
.store(1, Ordering::Relaxed);
}
/// Reattach to a vacant `out_map`.
@ -876,7 +882,7 @@ where
// Exclude the current page by splitting off the last element for this iter
let mut unmap_until_excl = 0;
for map in self.out_maps.split_last_mut().unwrap().1 {
if (*map.page_mut()).safe_to_unmap == 0 {
if (*map.page()).safe_to_unmap.load(Ordering::Relaxed) == 0 {
// The broker didn't read this page yet, no more pages to unmap.
break;
}
@ -959,7 +965,7 @@ where
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!(
page,
*page,
&(*page),
(*page).size_used,
buf_len_padded,
EOP_MSG_SIZE,
@ -983,7 +989,7 @@ where
* with 0... */
(*ret).message_id = if last_msg.is_null() {
1
} else if (*page).current_msg_id == (*last_msg).message_id {
} else if (*page).current_msg_id.load(Ordering::Relaxed) == (*last_msg).message_id {
(*last_msg).message_id + 1
} else {
/* Oops, wrong usage! */
@ -1033,10 +1039,14 @@ where
msg
)));
}
(*msg).message_id = (*page).current_msg_id + 1;
compiler_fence(Ordering::SeqCst);
ptr::write_volatile(ptr::addr_of_mut!((*page).current_msg_id), (*msg).message_id);
compiler_fence(Ordering::SeqCst);
(*msg).message_id = (*page).current_msg_id.load(Ordering::Relaxed) + 1;
// Make sure everything has been written, then commit the message to the page
(*page)
.current_msg_id
.store((*msg).message_id, Ordering::Release);
self.last_msg_sent = msg;
self.has_unsent_message = false;
Ok(())
@ -1075,9 +1085,9 @@ where
#[cfg(all(feature = "llmp_debug", feature = "std"))]
println!("got new map at: {:?}", new_map);
ptr::write_volatile(
ptr::addr_of_mut!((*new_map).current_msg_id),
(*old_map).current_msg_id,
(*new_map).current_msg_id.store(
(*old_map).current_msg_id.load(Ordering::Relaxed),
Ordering::Relaxed,
);
#[cfg(all(feature = "llmp_debug", feature = "std"))]
@ -1185,7 +1195,7 @@ where
// Doing this step by step will catch underflows in debug builds :)
(*page).size_used -= old_len_padded as usize;
(*page).size_used += buf_len_padded as usize;
(*page).size_used += buf_len_padded;
(*_llmp_next_msg_ptr(msg)).tag = LLMP_TAG_UNSET;
@ -1282,6 +1292,8 @@ where
pub shmem_provider: SP,
/// current page. After EOP, this gets replaced with the new one
pub current_recv_map: LlmpSharedMap<SP::Mem>,
/// Caches the highest msg id we've seen so far
highest_msg_id: u64,
}
/// Receiving end of an llmp channel
@ -1327,6 +1339,7 @@ where
current_recv_map,
last_msg_recvd,
shmem_provider,
highest_msg_id: 0,
})
}
@ -1335,10 +1348,19 @@ where
#[inline(never)]
unsafe fn recv(&mut self) -> Result<Option<*mut LlmpMsg>, Error> {
/* DBG("recv %p %p\n", page, last_msg); */
compiler_fence(Ordering::SeqCst);
let mut page = self.current_recv_map.page_mut();
let last_msg = self.last_msg_recvd;
let current_msg_id = ptr::read_volatile(ptr::addr_of!((*page).current_msg_id));
let (current_msg_id, loaded) =
if !last_msg.is_null() && self.highest_msg_id > (*last_msg).message_id {
// read the msg_id from cache
(self.highest_msg_id, false)
} else {
// read the msg_id from shared map
let current_msg_id = (*page).current_msg_id.load(Ordering::Relaxed);
self.highest_msg_id = current_msg_id;
(current_msg_id, true)
};
// Read the message from the page
let ret = if current_msg_id == 0 {
@ -1346,11 +1368,16 @@ where
None
} else if last_msg.is_null() {
/* We never read a message from this queue. Return first. */
fence(Ordering::Acquire);
Some((*page).messages.as_mut_ptr())
} else if (*last_msg).message_id == current_msg_id {
/* Oops! No new message! */
None
} else {
if loaded {
// we read a higher id from this page, fetch.
fence(Ordering::Acquire);
}
// We don't know how big the msg wants to be, assert at least the header has space.
Some(llmp_next_msg_ptr_checked(
&mut self.current_recv_map,
@ -1359,14 +1386,18 @@ where
)?)
};
// Let's see what we go here.
// Let's see what we got.
if let Some(msg) = ret {
if !(*msg).in_map(&mut self.current_recv_map) {
return Err(Error::IllegalState("Unexpected message in map (out of map bounds) - buggy client or tampered shared map detected!".into()));
}
// Handle special, LLMP internal, messages.
match (*msg).tag {
LLMP_TAG_UNSET => panic!("BUG: Read unallocated msg"),
LLMP_TAG_UNSET => panic!(
"BUG: Read unallocated msg (tag was {:x} - msg header: {:?}",
LLMP_TAG_UNSET,
&(*msg)
),
LLMP_TAG_EXITING => {
// The other side is done.
assert_eq!((*msg).buf_len, 0);
@ -1393,9 +1424,10 @@ where
// Set last msg we received to null (as the map may no longer exist)
self.last_msg_recvd = ptr::null();
self.highest_msg_id = 0;
// Mark the old page safe to unmap, in case we didn't do so earlier.
ptr::write_volatile(ptr::addr_of_mut!((*page).safe_to_unmap), 1);
(*page).safe_to_unmap.store(1, Ordering::Relaxed);
// Map the new page. The old one should be unmapped by Drop
self.current_recv_map =
@ -1405,7 +1437,7 @@ where
)?);
page = self.current_recv_map.page_mut();
// Mark the new page safe to unmap as well (it's mapped by us, the broker, now)
ptr::write_volatile(ptr::addr_of_mut!((*page).safe_to_unmap), 1);
(*page).safe_to_unmap.store(1, Ordering::Relaxed);
#[cfg(all(feature = "llmp_debug", feature = "std"))]
println!(
@ -1442,13 +1474,13 @@ where
current_msg_id = (*last_msg).message_id;
}
loop {
compiler_fence(Ordering::SeqCst);
if ptr::read_volatile(ptr::addr_of!((*page).current_msg_id)) != current_msg_id {
if (*page).current_msg_id.load(Ordering::Relaxed) != current_msg_id {
return match self.recv()? {
Some(msg) => Ok(msg),
None => panic!("BUG: blocking llmp message should never be NULL"),
};
}
hint::spin_loop();
}
}
@ -1561,7 +1593,7 @@ where
//let bt = Backtrace::new();
//#[cfg(not(debug_assertions))]
//let bt = "<n/a (release)>";
dbg!(
println!(
"LLMP_DEBUG: Using existing map {} with size {}",
existing_map.id(),
existing_map.len(),
@ -1579,7 +1611,7 @@ where
&ret.shmem
);
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("PAGE: {}", *ret.page());
println!("PAGE: {:?}", &(*ret.page()));
}
ret
}
@ -1588,7 +1620,7 @@ where
/// This indicates that the page may safely be unmapped by the sender.
pub fn mark_safe_to_unmap(&mut self) {
unsafe {
ptr::write_volatile(ptr::addr_of_mut!((*self.page_mut()).safe_to_unmap), 1);
(*self.page_mut()).safe_to_unmap.store(1, Ordering::Relaxed);
}
}
@ -1691,6 +1723,7 @@ where
/// A signal handler for the [`LlmpBroker`].
#[cfg(unix)]
#[derive(Debug, Clone)]
pub struct LlmpBrokerSignalHandler {
shutting_down: bool,
}
@ -1765,6 +1798,7 @@ where
current_recv_map: client_page,
last_msg_recvd: ptr::null_mut(),
shmem_provider: self.shmem_provider.clone(),
highest_msg_id: 0,
});
}
@ -1856,7 +1890,6 @@ where
where
F: FnMut(ClientId, Tag, Flags, &[u8]) -> Result<LlmpMsgHookResult, Error>,
{
compiler_fence(Ordering::SeqCst);
for i in 0..self.llmp_clients.len() {
unsafe {
self.handle_new_msgs(i as u32, on_new_msg)?;
@ -1896,7 +1929,6 @@ where
}
while !self.is_shutting_down() {
compiler_fence(Ordering::SeqCst);
self.once(on_new_msg)
.expect("An error occurred when brokering. Exiting.");
@ -2008,7 +2040,7 @@ where
.expect("Failed to map local page in broker 2 broker thread!");
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("B2B: Starting proxy loop :)");
println!("B2B: Starting proxy loop :)");
loop {
// first, forward all data we have.
@ -2017,13 +2049,16 @@ where
.expect("Error reading from local page!")
{
if client_id == b2b_client_id {
dbg!("Ignored message we probably sent earlier (same id)", tag);
println!(
"Ignored message we probably sent earlier (same id), TAG: {:x}",
tag
);
continue;
}
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!(
"Forwarding message via broker2broker connection",
println!(
"Forwarding message ({} bytes) via broker2broker connection",
payload.len()
);
// We got a new message! Forward...
@ -2051,8 +2086,8 @@ where
);
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!(
"Forwarding incoming message from broker2broker connection",
println!(
"Forwarding incoming message ({} bytes) from broker2broker connection",
msg.payload.len()
);
@ -2063,7 +2098,7 @@ where
.expect("B2B: Error forwarding message. Exiting.");
} else {
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("Received no input, timeout or closed. Looping back up :)");
println!("Received no input, timeout or closed. Looping back up :)");
}
}
});
@ -2073,7 +2108,7 @@ where
});
#[cfg(all(feature = "llmp_debug", feature = "std"))]
dbg!("B2B: returning from loop. Success: {}", ret.is_ok());
println!("B2B: returning from loop. Success: {}", ret.is_ok());
ret
}
@ -2184,14 +2219,18 @@ where
loop {
match listener.accept() {
ListenerStream::Tcp(mut stream, addr) => {
dbg!("New connection", addr, stream.peer_addr().unwrap());
eprintln!(
"New connection: {:?}/{:?}",
addr,
stream.peer_addr().unwrap()
);
// Send initial information, without anyone asking.
// This makes it a tiny bit easier to map the broker map for new Clients.
match send_tcp_msg(&mut stream, &broker_hello) {
Ok(()) => {}
Err(e) => {
dbg!("Error sending initial hello: {:?}", e);
eprintln!("Error sending initial hello: {:?}", e);
continue;
}
}
@ -2199,14 +2238,14 @@ where
let buf = match recv_tcp_msg(&mut stream) {
Ok(buf) => buf,
Err(e) => {
dbg!("Error receiving from tcp", e);
eprintln!("Error receiving from tcp: {:?}", e);
continue;
}
};
let req = match (&buf).try_into() {
Ok(req) => req,
Err(e) => {
dbg!("Could not deserialize tcp message", e);
eprintln!("Could not deserialize tcp message: {:?}", e);
continue;
}
};
@ -2288,6 +2327,7 @@ where
current_recv_map: new_page,
last_msg_recvd: ptr::null_mut(),
shmem_provider: self.shmem_provider.clone(),
highest_msg_id: 0,
});
}
Err(e) => {
@ -2467,6 +2507,7 @@ where
current_recv_map: initial_broker_map,
last_msg_recvd: ptr::null_mut(),
shmem_provider,
highest_msg_id: 0,
},
})
}
@ -2574,7 +2615,7 @@ where
match TcpStream::connect((_LLMP_CONNECT_ADDR, port)) {
Ok(stream) => break stream,
Err(_) => {
dbg!("Connection Refused.. Retrying");
println!("Connection Refused.. Retrying");
}
}
}
@ -2668,7 +2709,7 @@ mod tests {
.unwrap();
let tag: Tag = 0x1337;
let arr: [u8; 1] = [1u8];
let arr: [u8; 1] = [1_u8];
// Send stuff
client.send_buf(tag, &arr).unwrap();
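
Most of the llmp changes above swap `ptr::write_volatile` plus `compiler_fence` for real atomics: the sender publishes a message by storing `current_msg_id` with `Release` ordering, and the receiver pairs a `Relaxed` load with an `Acquire` fence before touching the payload. The underlying publish/consume pattern, simplified to a single value rather than the actual llmp page layout, looks like this:

```rust
use std::sync::atomic::{fence, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let msg_id = Arc::new(AtomicU64::new(0));
    let payload = Arc::new(AtomicU64::new(0));

    let (id_tx, data_tx) = (Arc::clone(&msg_id), Arc::clone(&payload));
    let sender = thread::spawn(move || {
        // Write the payload first...
        data_tx.store(42, Ordering::Relaxed);
        // ...then commit it: the Release store orders the payload write
        // before the message id becomes visible.
        id_tx.store(1, Ordering::Release);
    });

    // Spin until a new message id shows up (cheap Relaxed polling).
    while msg_id.load(Ordering::Relaxed) == 0 {
        std::hint::spin_loop();
    }
    // The Acquire fence pairs with the Release store, guaranteeing the
    // payload written before the commit is now visible to this thread.
    fence(Ordering::Acquire);
    assert_eq!(payload.load(Ordering::Relaxed), 42);

    sender.join().unwrap();
}
```

The `highest_msg_id` cache added to the receiver builds on the same idea: once an id has been observed with proper ordering, later comparisons against it can skip the shared-map load entirely.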

View File

@ -108,14 +108,14 @@ pub fn dump_registers<W: Write>(
writer,
"x{:02}: 0x{:016x} ",
reg, mcontext.__ss.__x[reg as usize]
);
)?;
if reg % 4 == 3 {
writeln!(writer);
writeln!(writer)?;
}
}
write!(writer, "fp: 0x{:016x} ", mcontext.__ss.__fp);
write!(writer, "lr: 0x{:016x} ", mcontext.__ss.__lr);
write!(writer, "pc: 0x{:016x} ", mcontext.__ss.__pc);
write!(writer, "fp: 0x{:016x} ", mcontext.__ss.__fp)?;
write!(writer, "lr: 0x{:016x} ", mcontext.__ss.__lr)?;
write!(writer, "pc: 0x{:016x} ", mcontext.__ss.__pc)?;
Ok(())
}
@ -269,6 +269,7 @@ fn write_crash<W: Write>(
/// Generates a mini-BSOD given a signal and context.
#[cfg(unix)]
#[allow(clippy::non_ascii_literal)]
pub fn generate_minibsod<W: Write>(
writer: &mut BufWriter<W>,
signal: Signal,

View File

@ -41,8 +41,11 @@ pub trait HasLen {
}
}
/// Has a ref count
pub trait HasRefCnt {
/// The ref count
fn refcnt(&self) -> isize;
/// The ref count, mutable
fn refcnt_mut(&mut self) -> &mut isize;
}

View File

@ -24,7 +24,9 @@ pub mod pipes;
#[cfg(all(unix, feature = "std"))]
use std::ffi::CString;
// Allow a few extra features we need for the whole module
#[cfg(all(windows, feature = "std"))]
#[allow(missing_docs, overflowing_literals)]
pub mod windows_exceptions;
#[cfg(unix)]
@ -32,7 +34,9 @@ use libc::pid_t;
/// Child Process Handle
#[cfg(unix)]
#[derive(Debug)]
pub struct ChildHandle {
/// The process id
pub pid: pid_t,
}
@ -51,6 +55,7 @@ impl ChildHandle {
/// The `ForkResult` (result of a fork)
#[cfg(unix)]
#[derive(Debug)]
pub enum ForkResult {
/// The fork finished, we are the parent process.
/// The child has the handle `ChildHandle`.
@ -103,6 +108,7 @@ pub fn dup2(fd: i32, device: i32) -> Result<(), Error> {
/// Core ID
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct CoreId {
/// The id of this core
pub id: usize,
}

View File

@ -11,15 +11,19 @@ use std::{
#[cfg(not(feature = "std"))]
type RawFd = i32;
/// A unix pipe wrapper for `LibAFL`
#[cfg(feature = "std")]
#[derive(Debug, Clone)]
pub struct Pipe {
/// The read end of the pipe
read_end: Option<RawFd>,
/// The write end of the pipe
write_end: Option<RawFd>,
}
#[cfg(feature = "std")]
impl Pipe {
/// Create a new `Unix` pipe
pub fn new() -> Result<Self, Error> {
let (read_end, write_end) = pipe()?;
Ok(Self {
@ -28,6 +32,7 @@ impl Pipe {
})
}
/// Close the read end of a pipe
pub fn close_read_end(&mut self) {
if let Some(read_end) = self.read_end {
let _ = close(read_end);
@ -35,6 +40,7 @@ impl Pipe {
}
}
/// Close the write end of a pipe
pub fn close_write_end(&mut self) {
if let Some(write_end) = self.write_end {
let _ = close(write_end);
@ -42,11 +48,13 @@ impl Pipe {
}
}
/// The read end
#[must_use]
pub fn read_end(&self) -> Option<RawFd> {
self.read_end
}
/// The write end
#[must_use]
pub fn write_end(&self) -> Option<RawFd> {
self.write_end

View File

@ -118,7 +118,7 @@ where
.write_all(&message)
.expect("Failed to send message");
let mut shm_slice = [0u8; 20];
let mut shm_slice = [0_u8; 20];
let mut fd_buf = [-1; 1];
self.stream
.recv_fds(&mut shm_slice, &mut fd_buf)
@ -172,7 +172,7 @@ where
res.id = id;
Ok(res)
}
fn new_map(&mut self, map_size: usize) -> Result<Self::Mem, crate::Error> {
fn new_map(&mut self, map_size: usize) -> Result<Self::Mem, Error> {
let (server_fd, client_fd) = self.send_receive(ServedShMemRequest::NewMap(map_size))?;
Ok(ServedShMem {
@ -302,12 +302,18 @@ pub enum ShMemService<SP>
where
SP: ShMemProvider,
{
/// A started service
Started {
/// The background thread
bg_thread: Arc<Mutex<ShMemServiceThread>>,
/// The phantom data
phantom: PhantomData<SP>,
},
/// A failed service
Failed {
/// The error message
err_msg: String,
/// The phantom data
phantom: PhantomData<SP>,
},
}
@ -541,7 +547,7 @@ where
let client = self.clients.get_mut(&client_id).unwrap();
let maps = client.maps.entry(map_id).or_default();
if maps.is_empty() {
Ok(ServedShMemResponse::RefCount(0u32))
Ok(ServedShMemResponse::RefCount(0_u32))
} else {
Ok(ServedShMemResponse::RefCount(
Rc::strong_count(&maps.pop().unwrap()) as u32,
@ -563,11 +569,11 @@ where
let client = self.clients.get_mut(&client_id).unwrap();
// Always receive one be u32 of size, then the command.
let mut size_bytes = [0u8; 4];
let mut size_bytes = [0_u8; 4];
client.stream.read_exact(&mut size_bytes)?;
let size = u32::from_be_bytes(size_bytes);
let mut bytes = vec![];
bytes.resize(size as usize, 0u8);
bytes.resize(size as usize, 0_u8);
client
.stream
.read_exact(&mut bytes)

View File

@ -74,7 +74,7 @@ extern "C" {
}
/// All signals on this system, as `enum`.
#[derive(IntoPrimitive, TryFromPrimitive, Clone, Copy)]
#[derive(Debug, IntoPrimitive, TryFromPrimitive, Clone, Copy)]
#[repr(i32)]
pub enum Signal {
/// `SIGABRT` signal id

View File

@ -36,55 +36,55 @@ pub const SIGABRT: i32 = 22;
pub const SIGABRT2: i32 = 22;
// From https://github.com/wine-mirror/wine/blob/master/include/winnt.h#L611
pub const STATUS_WAIT_0: u32 = 0x00000000;
pub const STATUS_ABANDONED_WAIT_0: u32 = 0x00000080;
pub const STATUS_USER_APC: u32 = 0x000000C0;
pub const STATUS_TIMEOUT: u32 = 0x00000102;
pub const STATUS_PENDING: u32 = 0x00000103;
pub const STATUS_SEGMENT_NOTIFICATION: u32 = 0x40000005;
pub const STATUS_FATAL_APP_EXIT: u32 = 0x40000015;
pub const STATUS_GUARD_PAGE_VIOLATION: u32 = 0x80000001;
pub const STATUS_DATATYPE_MISALIGNMENT: u32 = 0x80000002;
pub const STATUS_BREAKPOINT: u32 = 0x80000003;
pub const STATUS_SINGLE_STEP: u32 = 0x80000004;
pub const STATUS_LONGJUMP: u32 = 0x80000026;
pub const STATUS_UNWIND_CONSOLIDATE: u32 = 0x80000029;
pub const STATUS_ACCESS_VIOLATION: u32 = 0xC0000005;
pub const STATUS_IN_PAGE_ERROR: u32 = 0xC0000006;
pub const STATUS_INVALID_HANDLE: u32 = 0xC0000008;
pub const STATUS_NO_MEMORY: u32 = 0xC0000017;
pub const STATUS_ILLEGAL_INSTRUCTION: u32 = 0xC000001D;
pub const STATUS_NONCONTINUABLE_EXCEPTION: u32 = 0xC0000025;
pub const STATUS_INVALID_DISPOSITION: u32 = 0xC0000026;
pub const STATUS_ARRAY_BOUNDS_EXCEEDED: u32 = 0xC000008C;
pub const STATUS_FLOAT_DENORMAL_OPERAND: u32 = 0xC000008D;
pub const STATUS_FLOAT_DIVIDE_BY_ZERO: u32 = 0xC000008E;
pub const STATUS_FLOAT_INEXACT_RESULT: u32 = 0xC000008F;
pub const STATUS_FLOAT_INVALID_OPERATION: u32 = 0xC0000090;
pub const STATUS_FLOAT_OVERFLOW: u32 = 0xC0000091;
pub const STATUS_FLOAT_STACK_CHECK: u32 = 0xC0000092;
pub const STATUS_FLOAT_UNDERFLOW: u32 = 0xC0000093;
pub const STATUS_INTEGER_DIVIDE_BY_ZERO: u32 = 0xC0000094;
pub const STATUS_INTEGER_OVERFLOW: u32 = 0xC0000095;
pub const STATUS_PRIVILEGED_INSTRUCTION: u32 = 0xC0000096;
pub const STATUS_STACK_OVERFLOW: u32 = 0xC00000FD;
pub const STATUS_DLL_NOT_FOUND: u32 = 0xC0000135;
pub const STATUS_ORDINAL_NOT_FOUND: u32 = 0xC0000138;
pub const STATUS_ENTRYPOINT_NOT_FOUND: u32 = 0xC0000139;
pub const STATUS_CONTROL_C_EXIT: u32 = 0xC000013A;
pub const STATUS_DLL_INIT_FAILED: u32 = 0xC0000142;
pub const STATUS_FLOAT_MULTIPLE_FAULTS: u32 = 0xC00002B4;
pub const STATUS_FLOAT_MULTIPLE_TRAPS: u32 = 0xC00002B5;
pub const STATUS_REG_NAT_CONSUMPTION: u32 = 0xC00002C9;
pub const STATUS_HEAP_CORRUPTION: u32 = 0xC0000374;
pub const STATUS_STACK_BUFFER_OVERRUN: u32 = 0xC0000409;
pub const STATUS_INVALID_CRUNTIME_PARAMETER: u32 = 0xC0000417;
pub const STATUS_ASSERTION_FAILURE: u32 = 0xC0000420;
pub const STATUS_SXS_EARLY_DEACTIVATION: u32 = 0xC015000F;
pub const STATUS_SXS_INVALID_DEACTIVATION: u32 = 0xC0150010;
pub const STATUS_WAIT_0: i32 = 0x00000000;
pub const STATUS_ABANDONED_WAIT_0: i32 = 0x00000080;
pub const STATUS_USER_APC: i32 = 0x000000C0;
pub const STATUS_TIMEOUT: i32 = 0x00000102;
pub const STATUS_PENDING: i32 = 0x00000103;
pub const STATUS_SEGMENT_NOTIFICATION: i32 = 0x40000005;
pub const STATUS_FATAL_APP_EXIT: i32 = 0x40000015;
pub const STATUS_GUARD_PAGE_VIOLATION: i32 = 0x80000001;
pub const STATUS_DATATYPE_MISALIGNMENT: i32 = 0x80000002;
pub const STATUS_BREAKPOINT: i32 = 0x80000003;
pub const STATUS_SINGLE_STEP: i32 = 0x80000004;
pub const STATUS_LONGJUMP: i32 = 0x80000026;
pub const STATUS_UNWIND_CONSOLIDATE: i32 = 0x80000029;
pub const STATUS_ACCESS_VIOLATION: i32 = 0xC0000005;
pub const STATUS_IN_PAGE_ERROR: i32 = 0xC0000006;
pub const STATUS_INVALID_HANDLE: i32 = 0xC0000008;
pub const STATUS_NO_MEMORY: i32 = 0xC0000017;
pub const STATUS_ILLEGAL_INSTRUCTION: i32 = 0xC000001D;
pub const STATUS_NONCONTINUABLE_EXCEPTION: i32 = 0xC0000025;
pub const STATUS_INVALID_DISPOSITION: i32 = 0xC0000026;
pub const STATUS_ARRAY_BOUNDS_EXCEEDED: i32 = 0xC000008C;
pub const STATUS_FLOAT_DENORMAL_OPERAND: i32 = 0xC000008D;
pub const STATUS_FLOAT_DIVIDE_BY_ZERO: i32 = 0xC000008E;
pub const STATUS_FLOAT_INEXACT_RESULT: i32 = 0xC000008F;
pub const STATUS_FLOAT_INVALID_OPERATION: i32 = 0xC0000090;
pub const STATUS_FLOAT_OVERFLOW: i32 = 0xC0000091;
pub const STATUS_FLOAT_STACK_CHECK: i32 = 0xC0000092;
pub const STATUS_FLOAT_UNDERFLOW: i32 = 0xC0000093;
pub const STATUS_INTEGER_DIVIDE_BY_ZERO: i32 = 0xC0000094;
pub const STATUS_INTEGER_OVERFLOW: i32 = 0xC0000095;
pub const STATUS_PRIVILEGED_INSTRUCTION: i32 = 0xC0000096;
pub const STATUS_STACK_OVERFLOW: i32 = 0xC00000FD;
pub const STATUS_DLL_NOT_FOUND: i32 = 0xC0000135;
pub const STATUS_ORDINAL_NOT_FOUND: i32 = 0xC0000138;
pub const STATUS_ENTRYPOINT_NOT_FOUND: i32 = 0xC0000139;
pub const STATUS_CONTROL_C_EXIT: i32 = 0xC000013A;
pub const STATUS_DLL_INIT_FAILED: i32 = 0xC0000142;
pub const STATUS_FLOAT_MULTIPLE_FAULTS: i32 = 0xC00002B4;
pub const STATUS_FLOAT_MULTIPLE_TRAPS: i32 = 0xC00002B5;
pub const STATUS_REG_NAT_CONSUMPTION: i32 = 0xC00002C9;
pub const STATUS_HEAP_CORRUPTION: i32 = 0xC0000374;
pub const STATUS_STACK_BUFFER_OVERRUN: i32 = 0xC0000409;
pub const STATUS_INVALID_CRUNTIME_PARAMETER: i32 = 0xC0000417;
pub const STATUS_ASSERTION_FAILURE: i32 = 0xC0000420;
pub const STATUS_SXS_EARLY_DEACTIVATION: i32 = 0xC015000F;
pub const STATUS_SXS_INVALID_DEACTIVATION: i32 = 0xC0150010;
#[derive(TryFromPrimitive, Clone, Copy)]
#[repr(u32)]
#[derive(Debug, TryFromPrimitive, Clone, Copy)]
#[repr(i32)]
pub enum ExceptionCode {
// From https://docs.microsoft.com/en-us/windows/win32/debug/getexceptioncode
AccessViolation = STATUS_ACCESS_VIOLATION,
@ -157,7 +157,7 @@ pub static CRASH_EXCEPTIONS: &[ExceptionCode] = &[
impl PartialEq for ExceptionCode {
fn eq(&self, other: &Self) -> bool {
*self as u32 == *other as u32
*self as i32 == *other as i32
}
}
@ -210,7 +210,7 @@ impl Display for ExceptionCode {
ExceptionCode::HeapCorruption => write!(f, "STATUS_HEAP_CORRUPTION")?,
ExceptionCode::StackBufferOverrun => write!(f, "STATUS_STACK_BUFFER_OVERRUN")?,
ExceptionCode::InvalidCRuntimeParameter => {
write!(f, "STATUS_INVALID_CRUNTIME_PARAMETER")?
write!(f, "STATUS_INVALID_CRUNTIME_PARAMETER")?;
}
ExceptionCode::AssertionFailure => write!(f, "STATUS_ASSERTION_FAILURE")?,
ExceptionCode::SXSEarlyDeactivation => write!(f, "STATUS_SXS_EARLY_DEACTIVATION")?,
@ -325,8 +325,7 @@ unsafe extern "system" fn handle_exception(exception_pointers: *mut EXCEPTION_PO
.ExceptionCode;
let exception_code = ExceptionCode::try_from(code.0).unwrap();
// println!("Received {}", exception_code);
let ret = internal_handle_exception(exception_code, exception_pointers);
ret
internal_handle_exception(exception_code, exception_pointers)
}
type NativeSignalHandlerType = unsafe extern "C" fn(i32);
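The `TryFromPrimitive` derive used for `ExceptionCode` comes from the `num_enum` crate; a hand-rolled sketch of the same `i32`-discriminant round-trip (variant names and the two codes shown are taken from the constants above, everything else is illustrative) could look like this:

```rust
use std::convert::TryFrom;

// Hand-rolled version of what a TryFromPrimitive-style derive generates:
// an i32-repr enum converted back from its raw NTSTATUS discriminant.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(i32)]
enum ExcCode {
    StackOverflow = 0xC00000FD_u32 as i32,
    HeapCorruption = 0xC0000374_u32 as i32,
}

impl TryFrom<i32> for ExcCode {
    type Error = i32;

    fn try_from(v: i32) -> Result<Self, i32> {
        match v {
            x if x == ExcCode::StackOverflow as i32 => Ok(ExcCode::StackOverflow),
            x if x == ExcCode::HeapCorruption as i32 => Ok(ExcCode::HeapCorruption),
            other => Err(other), // unknown exception code
        }
    }
}

fn main() {
    // Round-trip: discriminant -> enum variant.
    let code = ExcCode::try_from(0xC00000FD_u32 as i32).unwrap();
    assert_eq!(code, ExcCode::StackOverflow);
    println!("{code:?}");
}
```

Note the `0x…_u32 as i32` casts: NTSTATUS values like `0xC00000FD` do not fit a plain `i32` literal, which is also why the `eq` fix above compares `as i32` consistently with the new `#[repr(i32)]`.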

View File

@ -2,7 +2,7 @@
// The serialization is towards owned, allowing to serialize pointers without troubles.
use alloc::{boxed::Box, vec::Vec};
use core::{clone::Clone, fmt::Debug};
use core::{clone::Clone, fmt::Debug, slice};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
/// Trait to convert into an Owned type
@ -166,26 +166,31 @@ where
/// Wrap a slice and convert to a Vec on serialize
#[derive(Clone, Debug)]
pub enum OwnedSlice<'a, T: 'a + Sized> {
enum OwnedSliceInner<'a, T: 'a + Sized> {
/// A ref to a raw slice and length
RefRaw(*const T, usize),
/// A ref to a slice
Ref(&'a [T]),
/// A ref to an owned [`Vec`]
Owned(Vec<T>),
}
impl<'a, T: 'a + Sized + Serialize> Serialize for OwnedSlice<'a, T> {
impl<'a, T: 'a + Sized + Serialize> Serialize for OwnedSliceInner<'a, T> {
fn serialize<S>(&self, se: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match self {
OwnedSlice::Ref(r) => r.serialize(se),
OwnedSlice::Owned(b) => b.serialize(se),
OwnedSliceInner::RefRaw(rr, len) => unsafe {
slice::from_raw_parts(*rr, *len).serialize(se)
},
OwnedSliceInner::Ref(r) => r.serialize(se),
OwnedSliceInner::Owned(b) => b.serialize(se),
}
}
}
impl<'de, 'a, T: 'a + Sized> Deserialize<'de> for OwnedSlice<'a, T>
impl<'de, 'a, T: 'a + Sized> Deserialize<'de> for OwnedSliceInner<'a, T>
where
Vec<T>: Deserialize<'de>,
{
@ -193,7 +198,79 @@ where
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(OwnedSlice::Owned)
Deserialize::deserialize(deserializer).map(OwnedSliceInner::Owned)
}
}
/// Wrap a slice and convert to a Vec on serialize
/// We use a hidden inner enum so the public API can be safe,
/// unless the user uses the unsafe [`OwnedSlice::from_raw_parts`]
#[allow(clippy::unsafe_derive_deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct OwnedSlice<'a, T: 'a + Sized> {
inner: OwnedSliceInner<'a, T>,
}
impl<'a, T: 'a + Clone> Clone for OwnedSlice<'a, T> {
fn clone(&self) -> Self {
Self {
inner: OwnedSliceInner::Owned(self.as_slice().to_vec()),
}
}
}
impl<'a, T> OwnedSlice<'a, T> {
/// Create a new [`OwnedSlice`] from a raw pointer and length
///
/// # Safety
///
/// The pointer must be valid and point to a map of the size `size_of::<T>() * len`
/// The contents will be dereferenced in subsequent operations.
#[must_use]
pub unsafe fn from_raw_parts(ptr: *const T, len: usize) -> Self {
Self {
inner: OwnedSliceInner::RefRaw(ptr, len),
}
}
}
/// Create a new [`OwnedSlice`] from a vector
impl<'a, T> From<Vec<T>> for OwnedSlice<'a, T> {
fn from(vec: Vec<T>) -> Self {
Self {
inner: OwnedSliceInner::Owned(vec),
}
}
}
/// Create a new [`OwnedSlice`] from a vector reference
impl<'a, T> From<&'a Vec<T>> for OwnedSlice<'a, T> {
fn from(vec: &'a Vec<T>) -> Self {
Self {
inner: OwnedSliceInner::Ref(vec),
}
}
}
/// Create a new [`OwnedSlice`] from a reference to a slice
impl<'a, T> From<&'a [T]> for OwnedSlice<'a, T> {
fn from(r: &'a [T]) -> Self {
Self {
inner: OwnedSliceInner::Ref(r),
}
}
}
/// Create a new [`OwnedSlice`] from an [`OwnedSliceMut`]
impl<'a, T> From<OwnedSliceMut<'a, T>> for OwnedSlice<'a, T> {
fn from(mut_slice: OwnedSliceMut<'a, T>) -> Self {
Self {
inner: match mut_slice.inner {
OwnedSliceMutInner::RefRaw(ptr, len) => OwnedSliceInner::RefRaw(ptr as _, len),
OwnedSliceMutInner::Ref(r) => OwnedSliceInner::Ref(r as _),
OwnedSliceMutInner::Owned(v) => OwnedSliceInner::Owned(v),
},
}
}
}
@ -201,9 +278,10 @@ impl<'a, T: Sized> OwnedSlice<'a, T> {
/// Get the [`OwnedSlice`] as slice.
#[must_use]
pub fn as_slice(&self) -> &[T] {
match self {
OwnedSlice::Ref(r) => r,
OwnedSlice::Owned(v) => v.as_slice(),
match &self.inner {
OwnedSliceInner::Ref(r) => r,
OwnedSliceInner::RefRaw(rr, len) => unsafe { slice::from_raw_parts(*rr, *len) },
OwnedSliceInner::Owned(v) => v.as_slice(),
}
}
}
@ -214,43 +292,57 @@ where
{
#[must_use]
fn is_owned(&self) -> bool {
match self {
OwnedSlice::Ref(_) => false,
OwnedSlice::Owned(_) => true,
match self.inner {
OwnedSliceInner::RefRaw(_, _) | OwnedSliceInner::Ref(_) => false,
OwnedSliceInner::Owned(_) => true,
}
}
#[must_use]
fn into_owned(self) -> Self {
match self {
OwnedSlice::Ref(r) => OwnedSlice::Owned(r.to_vec()),
OwnedSlice::Owned(v) => OwnedSlice::Owned(v),
match self.inner {
OwnedSliceInner::RefRaw(rr, len) => Self {
inner: OwnedSliceInner::Owned(unsafe { slice::from_raw_parts(rr, len).to_vec() }),
},
OwnedSliceInner::Ref(r) => Self {
inner: OwnedSliceInner::Owned(r.to_vec()),
},
OwnedSliceInner::Owned(v) => Self {
inner: OwnedSliceInner::Owned(v),
},
}
}
}
/// Wrap a mutable slice and convert to a Vec on serialize
/// We use a hidden inner enum so the public API can be safe,
/// unless the user uses the unsafe [`OwnedSliceMut::from_raw_parts_mut`]
#[derive(Debug)]
pub enum OwnedSliceMut<'a, T: 'a + Sized> {
pub enum OwnedSliceMutInner<'a, T: 'a + Sized> {
/// A raw ptr to a memory location and a length
RefRaw(*mut T, usize),
/// A ptr to a mutable slice of the type
Ref(&'a mut [T]),
/// An owned [`Vec`] of the type
Owned(Vec<T>),
}
impl<'a, T: 'a + Sized + Serialize> Serialize for OwnedSliceMut<'a, T> {
impl<'a, T: 'a + Sized + Serialize> Serialize for OwnedSliceMutInner<'a, T> {
fn serialize<S>(&self, se: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match self {
OwnedSliceMut::Ref(r) => r.serialize(se),
OwnedSliceMut::Owned(b) => b.serialize(se),
OwnedSliceMutInner::RefRaw(rr, len) => {
unsafe { slice::from_raw_parts_mut(*rr, *len) }.serialize(se)
}
OwnedSliceMutInner::Ref(r) => r.serialize(se),
OwnedSliceMutInner::Owned(b) => b.serialize(se),
}
}
}
impl<'de, 'a, T: 'a + Sized> Deserialize<'de> for OwnedSliceMut<'a, T>
impl<'de, 'a, T: 'a + Sized> Deserialize<'de> for OwnedSliceMutInner<'a, T>
where
Vec<T>: Deserialize<'de>,
{
@ -258,7 +350,35 @@ where
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(OwnedSliceMut::Owned)
Deserialize::deserialize(deserializer).map(OwnedSliceMutInner::Owned)
}
}
/// Wrap a mutable slice and convert to a Vec on serialize
#[allow(clippy::unsafe_derive_deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct OwnedSliceMut<'a, T: 'a + Sized> {
inner: OwnedSliceMutInner<'a, T>,
}
impl<'a, T: 'a + Sized> OwnedSliceMut<'a, T> {
/// Create a new [`OwnedSliceMut`] from a raw pointer and length
///
/// # Safety
///
/// The pointer must be valid and point to a map of the size `size_of::<T>() * len`
/// The contents will be dereferenced in subsequent operations.
#[must_use]
pub unsafe fn from_raw_parts_mut(ptr: *mut T, len: usize) -> OwnedSliceMut<'a, T> {
if ptr.is_null() || len == 0 {
Self {
inner: OwnedSliceMutInner::Owned(Vec::new()),
}
} else {
Self {
inner: OwnedSliceMutInner::RefRaw(ptr, len),
}
}
}
}
@ -266,18 +386,20 @@ impl<'a, T: Sized> OwnedSliceMut<'a, T> {
/// Get the value as slice
#[must_use]
pub fn as_slice(&self) -> &[T] {
match self {
OwnedSliceMut::Ref(r) => r,
OwnedSliceMut::Owned(v) => v.as_slice(),
match &self.inner {
OwnedSliceMutInner::RefRaw(rr, len) => unsafe { slice::from_raw_parts(*rr, *len) },
OwnedSliceMutInner::Ref(r) => r,
OwnedSliceMutInner::Owned(v) => v.as_slice(),
}
}
/// Get the value as mut slice
#[must_use]
pub fn as_mut_slice(&mut self) -> &mut [T] {
match self {
OwnedSliceMut::Ref(r) => r,
OwnedSliceMut::Owned(v) => v.as_mut_slice(),
match &mut self.inner {
OwnedSliceMutInner::RefRaw(rr, len) => unsafe { slice::from_raw_parts_mut(*rr, *len) },
OwnedSliceMutInner::Ref(r) => r,
OwnedSliceMutInner::Owned(v) => v.as_mut_slice(),
}
}
}
@ -288,17 +410,68 @@ where
{
#[must_use]
fn is_owned(&self) -> bool {
match self {
OwnedSliceMut::Ref(_) => false,
OwnedSliceMut::Owned(_) => true,
match self.inner {
OwnedSliceMutInner::RefRaw(_, _) | OwnedSliceMutInner::Ref(_) => false,
OwnedSliceMutInner::Owned(_) => true,
}
}
#[must_use]
fn into_owned(self) -> Self {
match self {
OwnedSliceMut::Ref(r) => OwnedSliceMut::Owned(r.to_vec()),
OwnedSliceMut::Owned(v) => OwnedSliceMut::Owned(v),
let vec = match self.inner {
OwnedSliceMutInner::RefRaw(rr, len) => unsafe {
slice::from_raw_parts_mut(rr, len).to_vec()
},
OwnedSliceMutInner::Ref(r) => r.to_vec(),
OwnedSliceMutInner::Owned(v) => v,
};
Self {
inner: OwnedSliceMutInner::Owned(vec),
}
}
}
impl<'a, T: 'a + Clone> Clone for OwnedSliceMut<'a, T> {
fn clone(&self) -> Self {
Self {
inner: OwnedSliceMutInner::Owned(self.as_slice().to_vec()),
}
}
}
/// Create a new [`OwnedSliceMut`] from a vector
impl<'a, T> From<Vec<T>> for OwnedSliceMut<'a, T> {
fn from(vec: Vec<T>) -> Self {
Self {
inner: OwnedSliceMutInner::Owned(vec),
}
}
}
/// Create a new [`OwnedSliceMut`] from a vector reference
impl<'a, T> From<&'a mut Vec<T>> for OwnedSliceMut<'a, T> {
fn from(vec: &'a mut Vec<T>) -> Self {
Self {
inner: OwnedSliceMutInner::Ref(vec),
}
}
}
/// Create a new [`OwnedSliceMut`] from a mutable slice reference
impl<'a, T> From<&'a mut [T]> for OwnedSliceMut<'a, T> {
fn from(r: &'a mut [T]) -> Self {
Self {
inner: OwnedSliceMutInner::Ref(r),
}
}
}
/// Create a new [`OwnedSliceMut`] from a mutable reference to a mutable slice
#[allow(clippy::mut_mut)] // This makes use in some iterators easier
impl<'a, T> From<&'a mut &'a mut [T]> for OwnedSliceMut<'a, T> {
fn from(r: &'a mut &'a mut [T]) -> Self {
Self {
inner: OwnedSliceMutInner::Ref(r),
}
}
}
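The wrapper pattern introduced in this file, a private inner enum behind a safe public struct, can be distilled into plain Rust without serde or the raw-pointer variant (names here are illustrative, not LibAFL's):

```rust
// Minimal sketch of the OwnedSlice idea: a slice that is either
// borrowed or owned, converted to owned on demand.
enum Inner<'a, T> {
    Ref(&'a [T]),
    Owned(Vec<T>),
}

struct MaybeOwnedSlice<'a, T> {
    inner: Inner<'a, T>,
}

impl<'a, T: Clone> MaybeOwnedSlice<'a, T> {
    fn as_slice(&self) -> &[T] {
        match &self.inner {
            Inner::Ref(r) => r,
            Inner::Owned(v) => v.as_slice(),
        }
    }

    /// Clone borrowed data so the result no longer borrows from `'a`.
    fn into_owned(self) -> MaybeOwnedSlice<'static, T> {
        let v = match self.inner {
            Inner::Ref(r) => r.to_vec(),
            Inner::Owned(v) => v,
        };
        MaybeOwnedSlice { inner: Inner::Owned(v) }
    }
}

impl<'a, T> From<&'a [T]> for MaybeOwnedSlice<'a, T> {
    fn from(r: &'a [T]) -> Self {
        Self { inner: Inner::Ref(r) }
    }
}

fn main() {
    let data = [1_u32, 2, 3];
    let s = MaybeOwnedSlice::from(&data[..]);
    let owned = s.into_owned(); // now independent of `data`'s lifetime
    assert_eq!(owned.as_slice(), &[1, 2, 3]);
}
```

The real type adds the `unsafe` `RefRaw` variant behind `from_raw_parts`, which is exactly why the enum is hidden: the safe constructors (`From<Vec<T>>`, `From<&[T]>`) can never produce a dangling variant.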

View File

@ -1,3 +1,4 @@
//! The random number generators of `LibAFL`
use core::{debug_assert, fmt::Debug};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use xxhash_rust::xxh3::xxh3_64_with_seed;
@ -83,7 +84,7 @@ macro_rules! default_rand {
/// A default RNG will usually produce a nondeterministic stream of random numbers.
/// As we do not have any way to get random seeds for `no_std`, they have to be reproducible there.
/// Use [`$rand::with_seed`] to generate a reproducible RNG.
impl core::default::Default for $rand {
impl Default for $rand {
#[cfg(feature = "std")]
fn default() -> Self {
Self::new()
@ -295,7 +296,7 @@ impl Rand for RomuTrioRand {
let xp = self.x_state;
let yp = self.y_state;
let zp = self.z_state;
self.x_state = 15241094284759029579u64.wrapping_mul(zp);
self.x_state = 15241094284759029579_u64.wrapping_mul(zp);
self.y_state = yp.wrapping_sub(xp).rotate_left(12);
self.z_state = zp.wrapping_sub(yp).rotate_left(44);
xp
@ -332,7 +333,7 @@ impl Rand for RomuDuoJrRand {
#[allow(clippy::unreadable_literal)]
fn next(&mut self) -> u64 {
let xp = self.x_state;
self.x_state = 15241094284759029579u64.wrapping_mul(self.y_state);
self.x_state = 15241094284759029579_u64.wrapping_mul(self.y_state);
self.y_state = self.y_state.wrapping_sub(xp).rotate_left(27);
xp
}
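The RomuDuoJr step above is compact enough to reproduce standalone. In this sketch the seeding scheme is ad hoc (LibAFL's `with_seed` derives its state differently); only the update function matches the diff:

```rust
// Minimal standalone sketch of the RomuDuoJr PRNG step shown above.
struct RomuDuoJr {
    x_state: u64,
    y_state: u64,
}

impl RomuDuoJr {
    fn new(seed: u64) -> Self {
        // Any nonzero seeding works for a demo; not LibAFL's scheme.
        Self {
            x_state: seed ^ 0x9E37_79B9_7F4A_7C15,
            y_state: seed | 1,
        }
    }

    #[allow(clippy::unreadable_literal)]
    fn next(&mut self) -> u64 {
        let xp = self.x_state;
        self.x_state = 15241094284759029579_u64.wrapping_mul(self.y_state);
        self.y_state = self.y_state.wrapping_sub(xp).rotate_left(27);
        xp
    }
}

fn main() {
    // Same seed => same stream: the reproducibility the doc comment above
    // requires for `no_std` targets.
    let (mut a, mut b) = (RomuDuoJr::new(42), RomuDuoJr::new(42));
    for _ in 0..8 {
        assert_eq!(a.next(), b.next());
    }
    println!("deterministic");
}
```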

View File

@ -1,9 +1,12 @@
//! Poor-rust-man's downcasts for stuff we send over the wire (or shared maps)
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use serde::{de::DeserializeSeed, Deserialize, Deserializer, Serialize, Serializer};
use alloc::boxed::Box;
use core::any::{Any, TypeId};
use core::{
any::{Any, TypeId},
fmt::Debug,
};
// yolo
@ -30,7 +33,7 @@ pub fn unpack_type_id(id: TypeId) -> u64 {
}
/// A (de)serializable Any trait
pub trait SerdeAny: Any + erased_serde::Serialize {
pub trait SerdeAny: Any + erased_serde::Serialize + Debug {
/// returns this as Any trait
fn as_any(&self) -> &dyn Any;
/// returns this as mutable Any trait
@ -40,10 +43,11 @@ pub trait SerdeAny: Any + erased_serde::Serialize {
}
/// Wrap a type for serialization
pub struct Wrap<'a, T: ?Sized>(pub &'a T);
#[derive(Debug)]
pub struct Wrap<'a, T: ?Sized + Debug>(pub &'a T);
impl<'a, T> Serialize for Wrap<'a, T>
where
T: ?Sized + erased_serde::Serialize + 'a,
T: ?Sized + erased_serde::Serialize + 'a + Debug,
{
/// Serialize the type
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
@ -59,6 +63,7 @@ pub type DeserializeCallback<B> =
fn(&mut dyn erased_serde::Deserializer) -> Result<Box<B>, erased_serde::Error>;
/// Callback struct for deserialization of a [`SerdeAny`] type.
#[allow(missing_debug_implementations)]
pub struct DeserializeCallbackSeed<B>
where
B: ?Sized,
@ -67,7 +72,7 @@ where
pub cb: DeserializeCallback<B>,
}
impl<'de, B> serde::de::DeserializeSeed<'de> for DeserializeCallbackSeed<B>
impl<'de, B> DeserializeSeed<'de> for DeserializeCallbackSeed<B>
where
B: ?Sized,
{
@ -75,7 +80,7 @@ where
fn deserialize<D>(self, deserializer: D) -> Result<Self::Value, D::Error>
where
D: serde::de::Deserializer<'de>,
D: Deserializer<'de>,
{
let mut erased = <dyn erased_serde::Deserializer>::erase(deserializer);
(self.cb)(&mut erased).map_err(serde::de::Error::custom)
@ -105,7 +110,9 @@ macro_rules! create_serde_registry_for_trait {
use $crate::Error;
/// Visitor object used internally for the [`SerdeAny`] registry.
#[derive(Debug)]
pub struct BoxDynVisitor {}
#[allow(unused_qualifications)]
impl<'de> serde::de::Visitor<'de> for BoxDynVisitor {
type Value = Box<dyn $trait_name>;
@ -132,11 +139,13 @@ macro_rules! create_serde_registry_for_trait {
}
}
#[allow(unused_qualifications)]
struct Registry {
deserializers: Option<HashMap<u64, DeserializeCallback<dyn $trait_name>>>,
finalized: bool,
}
#[allow(unused_qualifications)]
impl Registry {
pub fn register<T>(&mut self)
where
@ -162,8 +171,10 @@ macro_rules! create_serde_registry_for_trait {
/// This sugar must be used to register all the structs which
/// have trait objects that can be serialized and deserialized in the program
#[derive(Debug)]
pub struct RegistryBuilder {}
#[allow(unused_qualifications)]
impl RegistryBuilder {
/// Register a given struct type for trait object (de)serialization
pub fn register<T>()
@ -183,9 +194,9 @@ macro_rules! create_serde_registry_for_trait {
}
}
#[derive(Serialize, Deserialize)]
/// A (de)serializable anymap containing (de)serializable trait objects registered
/// in the registry
#[derive(Debug, Serialize, Deserialize)]
pub struct SerdeAnyMap {
map: HashMap<u64, Box<dyn $trait_name>>,
}
@ -199,6 +210,7 @@ macro_rules! create_serde_registry_for_trait {
}
}
/*
#[cfg(feature = "anymap_debug")]
impl fmt::Debug for SerdeAnyMap {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
@ -212,8 +224,9 @@ macro_rules! create_serde_registry_for_trait {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(f, "SerdeAnymap with {} elements", self.len())
}
}
}*/
#[allow(unused_qualifications)]
impl SerdeAnyMap {
/// Get an element from the map.
#[must_use]
@ -309,11 +322,13 @@ macro_rules! create_serde_registry_for_trait {
}
/// A serializable [`HashMap`] wrapper for [`SerdeAny`] types, addressable by name.
#[derive(Serialize, Deserialize)]
#[allow(unused_qualifications)]
#[derive(Debug, Serialize, Deserialize)]
pub struct NamedSerdeAnyMap {
map: HashMap<u64, HashMap<u64, Box<dyn $trait_name>>>,
}
#[allow(unused_qualifications)]
impl NamedSerdeAnyMap {
/// Get an element by name
#[must_use]
@ -332,6 +347,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get an element of a given type contained in this map by [`TypeId`].
#[must_use]
#[allow(unused_qualifications)]
#[inline]
pub fn by_typeid(&self, name: &str, typeid: &TypeId) -> Option<&dyn $trait_name> {
match self.map.get(&unpack_type_id(*typeid)) {
@ -375,6 +391,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get all elements of a type contained in this map.
#[must_use]
#[allow(unused_qualifications)]
#[inline]
pub fn get_all<T>(
&self,
@ -398,6 +415,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get all elements of a given type contained in this map by [`TypeId`].
#[must_use]
#[allow(unused_qualifications)]
#[inline]
pub fn all_by_typeid(
&self,
@ -417,6 +435,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get all elements contained in this map, as mut.
#[inline]
#[allow(unused_qualifications)]
pub fn get_all_mut<T>(
&mut self,
) -> Option<
@ -440,6 +459,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get all [`TypeId`]`s` contained in this map, as mut.
#[inline]
#[allow(unused_qualifications)]
pub fn all_by_typeid_mut(
&mut self,
typeid: &TypeId,
@ -458,6 +478,7 @@ macro_rules! create_serde_registry_for_trait {
/// Get all [`TypeId`]`s` contained in this map.
#[inline]
#[allow(unused_qualifications)]
pub fn all_typeids(
&self,
) -> core::iter::Map<
@ -469,6 +490,7 @@ macro_rules! create_serde_registry_for_trait {
/// Run `func` for each element in this map.
#[inline]
#[allow(unused_qualifications)]
pub fn for_each(
&self,
func: fn(&TypeId, &Box<dyn $trait_name>) -> Result<(), Error>,
@ -497,6 +519,7 @@ macro_rules! create_serde_registry_for_trait {
/// Insert an element into this map.
#[inline]
#[allow(unused_qualifications)]
pub fn insert(&mut self, val: Box<dyn $trait_name>, name: &str) {
let id = unpack_type_id((*val).type_id());
if !self.map.contains_key(&id) {
@ -560,6 +583,7 @@ macro_rules! create_serde_registry_for_trait {
}
}
#[allow(unused_qualifications)]
impl<'a> Serialize for dyn $trait_name {
fn serialize<S>(&self, se: S) -> Result<S::Ok, S::Error>
where
@ -575,6 +599,7 @@ macro_rules! create_serde_registry_for_trait {
}
}
#[allow(unused_qualifications)]
impl<'de> Deserialize<'de> for Box<dyn $trait_name> {
fn deserialize<D>(deserializer: D) -> Result<Box<dyn $trait_name>, D::Error>
where
@ -618,6 +643,7 @@ macro_rules! impl_serdeany {
};
}
/// Implement [`SerdeAny`] for a type
#[cfg(not(feature = "std"))]
#[macro_export]
macro_rules! impl_serdeany {
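The registry machinery above keys trait objects by (unpacked) `TypeId`. Minus the serde side, the core lookup idea can be sketched with `std::any` alone; `AnyMap` and its methods are illustrative names, not LibAFL's API:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Sketch of the SerdeAnyMap lookup idea using plain `Any`
// (the real map additionally requires erased-serde serialization
// and a deserializer registry keyed by the same type ids).
#[derive(Default)]
struct AnyMap {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl AnyMap {
    fn insert<T: Any>(&mut self, val: T) {
        self.map.insert(TypeId::of::<T>(), Box::new(val));
    }

    fn get<T: Any>(&self) -> Option<&T> {
        self.map
            .get(&TypeId::of::<T>())
            .and_then(|b| b.downcast_ref::<T>())
    }
}

fn main() {
    #[derive(Debug, PartialEq)]
    struct Meta(u32);

    let mut m = AnyMap::default();
    m.insert(Meta(7));
    assert_eq!(m.get::<Meta>(), Some(&Meta(7)));
    assert!(m.get::<String>().is_none());
}
```

Serialization is the hard part the macro solves: `TypeId` is opaque and unstable across builds, so the registry serializes an unpacked id alongside the value and looks up a registered deserialization callback on the way back in.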

View File

@ -1,43 +1,60 @@
//! A generic shared memory region to be used by any functions (queues or feedbacks
// too.)
#[cfg(all(unix, feature = "std"))]
use crate::bolts::os::pipes::Pipe;
use crate::Error;
use alloc::{rc::Rc, string::ToString};
use core::{
cell::RefCell,
fmt::{self, Debug, Display},
mem::ManuallyDrop,
};
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use std::env;
#[cfg(all(unix, feature = "std"))]
use std::io::Read;
#[cfg(feature = "std")]
use std::io::Write;
#[cfg(all(feature = "std", unix, not(target_os = "android")))]
pub use unix_shmem::{MmapShMem, MmapShMemProvider};
#[cfg(all(feature = "std", unix))]
pub use unix_shmem::{UnixShMem, UnixShMemProvider};
use crate::Error;
#[cfg(all(feature = "std", unix))]
pub use crate::bolts::os::unix_shmem_server::{ServedShMemProvider, ShMemService};
#[cfg(all(windows, feature = "std"))]
pub use win32_shmem::{Win32ShMem, Win32ShMemProvider};
/// The standard sharedmem provider
#[cfg(all(windows, feature = "std"))]
pub type StdShMemProvider = Win32ShMemProvider;
/// The standard sharedmem type
#[cfg(all(windows, feature = "std"))]
pub type StdShMem = Win32ShMem;
/// The standard sharedmem provider
#[cfg(all(target_os = "android", feature = "std"))]
pub type StdShMemProvider =
RcShMemProvider<ServedShMemProvider<unix_shmem::ashmem::AshmemShMemProvider>>;
/// The standard sharedmem type
#[cfg(all(target_os = "android", feature = "std"))]
pub type StdShMem = RcShMem<ServedShMemProvider<unix_shmem::ashmem::AshmemShMemProvider>>;
/// The standard sharedmem service
#[cfg(all(target_os = "android", feature = "std"))]
pub type StdShMemService = ShMemService<unix_shmem::ashmem::AshmemShMemProvider>;
/// The standard sharedmem provider
#[cfg(all(feature = "std", target_vendor = "apple"))]
pub type StdShMemProvider = RcShMemProvider<ServedShMemProvider<MmapShMemProvider>>;
/// The standard sharedmem type
#[cfg(all(feature = "std", target_vendor = "apple"))]
pub type StdShMem = RcShMem<ServedShMemProvider<MmapShMemProvider>>;
#[cfg(all(feature = "std", target_vendor = "apple"))]
/// The standard sharedmem service
pub type StdShMemService = ShMemService<MmapShMemProvider>;
/// The default [`ShMemProvider`] for this os.
@ -55,21 +72,13 @@ pub type StdShMemProvider = UnixShMemProvider;
))]
pub type StdShMem = UnixShMem;
/// The standard sharedmem service
#[cfg(any(
not(any(target_os = "android", target_vendor = "apple")),
not(feature = "std")
))]
pub type StdShMemService = DummyShMemService;
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use std::env;
#[cfg(all(unix, feature = "std"))]
use crate::bolts::os::pipes::Pipe;
#[cfg(all(unix, feature = "std"))]
use std::io::{Read, Write};
/// Description of a shared map.
/// May be used to restore the map by id.
#[derive(Copy, Clone, Debug, Serialize, Deserialize)]
@ -262,7 +271,7 @@ pub struct RcShMem<T: ShMemProvider> {
impl<T> ShMem for RcShMem<T>
where
T: ShMemProvider + alloc::fmt::Debug,
T: ShMemProvider + Debug,
{
fn id(&self) -> ShMemId {
self.internal.id()
@ -314,7 +323,7 @@ where
#[cfg(all(unix, feature = "std"))]
impl<SP> ShMemProvider for RcShMemProvider<SP>
where
SP: ShMemProvider + alloc::fmt::Debug,
SP: ShMemProvider + Debug,
{
type Mem = RcShMem<SP>;
@ -391,7 +400,7 @@ where
fn pipe_set(pipe: &mut Option<Pipe>) -> Result<(), Error> {
match pipe {
Some(pipe) => {
let ok = [0u8; 4];
let ok = [0_u8; 4];
pipe.write_all(&ok)?;
Ok(())
}
@ -405,7 +414,7 @@ where
fn pipe_await(pipe: &mut Option<Pipe>) -> Result<(), Error> {
match pipe {
Some(pipe) => {
let ok = [0u8; 4];
let ok = [0_u8; 4];
let mut ret = ok;
pipe.read_exact(&mut ret)?;
if ret == ok {
@ -447,7 +456,7 @@ where
#[cfg(all(unix, feature = "std"))]
impl<SP> Default for RcShMemProvider<SP>
where
SP: ShMemProvider + alloc::fmt::Debug,
SP: ShMemProvider + Debug,
{
fn default() -> Self {
Self::new().unwrap()
@ -489,7 +498,7 @@ pub mod unix_shmem {
c_int, c_long, c_uchar, c_uint, c_ulong, c_ushort, close, ftruncate, mmap, munmap,
perror, shm_open, shm_unlink, shmat, shmctl, shmget,
};
use std::{io::Write, process, ptr::null_mut};
use std::{io::Write, process};
use crate::{
bolts::shmem::{ShMem, ShMemId, ShMemProvider},
@ -549,6 +558,7 @@ pub mod unix_shmem {
}
impl MmapShMem {
/// Create a new [`MmapShMem`]
pub fn new(map_size: usize, shmem_ctr: usize) -> Result<Self, Error> {
unsafe {
let mut filename_path = [0_u8; MAX_MMAP_FILENAME_LEN];
@ -585,7 +595,7 @@ pub mod unix_shmem {
/* map the shared memory segment to the address space of the process */
let map = mmap(
null_mut(),
ptr::null_mut(),
map_size,
libc::PROT_READ | libc::PROT_WRITE,
libc::MAP_SHARED,
@ -618,7 +628,7 @@ pub mod unix_shmem {
/* map the shared memory segment to the address space of the process */
let map = mmap(
null_mut(),
ptr::null_mut(),
map_size,
libc::PROT_READ | libc::PROT_WRITE,
libc::MAP_SHARED,
@ -766,7 +776,7 @@ pub mod unix_shmem {
let id_int: i32 = id.into();
let map = shmat(id_int, ptr::null(), 0) as *mut c_uchar;
if map.is_null() || map == null_mut::<c_uchar>().wrapping_sub(1) {
if map.is_null() || map == ptr::null_mut::<c_uchar>().wrapping_sub(1) {
return Err(Error::Unknown(
"Failed to map the shared mapping".to_string(),
));
@ -842,7 +852,7 @@ pub mod unix_shmem {
/// Module containing `ashmem` shared memory support, commonly used on Android.
#[cfg(all(unix, feature = "std"))]
pub mod ashmem {
use core::slice;
use core::{ptr, slice};
use libc::{
c_uint, c_ulong, c_void, close, ioctl, mmap, open, MAP_SHARED, O_RDWR, PROT_READ,
PROT_WRITE,
@ -909,6 +919,7 @@ pub mod unix_shmem {
//return Err(Error::Unknown("Failed to set the ashmem mapping's name".to_string()));
//};
#[allow(trivial_numeric_casts)]
if ioctl(fd, ASHMEM_SET_SIZE as _, map_size) != 0 {
close(fd);
return Err(Error::Unknown(
@ -917,7 +928,7 @@ pub mod unix_shmem {
};
let map = mmap(
std::ptr::null_mut(),
ptr::null_mut(),
map_size,
PROT_READ | PROT_WRITE,
MAP_SHARED,
@ -943,7 +954,7 @@ pub mod unix_shmem {
pub fn from_id_and_size(id: ShMemId, map_size: usize) -> Result<Self, Error> {
unsafe {
let fd: i32 = id.to_string().parse().unwrap();
#[allow(clippy::cast_sign_loss)]
#[allow(trivial_numeric_casts, clippy::cast_sign_loss)]
if ioctl(fd, ASHMEM_GET_SIZE as _) as u32 as usize != map_size {
return Err(Error::Unknown(
"The mapping's size differs from the requested size".to_string(),
@ -951,7 +962,7 @@ pub mod unix_shmem {
};
let map = mmap(
std::ptr::null_mut(),
ptr::null_mut(),
map_size,
PROT_READ | PROT_WRITE,
MAP_SHARED,
@ -996,10 +1007,12 @@ pub mod unix_shmem {
/// [`Drop`] implementation for [`AshmemShMem`], which cleans up the mapping.
#[cfg(unix)]
impl Drop for AshmemShMem {
#[allow(trivial_numeric_casts)]
fn drop(&mut self) {
unsafe {
let fd: i32 = self.id.to_string().parse().unwrap();
#[allow(trivial_numeric_casts)]
#[allow(clippy::cast_sign_loss)]
let length = ioctl(fd, ASHMEM_GET_SIZE as _) as u32;
@ -1049,6 +1062,7 @@ pub mod unix_shmem {
}
}
/// The `win32` implementation for shared memory.
#[cfg(all(feature = "std", windows))]
pub mod win32_shmem {
@ -1057,7 +1071,11 @@ pub mod win32_shmem {
Error,
};
use core::{ffi::c_void, ptr, slice};
use core::{
ffi::c_void,
fmt::{self, Debug, Formatter},
ptr, slice,
};
use std::convert::TryInto;
use uuid::Uuid;
@ -1072,7 +1090,7 @@ pub mod win32_shmem {
};
/// The default Sharedmap impl for windows using shmctl & shmget
#[derive(Clone, Debug)]
#[derive(Clone)]
pub struct Win32ShMem {
id: ShMemId,
handle: HANDLE,
@ -1080,6 +1098,17 @@ pub mod win32_shmem {
map_size: usize,
}
impl Debug for Win32ShMem {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("Win32ShMem")
.field("id", &self.id)
.field("handle", &self.handle.0)
.field("map", &self.map)
.field("map_size", &self.map_size)
.finish()
}
}
impl Win32ShMem {
fn new_map(map_size: usize) -> Result<Self, Error> {
unsafe {
@ -1123,7 +1152,7 @@ pub mod win32_shmem {
let map_str_bytes = id.id;
// Unlike MapViewOfFile this one needs u32
let handle = OpenFileMappingA(
FILE_MAP_ALL_ACCESS.0,
FILE_MAP_ALL_ACCESS,
BOOL(0),
PSTR(&map_str_bytes as *const u8 as *mut u8),
);
@ -1219,8 +1248,9 @@ impl DummyShMemService {
}
}
#[cfg(feature = "std")]
/// A cursor around [`ShMem`] that imitates [`std::io::Cursor`]. Notably, this implements [`Write`] for [`ShMem`] in std environments.
#[cfg(feature = "std")]
#[derive(Debug)]
pub struct ShMemCursor<T: ShMem> {
inner: T,
pos: usize,
@ -1228,6 +1258,7 @@ pub struct ShMemCursor<T: ShMem> {
#[cfg(feature = "std")]
impl<T: ShMem> ShMemCursor<T> {
/// Create a new [`ShMemCursor`] around [`ShMem`]
pub fn new(shmem: T) -> Self {
Self {
inner: shmem,
@ -1242,7 +1273,7 @@ impl<T: ShMem> ShMemCursor<T> {
}
#[cfg(feature = "std")]
impl<T: ShMem> std::io::Write for ShMemCursor<T> {
impl<T: ShMem> Write for ShMemCursor<T> {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
match self.empty_slice_mut().write(buf) {
Ok(w) => {

View File

@ -1,5 +1,5 @@
/// Stores and restores state when a client needs to relaunch.
/// Uses a [`ShMem`] up to a threshold, then write to disk.
//! Stores and restores state when a client needs to relaunch.
//! Uses a [`ShMem`] up to a threshold, then writes to disk.
use ahash::AHasher;
use core::{hash::Hasher, marker::PhantomData, mem::size_of, ptr, slice};
use serde::{de::DeserializeOwned, Serialize};
@ -204,7 +204,7 @@ where
S: DeserializeOwned,
{
if !self.has_content() {
return Ok(Option::None);
return Ok(None);
}
let state_shmem_content = self.content();
let bytes = unsafe {
@ -216,7 +216,7 @@ where
let mut state = bytes;
let mut file_content;
if state_shmem_content.buf_len == 0 {
return Ok(Option::None);
return Ok(None);
} else if state_shmem_content.is_disk {
let filename: String = postcard::from_bytes(bytes)?;
let tmpfile = temp_dir().join(&filename);

View File

@ -18,13 +18,13 @@ use serde::{Deserialize, Serialize};
pub const DEFAULT_SKIP_NON_FAVORED_PROB: u64 = 95;
/// A testcase metadata saying if a testcase is favored
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct IsFavoredMetadata {}
crate::impl_serdeany!(IsFavoredMetadata);
/// A state metadata holding a map of favored testcases for each map entry
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct TopRatedsMetadata {
/// map index -> corpus index
pub map: HashMap<usize, usize>,
@ -59,6 +59,7 @@ where
/// Multiply the testcase size with the execution time.
/// This favors small and quick testcases.
#[derive(Debug, Clone)]
pub struct LenTimeMulFavFactor<I>
where
I: Input + HasLen,
@ -79,29 +80,27 @@ where
/// The [`MinimizerCorpusScheduler`] employs a genetic algorithm to compute a subset of the
corpus that exercises all the requested features (e.g. all the coverage seen so far)
/// prioritizing [`Testcase`]`s` using [`FavFactor`]
pub struct MinimizerCorpusScheduler<C, CS, F, I, M, R, S>
#[derive(Debug, Clone)]
pub struct MinimizerCorpusScheduler<CS, F, I, M, S>
where
CS: CorpusScheduler<I, S>,
F: FavFactor<I>,
I: Input,
M: AsSlice<usize> + SerdeAny + HasRefCnt,
S: HasCorpus<C, I> + HasMetadata,
C: Corpus<I>,
S: HasCorpus<I> + HasMetadata,
{
base: CS,
skip_non_favored_prob: u64,
phantom: PhantomData<(C, F, I, M, R, S)>,
phantom: PhantomData<(F, I, M, S)>,
}
impl<C, CS, F, I, M, R, S> CorpusScheduler<I, S> for MinimizerCorpusScheduler<C, CS, F, I, M, R, S>
impl<CS, F, I, M, S> CorpusScheduler<I, S> for MinimizerCorpusScheduler<CS, F, I, M, S>
where
CS: CorpusScheduler<I, S>,
F: FavFactor<I>,
I: Input,
M: AsSlice<usize> + SerdeAny + HasRefCnt,
S: HasCorpus<C, I> + HasMetadata + HasRand<R>,
C: Corpus<I>,
R: Rand,
S: HasCorpus<I> + HasMetadata + HasRand,
{
/// Add an entry to the corpus and return its index
fn on_add(&self, state: &mut S, idx: usize) -> Result<(), Error> {
@ -143,15 +142,13 @@ where
}
}
impl<C, CS, F, I, M, R, S> MinimizerCorpusScheduler<C, CS, F, I, M, R, S>
impl<CS, F, I, M, S> MinimizerCorpusScheduler<CS, F, I, M, S>
where
CS: CorpusScheduler<I, S>,
F: FavFactor<I>,
I: Input,
M: AsSlice<usize> + SerdeAny + HasRefCnt,
S: HasCorpus<C, I> + HasMetadata + HasRand<R>,
C: Corpus<I>,
R: Rand,
S: HasCorpus<I> + HasMetadata + HasRand,
{
/// Update the `Corpus` score using the `MinimizerCorpusScheduler`
#[allow(clippy::unused_self)]
@ -282,10 +279,10 @@ where
}
/// A [`MinimizerCorpusScheduler`] with [`LenTimeMulFavFactor`] to prioritize quick and small [`Testcase`]`s`.
pub type LenTimeMinimizerCorpusScheduler<C, CS, I, M, R, S> =
MinimizerCorpusScheduler<C, CS, LenTimeMulFavFactor<I>, I, M, R, S>;
pub type LenTimeMinimizerCorpusScheduler<CS, I, M, S> =
MinimizerCorpusScheduler<CS, LenTimeMulFavFactor<I>, I, M, S>;
/// A [`MinimizerCorpusScheduler`] with [`LenTimeMulFavFactor`] to prioritize quick and small [`Testcase`]`s`
/// that exercise all the entries registered in the [`MapIndexesMetadata`].
pub type IndexesLenTimeMinimizerCorpusScheduler<C, CS, I, R, S> =
MinimizerCorpusScheduler<C, CS, LenTimeMulFavFactor<I>, I, MapIndexesMetadata, R, S>;
pub type IndexesLenTimeMinimizerCorpusScheduler<CS, I, S> =
MinimizerCorpusScheduler<CS, LenTimeMulFavFactor<I>, I, MapIndexesMetadata, S>;

View File

@ -30,7 +30,7 @@ pub mod powersched;
pub use powersched::PowerQueueCorpusScheduler;
use alloc::borrow::ToOwned;
use core::{cell::RefCell, marker::PhantomData};
use core::cell::RefCell;
use crate::{
bolts::rands::Rand,
@ -107,22 +107,13 @@ where
}
/// Feed the fuzzer simply with a random testcase on request
pub struct RandCorpusScheduler<C, I, R, S>
where
S: HasCorpus<C, I> + HasRand<R>,
C: Corpus<I>,
I: Input,
R: Rand,
{
phantom: PhantomData<(C, I, R, S)>,
}
#[derive(Debug, Clone)]
pub struct RandCorpusScheduler;
impl<C, I, R, S> CorpusScheduler<I, S> for RandCorpusScheduler<C, I, R, S>
impl<I, S> CorpusScheduler<I, S> for RandCorpusScheduler
where
S: HasCorpus<C, I> + HasRand<R>,
C: Corpus<I>,
S: HasCorpus<I> + HasRand,
I: Input,
R: Rand,
{
/// Gets the next entry at random
fn next(&self, state: &mut S) -> Result<usize, Error> {
@ -137,29 +128,15 @@ where
}
}
impl<C, I, R, S> RandCorpusScheduler<C, I, R, S>
where
S: HasCorpus<C, I> + HasRand<R>,
C: Corpus<I>,
I: Input,
R: Rand,
{
impl RandCorpusScheduler {
/// Create a new [`RandCorpusScheduler`] that just schedules randomly.
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
impl<C, I, R, S> Default for RandCorpusScheduler<C, I, R, S>
where
S: HasCorpus<C, I> + HasRand<R>,
C: Corpus<I>,
I: Input,
R: Rand,
{
impl Default for RandCorpusScheduler {
fn default() -> Self {
Self::new()
}
@ -167,4 +144,4 @@ where
/// A [`StdCorpusScheduler`] uses the default scheduler in `LibAFL` to schedule [`Testcase`]s
/// The current `Std` is a [`RandCorpusScheduler`], although this may change in the future, if another [`CorpusScheduler`] delivers better results.
pub type StdCorpusScheduler<C, I, R, S> = RandCorpusScheduler<C, I, R, S>;
pub type StdCorpusScheduler = RandCorpusScheduler;

View File

@ -30,7 +30,7 @@ pub enum OnDiskMetadataFormat {
/// A corpus able to store testcases to disk, and load them from disk, when they are being used.
#[cfg(feature = "std")]
#[derive(Serialize)]
#[derive(Debug, Serialize)]
pub struct OnDiskMetadata<'a> {
metadata: &'a SerdeAnyMap,
exec_time: &'a Option<Duration>,

View File

@ -1,7 +1,6 @@
//! The queue corpus scheduler for power schedules.
use alloc::string::{String, ToString};
use core::marker::PhantomData;
use crate::{
corpus::{Corpus, CorpusScheduler, PowerScheduleTestcaseMetaData},
@ -11,30 +10,19 @@ use crate::{
Error,
};
pub struct PowerQueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I> + HasMetadata,
C: Corpus<I>,
I: Input,
{
phantom: PhantomData<(C, I, S)>,
}
/// A corpus scheduler using power schedules
#[derive(Clone, Debug)]
pub struct PowerQueueCorpusScheduler;
impl<C, I, S> Default for PowerQueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I> + HasMetadata,
C: Corpus<I>,
I: Input,
{
impl Default for PowerQueueCorpusScheduler {
fn default() -> Self {
Self::new()
}
}
impl<C, I, S> CorpusScheduler<I, S> for PowerQueueCorpusScheduler<C, I, S>
impl<I, S> CorpusScheduler<I, S> for PowerQueueCorpusScheduler
where
S: HasCorpus<C, I> + HasMetadata,
C: Corpus<I>,
S: HasCorpus<I> + HasMetadata,
I: Input,
{
/// Add an entry to the corpus and return its index
@ -90,16 +78,10 @@ where
}
}
impl<C, I, S> PowerQueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I> + HasMetadata,
C: Corpus<I>,
I: Input,
{
impl PowerQueueCorpusScheduler {
/// Create a new [`PowerQueueCorpusScheduler`]
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}

View File

@ -1,7 +1,6 @@
//! The queue corpus scheduler implements an AFL-like queue mechanism
use alloc::borrow::ToOwned;
use core::marker::PhantomData;
use crate::{
corpus::{Corpus, CorpusScheduler},
@ -11,19 +10,12 @@ use crate::{
};
/// Walk the corpus in a queue-like fashion
pub struct QueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I>,
C: Corpus<I>,
I: Input,
{
phantom: PhantomData<(C, I, S)>,
}
#[derive(Debug, Clone)]
pub struct QueueCorpusScheduler;
impl<C, I, S> CorpusScheduler<I, S> for QueueCorpusScheduler<C, I, S>
impl<I, S> CorpusScheduler<I, S> for QueueCorpusScheduler
where
S: HasCorpus<C, I>,
C: Corpus<I>,
S: HasCorpus<I>,
I: Input,
{
/// Gets the next entry in the queue
@ -47,27 +39,15 @@ where
}
}
impl<C, I, S> QueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I>,
C: Corpus<I>,
I: Input,
{
impl QueueCorpusScheduler {
/// Creates a new `QueueCorpusScheduler`
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
impl<C, I, S> Default for QueueCorpusScheduler<C, I, S>
where
S: HasCorpus<C, I>,
C: Corpus<I>,
I: Input,
{
impl Default for QueueCorpusScheduler {
fn default() -> Self {
Self::new()
}
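Across the scheduler hunks above, the same refactoring pattern repeats: the type parameters and `where` bounds move from the struct to the impl, so each scheduler becomes a plain unit struct that no longer needs a `PhantomData` field. A minimal self-contained sketch of that pattern (the trait and types here are illustrative stand-ins, not LibAFL's real `HasCorpus`/`QueueCorpusScheduler`):

```rust
// Stand-in for LibAFL's `HasCorpus` bound: the state knows its corpus size.
trait HasCorpus {
    fn corpus_len(&self) -> usize;
}

struct State {
    entries: usize,
}

impl HasCorpus for State {
    fn corpus_len(&self) -> usize {
        self.entries
    }
}

// Before: `struct QueueScheduler<S: HasCorpus> { phantom: PhantomData<S> }`.
// After: a plain unit struct; the state type is only a parameter of the impl.
#[derive(Debug, Clone, Default)]
struct QueueScheduler;

impl QueueScheduler {
    /// Walk the corpus in a queue-like fashion, wrapping around at the end.
    fn next<S: HasCorpus>(&self, state: &S, pos: usize) -> usize {
        (pos + 1) % state.corpus_len()
    }
}

fn main() {
    let state = State { entries: 3 };
    let sched = QueueScheduler;
    assert_eq!(sched.next(&state, 2), 0);
}
```

With the bounds on the impl, the unit struct can derive `Debug`, `Clone`, and `Default` trivially, and callers no longer have to spell out the state type when constructing a scheduler.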

View File

@ -133,6 +133,7 @@ where
&mut self.exec_time
}
/// Sets the execution time of the current testcase
#[inline]
pub fn set_exec_time(&mut self, time: Duration) {
self.exec_time = Some(time);
@ -260,6 +261,7 @@ pub struct PowerScheduleTestcaseMetaData {
}
impl PowerScheduleTestcaseMetaData {
/// Create new [`struct@PowerScheduleTestcaseMetaData`]
#[must_use]
pub fn new(depth: u64) -> Self {
Self {
@ -271,47 +273,57 @@ impl PowerScheduleTestcaseMetaData {
}
}
/// Get the bitmap size
#[must_use]
pub fn bitmap_size(&self) -> u64 {
self.bitmap_size
}
/// Set the bitmap size
pub fn set_bitmap_size(&mut self, val: u64) {
self.bitmap_size = val;
}
/// Get the fuzz level
#[must_use]
pub fn fuzz_level(&self) -> u64 {
self.fuzz_level
}
/// Set the fuzz level
pub fn set_fuzz_level(&mut self, val: u64) {
self.fuzz_level = val;
}
/// Get the handicap
#[must_use]
pub fn handicap(&self) -> u64 {
self.handicap
}
/// Set the handicap
pub fn set_handicap(&mut self, val: u64) {
self.handicap = val;
}
/// Get the depth
#[must_use]
pub fn depth(&self) -> u64 {
self.depth
}
/// Set the depth
pub fn set_depth(&mut self, val: u64) {
self.depth = val;
}
/// Get the `n_fuzz_entry`
#[must_use]
pub fn n_fuzz_entry(&self) -> usize {
self.n_fuzz_entry
}
/// Set the `n_fuzz_entry`
pub fn set_n_fuzz_entry(&mut self, val: usize) {
self.n_fuzz_entry = val;
}

View File

@ -1,32 +1,24 @@
//! LLMP-backed event manager for scalable multi-processed fuzzing
use alloc::string::ToString;
use core::{marker::PhantomData, time::Duration};
#[cfg(feature = "std")]
use core::sync::atomic::{compiler_fence, Ordering};
#[cfg(feature = "std")]
use core_affinity::CoreId;
#[cfg(feature = "std")]
use serde::{de::DeserializeOwned, Serialize};
#[cfg(feature = "std")]
use std::net::{SocketAddr, ToSocketAddrs};
#[cfg(feature = "std")]
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use crate::bolts::os::startable_self;
#[cfg(all(feature = "std", feature = "fork", unix))]
use crate::bolts::os::{fork, ForkResult};
#[cfg(feature = "llmp_compression")]
use crate::bolts::{
llmp::{LlmpClient, LlmpConnection},
shmem::StdShMemProvider,
staterestore::StateRestorer,
compress::GzipCompressor,
llmp::{LLMP_FLAG_COMPRESSED, LLMP_FLAG_INITIALIZED},
};
#[cfg(feature = "std")]
use crate::bolts::{llmp::LlmpConnection, shmem::StdShMemProvider, staterestore::StateRestorer};
use crate::{
bolts::{
llmp::{self, Flags, LlmpClientDescription, Tag},
llmp::{self, Flags, LlmpClient, LlmpClientDescription, Tag},
shmem::ShMemProvider,
},
events::{
BrokerEventResult, Event, EventConfig, EventFirer, EventManager, EventManagerId,
EventProcessor, EventRestarter, HasEventManagerId,
EventProcessor, EventRestarter, HasEventManagerId, ProgressReporter,
},
executors::{Executor, HasObservers},
fuzzer::{EvaluatorObservers, ExecutionProcessor},
@ -35,38 +27,35 @@ use crate::{
observers::ObserversTuple,
Error,
};
#[cfg(feature = "llmp_compression")]
use crate::bolts::{
compress::GzipCompressor,
llmp::{LLMP_FLAG_COMPRESSED, LLMP_FLAG_INITIALIZED},
};
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use crate::bolts::os::startable_self;
#[cfg(all(feature = "std", feature = "fork", unix))]
use crate::bolts::os::{fork, ForkResult};
use alloc::string::ToString;
#[cfg(feature = "std")]
use core::sync::atomic::{compiler_fence, Ordering};
use core::{marker::PhantomData, time::Duration};
#[cfg(feature = "std")]
use core_affinity::CoreId;
use serde::de::DeserializeOwned;
#[cfg(feature = "std")]
use serde::Serialize;
#[cfg(feature = "std")]
use std::net::{SocketAddr, ToSocketAddrs};
#[cfg(feature = "std")]
use typed_builder::TypedBuilder;
use super::ProgressReporter;
/// Forward this to the client
const _LLMP_TAG_EVENT_TO_CLIENT: llmp::Tag = 0x2C11E471;
const _LLMP_TAG_EVENT_TO_CLIENT: Tag = 0x2C11E471;
/// Only handle this in the broker
const _LLMP_TAG_EVENT_TO_BROKER: llmp::Tag = 0x2B80438;
const _LLMP_TAG_EVENT_TO_BROKER: Tag = 0x2B80438;
/// Handle in both
///
const LLMP_TAG_EVENT_TO_BOTH: llmp::Tag = 0x2B0741;
const _LLMP_TAG_RESTART: llmp::Tag = 0x8357A87;
const _LLMP_TAG_NO_RESTART: llmp::Tag = 0x57A7EE71;
const LLMP_TAG_EVENT_TO_BOTH: Tag = 0x2B0741;
const _LLMP_TAG_RESTART: Tag = 0x8357A87;
const _LLMP_TAG_NO_RESTART: Tag = 0x57A7EE71;
/// The minimum buffer size at which to compress LLMP IPC messages.
#[cfg(feature = "llmp_compression")]
const COMPRESS_THRESHOLD: usize = 1024;
/// An LLMP-backed event manager for scalable multi-processed fuzzing
#[derive(Debug)]
pub struct LlmpEventBroker<I, MT, SP>
where
@ -112,6 +101,7 @@ where
})
}
/// Connect to an llmp broker on the given address
#[cfg(feature = "std")]
pub fn connect_b2b<A>(&mut self, addr: A) -> Result<(), Error>
where
@ -180,15 +170,11 @@ where
Event::UpdateExecStats {
time,
executions,
stability,
phantom: _,
} => {
// TODO: The monitor buffer should be added on client add.
let client = monitor.client_stats_mut_for(client_id);
client.update_executions(*executions as u64, *time);
if let Some(stability) = stability {
client.update_stability(*stability);
}
monitor.display(event.name().to_string(), client_id);
Ok(BrokerEventResult::Handled)
}
@ -206,7 +192,6 @@ where
Event::UpdatePerfMonitor {
time,
executions,
stability,
introspection_monitor,
phantom: _,
} => {
@ -218,10 +203,6 @@ where
// Update the normal monitor for this client
client.update_executions(*executions as u64, *time);
if let Some(stability) = stability {
client.update_stability(*stability);
}
// Update the performance monitor for this client
client.update_introspection_monitor((**introspection_monitor).clone());
@ -262,7 +243,7 @@ where
SP: ShMemProvider + 'static,
//CE: CustomEvent<I>,
{
llmp: llmp::LlmpClient<SP>,
llmp: LlmpClient<SP>,
#[cfg(feature = "llmp_compression")]
compressor: GzipCompressor,
configuration: EventConfig,
@ -288,7 +269,7 @@ where
SP: ShMemProvider + 'static,
{
/// Create a manager from a raw llmp client
pub fn new(llmp: llmp::LlmpClient<SP>, configuration: EventConfig) -> Result<Self, Error> {
pub fn new(llmp: LlmpClient<SP>, configuration: EventConfig) -> Result<Self, Error> {
Ok(Self {
llmp,
#[cfg(feature = "llmp_compression")]
@ -369,7 +350,7 @@ where
event: Event<I>,
) -> Result<(), Error>
where
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
E: Executor<Self, I, S, Z> + HasObservers<I, OT, S>,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S>,
{
@ -470,7 +451,7 @@ where
SP: ShMemProvider,
E: Executor<Self, I, S, Z> + HasObservers<I, OT, S>,
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S>, //CE: CustomEvent<I>,
{
fn process(&mut self, fuzzer: &mut Z, state: &mut S, executor: &mut E) -> Result<usize, Error> {
@ -512,7 +493,7 @@ impl<E, I, OT, S, SP, Z> EventManager<E, I, S, Z> for LlmpEventManager<I, OT, S,
where
E: Executor<Self, I, S, Z> + HasObservers<I, OT, S>,
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S>, //CE: CustomEvent<I>,
{
@ -521,7 +502,7 @@ where
impl<I, OT, S, SP> ProgressReporter<I> for LlmpEventManager<I, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider,
{
}
@ -529,7 +510,7 @@ where
impl<I, OT, S, SP> HasEventManagerId for LlmpEventManager<I, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider,
{
/// Gets the id assigned to this staterestorer.
@ -615,7 +596,7 @@ where
E: Executor<LlmpEventManager<I, OT, S, SP>, I, S, Z> + HasObservers<I, OT, S>,
I: Input,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S>,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider + 'static,
//CE: CustomEvent<I>,
{
@ -631,7 +612,7 @@ where
I: Input,
S: Serialize,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S>,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider + 'static,
//CE: CustomEvent<I>,
{
@ -641,7 +622,7 @@ where
impl<I, OT, S, SP> HasEventManagerId for LlmpRestartingEventManager<I, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
S: Serialize,
SP: ShMemProvider + 'static,
{
@ -660,7 +641,7 @@ const _ENV_FUZZER_BROKER_CLIENT_INITIAL: &str = "_AFL_ENV_FUZZER_BROKER_CLIENT";
impl<I, OT, S, SP> LlmpRestartingEventManager<I, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
SP: ShMemProvider + 'static,
//CE: CustomEvent<I>,
{
@ -690,7 +671,10 @@ pub enum ManagerKind {
/// Any kind will do
Any,
/// A client, getting messages from a local broker.
Client { cpu_core: Option<CoreId> },
Client {
/// The cpu core id of this client
cpu_core: Option<CoreId>,
},
/// A [`llmp::LlmpBroker`], forwarding the packets of local clients.
Broker,
}
@ -715,7 +699,7 @@ where
I: Input,
S: DeserializeOwned,
MT: Monitor + Clone,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
S: DeserializeOwned,
{
RestartingMgr::builder()
@ -736,7 +720,7 @@ where
pub struct RestartingMgr<I, MT, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
S: DeserializeOwned,
SP: ShMemProvider + 'static,
MT: Monitor,
@ -768,7 +752,7 @@ where
impl<I, MT, OT, S, SP> RestartingMgr<I, MT, OT, S, SP>
where
I: Input,
OT: ObserversTuple<I, S> + serde::de::DeserializeOwned,
OT: ObserversTuple<I, S> + DeserializeOwned,
S: DeserializeOwned,
SP: ShMemProvider,
MT: Monitor + Clone,

View File

@ -6,7 +6,10 @@ pub mod llmp;
pub use llmp::*;
use ahash::AHasher;
use alloc::{string::String, vec::Vec};
use alloc::{
string::{String, ToString},
vec::Vec,
};
use core::{fmt, hash::Hasher, marker::PhantomData, time::Duration};
use serde::{Deserialize, Serialize};
@ -72,17 +75,23 @@ pub enum BrokerEventResult {
/// Distinguish a fuzzer by its config
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq)]
pub enum EventConfig {
/// Always assume unique setups for fuzzer configs
AlwaysUnique,
/// Create a fuzzer config from a name hash
FromName {
/// The name hash
name_hash: u64,
},
/// Create a fuzzer config from a build-time [`Uuid`]
#[cfg(feature = "std")]
BuildID {
/// The build-time [`Uuid`]
id: Uuid,
},
}
impl EventConfig {
/// Create a new [`EventConfig`] from a name hash
#[must_use]
pub fn from_name(name: &str) -> Self {
let mut hasher = AHasher::new_with_keys(0, 0);
@ -92,6 +101,7 @@ impl EventConfig {
}
}
/// Create a new [`EventConfig`] from a build-time [`Uuid`]
#[cfg(feature = "std")]
#[must_use]
pub fn from_build_id() -> Self {
@ -100,6 +110,7 @@ impl EventConfig {
}
}
/// Match if the current [`EventConfig`] matches another given config
#[must_use]
pub fn match_with(&self, other: &EventConfig) -> bool {
match self {
@ -179,8 +190,6 @@ where
UpdateExecStats {
/// The time of generation of the [`Event`]
time: Duration,
/// The stability of this fuzzer node, if known
stability: Option<f32>,
/// The executions of this client
executions: usize,
/// [`PhantomData`]
@ -202,11 +211,10 @@ where
time: Duration,
/// The executions of this client
executions: usize,
/// The stability of this fuzzer node, if known
stability: Option<f32>,
/// Current performance statistics
introspection_monitor: Box<ClientPerfMonitor>,
/// phantom data
phantom: PhantomData<I>,
},
/// A new objective was found
@ -247,7 +255,6 @@ where
} => "Testcase",
Event::UpdateExecStats {
time: _,
stability: _,
executions: _,
phantom: _,
}
@ -260,7 +267,6 @@ where
Event::UpdatePerfMonitor {
time: _,
executions: _,
stability: _,
introspection_monitor: _,
phantom: _,
} => "PerfMonitor",
@ -313,7 +319,7 @@ where
/// Serialize all observers for this type and manager
fn serialize_observers<OT, S>(&mut self, observers: &OT) -> Result<Vec<u8>, Error>
where
OT: ObserversTuple<I, S> + serde::Serialize,
OT: ObserversTuple<I, S> + Serialize,
{
Ok(postcard::to_allocvec(observers)?)
}
@ -342,7 +348,6 @@ where
S: HasExecutions + HasClientPerfMonitor,
{
let executions = *state.executions();
let stability = *state.stability();
let cur = current_time();
// default to 0 here to avoid crashes on clock skew
if cur.checked_sub(last_report_time).unwrap_or_default() > monitor_timeout {
@ -352,12 +357,23 @@ where
state,
Event::UpdateExecStats {
executions,
stability,
time: cur,
phantom: PhantomData,
},
)?;
if let Some(x) = state.stability() {
let stability = f64::from(*x);
self.fire(
state,
Event::UpdateUserStats {
name: "stability".to_string(),
value: UserStats::Float(stability),
phantom: PhantomData,
},
)?;
}
// If performance monitor are requested, fire the `UpdatePerfMonitor` event
#[cfg(feature = "introspection")]
{
@ -372,7 +388,6 @@ where
Event::UpdatePerfMonitor {
executions,
time: cur,
stability,
introspection_monitor: Box::new(state.introspection_monitor().clone()),
phantom: PhantomData,
},
@ -387,6 +402,7 @@ where
}
}
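The hunk above drops the dedicated `stability: Option<f32>` field from the stats events: the stability metric is now fired as a separate, generic user-stats event, and only when the value is actually known. A self-contained sketch of that reporting pattern, with simplified stand-in types (not LibAFL's actual `Event`/`UserStats` definitions):

```rust
#[derive(Debug, PartialEq)]
enum UserStats {
    Float(f64),
}

#[derive(Debug, PartialEq)]
enum Event {
    UpdateExecStats { executions: usize },
    UpdateUserStats { name: String, value: UserStats },
}

/// Always report executions; report stability only when it is known.
fn report_progress(executions: usize, stability: Option<f32>) -> Vec<Event> {
    let mut events = vec![Event::UpdateExecStats { executions }];
    if let Some(x) = stability {
        events.push(Event::UpdateUserStats {
            name: "stability".to_string(),
            value: UserStats::Float(f64::from(x)),
        });
    }
    events
}

fn main() {
    assert_eq!(report_progress(100, None).len(), 1);
    assert_eq!(report_progress(100, Some(0.9)).len(), 2);
}
```

The upside of this shape is that the common `UpdateExecStats` event stays small, and any number of optional metrics can be added later without touching the event enum again.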
/// Restartable trait
pub trait EventRestarter<S> {
/// For restarting event managers, implement a way to forward state to their next peers.
#[inline]
@ -413,7 +429,9 @@ pub trait EventProcessor<E, I, S, Z> {
Ok(postcard::from_bytes(observers_buf)?)
}
}
/// The id of this [`EventManager`].
/// For multi-processed [`EventManager`]s,
/// each connected client should have a unique id.
pub trait HasEventManagerId {
/// The id of this manager. For multi-processed [`EventManager`]s,
/// each client should have a unique id.

View File

@ -11,10 +11,7 @@ use crate::{
};
use alloc::{string::ToString, vec::Vec};
#[cfg(feature = "std")]
use core::{
marker::PhantomData,
sync::atomic::{compiler_fence, Ordering},
};
use core::sync::atomic::{compiler_fence, Ordering};
#[cfg(feature = "std")]
use serde::{de::DeserializeOwned, Serialize};
@ -153,16 +150,12 @@ where
Event::UpdateExecStats {
time,
executions,
stability,
phantom: _,
} => {
// TODO: The monitor buffer should be added on client add.
let client = monitor.client_stats_mut_for(0);
client.update_executions(*executions as u64, *time);
if let Some(stability) = stability {
client.update_stability(*stability);
}
monitor.display(event.name().to_string(), 0);
Ok(BrokerEventResult::Handled)
@ -182,7 +175,6 @@ where
Event::UpdatePerfMonitor {
time,
executions,
stability,
introspection_monitor,
phantom: _,
} => {
@ -190,9 +182,6 @@ where
let client = &mut monitor.client_stats_mut()[0];
client.update_executions(*executions as u64, *time);
client.update_introspection_monitor((**introspection_monitor).clone());
if let Some(stability) = stability {
client.update_stability(*stability);
}
monitor.display(event.name().to_string(), 0);
Ok(BrokerEventResult::Handled)
}
@ -231,11 +220,10 @@ where
/// `restarter` will start a new process each time the child crashes or times out.
#[cfg(feature = "std")]
#[allow(clippy::default_trait_access)]
pub struct SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
#[derive(Debug, Clone)]
pub struct SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
MT: Monitor, //CE: CustomEvent<I, OT>,
{
@ -243,17 +231,12 @@ where
simple_event_mgr: SimpleEventManager<I, MT>,
/// [`StateRestorer`] for restarts
staterestorer: StateRestorer<SP>,
/// Phantom data
_phantom: PhantomData<&'a (C, I, S, SC)>,
}
#[cfg(feature = "std")]
impl<'a, C, I, MT, S, SC, SP> EventFirer<I>
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<I, MT, SP> EventFirer<I> for SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
MT: Monitor, //CE: CustomEvent<I, OT>,
{
@ -263,10 +246,8 @@ where
}
#[cfg(feature = "std")]
impl<'a, C, I, MT, S, SC, SP> EventRestarter<S>
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<I, MT, S, SP> EventRestarter<S> for SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
@ -281,10 +262,8 @@ where
}
#[cfg(feature = "std")]
impl<'a, C, E, I, S, SC, SP, MT, Z> EventProcessor<E, I, S, Z>
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<E, I, S, SP, MT, Z> EventProcessor<E, I, S, Z> for SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
@ -296,10 +275,8 @@ where
}
#[cfg(feature = "std")]
impl<'a, C, E, I, S, SC, SP, MT, Z> EventManager<E, I, S, Z>
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<E, I, S, SP, MT, Z> EventManager<E, I, S, Z> for SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
@ -308,24 +285,18 @@ where
}
#[cfg(feature = "std")]
impl<'a, C, I, MT, S, SC, SP> ProgressReporter<I>
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<I, MT, SP> ProgressReporter<I> for SimpleRestartingEventManager<I, MT, SP>
where
I: Input,
C: Corpus<I>,
S: Serialize,
SP: ShMemProvider,
MT: Monitor, //CE: CustomEvent<I, OT>,
{
}
#[cfg(feature = "std")]
impl<'a, C, I, MT, S, SC, SP> HasEventManagerId
for SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<I, MT, SP> HasEventManagerId for SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: Serialize,
SP: ShMemProvider,
MT: Monitor,
{
@ -336,12 +307,9 @@ where
#[cfg(feature = "std")]
#[allow(clippy::type_complexity, clippy::too_many_lines)]
impl<'a, C, I, MT, S, SC, SP> SimpleRestartingEventManager<'a, C, I, MT, S, SC, SP>
impl<'a, I, MT, SP> SimpleRestartingEventManager<I, MT, SP>
where
C: Corpus<I>,
I: Input,
S: DeserializeOwned + Serialize + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
SP: ShMemProvider,
MT: Monitor, //TODO CE: CustomEvent,
{
@ -350,7 +318,6 @@ where
Self {
staterestorer,
simple_event_mgr: SimpleEventManager::new(monitor),
_phantom: PhantomData {},
}
}
@ -358,7 +325,10 @@ where
/// This [`EventManager`] is simple and single-threaded,
/// but can still use shared maps to recover from crashes and timeouts.
#[allow(clippy::similar_names)]
pub fn launch(mut monitor: MT, shmem_provider: &mut SP) -> Result<(Option<S>, Self), Error> {
pub fn launch<S>(mut monitor: MT, shmem_provider: &mut SP) -> Result<(Option<S>, Self), Error>
where
S: DeserializeOwned + Serialize + HasCorpus<I> + HasSolutions<I>,
{
// We start ourself as child process to actually fuzz
let mut staterestorer = if std::env::var(_ENV_FUZZER_SENDER).is_err() {
// First, create a place to store state in, for restarts.

View File

@ -6,14 +6,16 @@ use crate::{
observers::ObserversTuple,
Error,
};
use core::fmt::Debug;
/// A [`CombinedExecutor`] wraps a primary executor, forwarding its methods, and a secondary one
pub struct CombinedExecutor<A, B> {
#[derive(Debug)]
pub struct CombinedExecutor<A: Debug, B: Debug> {
primary: A,
secondary: B,
}
impl<A, B> CombinedExecutor<A, B> {
impl<A: Debug, B: Debug> CombinedExecutor<A, B> {
/// Create a new `CombinedExecutor`, wrapping the given `executor`s.
pub fn new<EM, I, S, Z>(primary: A, secondary: B) -> Self
where
@ -55,6 +57,7 @@ where
impl<A, B, I, OT, S> HasObservers<I, OT, S> for CombinedExecutor<A, B>
where
A: HasObservers<I, OT, S>,
B: Debug,
OT: ObserversTuple<I, S>,
{
#[inline]

View File

@ -1,4 +1,8 @@
use core::marker::PhantomData;
//! The command executor executes a sub program for each run
use core::{
fmt::{self, Debug, Formatter},
marker::PhantomData,
};
#[cfg(feature = "std")]
use std::process::Child;
@ -14,13 +18,24 @@ use std::time::Duration;
/// A `CommandExecutor` is a wrapper around [`std::process::Command`] to execute a target as a child process.
/// Construct a `CommandExecutor` by implementing [`CommandConfigurator`] for a type of your choice and calling [`CommandConfigurator::into_executor`] on it.
pub struct CommandExecutor<EM, I, S, Z, T, OT> {
pub struct CommandExecutor<EM, I, OT: Debug, S, T: Debug, Z> {
inner: T,
/// [`crate::observers::Observer`]s for this executor
observers: OT,
phantom: PhantomData<(EM, I, S, Z)>,
}
impl<EM, I, S, Z, T, OT> CommandExecutor<EM, I, S, Z, T, OT> {
impl<EM, I, OT: Debug, S, T: Debug, Z> Debug for CommandExecutor<EM, I, OT, S, T, Z> {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("CommandExecutor")
.field("inner", &self.inner)
.field("observers", &self.observers)
.finish()
}
}
impl<EM, I, OT: Debug, S, T: Debug, Z> CommandExecutor<EM, I, OT, S, T, Z> {
/// Accesses the inner value
pub fn inner(&mut self) -> &mut T {
&mut self.inner
}
@ -28,7 +43,7 @@ impl<EM, I, S, Z, T, OT> CommandExecutor<EM, I, S, Z, T, OT> {
// this only works on unix because of the reliance on checking the process signal for detecting OOM
#[cfg(all(feature = "std", unix))]
impl<EM, I, S, Z, T, OT> Executor<EM, I, S, Z> for CommandExecutor<EM, I, S, Z, T, OT>
impl<EM, I, OT: Debug, S, T: Debug, Z> Executor<EM, I, S, Z> for CommandExecutor<EM, I, OT, S, T, Z>
where
I: Input,
T: CommandConfigurator<EM, I, S, Z>,
@ -68,7 +83,8 @@ where
}
#[cfg(all(feature = "std", unix))]
impl<EM, I, S, Z, T, OT> HasObservers<I, OT, S> for CommandExecutor<EM, I, S, Z, T, OT>
impl<EM, I, OT: Debug, S, T: Debug, Z> HasObservers<I, OT, S>
for CommandExecutor<EM, I, OT, S, T, Z>
where
I: Input,
OT: ObserversTuple<I, S>,
@ -90,6 +106,7 @@ where
/// ```
/// use std::{io::Write, process::{Stdio, Command, Child}};
/// use libafl::{Error, inputs::{Input, HasTargetBytes}, executors::{Executor, command::CommandConfigurator}};
/// #[derive(Debug)]
/// struct MyExecutor;
///
/// impl<EM, I: Input + HasTargetBytes, S, Z> CommandConfigurator<EM, I, S, Z> for MyExecutor {
@ -118,7 +135,8 @@ where
/// }
/// ```
#[cfg(all(feature = "std", unix))]
pub trait CommandConfigurator<EM, I: Input, S, Z>: Sized {
pub trait CommandConfigurator<EM, I: Input, S, Z>: Sized + Debug {
/// Spawns a new process with the given configuration.
fn spawn_child(
&mut self,
fuzzer: &mut Z,
@ -127,7 +145,8 @@ pub trait CommandConfigurator<EM, I: Input, S, Z>: Sized {
input: &I,
) -> Result<Child, Error>;
fn into_executor<OT>(self, observers: OT) -> CommandExecutor<EM, I, S, Z, Self, OT>
/// Create an `Executor` from this `CommandConfigurator`.
fn into_executor<OT: Debug>(self, observers: OT) -> CommandExecutor<EM, I, OT, S, Self, Z>
where
OT: ObserversTuple<I, S>,
{

View File

@ -1,6 +1,10 @@
//! Expose an `Executor` based on a `Forkserver` in order to execute AFL/AFL++ binaries
use core::{marker::PhantomData, time::Duration};
use core::{
fmt::{self, Debug, Formatter},
marker::PhantomData,
time::Duration,
};
use std::{
fs::{File, OpenOptions},
io::{self, prelude::*, ErrorKind, SeekFrom},
@ -33,17 +37,21 @@ use nix::{
const FORKSRV_FD: i32 = 198;
#[allow(clippy::cast_possible_wrap)]
const FS_OPT_ENABLED: i32 = 0x80000001u32 as i32;
const FS_OPT_ENABLED: i32 = 0x80000001_u32 as i32;
#[allow(clippy::cast_possible_wrap)]
const FS_OPT_SHDMEM_FUZZ: i32 = 0x01000000u32 as i32;
const FS_OPT_SHDMEM_FUZZ: i32 = 0x01000000_u32 as i32;
const SHMEM_FUZZ_HDR_SIZE: usize = 4;
const MAX_FILE: usize = 1024 * 1024;
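The `FS_OPT_*` constants above carry `#[allow(clippy::cast_possible_wrap)]` because the AFL flag values have the high bit set: the `u32` → `i32` `as` cast wraps into a negative number while preserving the exact bit pattern. A standalone sketch of what that conversion produces:

```rust
fn main() {
    // AFL's FS_OPT_ENABLED flag (0x80000001) does not fit in a positive
    // i32; `as` performs a bitwise, wrapping conversion.
    const FS_OPT_ENABLED: i32 = 0x8000_0001_u32 as i32;
    assert_eq!(FS_OPT_ENABLED, -2_147_483_647);
    // Casting back recovers the bit pattern the forkserver protocol expects.
    assert_eq!(FS_OPT_ENABLED as u32, 0x8000_0001);
}
```

This is why the lint is allowed rather than fixed: the wrap is intentional, since the forkserver handshake compares raw 32-bit patterns, not signed magnitudes.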
// Configure the target. setlimit, setsid, pipe_stdin, I borrowed the code from Angora fuzzer
/// Configure the target, `limit`, `setsid`, `pipe_stdin`, the code was borrowed from the [`Angora`](https://github.com/AngoraFuzzer/Angora) fuzzer
pub trait ConfigTarget {
/// Sets the sid
fn setsid(&mut self) -> &mut Self;
/// Sets a mem limit
fn setlimit(&mut self, memlimit: u64) -> &mut Self;
/// Sets the stdin
fn setstdin(&mut self, fd: RawFd, use_stdin: bool) -> &mut Self;
/// Sets the AFL forkserver pipes
fn setpipe(
&mut self,
st_read: RawFd,
@ -113,6 +121,7 @@ impl ConfigTarget for Command {
}
}
#[allow(trivial_numeric_casts)]
fn setlimit(&mut self, memlimit: u64) -> &mut Self {
if memlimit == 0 {
return self;
@ -145,11 +154,16 @@ impl ConfigTarget for Command {
}
}
/// The [`OutFile`] to write input to.
/// The target/forkserver will read from this file.
#[derive(Debug)]
pub struct OutFile {
/// The file
file: File,
}
impl OutFile {
/// Creates a new [`OutFile`]
pub fn new(file_name: &str) -> Result<Self, Error> {
let f = OpenOptions::new()
.read(true)
@ -159,11 +173,13 @@ impl OutFile {
Ok(Self { file: f })
}
/// Gets the file as raw file descriptor
#[must_use]
pub fn as_raw_fd(&self) -> RawFd {
self.file.as_raw_fd()
}
/// Writes the given buffer to the file
pub fn write_buf(&mut self, buf: &[u8]) {
self.rewind();
self.file.write_all(buf).unwrap();
@ -173,6 +189,7 @@ impl OutFile {
self.rewind();
}
/// Rewinds the file to the beginning
pub fn rewind(&mut self) {
self.file.seek(SeekFrom::Start(0)).unwrap();
}
@ -180,6 +197,7 @@ impl OutFile {
/// The [`Forkserver`] is the communication channel with a child process that forks on request of the fuzzer.
/// The communication happens via pipe.
#[derive(Debug)]
pub struct Forkserver {
st_pipe: Pipe,
ctl_pipe: Pipe,
@ -189,6 +207,7 @@ pub struct Forkserver {
}
impl Forkserver {
/// Create a new [`Forkserver`]
pub fn new(
target: String,
args: Vec<String>,
@ -245,35 +264,42 @@ impl Forkserver {
})
}
/// If the last run timed out
#[must_use]
pub fn last_run_timed_out(&self) -> i32 {
self.last_run_timed_out
}
/// Sets if the last run timed out
pub fn set_last_run_timed_out(&mut self, last_run_timed_out: i32) {
self.last_run_timed_out = last_run_timed_out;
}
/// The status
#[must_use]
pub fn status(&self) -> i32 {
self.status
}
/// Sets the status
pub fn set_status(&mut self, status: i32) {
self.status = status;
}
/// The child pid
#[must_use]
pub fn child_pid(&self) -> Pid {
self.child_pid
}
/// Set the child pid
pub fn set_child_pid(&mut self, child_pid: Pid) {
self.child_pid = child_pid;
}
/// Read from the st pipe
pub fn read_st(&mut self) -> Result<(usize, i32), Error> {
let mut buf: [u8; 4] = [0u8; 4];
let mut buf: [u8; 4] = [0_u8; 4];
let rlen = self.st_pipe.read(&mut buf)?;
let val: i32 = i32::from_ne_bytes(buf);
@ -281,14 +307,16 @@ impl Forkserver {
Ok((rlen, val))
}
/// Write to the ctl pipe
pub fn write_ctl(&mut self, val: i32) -> Result<usize, Error> {
let slen = self.ctl_pipe.write(&val.to_ne_bytes())?;
Ok(slen)
}
/// Read a message from the child process.
pub fn read_st_timed(&mut self, timeout: &TimeSpec) -> Result<Option<i32>, Error> {
let mut buf: [u8; 4] = [0u8; 4];
let mut buf: [u8; 4] = [0_u8; 4];
let st_read = match self.st_pipe.read_end() {
Some(fd) => fd,
None => {
@ -324,27 +352,36 @@ impl Forkserver {
}
}
/// A struct that has a forkserver
pub trait HasForkserver {
/// The forkserver
fn forkserver(&self) -> &Forkserver;
/// The forkserver, mutable
fn forkserver_mut(&mut self) -> &mut Forkserver;
/// The file the forkserver is reading from
fn out_file(&self) -> &OutFile;
/// The file the forkserver is reading from, mutable
fn out_file_mut(&mut self) -> &mut OutFile;
/// The map of the fuzzer
fn map(&self) -> &Option<StdShMem>;
/// The map of the fuzzer, mutable
fn map_mut(&mut self) -> &mut Option<StdShMem>;
}
/// The timeout forkserver executor that wraps around the standard forkserver executor and sets a timeout before each run.
pub struct TimeoutForkserverExecutor<E> {
#[derive(Debug)]
pub struct TimeoutForkserverExecutor<E: Debug> {
executor: E,
timeout: TimeSpec,
}
impl<E> TimeoutForkserverExecutor<E> {
impl<E: Debug> TimeoutForkserverExecutor<E> {
/// Create a new [`TimeoutForkserverExecutor`]
pub fn new(executor: E, exec_tmout: Duration) -> Result<Self, Error> {
let milli_sec = exec_tmout.as_millis() as i64;
let timeout = TimeSpec::milliseconds(milli_sec);
@ -352,7 +389,7 @@ impl<E> TimeoutForkserverExecutor<E> {
}
}
impl<E, EM, I, S, Z> Executor<EM, I, S, Z> for TimeoutForkserverExecutor<E>
impl<E: Debug, EM, I, S, Z> Executor<EM, I, S, Z> for TimeoutForkserverExecutor<E>
where
I: Input + HasTargetBytes,
E: Executor<EM, I, S, Z> + HasForkserver,
@ -464,11 +501,29 @@ where
phantom: PhantomData<(I, S)>,
}
impl<I, OT, S> Debug for ForkserverExecutor<I, OT, S>
where
I: Input + HasTargetBytes,
OT: ObserversTuple<I, S>,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("ForkserverExecutor")
.field("target", &self.target)
.field("args", &self.args)
.field("out_file", &self.out_file)
.field("forkserver", &self.forkserver)
.field("observers", &self.observers)
.field("map", &self.map)
.finish()
}
}
impl<I, OT, S> ForkserverExecutor<I, OT, S>
where
I: Input + HasTargetBytes,
OT: ObserversTuple<I, S>,
{
/// Creates a new [`ForkserverExecutor`] with the given target, arguments and observers.
pub fn new(
target: String,
arguments: &[String],
@ -478,6 +533,7 @@ where
Self::with_debug(target, arguments, use_shmem_testcase, observers, false)
}
/// Creates a new [`ForkserverExecutor`] with the given target, arguments and observers, with debug mode
pub fn with_debug(
target: String,
arguments: &[String],
@ -557,18 +613,22 @@ where
})
}
/// The `target` binary that's going to run.
pub fn target(&self) -> &String {
&self.target
}
/// The `args` used for the binary.
pub fn args(&self) -> &[String] {
&self.args
}
/// The [`Forkserver`] instance.
pub fn forkserver(&self) -> &Forkserver {
&self.forkserver
}
/// The [`OutFile`] used by this [`Executor`].
pub fn out_file(&self) -> &OutFile {
&self.out_file
}
@ -737,10 +797,7 @@ mod tests {
let bin = "echo";
let args = vec![String::from("@@")];
let mut shmem = StdShMemProvider::new()
.unwrap()
.new_map(MAP_SIZE as usize)
.unwrap();
let mut shmem = StdShMemProvider::new().unwrap().new_map(MAP_SIZE).unwrap();
shmem.write_to_env("__AFL_SHM_ID").unwrap();
let shmem_map = shmem.map_mut();


@ -3,7 +3,12 @@
//!
//! Needs the `fork` feature flag.
use core::{ffi::c_void, marker::PhantomData, ptr};
use core::{
ffi::c_void,
fmt::{self, Debug, Formatter},
marker::PhantomData,
ptr,
};
#[cfg(any(unix, all(windows, feature = "std")))]
use core::{
@ -29,7 +34,6 @@ use crate::bolts::os::windows_exceptions::setup_exception_handler;
use windows::Win32::System::Threading::SetThreadStackGuarantee;
use crate::{
corpus::Corpus,
events::{EventFirer, EventRestarter},
executors::{Executor, ExitKind, HasObservers},
feedbacks::Feedback,
@ -42,7 +46,6 @@ use crate::{
/// The inmem executor simply calls a target function, then returns afterwards.
#[allow(dead_code)]
#[derive(Debug)]
pub struct InProcessExecutor<'a, H, I, OT, S>
where
H: FnMut(&I) -> ExitKind,
@ -58,6 +61,20 @@ where
phantom: PhantomData<(I, S)>,
}
impl<'a, H, I, OT, S> Debug for InProcessExecutor<'a, H, I, OT, S>
where
H: FnMut(&I) -> ExitKind,
I: Input,
OT: ObserversTuple<I, S>,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("InProcessExecutor")
.field("harness_fn", &"<fn>")
.field("observers", &self.observers)
.finish_non_exhaustive()
}
}
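The manual `Debug` impls added throughout this commit all work around the same limitation: the harness is an `FnMut` closure, which cannot derive `Debug`, so the impl prints a `"<fn>"` placeholder for it and closes with `finish_non_exhaustive()`. A self-contained sketch of the pattern, with a hypothetical `MyExecutor` type in place of `InProcessExecutor`:

```rust
use core::fmt::{self, Debug, Formatter};

/// A struct holding a closure field, which blocks `#[derive(Debug)]`.
struct MyExecutor<H>
where
    H: FnMut(&[u8]) -> bool,
{
    harness_fn: H,
    observers: Vec<String>,
}

impl<H> Debug for MyExecutor<H>
where
    H: FnMut(&[u8]) -> bool,
{
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        f.debug_struct("MyExecutor")
            // Print a placeholder instead of the unprintable closure.
            .field("harness_fn", &"<fn>")
            .field("observers", &self.observers)
            // Signals that not all fields are shown.
            .finish_non_exhaustive()
    }
}

fn main() {
    let mut ex = MyExecutor {
        harness_fn: |data: &[u8]| !data.is_empty(),
        observers: vec!["edges".to_string()],
    };
    assert!((ex.harness_fn)(b"seed"));
    println!("{:?}", ex);
}
```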
impl<'a, EM, H, I, OT, S, Z> Executor<EM, I, S, Z> for InProcessExecutor<'a, H, I, OT, S>
where
H: FnMut(&I) -> ExitKind,
@ -109,7 +126,7 @@ where
/// * `harness_fn` - the harness, executing the function
/// * `observers` - the observers observing the target during execution
/// This may return an error on unix, if signal handler setup fails
pub fn new<EM, OC, OF, Z>(
pub fn new<EM, OF, Z>(
harness_fn: &'a mut H,
observers: OT,
_fuzzer: &mut Z,
@ -118,12 +135,11 @@ where
) -> Result<Self, Error>
where
EM: EventFirer<I> + EventRestarter<S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
Z: HasObjective<I, OF, S>,
{
let handlers = InProcessHandlers::new::<Self, EM, I, OC, OF, OT, S, Z>()?;
let handlers = InProcessHandlers::new::<Self, EM, I, OF, OT, S, Z>()?;
#[cfg(windows)]
unsafe {
/*
@ -159,17 +175,20 @@ where
self.harness_fn
}
/// The inprocess handlers
#[inline]
pub fn handlers(&self) -> &InProcessHandlers {
&self.handlers
}
/// The inprocess handlers, mut
#[inline]
pub fn handlers_mut(&mut self) -> &mut InProcessHandlers {
&mut self.handlers
}
}
/// The inmem executor's handlers.
#[derive(Debug)]
pub struct InProcessHandlers {
/// On crash C function pointer
@ -179,32 +198,33 @@ pub struct InProcessHandlers {
}
impl InProcessHandlers {
/// Call before running a target.
pub fn pre_run_target<E, EM, I, S, Z>(
&self,
executor: &E,
fuzzer: &mut Z,
state: &mut S,
mgr: &mut EM,
input: &I,
_executor: &E,
_fuzzer: &mut Z,
_state: &mut S,
_mgr: &mut EM,
_input: &I,
) {
#[cfg(unix)]
unsafe {
let data = &mut GLOBAL_STATE;
write_volatile(
&mut data.current_input_ptr,
input as *const _ as *const c_void,
_input as *const _ as *const c_void,
);
write_volatile(
&mut data.executor_ptr,
executor as *const _ as *const c_void,
_executor as *const _ as *const c_void,
);
data.crash_handler = self.crash_handler;
data.timeout_handler = self.timeout_handler;
// Direct raw pointer access / aliasing is pretty much undefined behavior.
// Since the state and event may have moved in memory, refresh them right before the signal may happen
write_volatile(&mut data.state_ptr, state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, fuzzer as *mut _ as *mut c_void);
write_volatile(&mut data.state_ptr, _state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, _mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, _fuzzer as *mut _ as *mut c_void);
compiler_fence(Ordering::SeqCst);
}
#[cfg(all(windows, feature = "std"))]
@ -212,23 +232,24 @@ impl InProcessHandlers {
let data = &mut GLOBAL_STATE;
write_volatile(
&mut data.current_input_ptr,
input as *const _ as *const c_void,
_input as *const _ as *const c_void,
);
write_volatile(
&mut data.executor_ptr,
executor as *const _ as *const c_void,
_executor as *const _ as *const c_void,
);
data.crash_handler = self.crash_handler;
data.timeout_handler = self.timeout_handler;
// Direct raw pointer access / aliasing is pretty much undefined behavior.
// Since the state and event may have moved in memory, refresh them right before the signal may happen
write_volatile(&mut data.state_ptr, state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, fuzzer as *mut _ as *mut c_void);
write_volatile(&mut data.state_ptr, _state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, _mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, _fuzzer as *mut _ as *mut c_void);
compiler_fence(Ordering::SeqCst);
}
}
/// Call after running a target.
#[allow(clippy::unused_self)]
pub fn post_run_target(&self) {
#[cfg(unix)]
@ -243,15 +264,15 @@ impl InProcessHandlers {
}
}
pub fn new<E, EM, I, OC, OF, OT, S, Z>() -> Result<Self, Error>
/// Create new [`InProcessHandlers`].
pub fn new<E, EM, I, OF, OT, S, Z>() -> Result<Self, Error>
where
I: Input,
E: HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
EM: EventFirer<I> + EventRestarter<S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
Z: HasObjective<I, OF, S>,
{
#[cfg(unix)]
@ -261,18 +282,10 @@ impl InProcessHandlers {
compiler_fence(Ordering::SeqCst);
Ok(Self {
crash_handler: unix_signal_handler::inproc_crash_handler::<E, EM, I, OC, OF, OT, S, Z>
crash_handler: unix_signal_handler::inproc_crash_handler::<E, EM, I, OF, OT, S, Z>
as *const _,
timeout_handler: unix_signal_handler::inproc_timeout_handler::<E, EM, I, OF, OT, S, Z>
as *const _,
timeout_handler: unix_signal_handler::inproc_timeout_handler::<
E,
EM,
I,
OC,
OF,
OT,
S,
Z,
> as *const _,
})
}
#[cfg(all(windows, feature = "std"))]
@ -286,7 +299,6 @@ impl InProcessHandlers {
E,
EM,
I,
OC,
OF,
OT,
S,
@ -296,7 +308,6 @@ impl InProcessHandlers {
E,
EM,
I,
OC,
OF,
OT,
S,
@ -311,6 +322,7 @@ impl InProcessHandlers {
})
}
/// Replace the handlers with `nop` handlers, deactivating the handlers
#[must_use]
pub fn nop() -> Self {
Self {
@ -320,6 +332,9 @@ impl InProcessHandlers {
}
}
/// The global state of the in-process harness.
#[derive(Debug)]
#[allow(missing_docs)]
pub struct InProcessExecutorHandlerData {
pub state_ptr: *mut c_void,
pub event_mgr_ptr: *mut c_void,
@ -367,21 +382,25 @@ pub static mut GLOBAL_STATE: InProcessExecutorHandlerData = InProcessExecutorHan
timeout_input_ptr: ptr::null_mut(),
};
/// Get the inprocess [`crate::state::State`]
#[must_use]
pub fn inprocess_get_state<'a, S>() -> Option<&'a mut S> {
unsafe { (GLOBAL_STATE.state_ptr as *mut S).as_mut() }
}
/// Get the [`crate::events::EventManager`]
#[must_use]
pub fn inprocess_get_event_manager<'a, EM>() -> Option<&'a mut EM> {
unsafe { (GLOBAL_STATE.event_mgr_ptr as *mut EM).as_mut() }
}
/// Gets the inprocess [`crate::fuzzer::Fuzzer`]
#[must_use]
pub fn inprocess_get_fuzzer<'a, F>() -> Option<&'a mut F> {
unsafe { (GLOBAL_STATE.fuzzer_ptr as *mut F).as_mut() }
}
/// Gets the inprocess [`Executor`]
#[must_use]
pub fn inprocess_get_executor<'a, E>() -> Option<&'a mut E> {
unsafe { (GLOBAL_STATE.executor_ptr as *mut E).as_mut() }
@ -461,7 +480,7 @@ mod unix_signal_handler {
}
#[cfg(unix)]
pub unsafe fn inproc_timeout_handler<E, EM, I, OC, OF, OT, S, Z>(
pub unsafe fn inproc_timeout_handler<E, EM, I, OF, OT, S, Z>(
_signal: Signal,
_info: siginfo_t,
_context: &mut ucontext_t,
@ -470,9 +489,8 @@ mod unix_signal_handler {
E: HasObservers<I, OT, S>,
EM: EventFirer<I> + EventRestarter<S>,
OT: ObserversTuple<I, S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
I: Input,
Z: HasObjective<I, OF, S>,
{
@ -539,7 +557,7 @@ mod unix_signal_handler {
/// Will be used for signal handling.
/// It will store the current State to shmem, then exit.
#[allow(clippy::too_many_lines)]
pub unsafe fn inproc_crash_handler<E, EM, I, OC, OF, OT, S, Z>(
pub unsafe fn inproc_crash_handler<E, EM, I, OF, OT, S, Z>(
signal: Signal,
_info: siginfo_t,
_context: &mut ucontext_t,
@ -548,9 +566,8 @@ mod unix_signal_handler {
E: HasObservers<I, OT, S>,
EM: EventFirer<I> + EventRestarter<S>,
OT: ObserversTuple<I, S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
I: Input,
Z: HasObjective<I, OF, S>,
{
@ -697,7 +714,7 @@ mod windows_exception_handler {
impl Handler for InProcessExecutorHandlerData {
#[allow(clippy::not_unsafe_ptr_arg_deref)]
fn handle(&mut self, code: ExceptionCode, exception_pointers: *mut EXCEPTION_POINTERS) {
fn handle(&mut self, _code: ExceptionCode, exception_pointers: *mut EXCEPTION_POINTERS) {
unsafe {
let data = &mut GLOBAL_STATE;
if !data.crash_handler.is_null() {
@ -716,7 +733,7 @@ mod windows_exception_handler {
EnterCriticalSection, LeaveCriticalSection, RTL_CRITICAL_SECTION,
};
pub unsafe extern "system" fn inproc_timeout_handler<E, EM, I, OC, OF, OT, S, Z>(
pub unsafe extern "system" fn inproc_timeout_handler<E, EM, I, OF, OT, S, Z>(
_p0: *mut u8,
global_state: *mut c_void,
_p1: *mut u8,
@ -724,9 +741,8 @@ mod windows_exception_handler {
E: HasObservers<I, OT, S>,
EM: EventFirer<I> + EventRestarter<S>,
OT: ObserversTuple<I, S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
I: Input,
Z: HasObjective<I, OF, S>,
{
@ -815,16 +831,15 @@ mod windows_exception_handler {
// println!("TIMER INVOKED!");
}
pub unsafe fn inproc_crash_handler<E, EM, I, OC, OF, OT, S, Z>(
pub unsafe fn inproc_crash_handler<E, EM, I, OF, OT, S, Z>(
exception_pointers: *mut EXCEPTION_POINTERS,
data: &mut InProcessExecutorHandlerData,
) where
E: HasObservers<I, OT, S>,
EM: EventFirer<I> + EventRestarter<S>,
OT: ObserversTuple<I, S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
I: Input,
Z: HasObjective<I, OF, S>,
{
@ -908,7 +923,7 @@ mod windows_exception_handler {
let interesting = fuzzer
.objective_mut()
.is_interesting(state, event_mgr, &input, observers, &ExitKind::Crash)
.is_interesting(state, event_mgr, input, observers, &ExitKind::Crash)
.expect("In crash handler objective failure.");
if interesting {
@ -945,8 +960,10 @@ mod windows_exception_handler {
}
}
/// The struct has [`InProcessHandlers`].
#[cfg(windows)]
pub trait HasInProcessHandlers {
/// Get the in-process handlers.
fn inprocess_handlers(&self) -> &InProcessHandlers;
}
@ -964,6 +981,7 @@ where
}
}
/// [`InProcessForkExecutor`] is an executor that forks the current process before each execution.
#[cfg(all(feature = "std", unix))]
pub struct InProcessForkExecutor<'a, H, I, OT, S, SP>
where
@ -979,7 +997,23 @@ where
}
#[cfg(all(feature = "std", unix))]
impl<'a, EM, H, I, OT, S, Z, SP> Executor<EM, I, S, Z>
impl<'a, H, I, OT, S, SP> Debug for InProcessForkExecutor<'a, H, I, OT, S, SP>
where
H: FnMut(&I) -> ExitKind,
I: Input,
OT: ObserversTuple<I, S>,
SP: ShMemProvider,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("InProcessForkExecutor")
.field("observers", &self.observers)
.field("shmem_provider", &self.shmem_provider)
.finish()
}
}
#[cfg(all(feature = "std", unix))]
impl<'a, EM, H, I, OT, S, SP, Z> Executor<EM, I, S, Z>
for InProcessForkExecutor<'a, H, I, OT, S, SP>
where
H: FnMut(&I) -> ExitKind,
@ -1033,7 +1067,8 @@ where
OT: ObserversTuple<I, S>,
SP: ShMemProvider,
{
pub fn new<EM, OC, OF, Z>(
/// Creates a new [`InProcessForkExecutor`]
pub fn new<EM, OF, Z>(
harness_fn: &'a mut H,
observers: OT,
_fuzzer: &mut Z,
@ -1043,9 +1078,8 @@ where
) -> Result<Self, Error>
where
EM: EventFirer<I> + EventRestarter<S>,
OC: Corpus<I>,
OF: Feedback<I, S>,
S: HasSolutions<OC, I> + HasClientPerfMonitor,
S: HasSolutions<I> + HasClientPerfMonitor,
Z: HasObjective<I, OF, S>,
{
Ok(Self {


@ -37,6 +37,7 @@ use crate::{
Error,
};
use core::fmt::Debug;
use serde::{Deserialize, Serialize};
/// How an execution finished.
@ -57,7 +58,7 @@ pub enum ExitKind {
crate::impl_serdeany!(ExitKind);
/// Holds a tuple of Observers
pub trait HasObservers<I, OT, S>
pub trait HasObservers<I, OT, S>: Debug
where
OT: ObserversTuple<I, S>,
{
@ -69,7 +70,7 @@ where
}
/// An executor takes the given inputs, and runs the harness/target.
pub trait Executor<EM, I, S, Z>
pub trait Executor<EM, I, S, Z>: Debug
where
I: Input,
{
@ -97,6 +98,7 @@ where
/// A simple executor that does nothing.
/// If input len is 0, `run_target` will return an `Err`
#[derive(Debug)]
struct NopExecutor {}
impl<EM, I, S, Z> Executor<EM, I, S, Z> for NopExecutor


@ -1,6 +1,9 @@
//! A `ShadowExecutor` wraps an executor to have shadow observer that will not be considered by the feedbacks and the manager
use core::marker::PhantomData;
use core::{
fmt::{self, Debug, Formatter},
marker::PhantomData,
};
use crate::{
executors::{Executor, ExitKind, HasObservers},
@ -10,13 +13,25 @@ use crate::{
};
/// A [`ShadowExecutor`] wraps an executor and a set of shadow observers
pub struct ShadowExecutor<E, I, S, SOT> {
pub struct ShadowExecutor<E: Debug, I: Debug, S, SOT: Debug> {
/// The wrapped executor
executor: E,
/// The shadow observers
shadow_observers: SOT,
/// phantom data
phantom: PhantomData<(I, S)>,
}
impl<E, I, S, SOT> ShadowExecutor<E, I, S, SOT>
impl<E: Debug, I: Debug, S, SOT: Debug> Debug for ShadowExecutor<E, I, S, SOT> {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("ShadowExecutor")
.field("executor", &self.executor)
.field("shadow_observers", &self.shadow_observers)
.finish()
}
}
impl<E: Debug, I: Debug, S, SOT: Debug> ShadowExecutor<E, I, S, SOT>
where
SOT: ObserversTuple<I, S>,
{
@ -29,11 +44,13 @@ where
}
}
/// The shadow observers are not considered by the feedbacks and the manager
#[inline]
pub fn shadow_observers(&self) -> &SOT {
&self.shadow_observers
}
/// The shadow observers are not considered by the feedbacks and the manager, mutable
#[inline]
pub fn shadow_observers_mut(&mut self) -> &mut SOT {
&mut self.shadow_observers
@ -59,6 +76,8 @@ where
impl<E, I, OT, S, SOT> HasObservers<I, OT, S> for ShadowExecutor<E, I, S, SOT>
where
I: Debug,
S: Debug,
E: HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
SOT: ObserversTuple<I, S>,


@ -1,7 +1,10 @@
//! A `TimeoutExecutor` sets a timeout before each target run
#[cfg(any(windows, unix))]
use core::time::Duration;
use core::{
fmt::{self, Debug, Formatter},
time::Duration,
};
use crate::{
executors::{Executor, ExitKind, HasObservers},
@ -24,15 +27,12 @@ use windows::Win32::{
System::Threading::{
CloseThreadpoolTimer, CreateThreadpoolTimer, EnterCriticalSection,
InitializeCriticalSection, LeaveCriticalSection, SetThreadpoolTimer, RTL_CRITICAL_SECTION,
TP_CALLBACK_ENVIRON_V3, TP_TIMER,
TP_CALLBACK_ENVIRON_V3, TP_CALLBACK_INSTANCE, TP_TIMER,
},
};
#[cfg(all(windows, feature = "std"))]
use core::{
ffi::c_void,
ptr::{write, write_volatile},
};
use core::{ffi::c_void, ptr::write_volatile};
#[cfg(windows)]
use core::sync::atomic::{compiler_fence, Ordering};
@ -44,8 +44,23 @@ struct Timeval {
pub tv_usec: i64,
}
#[cfg(unix)]
impl Debug for Timeval {
#[allow(clippy::cast_sign_loss)]
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
write!(
f,
"Timeval {{ tv_sec: {:?}, tv_usec: {:?} (tv: {:?}) }}",
self.tv_sec,
self.tv_usec,
Duration::new(self.tv_sec as _, (self.tv_usec * 1000) as _)
)
}
}
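The new `Debug` impl above reconstructs a `Duration` from `tv_sec` plus `tv_usec` (microseconds, hence the `* 1000` to nanoseconds). The inverse split — milliseconds into a seconds/microseconds pair, as `set_timeout` performs further down — can be sketched like this; `Timeval` here is a stand-in for the private struct and the exact field math is an assumption from the visible code:

```rust
use core::time::Duration;

/// Stand-in for the private `Timeval` struct in the diff.
#[derive(Debug, PartialEq)]
struct Timeval {
    tv_sec: i64,
    tv_usec: i64,
}

/// Split a timeout Duration into whole seconds and leftover microseconds.
fn timeval_from(exec_tmout: Duration) -> Timeval {
    let milli_sec = exec_tmout.as_millis() as i64;
    Timeval {
        tv_sec: milli_sec / 1000,
        tv_usec: (milli_sec % 1000) * 1000,
    }
}

fn main() {
    // 1500 ms -> 1 s + 500_000 us
    println!("{:?}", timeval_from(Duration::from_millis(1500)));
}
```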
#[repr(C)]
#[cfg(unix)]
#[derive(Debug)]
struct Itimerval {
pub it_interval: Timeval,
pub it_value: Timeval,
@ -89,17 +104,35 @@ pub struct TimeoutExecutor<E> {
critical: RTL_CRITICAL_SECTION,
}
impl<E: Debug> Debug for TimeoutExecutor<E> {
#[cfg(windows)]
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("TimeoutExecutor")
.field("executor", &self.executor)
.field("milli_sec", &self.milli_sec)
.finish_non_exhaustive()
}
#[cfg(unix)]
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("TimeoutExecutor")
.field("executor", &self.executor)
.field("itimerval", &self.itimerval)
.finish()
}
}
#[cfg(windows)]
#[allow(non_camel_case_types)]
type PTP_TIMER_CALLBACK = unsafe extern "system" fn(
param0: *mut windows::Win32::System::Threading::TP_CALLBACK_INSTANCE,
param0: *mut TP_CALLBACK_INSTANCE,
param1: *mut c_void,
param2: *mut windows::Win32::System::Threading::TP_TIMER,
param2: *mut TP_TIMER,
);
#[cfg(unix)]
impl<E> TimeoutExecutor<E> {
/// Create a new `TimeoutExecutor`, wrapping the given `executor` and checking for timeouts.
/// Create a new [`TimeoutExecutor`], wrapping the given `executor` and checking for timeouts.
/// This should usually be used for `InProcess` fuzzing.
pub fn new(executor: E, exec_tmout: Duration) -> Self {
let milli_sec = exec_tmout.as_millis();
@ -124,6 +157,7 @@ impl<E> TimeoutExecutor<E> {
#[cfg(windows)]
impl<E: HasInProcessHandlers> TimeoutExecutor<E> {
/// Create a new [`TimeoutExecutor`], wrapping the given `executor` and checking for timeouts.
pub fn new(executor: E, exec_tmout: Duration) -> Self {
let milli_sec = exec_tmout.as_millis() as i64;
let timeout_handler: PTP_TIMER_CALLBACK =
@ -149,6 +183,7 @@ impl<E: HasInProcessHandlers> TimeoutExecutor<E> {
}
}
/// Set the timeout for this executor
#[cfg(unix)]
pub fn set_timeout(&mut self, exec_tmout: Duration) {
let milli_sec = exec_tmout.as_millis();
@ -167,6 +202,7 @@ impl<E: HasInProcessHandlers> TimeoutExecutor<E> {
self.itimerval = itimerval;
}
/// Set the timeout for this executor
#[cfg(windows)]
pub fn set_timeout(&mut self, exec_tmout: Duration) {
self.milli_sec = exec_tmout.as_millis() as i64;
@ -177,6 +213,7 @@ impl<E: HasInProcessHandlers> TimeoutExecutor<E> {
&mut self.executor
}
/// Reset the timeout for this executor
#[cfg(windows)]
pub fn windows_reset_timeout(&self) -> Result<(), Error> {
unsafe {
@ -192,6 +229,7 @@ where
E: Executor<EM, I, S, Z> + HasInProcessHandlers,
I: Input,
{
#[allow(clippy::cast_sign_loss)]
fn run_target(
&mut self,
fuzzer: &mut Z,
@ -210,10 +248,11 @@ where
&mut data.timeout_input_ptr,
&mut data.current_input_ptr as *mut _ as *mut c_void,
);
let tm: i64 = -1 * self.milli_sec * 10 * 1000;
let mut ft = FILETIME::default();
ft.dwLowDateTime = (tm & 0xffffffff) as u32;
ft.dwHighDateTime = (tm >> 32) as u32;
let tm: i64 = -self.milli_sec * 10 * 1000;
let ft = FILETIME {
dwLowDateTime: (tm & 0xffffffff) as u32,
dwHighDateTime: (tm >> 32) as u32,
};
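The refactored `FILETIME` construction above encodes the timeout as a negative count of 100-nanosecond ticks (the Windows threadpool timer treats a negative due time as relative to now) and splits the `i64` into low/high DWORDs. The arithmetic in isolation, using a stand-in `Filetime` struct rather than the `windows` crate type:

```rust
/// Stand-in for the Win32 FILETIME struct: two 32-bit halves of an i64.
#[derive(Debug, PartialEq)]
struct Filetime {
    dw_low_date_time: u32,
    dw_high_date_time: u32,
}

/// Build a relative due time: milliseconds -> 100 ns ticks, negated.
fn relative_filetime(milli_sec: i64) -> Filetime {
    let tm: i64 = -milli_sec * 10 * 1000;
    Filetime {
        // Low 32 bits of the two's-complement value.
        dw_low_date_time: (tm & 0xffff_ffff) as u32,
        // Arithmetic shift keeps the sign bits in the high DWORD.
        dw_high_date_time: (tm >> 32) as u32,
    }
}

fn main() {
    // 1000 ms timeout -> -10_000_000 ticks.
    println!("{:?}", relative_filetime(1000));
}
```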
compiler_fence(Ordering::SeqCst);
EnterCriticalSection(&mut self.critical);


@ -1,17 +1,26 @@
use crate::{inputs::Input, observers::ObserversTuple, Error};
//! A wrapper for any [`Executor`] to make it implement [`HasObservers`] using a given [`ObserversTuple`].
use super::{Executor, ExitKind, HasObservers};
use core::fmt::Debug;
use crate::{
executors::{Executor, ExitKind, HasObservers},
inputs::Input,
observers::ObserversTuple,
Error,
};
/// A wrapper for any [`Executor`] to make it implement [`HasObservers`] using a given [`ObserversTuple`].
pub struct WithObservers<E, OT> {
#[derive(Debug)]
pub struct WithObservers<E: Debug, OT: Debug> {
executor: E,
observers: OT,
}
impl<EM, I, S, Z, E, OT> Executor<EM, I, S, Z> for WithObservers<E, OT>
impl<E, EM, I, OT, S, Z> Executor<EM, I, S, Z> for WithObservers<E, OT>
where
I: Input,
E: Executor<EM, I, S, Z>,
OT: Debug,
{
fn run_target(
&mut self,
@ -24,7 +33,7 @@ where
}
}
impl<I, S, E, OT> HasObservers<I, OT, S> for WithObservers<E, OT>
impl<I, E: Debug, OT: Debug, S> HasObservers<I, OT, S> for WithObservers<E, OT>
where
I: Input,
OT: ObserversTuple<I, S>,
@ -38,7 +47,7 @@ where
}
}
impl<E, OT> WithObservers<E, OT> {
impl<E: Debug, OT: Debug> WithObservers<E, OT> {
/// Wraps the given [`Executor`] with the given [`ObserversTuple`] to implement [`HasObservers`].
///
/// If the executor already implements [`HasObservers`], then the original implementation will be overshadowed by


@ -1,3 +1,8 @@
//! Concolic feedback for concolic fuzzing.
//! It is used to attach concolic tracing metadata to the testcase.
//! This feedback should be used in combination with another feedback as this feedback always considers testcases
//! to be not interesting.
//! Requires a [`ConcolicObserver`] to observe the concolic trace.
use crate::{
bolts::tuples::Named,
corpus::Testcase,
@ -17,12 +22,14 @@ use crate::{
/// This feedback should be used in combination with another feedback as this feedback always considers testcases
/// to be not interesting.
/// Requires a [`ConcolicObserver`] to observe the concolic trace.
#[derive(Debug)]
pub struct ConcolicFeedback {
name: String,
metadata: Option<ConcolicMetadata>,
}
impl ConcolicFeedback {
/// Creates a concolic feedback from an observer
#[allow(unused)]
#[must_use]
pub fn from_observer(observer: &ConcolicObserver) -> Self {


@ -9,11 +9,14 @@ use num_traits::PrimInt;
use serde::{Deserialize, Serialize};
use crate::{
bolts::{tuples::Named, AsSlice, HasRefCnt},
bolts::{
tuples::{MatchName, Named},
AsSlice, HasRefCnt,
},
corpus::Testcase,
events::{Event, EventFirer},
executors::ExitKind,
feedbacks::{Feedback, FeedbackState, FeedbackStatesTuple},
feedbacks::{Feedback, FeedbackState},
inputs::Input,
monitors::UserStats,
observers::{MapObserver, ObserversTuple},
@ -22,26 +25,25 @@ use crate::{
};
/// A [`MapFeedback`] that implements the AFL algorithm using an [`OrReducer`] combining the bits for the history map and the bit from ``HitcountsMapObserver``.
pub type AflMapFeedback<FT, I, O, S, T> = MapFeedback<FT, I, DifferentIsNovel, O, OrReducer, S, T>;
pub type AflMapFeedback<I, O, S, T> = MapFeedback<I, DifferentIsNovel, O, OrReducer, S, T>;
/// A [`MapFeedback`] that strives to maximize the map contents.
pub type MaxMapFeedback<FT, I, O, S, T> = MapFeedback<FT, I, DifferentIsNovel, O, MaxReducer, S, T>;
pub type MaxMapFeedback<I, O, S, T> = MapFeedback<I, DifferentIsNovel, O, MaxReducer, S, T>;
/// A [`MapFeedback`] that strives to minimize the map contents.
pub type MinMapFeedback<FT, I, O, S, T> = MapFeedback<FT, I, DifferentIsNovel, O, MinReducer, S, T>;
pub type MinMapFeedback<I, O, S, T> = MapFeedback<I, DifferentIsNovel, O, MinReducer, S, T>;
/// A [`MapFeedback`] that strives to maximize the map contents,
/// but only, if a value is larger than `pow2` of the previous.
pub type MaxMapPow2Feedback<FT, I, O, S, T> =
MapFeedback<FT, I, NextPow2IsNovel, O, MaxReducer, S, T>;
pub type MaxMapPow2Feedback<I, O, S, T> = MapFeedback<I, NextPow2IsNovel, O, MaxReducer, S, T>;
/// A [`MapFeedback`] that strives to maximize the map contents,
/// but only, if a value is larger than `pow2` of the previous.
pub type MaxMapOneOrFilledFeedback<FT, I, O, S, T> =
MapFeedback<FT, I, OneOrFilledIsNovel, O, MaxReducer, S, T>;
pub type MaxMapOneOrFilledFeedback<I, O, S, T> =
MapFeedback<I, OneOrFilledIsNovel, O, MaxReducer, S, T>;
/// A `Reducer` function is used to aggregate values for the novelty search
pub trait Reducer<T>: Serialize + serde::de::DeserializeOwned + 'static
pub trait Reducer<T>: Serialize + serde::de::DeserializeOwned + 'static + Debug
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Reduce two values to one value, with the current [`Reducer`].
fn reduce(first: T, second: T) -> T;
@ -53,13 +55,7 @@ pub struct OrReducer {}
impl<T> Reducer<T> for OrReducer
where
T: PrimInt
+ Default
+ Copy
+ 'static
+ serde::Serialize
+ serde::de::DeserializeOwned
+ PartialOrd,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + PartialOrd,
{
#[inline]
fn reduce(history: T, new: T) -> T {
@ -73,13 +69,7 @@ pub struct AndReducer {}
impl<T> Reducer<T> for AndReducer
where
T: PrimInt
+ Default
+ Copy
+ 'static
+ serde::Serialize
+ serde::de::DeserializeOwned
+ PartialOrd,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + PartialOrd,
{
#[inline]
fn reduce(history: T, new: T) -> T {
@ -93,13 +83,7 @@ pub struct MaxReducer {}
impl<T> Reducer<T> for MaxReducer
where
T: PrimInt
+ Default
+ Copy
+ 'static
+ serde::Serialize
+ serde::de::DeserializeOwned
+ PartialOrd,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + PartialOrd,
{
#[inline]
fn reduce(first: T, second: T) -> T {
@ -117,13 +101,7 @@ pub struct MinReducer {}
impl<T> Reducer<T> for MinReducer
where
T: PrimInt
+ Default
+ Copy
+ 'static
+ serde::Serialize
+ serde::de::DeserializeOwned
+ PartialOrd,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + PartialOrd,
{
#[inline]
fn reduce(first: T, second: T) -> T {
@ -136,9 +114,9 @@ where
}
/// A `IsNovel` function is used to discriminate if a reduced value is considered novel.
pub trait IsNovel<T>: Serialize + serde::de::DeserializeOwned + 'static
pub trait IsNovel<T>: Serialize + serde::de::DeserializeOwned + 'static + Debug
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// If a new value in the [`MapFeedback`] was found,
/// this filter can decide if the result is considered novel or not.
@ -151,7 +129,7 @@ pub struct AllIsNovel {}
impl<T> IsNovel<T> for AllIsNovel
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn is_novel(_old: T, _new: T) -> bool {
@ -178,7 +156,7 @@ fn saturating_next_power_of_two<T: PrimInt>(n: T) -> T {
pub struct DifferentIsNovel {}
impl<T> IsNovel<T> for DifferentIsNovel
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn is_novel(old: T, new: T) -> bool {
@ -191,7 +169,7 @@ where
pub struct NextPow2IsNovel {}
impl<T> IsNovel<T> for NextPow2IsNovel
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn is_novel(old: T, new: T) -> bool {
@ -211,7 +189,7 @@ where
pub struct OneOrFilledIsNovel {}
impl<T> IsNovel<T> for OneOrFilledIsNovel
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn is_novel(old: T, new: T) -> bool {
@ -220,7 +198,7 @@ where
}
/// A testcase metadata holding a list of indexes of a map
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct MapIndexesMetadata {
/// The list of indexes.
pub list: Vec<usize>,
@ -256,7 +234,7 @@ impl MapIndexesMetadata {
}
/// A testcase metadata holding a list of indexes of a map
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct MapNoveltiesMetadata {
/// A `list` of novelties.
pub list: Vec<usize>,
@ -284,7 +262,7 @@ impl MapNoveltiesMetadata {
#[serde(bound = "T: serde::de::DeserializeOwned")]
pub struct MapFeedbackState<T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Contains information about untouched entries
pub history_map: Vec<T>,
@ -294,7 +272,7 @@ where
impl<T> FeedbackState for MapFeedbackState<T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
{
fn reset(&mut self) -> Result<(), Error> {
self.history_map
@ -306,7 +284,7 @@ where
impl<T> Named for MapFeedbackState<T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@ -316,7 +294,7 @@ where
impl<T> MapFeedbackState<T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Create new `MapFeedbackState`
#[must_use]
@ -353,14 +331,13 @@ where
/// The most common AFL-like feedback type
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(bound = "T: serde::de::DeserializeOwned")]
pub struct MapFeedback<FT, I, N, O, R, S, T>
pub struct MapFeedback<I, N, O, R, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
R: Reducer<T>,
O: MapObserver<T>,
N: IsNovel<T>,
S: HasFeedbackStates<FT>,
FT: FeedbackStatesTuple,
S: HasFeedbackStates,
{
/// Indexes used in the last observation
indexes: Option<Vec<usize>>,
@ -371,18 +348,17 @@ where
/// Name identifier of the observer
observer_name: String,
/// Phantom Data of Reducer
phantom: PhantomData<(FT, I, N, S, R, O, T)>,
phantom: PhantomData<(I, N, S, R, O, T)>,
}
impl<FT, I, N, O, R, S, T> Feedback<I, S> for MapFeedback<FT, I, N, O, R, S, T>
impl<I, N, O, R, S, T> Feedback<I, S> for MapFeedback<I, N, O, R, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
R: Reducer<T>,
O: MapObserver<T>,
N: IsNovel<T>,
I: Input,
S: HasFeedbackStates<FT> + HasClientPerfMonitor,
FT: FeedbackStatesTuple,
S: HasFeedbackStates + HasClientPerfMonitor + Debug,
{
fn is_interesting<EM, OT>(
&mut self,
@ -483,14 +459,13 @@ where
}
}
impl<FT, I, N, O, R, S, T> Named for MapFeedback<FT, I, N, O, R, S, T>
impl<I, N, O, R, S, T> Named for MapFeedback<I, N, O, R, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
R: Reducer<T>,
N: IsNovel<T>,
O: MapObserver<T>,
S: HasFeedbackStates<FT>,
FT: FeedbackStatesTuple,
S: HasFeedbackStates,
{
#[inline]
fn name(&self) -> &str {
@ -498,21 +473,20 @@ where
}
}
impl<FT, I, N, O, R, S, T> MapFeedback<FT, I, N, O, R, S, T>
impl<I, N, O, R, S, T> MapFeedback<I, N, O, R, S, T>
where
T: PrimInt
+ Default
+ Copy
+ 'static
+ serde::Serialize
+ Serialize
+ serde::de::DeserializeOwned
+ PartialOrd
+ Debug,
R: Reducer<T>,
N: IsNovel<T>,
O: MapObserver<T>,
S: HasFeedbackStates<FT>,
FT: FeedbackStatesTuple,
S: HasFeedbackStates,
{
/// Create new `MapFeedback`
#[must_use]

View File

@ -28,12 +28,16 @@ use crate::{
Error,
};
use core::{marker::PhantomData, time::Duration};
use core::{
fmt::{self, Debug, Formatter},
marker::PhantomData,
time::Duration,
};
/// Feedbacks evaluate the observers.
/// Basically, they reduce the information provided by an observer to a value,
/// indicating the "interestingness" of the last run.
pub trait Feedback<I, S>: Named
pub trait Feedback<I, S>: Named + Debug
where
I: Input,
S: HasClientPerfMonitor,
@ -51,6 +55,8 @@ where
EM: EventFirer<I>,
OT: ObserversTuple<I, S>;
/// Returns if the result of a run is interesting and the value input should be stored in a corpus.
/// It also keeps track of introspection stats.
#[cfg(feature = "introspection")]
#[allow(clippy::too_many_arguments)]
fn is_interesting_introspection<EM, OT>(
@ -101,7 +107,7 @@ where
/// [`FeedbackState`] is the data associated with a [`Feedback`] that must persist as part
/// of the fuzzer State
pub trait FeedbackState: Named + serde::Serialize + serde::de::DeserializeOwned {
pub trait FeedbackState: Named + Serialize + serde::de::DeserializeOwned + Debug {
/// Reset the internal state
fn reset(&mut self) -> Result<(), Error> {
Ok(())
@ -109,7 +115,8 @@ pub trait FeedbackState: Named + serde::Serialize + serde::de::DeserializeOwned
}
/// A Haskell-style tuple of feedback states
pub trait FeedbackStatesTuple: MatchName + serde::Serialize + serde::de::DeserializeOwned {
pub trait FeedbackStatesTuple: MatchName + Serialize + serde::de::DeserializeOwned + Debug {
/// Resets all the feedback states of the tuple
fn reset_all(&mut self) -> Result<(), Error>;
}
@ -130,7 +137,9 @@ where
}
}
pub struct CombinedFeedback<A, B, I, S, FL>
/// A combined feedback consisting of multiple [`Feedback`]s
#[derive(Debug)]
pub struct CombinedFeedback<A, B, FL, I, S>
where
A: Feedback<I, S>,
B: Feedback<I, S>,
@ -138,13 +147,15 @@ where
I: Input,
S: HasClientPerfMonitor,
{
/// First [`Feedback`]
pub first: A,
/// Second [`Feedback`]
pub second: B,
name: String,
phantom: PhantomData<(I, S, FL)>,
}
impl<A, B, I, S, FL> Named for CombinedFeedback<A, B, I, S, FL>
impl<A, B, FL, I, S> Named for CombinedFeedback<A, B, FL, I, S>
where
A: Feedback<I, S>,
B: Feedback<I, S>,
@ -157,7 +168,7 @@ where
}
}
impl<A, B, I, S, FL> CombinedFeedback<A, B, I, S, FL>
impl<A, B, FL, I, S> CombinedFeedback<A, B, FL, I, S>
where
A: Feedback<I, S>,
B: Feedback<I, S>,
@ -165,6 +176,7 @@ where
I: Input,
S: HasClientPerfMonitor,
{
/// Create a new combined feedback
pub fn new(first: A, second: B) -> Self {
let name = format!("{} ({},{})", FL::name(), first.name(), second.name());
Self {
@ -176,13 +188,13 @@ where
}
}
impl<A, B, I, S, FL> Feedback<I, S> for CombinedFeedback<A, B, I, S, FL>
impl<A, B, FL, I, S> Feedback<I, S> for CombinedFeedback<A, B, FL, I, S>
where
A: Feedback<I, S>,
B: Feedback<I, S>,
FL: FeedbackLogic<A, B, I, S>,
I: Input,
S: HasClientPerfMonitor,
S: HasClientPerfMonitor + Debug,
{
fn is_interesting<EM, OT>(
&mut self,
@ -244,15 +256,18 @@ where
}
}
pub trait FeedbackLogic<A, B, I, S>: 'static
/// Logical combination of two feedbacks
pub trait FeedbackLogic<A, B, I, S>: 'static + Debug
where
A: Feedback<I, S>,
B: Feedback<I, S>,
I: Input,
S: HasClientPerfMonitor,
{
/// The name of this combination
fn name() -> &'static str;
/// If the feedback pair is interesting
fn is_pair_interesting<EM, OT>(
first: &mut A,
second: &mut B,
@ -266,6 +281,7 @@ where
EM: EventFirer<I>,
OT: ObserversTuple<I, S>;
/// If this pair is interesting (with introspection features enabled)
#[cfg(feature = "introspection")]
#[allow(clippy::too_many_arguments)]
fn is_pair_interesting_introspection<EM, OT>(
@ -282,9 +298,20 @@ where
OT: ObserversTuple<I, S>;
}
/// Eager `OR` combination of two feedbacks
#[derive(Debug, Clone)]
pub struct LogicEagerOr {}
/// Fast `OR` combination of two feedbacks
#[derive(Debug, Clone)]
pub struct LogicFastOr {}
/// Eager `AND` combination of two feedbacks
#[derive(Debug, Clone)]
pub struct LogicEagerAnd {}
/// Fast `AND` combination of two feedbacks
#[derive(Debug, Clone)]
pub struct LogicFastAnd {}
impl<A, B, I, S> FeedbackLogic<A, B, I, S> for LogicEagerOr
@ -505,23 +532,24 @@ where
/// Combine two feedbacks with an eager AND operation,
/// will call all feedback functions even if not necessary to conclude the result
pub type EagerAndFeedback<A, B, I, S> = CombinedFeedback<A, B, I, S, LogicEagerAnd>;
pub type EagerAndFeedback<A, B, I, S> = CombinedFeedback<A, B, LogicEagerAnd, I, S>;
/// Combine two feedbacks with a fast AND operation,
/// might skip calling feedback functions if not necessary to conclude the result
pub type FastAndFeedback<A, B, I, S> = CombinedFeedback<A, B, I, S, LogicFastAnd>;
pub type FastAndFeedback<A, B, I, S> = CombinedFeedback<A, B, LogicFastAnd, I, S>;
/// Combine two feedbacks with an eager OR operation,
/// will call all feedback functions even if not necessary to conclude the result
pub type EagerOrFeedback<A, B, I, S> = CombinedFeedback<A, B, I, S, LogicEagerOr>;
pub type EagerOrFeedback<A, B, I, S> = CombinedFeedback<A, B, LogicEagerOr, I, S>;
/// Combine two feedbacks with a fast OR operation,
/// might skip calling feedback functions if not necessary to conclude the result.
/// This means any feedback that is not first might be skipped; use caution when combining with
/// `TimeFeedback`
pub type FastOrFeedback<A, B, I, S> = CombinedFeedback<A, B, I, S, LogicFastOr>;
pub type FastOrFeedback<A, B, I, S> = CombinedFeedback<A, B, LogicFastOr, I, S>;
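The eager/fast distinction above can be sketched with stand-in types (this is not LibAFL's actual `Feedback` API, just the evaluation order the type aliases imply):

```rust
// Hypothetical stand-in for a feedback; the counter tracks how often it ran.
struct CountingFeedback {
    interesting: bool,
    calls: u32,
}

impl CountingFeedback {
    fn is_interesting(&mut self) -> bool {
        self.calls += 1;
        self.interesting
    }
}

/// Eager `OR`: always evaluates both feedbacks, like `EagerOrFeedback`.
fn eager_or(a: &mut CountingFeedback, b: &mut CountingFeedback) -> bool {
    let ra = a.is_interesting();
    let rb = b.is_interesting();
    ra || rb
}

/// Fast `OR`: short-circuits, like `FastOrFeedback` -- the second feedback
/// may never run, which is why combining it with `TimeFeedback` needs care.
fn fast_or(a: &mut CountingFeedback, b: &mut CountingFeedback) -> bool {
    a.is_interesting() || b.is_interesting()
}

fn main() {
    let mut a = CountingFeedback { interesting: true, calls: 0 };
    let mut b = CountingFeedback { interesting: false, calls: 0 };
    assert!(eager_or(&mut a, &mut b));
    assert_eq!((a.calls, b.calls), (1, 1)); // both evaluated

    let mut a = CountingFeedback { interesting: true, calls: 0 };
    let mut b = CountingFeedback { interesting: false, calls: 0 };
    assert!(fast_or(&mut a, &mut b));
    assert_eq!((a.calls, b.calls), (1, 0)); // second feedback skipped
}
```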
/// Compose feedbacks with an OR operation
/// Compose feedbacks with a `NOT` operation
#[derive(Clone)]
pub struct NotFeedback<A, I, S>
where
A: Feedback<I, S>,
@ -535,6 +563,20 @@ where
phantom: PhantomData<(I, S)>,
}
impl<A, I, S> Debug for NotFeedback<A, I, S>
where
A: Feedback<I, S>,
I: Input,
S: HasClientPerfMonitor,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("NotFeedback")
.field("name", &self.name)
.field("first", &self.first)
.finish()
}
}
impl<A, I, S> Feedback<I, S> for NotFeedback<A, I, S>
where
A: Feedback<I, S>,
@ -631,6 +673,7 @@ macro_rules! feedback_or {
};
}
/// Combines multiple feedbacks with an `OR` operation, not executing feedbacks after the first positive result
#[macro_export]
macro_rules! feedback_or_fast {
( $last:expr ) => { $last };


@ -1,5 +1,8 @@
//! Nautilus grammar mutator, see <https://github.com/nautilus-fuzz/nautilus>
use core::fmt::Debug;
use grammartec::{chunkstore::ChunkStore, context::Context};
use serde::{Deserialize, Serialize};
use serde_json;
use std::fs::create_dir_all;
use crate::{
@ -15,14 +18,27 @@ use crate::{
Error,
};
/// Metadata for Nautilus grammar mutator chunks
#[derive(Serialize, Deserialize)]
pub struct NautilusChunksMetadata {
/// the chunk store
pub cks: ChunkStore,
}
impl Debug for NautilusChunksMetadata {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"NautilusChunksMetadata {{ {} }}",
serde_json::to_string_pretty(self).unwrap(),
)
}
}
crate::impl_serdeany!(NautilusChunksMetadata);
impl NautilusChunksMetadata {
/// Creates a new [`NautilusChunksMetadata`]
#[must_use]
pub fn new(work_dir: String) -> Self {
create_dir_all(format!("{}/outputs/chunks", &work_dir))
@ -33,11 +49,19 @@ impl NautilusChunksMetadata {
}
}
/// A nautilus feedback for grammar fuzzing
pub struct NautilusFeedback<'a> {
ctx: &'a Context,
}
impl Debug for NautilusFeedback<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusFeedback {{}}")
}
}
impl<'a> NautilusFeedback<'a> {
/// Create a new [`NautilusFeedback`]
#[must_use]
pub fn new(context: &'a NautilusContext) -> Self {
Self { ctx: &context.ctx }


@ -220,16 +220,20 @@ where
}
}
/// The corpus this input should be added to
#[derive(Debug, PartialEq)]
pub enum ExecuteInputResult {
/// No special input
None,
/// This input should be stored in the corpus
Corpus,
/// This input leads to a solution
Solution,
}
/// Your default fuzzer instance, for everyday use.
#[derive(Debug)]
pub struct StdFuzzer<C, CS, F, I, OF, OT, S, SC>
pub struct StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
@ -240,11 +244,10 @@ where
scheduler: CS,
feedback: F,
objective: OF,
phantom: PhantomData<(C, I, OT, S, SC)>,
phantom: PhantomData<(I, OT, S)>,
}
impl<C, CS, F, I, OF, OT, S, SC> HasCorpusScheduler<CS, I, S>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> HasCorpusScheduler<CS, I, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
@ -261,7 +264,7 @@ where
}
}
impl<C, CS, F, I, OF, OT, S, SC> HasFeedback<F, I, S> for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> HasFeedback<F, I, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
@ -278,7 +281,7 @@ where
}
}
impl<C, CS, F, I, OF, OT, S, SC> HasObjective<I, OF, S> for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> HasObjective<I, OF, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
@ -295,17 +298,14 @@ where
}
}
impl<C, CS, F, I, OF, OT, S, SC> ExecutionProcessor<I, OT, S>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> ExecutionProcessor<I, OT, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
C: Corpus<I>,
SC: Corpus<I>,
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
I: Input,
OF: Feedback<I, S>,
OT: ObserversTuple<I, S> + serde::Serialize + serde::de::DeserializeOwned,
S: HasCorpus<C, I> + HasSolutions<SC, I> + HasClientPerfMonitor + HasExecutions,
S: HasCorpus<I> + HasSolutions<I> + HasClientPerfMonitor + HasExecutions,
{
/// Evaluate if a set of observation channels has an interesting state
fn process_execution<EM>(
@ -412,17 +412,14 @@ where
}
}
impl<C, CS, F, I, OF, OT, S, SC> EvaluatorObservers<I, OT, S>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> EvaluatorObservers<I, OT, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
C: Corpus<I>,
CS: CorpusScheduler<I, S>,
OT: ObserversTuple<I, S> + serde::Serialize + serde::de::DeserializeOwned,
F: Feedback<I, S>,
I: Input,
OF: Feedback<I, S>,
S: HasCorpus<C, I> + HasSolutions<SC, I> + HasClientPerfMonitor + HasExecutions,
SC: Corpus<I>,
S: HasCorpus<I> + HasSolutions<I> + HasClientPerfMonitor + HasExecutions,
{
/// Process one input, adding to the respective corpuses if needed and firing the right events
#[inline]
@ -444,10 +441,8 @@ where
}
}
impl<C, CS, E, EM, F, I, OF, OT, S, SC> Evaluator<E, EM, I, S>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, E, EM, F, I, OF, OT, S> Evaluator<E, EM, I, S> for StdFuzzer<CS, F, I, OF, OT, S>
where
C: Corpus<I>,
CS: CorpusScheduler<I, S>,
E: Executor<EM, I, S, Self> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S> + serde::Serialize + serde::de::DeserializeOwned,
@ -455,8 +450,7 @@ where
F: Feedback<I, S>,
I: Input,
OF: Feedback<I, S>,
S: HasCorpus<C, I> + HasSolutions<SC, I> + HasClientPerfMonitor + HasExecutions,
SC: Corpus<I>,
S: HasCorpus<I> + HasSolutions<I> + HasClientPerfMonitor + HasExecutions,
{
/// Process one input, adding to the respective corpuses if needed and firing the right events
#[inline]
@ -513,8 +507,7 @@ where
}
}
impl<C, CS, E, EM, F, I, OF, OT, S, ST, SC> Fuzzer<E, EM, I, S, ST>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, E, EM, F, I, OF, OT, S, ST> Fuzzer<E, EM, I, S, ST> for StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
EM: EventManager<E, I, S, Self>,
@ -564,7 +557,7 @@ where
}
}
impl<C, CS, F, I, OF, OT, S, SC> StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,
@ -612,6 +605,7 @@ where
}
}
/// Structs with this trait will execute an [`Input`]
pub trait ExecutesInput<I, OT, S, Z>
where
I: Input,
@ -630,8 +624,7 @@ where
OT: ObserversTuple<I, S>;
}
impl<C, CS, F, I, OF, OT, S, SC> ExecutesInput<I, OT, S, Self>
for StdFuzzer<C, CS, F, I, OF, OT, S, SC>
impl<CS, F, I, OF, OT, S> ExecutesInput<I, OT, S, Self> for StdFuzzer<CS, F, I, OF, OT, S>
where
CS: CorpusScheduler<I, S>,
F: Feedback<I, S>,


@ -1,3 +1,4 @@
//! Gramatron generator
use alloc::{string::String, vec::Vec};
use core::marker::PhantomData;
use serde::{Deserialize, Serialize};
@ -10,34 +11,39 @@ use crate::{
Error,
};
/// A trigger
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub struct Trigger {
/// the destination
pub dest: usize,
/// the term
pub term: String,
}
/// The [`Automaton`]
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub struct Automaton {
/// final state
pub final_state: usize,
/// init state
pub init_state: usize,
/// pda of [`Trigger`]s
pub pda: Vec<Vec<Trigger>>,
}
#[derive(Clone, Debug)]
/// Generates random inputs from a grammar automaton
pub struct GramatronGenerator<'a, R, S>
pub struct GramatronGenerator<'a, S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
automaton: &'a Automaton,
phantom: PhantomData<(R, S)>,
phantom: PhantomData<S>,
}
impl<'a, R, S> Generator<GramatronInput, S> for GramatronGenerator<'a, R, S>
impl<'a, S> Generator<GramatronInput, S> for GramatronGenerator<'a, S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
fn generate(&mut self, state: &mut S) -> Result<GramatronInput, Error> {
let mut input = GramatronInput::new(vec![]);
@ -50,10 +56,9 @@ where
}
}
impl<'a, R, S> GramatronGenerator<'a, R, S>
impl<'a, S> GramatronGenerator<'a, S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Returns a new [`GramatronGenerator`]
#[must_use]
@ -64,6 +69,7 @@ where
}
}
/// Append the generated terminals
pub fn append_generated_terminals(&self, input: &mut GramatronInput, state: &mut S) -> usize {
let mut counter = 0;
let final_state = self.automaton.final_state;


@ -35,19 +35,17 @@ where
#[derive(Clone, Debug)]
/// Generates random bytes
pub struct RandBytesGenerator<R, S>
pub struct RandBytesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
max_size: usize,
phantom: PhantomData<(R, S)>,
phantom: PhantomData<S>,
}
impl<R, S> Generator<BytesInput, S> for RandBytesGenerator<R, S>
impl<S> Generator<BytesInput, S> for RandBytesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
fn generate(&mut self, state: &mut S) -> Result<BytesInput, Error> {
let mut size = state.rand_mut().below(self.max_size as u64);
@ -67,10 +65,9 @@ where
}
}
impl<R, S> RandBytesGenerator<R, S>
impl<S> RandBytesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Returns a new [`RandBytesGenerator`], generating up to `max_size` random bytes.
#[must_use]
@ -84,19 +81,17 @@ where
#[derive(Clone, Debug)]
/// Generates random printable characters
pub struct RandPrintablesGenerator<R, S>
pub struct RandPrintablesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
max_size: usize,
phantom: PhantomData<(R, S)>,
phantom: PhantomData<S>,
}
impl<R, S> Generator<BytesInput, S> for RandPrintablesGenerator<R, S>
impl<S> Generator<BytesInput, S> for RandPrintablesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
fn generate(&mut self, state: &mut S) -> Result<BytesInput, Error> {
let mut size = state.rand_mut().below(self.max_size as u64);
@ -117,10 +112,9 @@ where
}
}
impl<R, S> RandPrintablesGenerator<R, S>
impl<S> RandPrintablesGenerator<S>
where
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Creates a new [`RandPrintablesGenerator`], generating up to `max_size` random printable characters.
#[must_use]
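The refactor in these generator hunks can be sketched with stand-in types (not LibAFL's actual `Rand`/`HasRand` definitions): generators become generic only over the state `S`, which exposes its RNG through a `HasRand`-style trait, dropping the extra `R: Rand` type parameter and the `PhantomData<(R, S)>` field.

```rust
/// A tiny xorshift RNG standing in for LibAFL's `Rand` implementations.
struct XorShiftRand(u64);

impl XorShiftRand {
    /// Returns a pseudo-random number below `upper` (stand-in for `Rand::below`).
    fn below(&mut self, upper: u64) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0 % upper
    }
}

/// Stand-in for the `HasRand` state trait.
trait HasRand {
    fn rand_mut(&mut self) -> &mut XorShiftRand;
}

struct State {
    rand: XorShiftRand,
}

impl HasRand for State {
    fn rand_mut(&mut self) -> &mut XorShiftRand {
        &mut self.rand
    }
}

/// Generator generic over `S: HasRand` only -- no `R` parameter, no `PhantomData`.
struct RandBytesGen {
    max_size: usize,
}

impl RandBytesGen {
    fn generate<S: HasRand>(&mut self, state: &mut S) -> Vec<u8> {
        let size = (state.rand_mut().below(self.max_size as u64) as usize).max(1);
        (0..size)
            .map(|_| state.rand_mut().below(256) as u8)
            .collect()
    }
}

fn main() {
    let mut state = State { rand: XorShiftRand(0xdead_beef) };
    let mut generator = RandBytesGen { max_size: 16 };
    let bytes = generator.generate(&mut state);
    assert!(!bytes.is_empty() && bytes.len() <= 16);
}
```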


@ -1,15 +1,24 @@
//! Generators for the [`Nautilus`](https://github.com/RUB-SysSec/nautilus) grammar fuzzer
use crate::{generators::Generator, inputs::nautilus::NautilusInput, Error};
use alloc::{string::String, vec::Vec};
use core::fmt::Debug;
use grammartec::context::Context;
use std::{fs, io::BufReader, path::Path};
use crate::{generators::Generator, inputs::nautilus::NautilusInput, Error};
use grammartec::context::Context;
pub use grammartec::newtypes::NTermID;
/// The nautilus context for a generator
pub struct NautilusContext {
/// The nautilus context for a generator
pub ctx: Context,
}
impl Debug for NautilusContext {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusContext {{}}",)
}
}
impl NautilusContext {
/// Returns a new [`NautilusGenerator`]
#[must_use]
@ -26,6 +35,7 @@ impl NautilusContext {
Self { ctx }
}
/// Create a new [`NautilusContext`] from a file
#[must_use]
pub fn from_file<P: AsRef<Path>>(tree_depth: usize, grammar_file: P) -> Self {
let file = fs::File::open(grammar_file).expect("Cannot open grammar file");
@ -39,9 +49,16 @@ impl NautilusContext {
#[derive(Clone)]
/// Generates random inputs from a grammar
pub struct NautilusGenerator<'a> {
/// The nautilus context of the grammar
pub ctx: &'a Context,
}
impl Debug for NautilusGenerator<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusGenerator {{}}",)
}
}
impl<'a, S> Generator<NautilusInput, S> for NautilusGenerator<'a> {
fn generate(&mut self, _state: &mut S) -> Result<NautilusInput, Error> {
let nonterm = self.nonterminal("START");
@ -63,12 +80,14 @@ impl<'a> NautilusGenerator<'a> {
Self { ctx: &context.ctx }
}
/// Gets the nonterminal id for the given name
// TODO create from a python grammar
#[must_use]
pub fn nonterminal(&self, name: &str) -> NTermID {
self.ctx.nt_id(name)
}
/// Generates a [`NautilusInput`] from a nonterminal
pub fn generate_from_nonterminal(&self, input: &mut NautilusInput, start: NTermID, len: usize) {
input.tree_mut().generate_from_nt(start, len, self.ctx);
}


@ -75,7 +75,7 @@ impl HasBytesVec for BytesInput {
impl HasTargetBytes for BytesInput {
#[inline]
fn target_bytes(&self) -> OwnedSlice<u8> {
OwnedSlice::Ref(&self.bytes)
OwnedSlice::from(&self.bytes)
}
}


@ -15,25 +15,35 @@ use serde::{Deserialize, Serialize};
use crate::{bolts::HasLen, inputs::Input, Error};
/// Trait to encode bytes to an [`EncodedInput`] using the given [`Tokenizer`]
pub trait InputEncoder<T>
where
T: Tokenizer,
{
/// Encode bytes to an [`EncodedInput`] using the given [`Tokenizer`]
fn encode(&mut self, bytes: &[u8], tokenizer: &mut T) -> Result<EncodedInput, Error>;
}
/// Trait to decode encoded input to bytes
pub trait InputDecoder {
/// Decode encoded input to bytes
fn decode(&self, input: &EncodedInput, bytes: &mut Vec<u8>) -> Result<(), Error>;
}
/// Tokenizer is a trait that can tokenize bytes into a [`Vec`] of tokens
pub trait Tokenizer {
/// Tokenize the given bytes
fn tokenize(&self, bytes: &[u8]) -> Result<Vec<String>, Error>;
}
/// A token input encoder/decoder
#[derive(Clone, Debug)]
pub struct TokenInputEncoderDecoder {
/// The table of tokens
token_table: HashMap<String, u32>,
/// The table of ids
id_table: HashMap<u32, String>,
/// The next id
next_id: u32,
}
@ -72,6 +82,7 @@ impl InputDecoder for TokenInputEncoderDecoder {
}
impl TokenInputEncoderDecoder {
/// Creates a new [`TokenInputEncoderDecoder`]
#[must_use]
pub fn new() -> Self {
Self {
@ -88,15 +99,21 @@ impl Default for TokenInputEncoderDecoder {
}
}
/// A naive tokenizer struct
#[cfg(feature = "std")]
#[derive(Clone, Debug)]
pub struct NaiveTokenizer {
/// Ident regex
ident_re: Regex,
/// Comment regex
comment_re: Regex,
/// String regex
string_re: Regex,
}
#[cfg(feature = "std")]
impl NaiveTokenizer {
/// Creates a new [`NaiveTokenizer`]
#[must_use]
pub fn new(ident_re: Regex, comment_re: Regex, string_re: Regex) -> Self {
Self {
@ -221,11 +238,13 @@ impl EncodedInput {
Self { codes }
}
/// The codes of this encoded input
#[must_use]
pub fn codes(&self) -> &[u32] {
&self.codes
}
/// The codes of this encoded input, mutable
#[must_use]
pub fn codes_mut(&mut self) -> &mut Vec<u32> {
&mut self.codes
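The encoder/decoder tables documented in this file follow a simple pattern, sketched here with a hypothetical stand-in (not `TokenInputEncoderDecoder` itself): each distinct token string gets a fresh `u32` id, and decoding maps ids back to text.

```rust
use std::collections::HashMap;

/// Hypothetical token codec mirroring the token_table/id_table/next_id layout.
#[derive(Default)]
struct TokenCodec {
    token_table: HashMap<String, u32>,
    id_table: HashMap<u32, String>,
    next_id: u32,
}

impl TokenCodec {
    /// Encode tokens to ids, assigning a new id on first sight of a token.
    fn encode(&mut self, tokens: &[&str]) -> Vec<u32> {
        tokens
            .iter()
            .map(|t| {
                if let Some(&id) = self.token_table.get(*t) {
                    id
                } else {
                    let id = self.next_id;
                    self.next_id += 1;
                    self.token_table.insert((*t).to_string(), id);
                    self.id_table.insert(id, (*t).to_string());
                    id
                }
            })
            .collect()
    }

    /// Decode ids back to a space-joined string, skipping unknown ids.
    fn decode(&self, codes: &[u32]) -> String {
        codes
            .iter()
            .filter_map(|c| self.id_table.get(c).map(String::as_str))
            .collect::<Vec<_>>()
            .join(" ")
    }
}

fn main() {
    let mut codec = TokenCodec::default();
    let codes = codec.encode(&["let", "x", "=", "x"]);
    assert_eq!(codes, vec![0, 1, 2, 1]); // repeated token reuses its id
    assert_eq!(codec.decode(&codes), "let x = x");
}
```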


@ -1,3 +1,4 @@
//! The gramatron grammar fuzzer
use ahash::AHasher;
use core::hash::Hasher;
@ -7,14 +8,19 @@ use serde::{Deserialize, Serialize};
use crate::{bolts::HasLen, inputs::Input, Error};
/// A terminal for gramatron grammar fuzzing
#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq, Eq)]
pub struct Terminal {
/// The state
pub state: usize,
/// The trigger index
pub trigger_idx: usize,
/// The symbol
pub symbol: String,
}
impl Terminal {
/// Creates a new [`Terminal`]
#[must_use]
pub fn new(state: usize, trigger_idx: usize, symbol: String) -> Self {
Self {
@ -25,6 +31,7 @@ impl Terminal {
}
}
/// An input for gramatron grammar fuzzing
#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq, Eq)]
pub struct GramatronInput {
/// The input representation as list of terminals
@ -64,16 +71,19 @@ impl GramatronInput {
Self { terms }
}
/// The terminals of this input
#[must_use]
pub fn terminals(&self) -> &[Terminal] {
&self.terms
}
/// The terminals of this input, mutable
#[must_use]
pub fn terminals_mut(&mut self) -> &mut Vec<Terminal> {
&mut self.terms
}
/// Create a bytes representation of this input
pub fn unparse(&self, bytes: &mut Vec<u8>) {
bytes.clear();
for term in &self.terms {
@ -81,6 +91,7 @@ impl GramatronInput {
}
}
/// crop the value to the given length
pub fn crop(&self, from: usize, to: usize) -> Result<Self, Error> {
if from < to && to <= self.terms.len() {
let mut terms = vec![];
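The `Automaton`/`Trigger` structures above drive generation by walking a PDA-like state machine; a minimal sketch with stand-in types (not LibAFL's generator code) is:

```rust
/// Stand-in for gramatron's `Trigger`: an edge emitting a terminal symbol.
struct Trigger {
    dest: usize,
    term: &'static str,
}

/// Stand-in for gramatron's `Automaton`.
struct Automaton {
    final_state: usize,
    init_state: usize,
    pda: Vec<Vec<Trigger>>,
}

/// Walk from `init_state` to `final_state`, appending one terminal per step;
/// `pick` chooses among the outgoing triggers (randomly, in a real fuzzer).
fn generate(automaton: &Automaton, mut pick: impl FnMut(usize) -> usize) -> String {
    let mut out = String::new();
    let mut state = automaton.init_state;
    while state != automaton.final_state {
        let triggers = &automaton.pda[state];
        let t = &triggers[pick(triggers.len())];
        out.push_str(t.term);
        state = t.dest;
    }
    out
}

fn main() {
    // Toy grammar: state 0 emits "a" -> state 1, state 1 emits "b" -> final 2.
    let automaton = Automaton {
        init_state: 0,
        final_state: 2,
        pda: vec![
            vec![Trigger { dest: 1, term: "a" }],
            vec![Trigger { dest: 2, term: "b" }],
        ],
    };
    let s = generate(&automaton, |_len| 0);
    assert_eq!(s, "ab");
}
```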


@ -28,7 +28,7 @@ use crate::bolts::fs::write_file_atomic;
use crate::{bolts::ownedref::OwnedSlice, Error};
/// An input for the target
pub trait Input: Clone + serde::Serialize + serde::de::DeserializeOwned + Debug {
pub trait Input: Clone + Serialize + serde::de::DeserializeOwned + Debug {
#[cfg(feature = "std")]
/// Write this input to the file
fn to_file<P>(&self, path: P) -> Result<(), Error>
@ -76,7 +76,7 @@ impl Input for NopInput {
}
impl HasTargetBytes for NopInput {
fn target_bytes(&self) -> OwnedSlice<u8> {
OwnedSlice::Owned(vec![0])
OwnedSlice::from(vec![0])
}
}


@ -1,3 +1,6 @@
//! Input for the [`Nautilus`](https://github.com/RUB-SysSec/nautilus) grammar fuzzer methods
//!
//use ahash::AHasher;
//use core::hash::Hasher;
@ -12,6 +15,7 @@ use grammartec::{
tree::{Tree, TreeLike},
};
/// An [`Input`] implementation for `Nautilus` grammar.
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct NautilusInput {
/// The input representation as Tree
@ -52,6 +56,7 @@ impl NautilusInput {
Self { tree }
}
/// Create an empty [`Input`]
#[must_use]
pub fn empty() -> Self {
Self {
@ -63,16 +68,19 @@ impl NautilusInput {
}
}
/// Create a bytes representation of this input
pub fn unparse(&self, context: &NautilusContext, bytes: &mut Vec<u8>) {
bytes.clear();
self.tree.unparse(NodeID::from(0), &context.ctx, bytes);
}
/// Get the tree representation of this input
#[must_use]
pub fn tree(&self) -> &Tree {
&self.tree
}
/// Get the tree representation of this input, as a mutable reference
#[must_use]
pub fn tree_mut(&mut self) -> &mut Tree {
&mut self.tree


@ -5,14 +5,66 @@ Welcome to `LibAFL`
#![cfg_attr(not(feature = "std"), no_std)]
#![cfg_attr(feature = "RUSTC_IS_NIGHTLY", feature(min_specialization))]
#![deny(rustdoc::broken_intra_doc_links)]
#![deny(clippy::pedantic)]
#![allow(
clippy::unreadable_literal,
clippy::type_repetition_in_bounds,
clippy::missing_errors_doc,
clippy::cast_possible_truncation,
clippy::used_underscore_binding,
clippy::ptr_as_ptr,
clippy::missing_panics_doc,
clippy::missing_docs_in_private_items,
clippy::module_name_repetitions,
clippy::unreadable_literal
)]
#![cfg_attr(debug_assertions, warn(
missing_debug_implementations,
missing_docs,
//trivial_casts,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
//unused_results
))]
#![cfg_attr(not(debug_assertions), deny(
missing_debug_implementations,
missing_docs,
//trivial_casts,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
//unused_results
))]
#![cfg_attr(
not(debug_assertions),
deny(
bad_style,
const_err,
dead_code,
improper_ctypes,
non_shorthand_field_patterns,
no_mangle_generic_items,
overflowing_literals,
path_statements,
patterns_in_fns_without_body,
private_in_public,
unconditional_recursion,
unused,
unused_allocation,
unused_comparisons,
unused_parens,
while_true
)
)]
#[macro_use]
extern crate alloc;
#[macro_use]
extern crate static_assertions;
#[cfg(feature = "std")]
extern crate ctor;
#[cfg(feature = "std")]
pub use ctor::ctor;
// Re-export derive(SerdeAny)
@ -185,6 +237,9 @@ impl From<TryFromIntError> for Error {
}
}
#[cfg(feature = "std")]
impl std::error::Error for Error {}
// TODO: no_std test
#[cfg(feature = "std")]
#[cfg(test)]


@ -3,11 +3,12 @@
pub mod multi;
pub use multi::MultiMonitor;
use alloc::{
string::{String, ToString},
vec::Vec,
};
use core::{fmt, time, time::Duration};
use alloc::{string::String, vec::Vec};
#[cfg(feature = "introspection")]
use alloc::string::ToString;
use core::{fmt, time::Duration};
use hashbrown::HashMap;
use serde::{Deserialize, Serialize};
@ -18,8 +19,13 @@ const CLIENT_STATS_TIME_WINDOW_SECS: u64 = 5; // 5 seconds
/// User-defined stat types
#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum UserStats {
/// A numerical value
Number(u64),
/// A Float value
Float(f64),
/// A `String`
String(String),
/// A ratio of two values
Ratio(u64, u64),
}
@ -27,6 +33,7 @@ impl fmt::Display for UserStats {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
UserStats::Number(n) => write!(f, "{}", n),
UserStats::Float(n) => write!(f, "{}", n),
UserStats::String(s) => write!(f, "{}", s),
UserStats::Ratio(a, b) => {
if *b == 0 {
@ -52,13 +59,11 @@ pub struct ClientStats {
/// The last reported executions for this client
pub last_window_executions: u64,
/// The last time we got this information
pub last_window_time: time::Duration,
pub last_window_time: Duration,
/// The last executions per sec
pub last_execs_per_sec: f32,
/// User-defined monitor
pub user_monitor: HashMap<String, UserStats>,
/// Stability, and if we ever received a stability value
pub stability: Option<f32>,
/// Client performance statistics
#[cfg(feature = "introspection")]
pub introspection_monitor: ClientPerfMonitor,
@ -66,7 +71,7 @@ pub struct ClientStats {
impl ClientStats {
/// We got a new information about executions for this client, insert them.
pub fn update_executions(&mut self, executions: u64, cur_time: time::Duration) {
pub fn update_executions(&mut self, executions: u64, cur_time: Duration) {
let diff = cur_time
.checked_sub(self.last_window_time)
.map_or(0, |d| d.as_secs());
@ -88,14 +93,9 @@ impl ClientStats {
self.objective_size = objective_size;
}
/// we got a new information about stability for this client, insert it.
pub fn update_stability(&mut self, stability: f32) {
self.stability = Some(stability);
}
/// Get the calculated executions per second for this client
#[allow(clippy::cast_sign_loss, clippy::cast_precision_loss)]
pub fn execs_per_sec(&mut self, cur_time: time::Duration) -> u64 {
pub fn execs_per_sec(&mut self, cur_time: Duration) -> u64 {
if self.executions == 0 {
return 0;
}
@ -149,29 +149,11 @@ pub trait Monitor {
fn client_stats(&self) -> &[ClientStats];
/// creation time
fn start_time(&mut self) -> time::Duration;
fn start_time(&mut self) -> Duration;
/// show the monitor to the user
fn display(&mut self, event_msg: String, sender_id: u32);
/// Show the Stability
fn stability(&self) -> Option<f32> {
let mut stability_total = 0_f32;
let mut num = 0_usize;
for stat in self.client_stats() {
if let Some(stability) = stat.stability {
stability_total += stability;
num += 1;
}
}
if num == 0 {
None
} else {
#[allow(clippy::cast_precision_loss)]
Some(stability_total / num as f32)
}
}
/// Amount of elements in the corpus (combined for all children)
fn corpus_size(&self) -> u64 {
self.client_stats()
@ -218,6 +200,7 @@ pub trait Monitor {
/// Monitor that prints exactly nothing.
/// Not good for debugging, very good for speed.
#[derive(Debug)]
pub struct NopMonitor {
start_time: Duration,
client_stats: Vec<ClientStats>,
@ -235,7 +218,7 @@ impl Monitor for NopMonitor {
}
/// Time this fuzzing run started
fn start_time(&mut self) -> time::Duration {
fn start_time(&mut self) -> Duration {
self.start_time
}
@ -285,13 +268,13 @@ where
}
/// Time this fuzzing run started
fn start_time(&mut self) -> time::Duration {
fn start_time(&mut self) -> Duration {
self.start_time
}
fn display(&mut self, event_msg: String, sender_id: u32) {
let fmt = format!(
"[{} #{}] run time: {}, clients: {}, corpus: {}, objectives: {}, executions: {}{}, exec/sec: {}",
"[{} #{}] run time: {}, clients: {}, corpus: {}, objectives: {}, executions: {}, exec/sec: {}",
event_msg,
sender_id,
format_duration_hms(&(current_time() - self.start_time)),
@ -299,11 +282,6 @@ where
self.corpus_size(),
self.objective_size(),
self.total_execs(),
if let Some(stability) = self.stability() {
format!(", stability: {:.2}", stability)
} else {
"".to_string()
},
self.execs_per_sec()
);
(self.print_fn)(fmt);
@ -338,7 +316,7 @@ where
}
/// Creates the monitor with a given `start_time`.
pub fn with_time(print_fn: F, start_time: time::Duration) -> Self {
pub fn with_time(print_fn: F, start_time: Duration) -> Self {
Self {
print_fn,
start_time,
@ -347,6 +325,7 @@ where
}
}
/// Start the timer
#[macro_export]
macro_rules! start_timer {
($state:expr) => {{
@ -356,6 +335,7 @@ macro_rules! start_timer {
}};
}
/// Mark the elapsed time for the given feature
#[macro_export]
macro_rules! mark_feature_time {
($state:expr, $feature:expr) => {{
@ -367,6 +347,7 @@ macro_rules! mark_feature_time {
}};
}
/// Mark the elapsed time for the given feature
#[macro_export]
macro_rules! mark_feedback_time {
($state:expr) => {{
@ -708,7 +689,7 @@ impl ClientPerfMonitor {
self.stages
.iter()
.enumerate()
.filter(move |(stage_index, _)| used[*stage_index as usize])
.filter(move |(stage_index, _)| used[*stage_index])
}
/// A map of all `feedbacks`
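The executions-per-second bookkeeping that `ClientStats` performs can be sketched as follows (a stand-in, not LibAFL's exact code): compare total executions against the last reporting window and divide by the elapsed seconds.

```rust
use std::time::Duration;

/// Stand-in for the relevant `ClientStats` fields.
struct Client {
    executions: u64,
    last_window_executions: u64,
    last_window_time: Duration,
}

impl Client {
    /// Executions per second since the last window, 0 if nothing ran yet
    /// or no time has elapsed.
    fn execs_per_sec(&mut self, cur_time: Duration) -> u64 {
        if self.executions == 0 {
            return 0;
        }
        let elapsed = cur_time
            .checked_sub(self.last_window_time)
            .map_or(0, |d| d.as_secs());
        if elapsed == 0 {
            return 0;
        }
        (self.executions - self.last_window_executions) / elapsed
    }
}

fn main() {
    let mut c = Client {
        executions: 10_000,
        last_window_executions: 0,
        last_window_time: Duration::from_secs(0),
    };
    // 10_000 executions over 5 seconds.
    assert_eq!(c.execs_per_sec(Duration::from_secs(5)), 2_000);
}
```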


@ -1,7 +1,7 @@
//! Monitor to display both cumulative and per-client statistics
use alloc::{string::String, vec::Vec};
use core::{time, time::Duration};
use core::time::Duration;
#[cfg(feature = "introspection")]
use alloc::string::ToString;
@ -37,7 +37,7 @@ where
}
/// Time this fuzzing run started
fn start_time(&mut self) -> time::Duration {
fn start_time(&mut self) -> Duration {
self.start_time
}
@ -104,7 +104,7 @@ where
}
/// Creates the monitor with a given `start_time`.
pub fn with_time(print_fn: F, start_time: time::Duration) -> Self {
pub fn with_time(print_fn: F, start_time: Duration) -> Self {
Self {
print_fn,
start_time,


@ -1,8 +1,7 @@
//! Mutations for [`EncodedInput`]s
//!
use alloc::vec::Vec;
use core::{
cmp::{max, min},
marker::PhantomData,
};
use core::cmp::{max, min};
use crate::{
bolts::{
@ -20,20 +19,10 @@ use crate::{
};
/// Set a code in the input as a random value
#[derive(Default)]
pub struct EncodedRandMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedRandMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedRandMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedRandMutator {
fn mutate(
&mut self,
state: &mut S,
@ -50,45 +39,25 @@ where
}
}
impl<R, S> Named for EncodedRandMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedRandMutator {
fn name(&self) -> &str {
"EncodedRandMutator"
}
}
impl<R, S> EncodedRandMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl EncodedRandMutator {
/// Creates a new [`EncodedRandMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Increment a random code in the input
#[derive(Default)]
pub struct EncodedIncMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedIncMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedIncMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedIncMutator {
fn mutate(
&mut self,
state: &mut S,
@ -105,45 +74,25 @@ where
}
}
impl<R, S> Named for EncodedIncMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedIncMutator {
fn name(&self) -> &str {
"EncodedIncMutator"
}
}
impl<R, S> EncodedIncMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
/// Creates a new [`EncodedRandMutator`].
impl EncodedIncMutator {
/// Creates a new [`EncodedIncMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
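The pattern these mutator hunks apply can be sketched with stand-in traits (not LibAFL's actual `Mutator`/`HasRand`): each mutator becomes a zero-sized unit struct, generic over `S: HasRand` only in its impl, instead of carrying `PhantomData<(R, S)>`.

```rust
/// Stand-in state trait; a real fuzzer state exposes a proper RNG here.
trait HasRand {
    fn rand_mut(&mut self) -> &mut u64;
}

struct State(u64);

impl HasRand for State {
    fn rand_mut(&mut self) -> &mut u64 {
        &mut self.0
    }
}

/// Unit-struct mutator: no type parameters on the struct, no `PhantomData`.
#[derive(Debug, Default)]
struct IncMutator;

impl IncMutator {
    /// Pick an index from the state's "RNG" (stand-in logic) and increment
    /// the code at that position.
    fn mutate<S: HasRand>(&self, state: &mut S, codes: &mut [u32]) {
        if codes.is_empty() {
            return;
        }
        let idx = (*state.rand_mut() as usize) % codes.len();
        codes[idx] = codes[idx].wrapping_add(1);
    }
}

fn main() {
    let mut state = State(2);
    let mut codes = vec![10, 20, 30];
    IncMutator.mutate(&mut state, &mut codes);
    assert_eq!(codes, vec![10, 20, 31]); // index 2 % 3 == 2 was incremented
}
```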
/// Decrement a random code in the input
#[derive(Default)]
pub struct EncodedDecMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedDecMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedDecMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedDecMutator {
fn mutate(
&mut self,
state: &mut S,
@ -160,45 +109,25 @@ where
}
}
impl<R, S> Named for EncodedDecMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedDecMutator {
fn name(&self) -> &str {
"EncodedDecMutator"
}
}
impl<R, S> EncodedDecMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
/// Creates a new [`EncodedRandMutator`].
impl EncodedDecMutator {
/// Creates a new [`EncodedDecMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Adds or subtracts a random value up to `ARITH_MAX` to/from a random place in the codes [`Vec`].
#[derive(Default)]
pub struct EncodedAddMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedAddMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedAddMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedAddMutator {
fn mutate(
&mut self,
state: &mut S,
@ -219,45 +148,25 @@ where
}
}
impl<R, S> Named for EncodedAddMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedAddMutator {
fn name(&self) -> &str {
"EncodedAddMutator"
}
}
impl<R, S> EncodedAddMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl EncodedAddMutator {
/// Creates a new [`EncodedAddMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Codes delete mutation for encoded inputs
#[derive(Default)]
pub struct EncodedDeleteMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedDeleteMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedDeleteMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedDeleteMutator {
fn mutate(
&mut self,
state: &mut S,
@ -277,45 +186,29 @@ where
}
}
impl<R, S> Named for EncodedDeleteMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedDeleteMutator {
fn name(&self) -> &str {
"EncodedDeleteMutator"
}
}
impl<R, S> EncodedDeleteMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl EncodedDeleteMutator {
/// Creates a new [`EncodedDeleteMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Insert mutation for encoded inputs
#[derive(Default)]
pub struct EncodedInsertCopyMutator<R, S>
where
S: HasRand<R> + HasMaxSize,
R: Rand,
{
#[derive(Debug, Default)]
pub struct EncodedInsertCopyMutator {
tmp_buf: Vec<u32>,
phantom: PhantomData<(R, S)>,
}
impl<R, S> Mutator<EncodedInput, S> for EncodedInsertCopyMutator<R, S>
impl<S> Mutator<EncodedInput, S> for EncodedInsertCopyMutator
where
S: HasRand<R> + HasMaxSize,
R: Rand,
S: HasRand + HasMaxSize,
{
fn mutate(
&mut self,
@ -356,46 +249,25 @@ where
}
}
impl<R, S> Named for EncodedInsertCopyMutator<R, S>
where
S: HasRand<R> + HasMaxSize,
R: Rand,
{
impl Named for EncodedInsertCopyMutator {
fn name(&self) -> &str {
"EncodedInsertCopyMutator"
}
}
impl<R, S> EncodedInsertCopyMutator<R, S>
where
S: HasRand<R> + HasMaxSize,
R: Rand,
{
impl EncodedInsertCopyMutator {
/// Creates a new [`EncodedInsertCopyMutator`].
#[must_use]
pub fn new() -> Self {
Self {
tmp_buf: vec![],
phantom: PhantomData,
}
Self::default()
}
}
/// Codes copy mutation for encoded inputs
#[derive(Default)]
pub struct EncodedCopyMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
phantom: PhantomData<(R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedCopyMutator;
impl<R, S> Mutator<EncodedInput, S> for EncodedCopyMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl<S: HasRand> Mutator<EncodedInput, S> for EncodedCopyMutator {
fn mutate(
&mut self,
state: &mut S,
@ -417,46 +289,27 @@ where
}
}
impl<R, S> Named for EncodedCopyMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl Named for EncodedCopyMutator {
fn name(&self) -> &str {
"EncodedCopyMutator"
}
}
impl<R, S> EncodedCopyMutator<R, S>
where
S: HasRand<R>,
R: Rand,
{
impl EncodedCopyMutator {
/// Creates a new [`EncodedCopyMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Crossover insert mutation for encoded inputs
#[derive(Default)]
pub struct EncodedCrossoverInsertMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput> + HasMaxSize,
{
phantom: PhantomData<(C, R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedCrossoverInsertMutator;
impl<C, R, S> Mutator<EncodedInput, S> for EncodedCrossoverInsertMutator<C, R, S>
impl<S> Mutator<EncodedInput, S> for EncodedCrossoverInsertMutator
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput> + HasMaxSize,
S: HasRand + HasCorpus<EncodedInput> + HasMaxSize,
{
fn mutate(
&mut self,
@ -510,48 +363,27 @@ where
}
}
impl<C, R, S> Named for EncodedCrossoverInsertMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput> + HasMaxSize,
{
impl Named for EncodedCrossoverInsertMutator {
fn name(&self) -> &str {
"EncodedCrossoverInsertMutator"
}
}
impl<C, R, S> EncodedCrossoverInsertMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput> + HasMaxSize,
{
impl EncodedCrossoverInsertMutator {
/// Creates a new [`EncodedCrossoverInsertMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Crossover replace mutation for encoded inputs
#[derive(Default)]
pub struct EncodedCrossoverReplaceMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput>,
{
phantom: PhantomData<(C, R, S)>,
}
#[derive(Debug, Default)]
pub struct EncodedCrossoverReplaceMutator;
impl<C, R, S> Mutator<EncodedInput, S> for EncodedCrossoverReplaceMutator<C, R, S>
impl<S> Mutator<EncodedInput, S> for EncodedCrossoverReplaceMutator
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput>,
S: HasRand + HasCorpus<EncodedInput>,
{
fn mutate(
&mut self,
@ -597,50 +429,33 @@ where
}
}
impl<C, R, S> Named for EncodedCrossoverReplaceMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput>,
{
impl Named for EncodedCrossoverReplaceMutator {
fn name(&self) -> &str {
"EncodedCrossoverReplaceMutator"
}
}
impl<C, R, S> EncodedCrossoverReplaceMutator<C, R, S>
where
C: Corpus<EncodedInput>,
R: Rand,
S: HasRand<R> + HasCorpus<C, EncodedInput>,
{
impl EncodedCrossoverReplaceMutator {
/// Creates a new [`EncodedCrossoverReplaceMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// Get the mutations that compose the encoded mutator
#[must_use]
pub fn encoded_mutations<C, R, S>() -> tuple_list_type!(
EncodedRandMutator<R, S>,
EncodedIncMutator<R, S>,
EncodedDecMutator<R, S>,
EncodedAddMutator<R, S>,
EncodedDeleteMutator<R, S>,
EncodedInsertCopyMutator<R, S>,
EncodedCopyMutator<R, S>,
EncodedCrossoverInsertMutator<C, R, S>,
EncodedCrossoverReplaceMutator<C, R, S>,
)
where
S: HasRand<R> + HasCorpus<C, EncodedInput> + HasMaxSize,
C: Corpus<EncodedInput>,
R: Rand,
{
pub fn encoded_mutations() -> tuple_list_type!(
EncodedRandMutator,
EncodedIncMutator,
EncodedDecMutator,
EncodedAddMutator,
EncodedDeleteMutator,
EncodedInsertCopyMutator,
EncodedCopyMutator,
EncodedCrossoverInsertMutator,
EncodedCrossoverReplaceMutator,
) {
tuple_list!(
EncodedRandMutator::new(),
EncodedIncMutator::new(),

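The recurring change in the encoded-mutator diff above drops the `R: Rand` (and `C: Corpus`) type parameters, and the `PhantomData` fields they forced, from every mutator struct, leaving unit structs whose trait impls bound the state type instead. A minimal self-contained sketch of the pattern (the names `XorShiftRand`, `RandByteMutator`, and `State` are illustrative, not LibAFL's actual API):

```rust
// Before the refactor, a mutator carried unused type parameters and needed
// `phantom: PhantomData<(R, S)>` just to satisfy the compiler. Moving the
// Rand type into the state trait as an associated type makes the mutator
// a plain unit struct.

trait Rand {
    fn next(&mut self) -> u64;
}

struct XorShiftRand(u64);
impl Rand for XorShiftRand {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

// The associated type replaces the old `HasRand<R>` parameter.
trait HasRand {
    type Rand: Rand;
    fn rand_mut(&mut self) -> &mut Self::Rand;
}

struct State {
    rand: XorShiftRand,
}
impl HasRand for State {
    type Rand = XorShiftRand;
    fn rand_mut(&mut self) -> &mut XorShiftRand {
        &mut self.rand
    }
}

/// Unit-struct mutator: no generics on the type, no PhantomData.
#[derive(Debug, Default)]
struct RandByteMutator;

impl RandByteMutator {
    // The state generic now lives on the method, not the struct.
    fn mutate<S: HasRand>(&mut self, state: &mut S, input: &mut [u8]) {
        if !input.is_empty() {
            let idx = state.rand_mut().next() as usize % input.len();
            input[idx] = state.rand_mut().next() as u8;
        }
    }
}

fn main() {
    let mut state = State { rand: XorShiftRand(0xdead_beef) };
    let mut input = [0u8; 8];
    RandByteMutator.mutate(&mut state, &mut input);
    // The mutator touches at most one byte of the input.
    assert!(input.iter().filter(|&&b| b != 0).count() <= 1);
    println!("mutated: {input:?}");
}
```

The same move is what lets `encoded_mutations()` lose its `<C, R, S>` parameters: each entry in the tuple list is now a concrete unit type.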

@ -1,5 +1,7 @@
//! Gramatron is the Gramatron grammar fuzzer, rewritten in Rust.
//! See the original gramatron repo [`Gramatron`](https://github.com/HexHive/Gramatron) for more details.
use alloc::vec::Vec;
use core::{cmp::max, marker::PhantomData};
use core::cmp::max;
use hashbrown::HashMap;
use serde::{Deserialize, Serialize};
@ -13,18 +15,18 @@ use crate::{
Error,
};
pub struct GramatronRandomMutator<'a, R, S>
/// A random mutator for grammar fuzzing
#[derive(Debug)]
pub struct GramatronRandomMutator<'a, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
S: HasRand + HasMetadata,
{
generator: &'a GramatronGenerator<'a, R, S>,
generator: &'a GramatronGenerator<'a, S>,
}
impl<'a, R, S> Mutator<GramatronInput, S> for GramatronRandomMutator<'a, R, S>
impl<'a, S> Mutator<GramatronInput, S> for GramatronRandomMutator<'a, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
S: HasRand + HasMetadata,
{
fn mutate(
&mut self,
@ -44,29 +46,29 @@ where
}
}
impl<'a, R, S> Named for GramatronRandomMutator<'a, R, S>
impl<'a, S> Named for GramatronRandomMutator<'a, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
S: HasRand + HasMetadata,
{
fn name(&self) -> &str {
"GramatronRandomMutator"
}
}
impl<'a, R, S> GramatronRandomMutator<'a, R, S>
impl<'a, S> GramatronRandomMutator<'a, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
S: HasRand + HasMetadata,
{
/// Creates a new [`GramatronRandomMutator`].
#[must_use]
pub fn new(generator: &'a GramatronGenerator<'a, R, S>) -> Self {
pub fn new(generator: &'a GramatronGenerator<'a, S>) -> Self {
Self { generator }
}
}
#[derive(Serialize, Deserialize)]
/// The metadata used for `gramatron`
#[derive(Debug, Serialize, Deserialize)]
#[allow(missing_docs)]
pub struct GramatronIdxMapMetadata {
pub map: HashMap<usize, Vec<usize>>,
}
@ -74,6 +76,7 @@ pub struct GramatronIdxMapMetadata {
crate::impl_serdeany!(GramatronIdxMapMetadata);
impl GramatronIdxMapMetadata {
/// Creates a new [`struct@GramatronIdxMapMetadata`].
#[must_use]
pub fn new(input: &GramatronInput) -> Self {
let mut map = HashMap::default();
@ -85,21 +88,13 @@ impl GramatronIdxMapMetadata {
}
}
#[derive(Default)]
pub struct GramatronSpliceMutator<C, R, S>
where
C: Corpus<GramatronInput>,
S: HasRand<R> + HasCorpus<C, GramatronInput> + HasMetadata,
R: Rand,
{
phantom: PhantomData<(C, R, S)>,
}
/// A [`Mutator`] that mutates a [`GramatronInput`] by splicing inputs together.
#[derive(Default, Debug)]
pub struct GramatronSpliceMutator;
impl<C, R, S> Mutator<GramatronInput, S> for GramatronSpliceMutator<C, R, S>
impl<S> Mutator<GramatronInput, S> for GramatronSpliceMutator
where
C: Corpus<GramatronInput>,
S: HasRand<R> + HasCorpus<C, GramatronInput> + HasMetadata,
R: Rand,
S: HasRand + HasCorpus<GramatronInput> + HasMetadata,
{
fn mutate(
&mut self,
@ -147,48 +142,31 @@ where
}
}
impl<C, R, S> Named for GramatronSpliceMutator<C, R, S>
where
C: Corpus<GramatronInput>,
S: HasRand<R> + HasCorpus<C, GramatronInput> + HasMetadata,
R: Rand,
{
impl Named for GramatronSpliceMutator {
fn name(&self) -> &str {
"GramatronSpliceMutator"
}
}
impl<'a, C, R, S> GramatronSpliceMutator<C, R, S>
where
C: Corpus<GramatronInput>,
S: HasRand<R> + HasCorpus<C, GramatronInput> + HasMetadata,
R: Rand,
{
impl GramatronSpliceMutator {
/// Creates a new [`GramatronSpliceMutator`].
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
#[derive(Default)]
pub struct GramatronRecursionMutator<R, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
{
/// A mutator that uses Gramatron for grammar fuzzing and mutation.
#[derive(Default, Debug)]
pub struct GramatronRecursionMutator {
counters: HashMap<usize, (usize, usize, usize)>,
states: Vec<usize>,
temp: Vec<Terminal>,
phantom: PhantomData<(R, S)>,
}
impl<R, S> Mutator<GramatronInput, S> for GramatronRecursionMutator<R, S>
impl<S> Mutator<GramatronInput, S> for GramatronRecursionMutator
where
S: HasRand<R> + HasMetadata,
R: Rand,
S: HasRand + HasMetadata,
{
fn mutate(
&mut self,
@ -257,29 +235,16 @@ where
}
}
impl<R, S> Named for GramatronRecursionMutator<R, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
{
impl Named for GramatronRecursionMutator {
fn name(&self) -> &str {
"GramatronRecursionMutator"
}
}
impl<R, S> GramatronRecursionMutator<R, S>
where
S: HasRand<R> + HasMetadata,
R: Rand,
{
impl GramatronRecursionMutator {
/// Creates a new [`GramatronRecursionMutator`].
#[must_use]
pub fn new() -> Self {
Self {
counters: HashMap::default(),
states: vec![],
temp: vec![],
phantom: PhantomData,
}
Self::default()
}
}


@ -30,9 +30,13 @@ pub struct MOpt {
pub finds_until_last_swarm: usize,
/// These w_* and g_* values are the coefficients for updating variables according to the PSO algorithms
pub w_init: f64,
/// These w_* and g_* values are the coefficients for updating variables according to the PSO algorithms
pub w_end: f64,
/// These w_* and g_* values are the coefficients for updating variables according to the PSO algorithms
pub w_now: f64,
/// These w_* and g_* values are the coefficients for updating variables according to the PSO algorithms
pub g_now: f64,
/// These w_* and g_* values are the coefficients for updating variables according to the PSO algorithms
pub g_max: f64,
/// The number of mutation operators
pub operator_num: usize,
@ -48,11 +52,15 @@ pub struct MOpt {
pub core_time: usize,
/// The swarm identifier that we are currently using in the pilot fuzzing mode
pub swarm_now: usize,
/// These are the parameters for the PSO algorithm
/// A parameter for the PSO algorithm
x_now: Vec<Vec<f64>>,
/// A parameter for the PSO algorithm
l_best: Vec<Vec<f64>>,
/// A parameter for the PSO algorithm
eff_best: Vec<Vec<f64>>,
/// A parameter for the PSO algorithm
g_best: Vec<f64>,
/// A parameter for the PSO algorithm
v_now: Vec<Vec<f64>>,
/// The probability that we want to use to choose the mutation operator.
probability_now: Vec<Vec<f64>>,
@ -84,7 +92,7 @@ pub struct MOpt {
crate::impl_serdeany!(MOpt);
impl fmt::Debug for MOpt {
impl Debug for MOpt {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("MOpt")
.field("\ntotal_finds", &self.total_finds)
@ -122,13 +130,14 @@ impl fmt::Debug for MOpt {
.field("\n\ncore_operator_cycles", &self.core_operator_cycles)
.field("\n\ncore_operator_cycles_v2", &self.core_operator_cycles_v2)
.field("\n\ncore_operator_cycles_v3", &self.core_operator_cycles_v3)
.finish()
.finish_non_exhaustive()
}
}
const PERIOD_PILOT_COEF: f64 = 5000.0;
impl MOpt {
/// Creates a new [`struct@MOpt`] instance.
pub fn new(operator_num: usize, swarm_num: usize) -> Result<Self, Error> {
let mut mopt = Self {
rand: StdRand::with_seed(0),
@ -169,6 +178,7 @@ impl MOpt {
Ok(mopt)
}
/// Initialize the PSO algorithm
#[allow(clippy::cast_precision_loss)]
pub fn pso_initialize(&mut self) -> Result<(), Error> {
if self.g_now > self.g_max {
@ -229,7 +239,7 @@ impl MOpt {
Ok(())
}
/// Update the PSO algorithm parameters
/// Update the `PSO` algorithm parameters
/// See <https://github.com/puppet-meteor/MOpt-AFL/blob/master/MOpt/afl-fuzz.c#L10623>
#[allow(clippy::cast_precision_loss)]
pub fn pso_update(&mut self) -> Result<(), Error> {
@ -339,35 +349,34 @@ impl MOpt {
const V_MAX: f64 = 1.0;
const V_MIN: f64 = 0.05;
/// The `MOpt` mode to use
#[derive(Serialize, Deserialize, Clone, Copy, Debug)]
pub enum MOptMode {
/// Pilot fuzzing mode
Pilotfuzzing,
/// Core fuzzing mode
Corefuzzing,
}
pub struct StdMOptMutator<C, I, MT, R, S, SC>
/// This is the main struct of `MOpt`, an `AFL` mutator.
/// See the original `MOpt` implementation in <https://github.com/puppet-meteor/MOpt-AFL>
pub struct StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
mode: MOptMode,
finds_before: usize,
mutations: MT,
phantom: PhantomData<(C, I, R, S, SC)>,
phantom: PhantomData<(I, S)>,
}
impl<C, I, MT, R, S, SC> Debug for StdMOptMutator<C, I, MT, R, S, SC>
impl<I, MT, S> Debug for StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
@ -379,14 +388,11 @@ where
}
}
impl<C, I, MT, R, S, SC> Mutator<I, S> for StdMOptMutator<C, I, MT, R, S, SC>
impl<I, MT, S> Mutator<I, S> for StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
#[inline]
fn mutate(
@ -517,15 +523,13 @@ where
}
}
impl<C, I, MT, R, S, SC> StdMOptMutator<C, I, MT, R, S, SC>
impl<I, MT, S> StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
/// Create a new [`StdMOptMutator`].
pub fn new(state: &mut S, mutations: MT, swarm_num: usize) -> Result<Self, Error> {
state.add_metadata::<MOpt>(MOpt::new(mutations.len(), swarm_num)?);
Ok(Self {
@ -603,14 +607,11 @@ where
}
}
impl<C, I, MT, R, S, SC> ComposedByMutations<I, MT, S> for StdMOptMutator<C, I, MT, R, S, SC>
impl<I, MT, S> ComposedByMutations<I, MT, S> for StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
/// Get the mutations
#[inline]
@ -625,14 +626,11 @@ where
}
}
impl<C, I, MT, R, S, SC> ScheduledMutator<I, MT, S> for StdMOptMutator<C, I, MT, R, S, SC>
impl<I, MT, S> ScheduledMutator<I, MT, S> for StdMOptMutator<I, MT, S>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R> + HasMetadata + HasCorpus<C, I> + HasSolutions<SC, I>,
SC: Corpus<I>,
S: HasRand + HasMetadata + HasCorpus<I> + HasSolutions<I>,
{
/// Compute the number of iterations used to apply stacked mutations
fn iterations(&self, state: &mut S, _: &I) -> u64 {

File diff suppressed because it is too large

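The `MOpt` diff above centers on `pso_update`, which adjusts a per-operator position and velocity using the standard particle-swarm rule (inertia weight `w`, local best, global best) and clamps the velocity into `[V_MIN, V_MAX]`. A rough self-contained sketch of that update rule, with illustrative constants and random factors rather than MOpt's exact ones:

```rust
const V_MAX: f64 = 1.0;
const V_MIN: f64 = 0.05;

/// One PSO step for a single operator's (position, velocity) pair.
/// `w` is the inertia weight; `r1`/`r2` stand in for the random factors
/// MOpt draws per update.
fn pso_step(x: f64, v: f64, l_best: f64, g_best: f64, w: f64, r1: f64, r2: f64) -> (f64, f64) {
    // Velocity pulls toward both the local and global best positions.
    let mut v_new = w * (v + r1 * (l_best - x) + r2 * (g_best - x));
    // Clamp the velocity, as MOpt does with V_MIN/V_MAX.
    v_new = v_new.clamp(V_MIN, V_MAX);
    // Positions here are selection probabilities, so keep them in [0, 1].
    let x_new = (x + v_new).clamp(0.0, 1.0);
    (x_new, v_new)
}

fn main() {
    let (x, v) = pso_step(0.3, 0.1, 0.5, 0.8, 0.5, 0.6, 0.4);
    println!("x = {x:.3}, v = {v:.3}");
}
```

In the real implementation this runs over `x_now`, `l_best`, `g_best`, and `v_now` for every operator in every swarm, followed by a normalization pass that turns positions into `probability_now`.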

@ -1,8 +1,7 @@
use core::marker::PhantomData;
//! Mutators for the `Nautilus` grammar fuzzer
use crate::{
bolts::tuples::Named,
corpus::Corpus,
feedbacks::NautilusChunksMetadata,
generators::nautilus::NautilusContext,
inputs::nautilus::NautilusInput,
@ -11,18 +10,26 @@ use crate::{
Error,
};
use core::fmt::Debug;
use grammartec::mutator::Mutator as BackingMutator;
use grammartec::{
context::Context,
tree::{Tree, TreeMutation},
};
/// The randomic mutator for `Nautilus` grammar.
pub struct NautilusRandomMutator<'a> {
ctx: &'a Context,
mutator: BackingMutator,
}
impl<'a, S> Mutator<NautilusInput, S> for NautilusRandomMutator<'a> {
impl Debug for NautilusRandomMutator<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusRandomMutator {{}}")
}
}
impl<S> Mutator<NautilusInput, S> for NautilusRandomMutator<'_> {
fn mutate(
&mut self,
_state: &mut S,
@ -52,7 +59,7 @@ impl<'a, S> Mutator<NautilusInput, S> for NautilusRandomMutator<'a> {
}
}
impl<'a> Named for NautilusRandomMutator<'a> {
impl Named for NautilusRandomMutator<'_> {
fn name(&self) -> &str {
"NautilusRandomMutator"
}
@ -70,13 +77,20 @@ impl<'a> NautilusRandomMutator<'a> {
}
}
/// The `Nautilus` recursion mutator
// TODO calculate recursions only for new items in corpus
pub struct NautilusRecursionMutator<'a> {
ctx: &'a Context,
mutator: BackingMutator,
}
impl<'a, S> Mutator<NautilusInput, S> for NautilusRecursionMutator<'a> {
impl Debug for NautilusRecursionMutator<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusRecursionMutator {{}}")
}
}
impl<S> Mutator<NautilusInput, S> for NautilusRecursionMutator<'_> {
fn mutate(
&mut self,
_state: &mut S,
@ -109,7 +123,7 @@ impl<'a, S> Mutator<NautilusInput, S> for NautilusRecursionMutator<'a> {
}
}
impl<'a> Named for NautilusRecursionMutator<'a> {
impl Named for NautilusRecursionMutator<'_> {
fn name(&self) -> &str {
"NautilusRecursionMutator"
}
@ -127,16 +141,21 @@ impl<'a> NautilusRecursionMutator<'a> {
}
}
pub struct NautilusSpliceMutator<'a, C> {
/// The splicing mutator for `Nautilus` that can splice inputs together
pub struct NautilusSpliceMutator<'a> {
ctx: &'a Context,
mutator: BackingMutator,
phantom: PhantomData<C>,
}
impl<'a, S, C> Mutator<NautilusInput, S> for NautilusSpliceMutator<'a, C>
impl Debug for NautilusSpliceMutator<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NautilusSpliceMutator {{}}")
}
}
impl<S> Mutator<NautilusInput, S> for NautilusSpliceMutator<'_>
where
C: Corpus<NautilusInput>,
S: HasCorpus<C, NautilusInput> + HasMetadata,
S: HasCorpus<NautilusInput> + HasMetadata,
{
fn mutate(
&mut self,
@ -172,13 +191,13 @@ where
}
}
impl<'a, C> Named for NautilusSpliceMutator<'a, C> {
impl Named for NautilusSpliceMutator<'_> {
fn name(&self) -> &str {
"NautilusSpliceMutator"
}
}
impl<'a, C> NautilusSpliceMutator<'a, C> {
impl<'a> NautilusSpliceMutator<'a> {
/// Creates a new [`NautilusSpliceMutator`].
#[must_use]
pub fn new(context: &'a NautilusContext) -> Self {
@ -186,7 +205,6 @@ impl<'a, C> NautilusSpliceMutator<'a, C> {
Self {
ctx: &context.ctx,
mutator,
phantom: PhantomData,
}
}
}

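Because `grammartec`'s `Context` and backing `Mutator` don't implement `Debug`, the Nautilus diff adds manual `Debug` impls that print a constant token instead of deriving. The pattern in isolation (the types here are illustrative stand-ins):

```rust
use std::fmt;

// A field type that does not implement Debug, so #[derive(Debug)]
// on the containing struct is unavailable.
struct Opaque;

struct NautilusLikeMutator<'a> {
    _ctx: &'a Opaque,
}

impl fmt::Debug for NautilusLikeMutator<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Print a fixed token instead of the non-Debug fields.
        write!(f, "NautilusLikeMutator {{}}")
    }
}

fn main() {
    let ctx = Opaque;
    let m = NautilusLikeMutator { _ctx: &ctx };
    assert_eq!(format!("{m:?}"), "NautilusLikeMutator {}");
    println!("{m:?}");
}
```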

@ -14,9 +14,9 @@ use crate::{
AsSlice,
},
corpus::Corpus,
inputs::{HasBytesVec, Input},
inputs::Input,
mutators::{MutationResult, Mutator, MutatorsTuple},
state::{HasCorpus, HasMaxSize, HasMetadata, HasRand},
state::{HasCorpus, HasMetadata, HasRand},
Error,
};
@ -24,7 +24,7 @@ pub use crate::mutators::mutations::*;
pub use crate::mutators::token_mutations::*;
/// The metadata placed in a [`crate::corpus::Testcase`] by a [`LoggerScheduledMutator`].
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct LogMutationMetadata {
/// A list of logs
pub list: Vec<String>,
@ -95,24 +95,22 @@ where
}
/// A [`Mutator`] that schedules one of the embedded mutations on each call.
pub struct StdScheduledMutator<I, MT, R, S>
pub struct StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
mutations: MT,
max_iterations: u64,
phantom: PhantomData<(I, R, S)>,
phantom: PhantomData<(I, S)>,
}
impl<I, MT, R, S> Debug for StdScheduledMutator<I, MT, R, S>
impl<I, MT, S> Debug for StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
@ -124,12 +122,11 @@ where
}
}
impl<I, MT, R, S> Mutator<I, S> for StdScheduledMutator<I, MT, R, S>
impl<I, MT, S> Mutator<I, S> for StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
#[inline]
fn mutate(
@ -142,12 +139,11 @@ where
}
}
impl<I, MT, R, S> ComposedByMutations<I, MT, S> for StdScheduledMutator<I, MT, R, S>
impl<I, MT, S> ComposedByMutations<I, MT, S> for StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Get the mutations
#[inline]
@ -162,12 +158,11 @@ where
}
}
impl<I, MT, R, S> ScheduledMutator<I, MT, S> for StdScheduledMutator<I, MT, R, S>
impl<I, MT, S> ScheduledMutator<I, MT, S> for StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Compute the number of iterations used to apply stacked mutations
fn iterations(&self, state: &mut S, _: &I) -> u64 {
@ -181,12 +176,11 @@ where
}
}
impl<I, MT, R, S> StdScheduledMutator<I, MT, R, S>
impl<I, MT, S> StdScheduledMutator<I, MT, S>
where
I: Input,
MT: MutatorsTuple<I, S>,
R: Rand,
S: HasRand<R>,
S: HasRand,
{
/// Create a new [`StdScheduledMutator`] instance specifying mutations
pub fn new(mutations: MT) -> Self {
@ -209,41 +203,35 @@ where
/// Get the mutations that compose the Havoc mutator
#[must_use]
pub fn havoc_mutations<C, I, R, S>() -> tuple_list_type!(
BitFlipMutator<I, R, S>,
ByteFlipMutator<I, R, S>,
ByteIncMutator<I, R, S>,
ByteDecMutator<I, R, S>,
ByteNegMutator<I, R, S>,
ByteRandMutator<I, R, S>,
ByteAddMutator<I, R, S>,
WordAddMutator<I, R, S>,
DwordAddMutator<I, R, S>,
QwordAddMutator<I, R, S>,
ByteInterestingMutator<I, R, S>,
WordInterestingMutator<I, R, S>,
DwordInterestingMutator<I, R, S>,
BytesDeleteMutator<I, R, S>,
BytesDeleteMutator<I, R, S>,
BytesDeleteMutator<I, R, S>,
BytesDeleteMutator<I, R, S>,
BytesExpandMutator<I, R, S>,
BytesInsertMutator<I, R, S>,
BytesRandInsertMutator<I, R, S>,
BytesSetMutator<I, R, S>,
BytesRandSetMutator<I, R, S>,
BytesCopyMutator<I, R, S>,
BytesInsertCopyMutator<I, R, S>,
BytesSwapMutator<I, R, S>,
CrossoverInsertMutator<C, I, R, S>,
CrossoverReplaceMutator<C, I, R, S>,
)
where
I: Input + HasBytesVec,
S: HasRand<R> + HasCorpus<C, I> + HasMetadata + HasMaxSize,
C: Corpus<I>,
R: Rand,
{
pub fn havoc_mutations() -> tuple_list_type!(
BitFlipMutator,
ByteFlipMutator,
ByteIncMutator,
ByteDecMutator,
ByteNegMutator,
ByteRandMutator,
ByteAddMutator,
WordAddMutator,
DwordAddMutator,
QwordAddMutator,
ByteInterestingMutator,
WordInterestingMutator,
DwordInterestingMutator,
BytesDeleteMutator,
BytesDeleteMutator,
BytesDeleteMutator,
BytesDeleteMutator,
BytesExpandMutator,
BytesInsertMutator,
BytesRandInsertMutator,
BytesSetMutator,
BytesRandSetMutator,
BytesCopyMutator,
BytesInsertCopyMutator,
BytesSwapMutator,
CrossoverInsertMutator,
CrossoverReplaceMutator,
) {
tuple_list!(
BitFlipMutator::new(),
ByteFlipMutator::new(),
@ -277,39 +265,28 @@ where
/// Get the mutations that uses the Tokens metadata
#[must_use]
pub fn tokens_mutations<C, I, R, S>(
) -> tuple_list_type!(TokenInsert<I, R, S>, TokenReplace<I, R, S>)
where
I: Input + HasBytesVec,
S: HasRand<R> + HasCorpus<C, I> + HasMetadata + HasMaxSize,
C: Corpus<I>,
R: Rand,
{
pub fn tokens_mutations() -> tuple_list_type!(TokenInsert, TokenReplace) {
tuple_list!(TokenInsert::new(), TokenReplace::new(),)
}
/// A logging [`Mutator`] that wraps around a [`StdScheduledMutator`].
pub struct LoggerScheduledMutator<C, I, MT, R, S, SM>
pub struct LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
scheduled: SM,
mutation_log: Vec<usize>,
phantom: PhantomData<(C, I, MT, R, S)>,
phantom: PhantomData<(I, MT, S)>,
}
impl<C, I, MT, R, S, SM> Debug for LoggerScheduledMutator<C, I, MT, R, S, SM>
impl<I, MT, S, SM> Debug for LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
@ -322,13 +299,11 @@ where
}
}
impl<C, I, MT, R, S, SM> Mutator<I, S> for LoggerScheduledMutator<C, I, MT, R, S, SM>
impl<I, MT, S, SM> Mutator<I, S> for LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
fn mutate(
@ -362,14 +337,11 @@ where
}
}
impl<C, I, MT, R, S, SM> ComposedByMutations<I, MT, S>
for LoggerScheduledMutator<C, I, MT, R, S, SM>
impl<I, MT, S, SM> ComposedByMutations<I, MT, S> for LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
#[inline]
@ -383,13 +355,11 @@ where
}
}
impl<C, I, MT, R, S, SM> ScheduledMutator<I, MT, S> for LoggerScheduledMutator<C, I, MT, R, S, SM>
impl<I, MT, S, SM> ScheduledMutator<I, MT, S> for LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
/// Compute the number of iterations used to apply stacked mutations
@ -428,13 +398,11 @@ where
}
}
impl<C, I, MT, R, S, SM> LoggerScheduledMutator<C, I, MT, R, S, SM>
impl<I, MT, S, SM> LoggerScheduledMutator<I, MT, S, SM>
where
C: Corpus<I>,
I: Input,
MT: MutatorsTuple<I, S> + NamedTuple,
R: Rand,
S: HasRand<R> + HasCorpus<C, I>,
S: HasRand + HasCorpus<I>,
SM: ScheduledMutator<I, MT, S>,
{
/// Create a new [`StdScheduledMutator`] instance without mutations and corpus

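A `ScheduledMutator` such as `StdScheduledMutator` first decides how many stacked mutations to apply (`iterations`, a power of two in AFL style) and then picks one mutation per iteration at random. A minimal self-contained sketch of that control flow, with an LCG standing in for the state's `Rand` and boxed closures standing in for the `MutatorsTuple` (all names here are illustrative):

```rust
// Simple LCG stands in for the state's Rand.
struct Lcg(u64);
impl Lcg {
    fn below(&mut self, n: u64) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 33) % n
    }
}

fn main() {
    // Mutations as boxed closures standing in for a tuple of Mutator impls.
    let mutations: Vec<Box<dyn Fn(&mut Vec<u8>)>> = vec![
        Box::new(|input| input.push(0x41)), // append a byte
        Box::new(|input| {
            input.pop(); // delete the last byte
        }),
        Box::new(|input| {
            if let Some(b) = input.first_mut() {
                *b ^= 1; // flip the lowest bit of the first byte
            }
        }),
    ];

    let mut rand = Lcg(42);
    let mut input = vec![0u8; 4];

    // AFL-style stacking: 1 << (1 + rand(7)) iterations, i.e. 2..=128.
    let iterations = 1u64 << (1 + rand.below(7));
    for _ in 0..iterations {
        let idx = rand.below(mutations.len() as u64) as usize;
        mutations[idx](&mut input);
    }
    println!("{} stacked mutations -> {:?}", iterations, input);
}
```

`LoggerScheduledMutator` wraps the same loop but records each chosen index into `mutation_log`, which later lands in the testcase's `LogMutationMetadata`.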

@ -1,7 +1,7 @@
//! Tokens are what afl calls extras or dictionaries.
//! They may be inserted as part of mutations during fuzzing.
use alloc::vec::Vec;
use core::{marker::PhantomData, mem::size_of};
use core::mem::size_of;
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
@ -23,7 +23,7 @@ use crate::{
};
/// A state metadata holding a list of tokens
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
pub struct Tokens {
token_vec: Vec<Vec<u8>>,
}
@ -126,21 +126,13 @@ impl Tokens {
}
/// Inserts a random token at a random position in the `Input`.
#[derive(Default)]
pub struct TokenInsert<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
phantom: PhantomData<(I, R, S)>,
}
#[derive(Debug, Default)]
pub struct TokenInsert;
impl<I, R, S> Mutator<I, S> for TokenInsert<I, R, S>
impl<I, S> Mutator<I, S> for TokenInsert
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
S: HasMetadata + HasRand + HasMaxSize,
{
fn mutate(
&mut self,
@ -184,49 +176,29 @@ where
}
}
impl<I, R, S> Named for TokenInsert<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl Named for TokenInsert {
fn name(&self) -> &str {
"TokenInsert"
}
}
impl<I, R, S> TokenInsert<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl TokenInsert {
/// Create a `TokenInsert` `Mutation`.
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// A `TokenReplace` [`Mutator`] replaces a random part of the input with one of a range of tokens.
/// In AFL terms, this is called a `Dictionary` mutation (which doesn't really make sense ;) ).
#[derive(Default)]
pub struct TokenReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
phantom: PhantomData<(I, R, S)>,
}
#[derive(Debug, Default)]
pub struct TokenReplace;
impl<I, R, S> Mutator<I, S> for TokenReplace<I, R, S>
impl<I, S> Mutator<I, S> for TokenReplace
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
S: HasMetadata + HasRand + HasMaxSize,
{
fn mutate(
&mut self,
@ -266,49 +238,29 @@ where
}
}
impl<I, R, S> Named for TokenReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl Named for TokenReplace {
fn name(&self) -> &str {
"TokenReplace"
}
}
impl<I, R, S> TokenReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl TokenReplace {
/// Creates a new `TokenReplace` struct.
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}
/// A `I2SRandReplace` [`Mutator`] replaces a random matching input-2-state comparison operand with the other.
/// It needs a valid [`CmpValuesMetadata`] in the state.
#[derive(Default)]
pub struct I2SRandReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
phantom: PhantomData<(I, R, S)>,
}
#[derive(Debug, Default)]
pub struct I2SRandReplace;
impl<I, R, S> Mutator<I, S> for I2SRandReplace<I, R, S>
impl<I, S> Mutator<I, S> for I2SRandReplace
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
S: HasMetadata + HasRand + HasMaxSize,
{
#[allow(clippy::too_many_lines)]
fn mutate(
@ -471,29 +423,17 @@ where
}
}
impl<I, R, S> Named for I2SRandReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl Named for I2SRandReplace {
fn name(&self) -> &str {
"I2SRandReplace"
}
}
impl<I, R, S> I2SRandReplace<I, R, S>
where
I: Input + HasBytesVec,
S: HasMetadata + HasRand<R> + HasMaxSize,
R: Rand,
{
impl I2SRandReplace {
/// Creates a new `I2SRandReplace` struct.
#[must_use]
pub fn new() -> Self {
Self {
phantom: PhantomData,
}
Self
}
}

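`TokenInsert` splices one dictionary token into the input at a random offset, refusing to grow past the state's max size. A simplified self-contained sketch of the core of that mutation (LibAFL's real version draws the token and offset from the state's `Tokens` metadata and `Rand`):

```rust
/// Insert `token` into `input` at `offset`, but only if the result
/// stays within `max_size`. Returns true if the input was mutated.
fn token_insert(input: &mut Vec<u8>, token: &[u8], offset: usize, max_size: usize) -> bool {
    if token.is_empty() || input.len() + token.len() > max_size || offset > input.len() {
        return false;
    }
    // splice over an empty range performs a pure insertion
    input.splice(offset..offset, token.iter().copied());
    true
}

fn main() {
    let mut input = b"GET /".to_vec();
    let mutated = token_insert(&mut input, b"admin/", 5, 64);
    assert!(mutated);
    assert_eq!(input, b"GET /admin/");
    println!("{}", String::from_utf8_lossy(&input));
}
```

`TokenReplace` is the in-place variant: it overwrites existing bytes with a token instead of inserting, so the input length never changes.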

@ -4,7 +4,7 @@ use alloc::{
string::{String, ToString},
vec::Vec,
};
use core::fmt::Debug;
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use crate::{
@ -14,16 +14,23 @@ use crate::{
Error,
};
/// Compare values collected during a run
#[derive(Debug, Serialize, Deserialize)]
pub enum CmpValues {
/// Two u8 values
U8((u8, u8)),
/// Two u16 values
U16((u16, u16)),
/// Two u32 values
U32((u32, u32)),
/// Two u64 values
U64((u64, u64)),
/// Two vecs of u8 values/bytes
Bytes((Vec<u8>, Vec<u8>)),
}
impl CmpValues {
/// Returns `true` if the values are numerical
#[must_use]
pub fn is_numeric(&self) -> bool {
matches!(
@@ -32,6 +39,7 @@ impl CmpValues {
)
}
/// Converts the value to a u64 tuple
#[must_use]
pub fn to_u64_tuple(&self) -> Option<(u64, u64)> {
match self {
@@ -45,7 +53,7 @@ impl CmpValues {
}
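The truncated `to_u64_tuple` hunk above widens each numeric variant into a `(u64, u64)` pair. A self-contained sketch of that conversion — re-declaring a trimmed-down `CmpValues`, not the crate's actual code:

```rust
// Trimmed-down re-declaration of the enum for illustration only.
enum CmpValues {
    U8((u8, u8)),
    U16((u16, u16)),
    U32((u32, u32)),
    U64((u64, u64)),
    Bytes((Vec<u8>, Vec<u8>)),
}

impl CmpValues {
    /// Widen both operands of a numeric comparison to u64.
    fn to_u64_tuple(&self) -> Option<(u64, u64)> {
        match self {
            CmpValues::U8((a, b)) => Some((u64::from(*a), u64::from(*b))),
            CmpValues::U16((a, b)) => Some((u64::from(*a), u64::from(*b))),
            CmpValues::U32((a, b)) => Some((u64::from(*a), u64::from(*b))),
            CmpValues::U64(t) => Some(*t),
            // Byte comparisons have no numeric widening.
            CmpValues::Bytes(_) => None,
        }
    }
}

fn main() {
    assert_eq!(CmpValues::U8((0x41, 0x42)).to_u64_tuple(), Some((0x41, 0x42)));
    assert_eq!(CmpValues::U16((1, 2)).to_u64_tuple(), Some((1, 2)));
    assert_eq!(CmpValues::U32((3, 4)).to_u64_tuple(), Some((3, 4)));
    assert_eq!(CmpValues::U64((5, 6)).to_u64_tuple(), Some((5, 6)));
    assert!(CmpValues::Bytes((vec![1], vec![2])).to_u64_tuple().is_none());
}
```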
/// A state metadata holding a list of values logged from comparisons
#[derive(Default, Serialize, Deserialize)]
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct CmpValuesMetadata {
/// A `list` of values.
#[serde(skip)]
@@ -71,7 +79,7 @@ impl CmpValuesMetadata {
}
/// A [`CmpMap`] traces comparisons during the current execution
pub trait CmpMap {
pub trait CmpMap: Debug {
/// Get the number of cmps
fn len(&self) -> usize;
@@ -81,13 +89,13 @@ pub trait CmpMap {
self.len() == 0
}
// Get the number of executions for a cmp
/// Get the number of executions for a cmp
fn executions_for(&self, idx: usize) -> usize;
// Get the number of logged executions for a cmp
/// Get the number of logged executions for a cmp
fn usable_executions_for(&self, idx: usize) -> usize;
// Get the logged values for a cmp
/// Get the logged values for a cmp
fn values_of(&self, idx: usize, execution: usize) -> CmpValues;
/// Reset the state


@@ -52,6 +52,7 @@ impl From<usize> for Location {
/// The messages in the format are a perfect mirror of the methods that are called on the runtime during execution.
#[cfg(feature = "std")]
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[allow(missing_docs)]
pub enum SymExpr {
InputByte {
offset: usize,


@@ -18,6 +18,7 @@ pub struct ConcolicObserver<'map> {
impl<'map, I, S> Observer<I, S> for ConcolicObserver<'map> {}
impl<'map> ConcolicObserver<'map> {
/// Create the concolic observer metadata for this run
#[must_use]
pub fn create_metadata_from_current_map(&self) -> ConcolicMetadata {
let reader = MessageFileReader::from_length_prefixed_buffer(self.map)


@@ -43,7 +43,10 @@
#![cfg(feature = "std")]
use std::io::{self, Cursor, Read, Seek, SeekFrom, Write};
use std::{
fmt::{self, Debug, Formatter},
io::{self, Cursor, Read, Seek, SeekFrom, Write},
};
use bincode::{DefaultOptions, Options};
@@ -58,10 +61,16 @@ fn serialization_options() -> DefaultOptions {
/// A `MessageFileReader` reads a stream of [`SymExpr`] and their corresponding [`SymExprRef`]s from any [`Read`].
pub struct MessageFileReader<R: Read> {
reader: R,
deserializer_config: bincode::DefaultOptions,
deserializer_config: DefaultOptions,
current_id: usize,
}
impl<R: Read> Debug for MessageFileReader<R> {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
write!(f, "MessageFileReader {{ current_id: {} }}", self.current_id)
}
}
impl<R: Read> MessageFileReader<R> {
/// Construct from the given reader.
pub fn from_reader(reader: R) -> Self {
@@ -78,7 +87,7 @@ impl<R: Read> MessageFileReader<R> {
/// Finally, the returned tuple contains the message itself as a [`SymExpr`] and the [`SymExprRef`] associated
/// with this message.
/// The `SymExprRef` may be used by following messages to refer back to this message.
pub fn next_message(&mut self) -> Option<bincode::Result<(SymExprRef, SymExpr)>> {
pub fn next_message(&mut self) -> Option<Result<(SymExprRef, SymExpr)>> {
match self.deserializer_config.deserialize_from(&mut self.reader) {
Ok(mut message) => {
let message_id = self.transform_message(&mut message);
@@ -210,12 +219,24 @@ pub struct MessageFileWriter<W: Write> {
serialization_options: DefaultOptions,
}
impl<W> Debug for MessageFileWriter<W>
where
W: Write,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("MessageFileWriter")
.field("id_counter", &self.id_counter)
.field("writer_start_position", &self.writer_start_position)
.finish_non_exhaustive()
}
}
impl<W: Write + Seek> MessageFileWriter<W> {
/// Create a `MessageFileWriter` from the given [`Write`].
pub fn from_writer(mut writer: W) -> io::Result<Self> {
let writer_start_position = writer.stream_position()?;
// write dummy trace length
writer.write_all(&0u64.to_le_bytes())?;
writer.write_all(&0_u64.to_le_bytes())?;
Ok(Self {
id_counter: 1,
writer,
@@ -227,7 +248,7 @@ impl<W: Write + Seek> MessageFileWriter<W> {
fn write_trace_size(&mut self) -> io::Result<()> {
// calculate size of trace
let end_pos = self.writer.stream_position()?;
let trace_header_len = 0u64.to_le_bytes().len() as u64;
let trace_header_len = 0_u64.to_le_bytes().len() as u64;
assert!(end_pos > self.writer_start_position + trace_header_len);
let trace_length = end_pos - self.writer_start_position - trace_header_len;
@@ -253,7 +274,7 @@ impl<W: Write + Seek> MessageFileWriter<W> {
/// Writes a message to the stream and returns the [`SymExprRef`] that should be used to refer back to this message.
/// May error when the underlying `Write` errors or when there is a serialization error.
#[allow(clippy::too_many_lines)]
pub fn write_message(&mut self, mut message: SymExpr) -> bincode::Result<SymExprRef> {
pub fn write_message(&mut self, mut message: SymExpr) -> Result<SymExprRef> {
let current_id = self.id_counter;
match &mut message {
SymExpr::InputByte { .. }
@@ -442,7 +463,7 @@ impl<'buffer> MessageFileReader<Cursor<&'buffer [u8]>> {
/// trace length (as generated by the [`MessageFileWriter`]).
/// See also [`MessageFileReader::from_buffer`].
pub fn from_length_prefixed_buffer(mut buffer: &'buffer [u8]) -> io::Result<Self> {
let mut len_buf = 0u64.to_le_bytes();
let mut len_buf = 0_u64.to_le_bytes();
buffer.read_exact(&mut len_buf)?;
let buffer_len = u64::from_le_bytes(len_buf);
assert!(usize::try_from(buffer_len).is_ok());
@@ -484,5 +505,6 @@ impl MessageFileWriter<ShMemCursor<<StdShMemProvider as ShMemProvider>::Mem>> {
}
}
/// A writer that will write messages to a shared memory buffer.
pub type StdShMemMessageFileWriter =
MessageFileWriter<ShMemCursor<<StdShMemProvider as ShMemProvider>::Mem>>;
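The writer above reserves a dummy `u64` trace length up front and patches it in later (`write_trace_size`), while `from_length_prefixed_buffer` reads it back. That framing can be sketched independently of the `SymExpr` serialization; the helper names below are hypothetical:

```rust
use std::io::{Cursor, Read, Seek, SeekFrom, Write};

/// Write `payload` with a little-endian u64 length prefix, patching the
/// length in after the payload is known (mirroring the dummy-then-patch
/// scheme in the diff above).
fn write_length_prefixed(payload: &[u8]) -> Vec<u8> {
    let mut cursor = Cursor::new(Vec::new());
    let start = cursor.stream_position().unwrap();
    // Reserve a dummy length.
    cursor.write_all(&0_u64.to_le_bytes()).unwrap();
    cursor.write_all(payload).unwrap();
    let end = cursor.stream_position().unwrap();
    // Subtract the 8-byte header itself from the total.
    let len = end - start - 8;
    cursor.seek(SeekFrom::Start(start)).unwrap();
    cursor.write_all(&len.to_le_bytes()).unwrap();
    cursor.into_inner()
}

/// Read back a length-prefixed payload from a byte buffer.
fn read_length_prefixed(mut buffer: &[u8]) -> Vec<u8> {
    let mut len_buf = 0_u64.to_le_bytes();
    buffer.read_exact(&mut len_buf).unwrap();
    let len = usize::try_from(u64::from_le_bytes(len_buf)).unwrap();
    buffer[..len].to_vec()
}

fn main() {
    let framed = write_length_prefixed(b"hello");
    assert_eq!(read_length_prefixed(&framed), b"hello");
}
```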


@@ -5,11 +5,7 @@ use alloc::{
string::{String, ToString},
vec::Vec,
};
use core::{
fmt::Debug,
hash::Hasher,
slice::{from_raw_parts, from_raw_parts_mut},
};
use core::{fmt::Debug, hash::Hasher, slice::from_raw_parts};
use intervaltree::IntervalTree;
use num_traits::PrimInt;
use serde::{Deserialize, Serialize};
@@ -25,7 +21,7 @@ use crate::{
};
/// A [`MapObserver`] observes the static map, as oftentimes used for afl-like coverage information
pub trait MapObserver<T>: HasLen + Named + serde::Serialize + serde::de::DeserializeOwned
pub trait MapObserver<T>: HasLen + Named + Serialize + serde::de::DeserializeOwned + Debug
where
T: PrimInt + Default + Copy + Debug,
{
@@ -35,12 +31,14 @@ where
/// Get the map (mutable) if the observer can be represented with a slice
fn map_mut(&mut self) -> Option<&mut [T]>;
/// Get the value at `idx`
fn get(&self, idx: usize) -> &T {
&self
.map()
.expect("Cannot get a map that cannot be represented as slice")[idx]
}
/// Get the value at `idx` (mutable)
fn get_mut(&mut self, idx: usize) -> &mut T {
&mut self
.map_mut()
@@ -109,7 +107,7 @@ where
#[allow(clippy::unsafe_derive_deserialize)]
pub struct StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
map: OwnedSliceMut<'a, T>,
initial: T,
@@ -118,7 +116,7 @@ where
impl<'a, I, S, T> Observer<I, S> for StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
Self: MapObserver<T>,
{
#[inline]
@@ -129,7 +127,7 @@ where
impl<'a, T> Named for StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@@ -139,7 +137,7 @@ where
impl<'a, T> HasLen for StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn len(&self) -> usize {
@@ -149,7 +147,7 @@ where
impl<'a, T> MapObserver<T> for StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
{
#[inline]
fn map(&self) -> Option<&[T]> {
@@ -179,14 +177,14 @@ where
impl<'a, T> StdMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Creates a new [`MapObserver`]
#[must_use]
pub fn new(name: &'static str, map: &'a mut [T]) -> Self {
let initial = if map.is_empty() { T::default() } else { map[0] };
Self {
map: OwnedSliceMut::Ref(map),
map: OwnedSliceMut::from(map),
name: name.to_string(),
initial,
}
@@ -197,7 +195,26 @@ where
pub fn new_owned(name: &'static str, map: Vec<T>) -> Self {
let initial = if map.is_empty() { T::default() } else { map[0] };
Self {
map: OwnedSliceMut::Owned(map),
map: OwnedSliceMut::from(map),
name: name.to_string(),
initial,
}
}
/// Creates a new [`MapObserver`] from an [`OwnedSliceMut`] map.
///
/// # Safety
/// Will dereference the owned slice with up to len elements.
#[must_use]
pub fn new_from_ownedref(name: &'static str, map: OwnedSliceMut<'a, T>) -> Self {
let map_slice = map.as_slice();
let initial = if map_slice.is_empty() {
T::default()
} else {
map_slice[0]
};
Self {
map,
name: name.to_string(),
initial,
}
@@ -210,7 +227,7 @@ where
pub unsafe fn new_from_ptr(name: &'static str, map_ptr: *mut T, len: usize) -> Self {
let initial = if len > 0 { *map_ptr } else { T::default() };
StdMapObserver {
map: OwnedSliceMut::Ref(from_raw_parts_mut(map_ptr, len)),
map: OwnedSliceMut::from_raw_parts_mut(map_ptr, len),
name: name.to_string(),
initial,
}
@@ -224,7 +241,7 @@ where
#[allow(clippy::unsafe_derive_deserialize)]
pub struct ConstMapObserver<'a, T, const N: usize>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
map: OwnedSliceMut<'a, T>,
initial: T,
@ -233,7 +250,7 @@ where
impl<'a, I, S, T, const N: usize> Observer<I, S> for ConstMapObserver<'a, T, N>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
Self: MapObserver<T>,
{
#[inline]
@ -244,7 +261,7 @@ where
impl<'a, T, const N: usize> Named for ConstMapObserver<'a, T, N>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@ -254,7 +271,7 @@ where
impl<'a, T, const N: usize> HasLen for ConstMapObserver<'a, T, N>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn len(&self) -> usize {
@ -264,7 +281,7 @@ where
impl<'a, T, const N: usize> MapObserver<T> for ConstMapObserver<'a, T, N>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
{
#[inline]
fn usable_count(&self) -> usize {
@ -299,7 +316,7 @@ where
impl<'a, T, const N: usize> ConstMapObserver<'a, T, N>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Creates a new [`MapObserver`]
#[must_use]
@ -307,7 +324,7 @@ where
assert!(map.len() >= N);
let initial = if map.is_empty() { T::default() } else { map[0] };
Self {
map: OwnedSliceMut::Ref(map),
map: OwnedSliceMut::from(map),
name: name.to_string(),
initial,
}
@ -319,7 +336,7 @@ where
assert!(map.len() >= N);
let initial = if map.is_empty() { T::default() } else { map[0] };
Self {
map: OwnedSliceMut::Owned(map),
map: OwnedSliceMut::from(map),
name: name.to_string(),
initial,
}
@ -332,7 +349,7 @@ where
pub unsafe fn new_from_ptr(name: &'static str, map_ptr: *mut T) -> Self {
let initial = if N > 0 { *map_ptr } else { T::default() };
ConstMapObserver {
map: OwnedSliceMut::Ref(from_raw_parts_mut(map_ptr, N)),
map: OwnedSliceMut::from_raw_parts_mut(map_ptr, N),
name: name.to_string(),
initial,
}
@ -345,7 +362,7 @@ where
#[allow(clippy::unsafe_derive_deserialize)]
pub struct VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
map: OwnedSliceMut<'a, T>,
size: OwnedRefMut<'a, usize>,
@ -355,7 +372,7 @@ where
impl<'a, I, S, T> Observer<I, S> for VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
Self: MapObserver<T>,
{
#[inline]
@ -366,7 +383,7 @@ where
impl<'a, T> Named for VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@ -376,7 +393,7 @@ where
impl<'a, T> HasLen for VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn len(&self) -> usize {
@ -386,7 +403,7 @@ where
impl<'a, T> MapObserver<T> for VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
{
#[inline]
fn map(&self) -> Option<&[T]> {
@ -421,13 +438,13 @@ where
impl<'a, T> VariableMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Creates a new [`MapObserver`]
pub fn new(name: &'static str, map: &'a mut [T], size: &'a mut usize) -> Self {
let initial = if map.is_empty() { T::default() } else { map[0] };
Self {
map: OwnedSliceMut::Ref(map),
map: OwnedSliceMut::from(map),
size: OwnedRefMut::Ref(size),
name: name.into(),
initial,
@ -446,7 +463,7 @@ where
) -> Self {
let initial = if max_len > 0 { *map_ptr } else { T::default() };
VariableMapObserver {
map: OwnedSliceMut::Ref(from_raw_parts_mut(map_ptr, max_len)),
map: OwnedSliceMut::from_raw_parts_mut(map_ptr, max_len),
size: OwnedRefMut::Ref(size),
name: name.into(),
initial,
@ -459,7 +476,7 @@ where
#[serde(bound = "M: serde::de::DeserializeOwned")]
pub struct HitcountsMapObserver<M>
where
M: serde::Serialize + serde::de::DeserializeOwned,
M: Serialize + serde::de::DeserializeOwned,
{
base: M,
}
@ -500,7 +517,7 @@ where
impl<M> Named for HitcountsMapObserver<M>
where
M: Named + serde::Serialize + serde::de::DeserializeOwned,
M: Named + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@ -555,7 +572,7 @@ where
impl<M> HitcountsMapObserver<M>
where
M: serde::Serialize + serde::de::DeserializeOwned,
M: Serialize + serde::de::DeserializeOwned,
{
/// Creates a new [`MapObserver`]
pub fn new(base: M) -> Self {
@ -569,7 +586,7 @@ where
#[allow(clippy::unsafe_derive_deserialize)]
pub struct MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
maps: Vec<OwnedSliceMut<'a, T>>,
intervals: IntervalTree<usize, usize>,
@ -580,7 +597,7 @@ where
impl<'a, I, S, T> Observer<I, S> for MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
Self: MapObserver<T>,
{
#[inline]
@ -591,7 +608,7 @@ where
impl<'a, T> Named for MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn name(&self) -> &str {
@ -601,7 +618,7 @@ where
impl<'a, T> HasLen for MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
#[inline]
fn len(&self) -> usize {
@ -611,7 +628,7 @@ where
impl<'a, T> MapObserver<T> for MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
{
#[inline]
fn map(&self) -> Option<&[T]> {
@ -693,7 +710,7 @@ where
impl<'a, T> MultiMapObserver<'a, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned,
{
/// Creates a new [`MultiMapObserver`]
#[must_use]
@ -713,7 +730,7 @@ where
idx += l;
builder.push(r);
v += 1;
OwnedSliceMut::Ref(x)
OwnedSliceMut::from(x)
})
.collect();
Self {
@ -743,7 +760,7 @@ where
idx += l;
builder.push(r);
v += 1;
OwnedSliceMut::Owned(x)
OwnedSliceMut::from(x)
})
.collect();
Self {


@@ -9,7 +9,7 @@ pub use cmp::*;
pub mod concolic;
use alloc::string::{String, ToString};
use core::time::Duration;
use core::{fmt::Debug, time::Duration};
use serde::{Deserialize, Serialize};
use crate::{
@@ -22,7 +22,7 @@ use crate::{
/// Observers observe different information about the target.
/// They can then be used by various sorts of feedback.
pub trait Observer<I, S>: Named {
pub trait Observer<I, S>: Named + Debug {
/// The testcase finished execution, calculate any changes.
/// Reserved for future use.
#[inline]
@@ -44,7 +44,7 @@ pub trait Observer<I, S>: Named {
}
/// A haskell-style tuple of observers
pub trait ObserversTuple<I, S>: MatchName {
pub trait ObserversTuple<I, S>: MatchName + Debug {
/// This is called right before the next execution.
fn pre_exec_all(&mut self, state: &mut S, input: &I) -> Result<(), Error>;


@@ -2,10 +2,11 @@
use crate::{
bolts::current_time,
bolts::tuples::MatchName,
corpus::{Corpus, PowerScheduleTestcaseMetaData},
events::{EventFirer, LogSeverity},
executors::{Executor, ExitKind, HasObservers},
feedbacks::{FeedbackStatesTuple, MapFeedbackState},
feedbacks::MapFeedbackState,
fuzzer::Evaluator,
inputs::Input,
observers::{MapObserver, ObserversTuple},
@@ -21,41 +22,33 @@ use core::{fmt::Debug, marker::PhantomData, time::Duration};
use num_traits::PrimInt;
use serde::{Deserialize, Serialize};
/// The calibration stage will measure the average exec time and the target's stability for this input.
#[derive(Clone, Debug)]
pub struct CalibrationStage<C, E, EM, FT, I, O, OT, S, T, Z>
pub struct CalibrationStage<I, O, OT, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
C: Corpus<I>,
E: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
EM: EventFirer<I>,
FT: FeedbackStatesTuple,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
I: Input,
O: MapObserver<T>,
OT: ObserversTuple<I, S>,
S: HasCorpus<C, I> + HasMetadata,
Z: Evaluator<E, EM, I, S>,
S: HasCorpus<I> + HasMetadata,
{
map_observer_name: String,
stage_max: usize,
#[allow(clippy::type_complexity)]
phantom: PhantomData<(C, E, EM, FT, I, O, OT, S, T, Z)>,
phantom: PhantomData<(I, O, OT, S, T)>,
}
const CAL_STAGE_START: usize = 4;
const CAL_STAGE_MAX: usize = 16;
impl<C, E, EM, FT, I, O, OT, S, T, Z> Stage<E, EM, S, Z>
for CalibrationStage<C, E, EM, FT, I, O, OT, S, T, Z>
impl<E, EM, I, O, OT, S, T, Z> Stage<E, EM, S, Z> for CalibrationStage<I, O, OT, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
C: Corpus<I>,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
E: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
EM: EventFirer<I>,
FT: FeedbackStatesTuple,
I: Input,
O: MapObserver<T>,
OT: ObserversTuple<I, S>,
S: HasCorpus<C, I> + HasMetadata + HasFeedbackStates<FT> + HasClientPerfMonitor,
S: HasCorpus<I> + HasMetadata + HasFeedbackStates + HasClientPerfMonitor,
Z: Evaluator<E, EM, I, S>,
{
#[inline]
@@ -110,7 +103,7 @@ where
let mut i = 1;
let mut has_errors = false;
let mut unstable_entries: usize = 0;
let map_len: usize = map_first.len() as usize;
let map_len: usize = map_first.len();
while i < iter {
let input = state
.corpus()
@@ -208,8 +201,10 @@ where
}
}
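The calibration loop above re-executes the same input and counts `unstable_entries` out of `map_len` coverage-map entries. The stability figure such a stage derives can be sketched as follows (hypothetical helper, not the stage's actual code):

```rust
/// Compare a first coverage map against later runs of the same input and
/// return the fraction of entries that stayed stable across all runs.
fn stability(first_map: &[u8], later_runs: &[Vec<u8>]) -> f64 {
    let map_len = first_map.len();
    let mut unstable = vec![false; map_len];
    for run in later_runs {
        for (idx, (&a, &b)) in first_map.iter().zip(run.iter()).enumerate() {
            if a != b {
                // This map entry differed between runs: mark it unstable.
                unstable[idx] = true;
            }
        }
    }
    let unstable_entries = unstable.iter().filter(|&&u| u).count();
    (map_len - unstable_entries) as f64 / map_len as f64
}

fn main() {
    let first = vec![1, 0, 3, 0];
    let runs = vec![vec![1, 0, 3, 0], vec![1, 2, 3, 0]];
    // One of four entries flipped across runs: 75% stable.
    assert!((stability(&first, &runs) - 0.75).abs() < 1e-9);
}
```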
/// The n fuzz size
pub const N_FUZZ_SIZE: usize = 1 << 21;
/// The metadata used for power schedules
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct PowerScheduleMetadata {
/// Measured exec time during calibration
@@ -228,6 +223,7 @@ pub struct PowerScheduleMetadata {
/// The metadata for runs in the calibration stage.
impl PowerScheduleMetadata {
/// Creates a new [`struct@PowerScheduleMetadata`]
#[must_use]
pub fn new() -> Self {
Self {
@@ -240,56 +236,68 @@ impl PowerScheduleMetadata {
}
}
/// The measured exec time during calibration
#[must_use]
pub fn exec_time(&self) -> Duration {
self.exec_time
}
/// Set the measured exec
pub fn set_exec_time(&mut self, time: Duration) {
self.exec_time = time;
}
/// The cycles
#[must_use]
pub fn cycles(&self) -> u64 {
self.cycles
}
/// Sets the cycles
pub fn set_cycles(&mut self, val: u64) {
self.cycles = val;
}
/// The bitmap size
#[must_use]
pub fn bitmap_size(&self) -> u64 {
self.bitmap_size
}
/// Sets the bitmap size
pub fn set_bitmap_size(&mut self, val: u64) {
self.bitmap_size = val;
}
/// The number of filled map entries
#[must_use]
pub fn bitmap_entries(&self) -> u64 {
self.bitmap_entries
}
/// Sets the number of filled map entries
pub fn set_bitmap_entries(&mut self, val: u64) {
self.bitmap_entries = val;
}
/// The amount of queue cycles
#[must_use]
pub fn queue_cycles(&self) -> u64 {
self.queue_cycles
}
/// Sets the amount of queue cycles
pub fn set_queue_cycles(&mut self, val: u64) {
self.queue_cycles = val;
}
/// Gets the `n_fuzz`.
#[must_use]
pub fn n_fuzz(&self) -> &[u32] {
&self.n_fuzz
}
/// Sets the `n_fuzz`.
#[must_use]
pub fn n_fuzz_mut(&mut self) -> &mut [u32] {
&mut self.n_fuzz
@@ -298,19 +306,15 @@ impl PowerScheduleMetadata {
crate::impl_serdeany!(PowerScheduleMetadata);
impl<C, E, EM, FT, I, O, OT, S, T, Z> CalibrationStage<C, E, EM, FT, I, O, OT, S, T, Z>
impl<I, O, OT, S, T> CalibrationStage<I, O, OT, S, T>
where
T: PrimInt + Default + Copy + 'static + serde::Serialize + serde::de::DeserializeOwned + Debug,
C: Corpus<I>,
E: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
EM: EventFirer<I>,
FT: FeedbackStatesTuple,
T: PrimInt + Default + Copy + 'static + Serialize + serde::de::DeserializeOwned + Debug,
I: Input,
O: MapObserver<T>,
OT: ObserversTuple<I, S>,
S: HasCorpus<C, I> + HasMetadata,
Z: Evaluator<E, EM, I, S>,
S: HasCorpus<I> + HasMetadata,
{
/// Create a new [`CalibrationStage`].
pub fn new(state: &mut S, map_observer_name: &O) -> Self {
state.add_metadata::<PowerScheduleMetadata>(PowerScheduleMetadata::new());
Self {


@@ -17,25 +17,23 @@ use super::{Stage, TracingStage};
/// Wraps a [`TracingStage`] to add concolic observing.
#[derive(Clone, Debug)]
pub struct ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
pub struct ConcolicTracingStage<EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
{
inner: TracingStage<C, EM, I, OT, S, TE, Z>,
inner: TracingStage<EM, I, OT, S, TE, Z>,
observer_name: String,
}
impl<E, C, EM, I, OT, S, TE, Z> Stage<E, EM, S, Z> for ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
impl<E, EM, I, OT, S, TE, Z> Stage<E, EM, S, Z> for ConcolicTracingStage<EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
{
#[inline]
fn perform(
@@ -67,16 +65,15 @@ where
}
}
impl<C, EM, I, OT, S, TE, Z> ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
impl<EM, I, OT, S, TE, Z> ConcolicTracingStage<EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
{
/// Creates a new default tracing stage using the given [`Executor`], observing traces from a [`ConcolicObserver`] with the given name.
pub fn new(inner: TracingStage<C, EM, I, OT, S, TE, Z>, observer_name: String) -> Self {
pub fn new(inner: TracingStage<EM, I, OT, S, TE, Z>, observer_name: String) -> Self {
Self {
inner,
observer_name,
@@ -345,21 +342,19 @@ fn generate_mutations(iter: impl Iterator<Item = (SymExprRef, SymExpr)>) -> Vec<
/// A mutational stage that uses Z3 to solve concolic constraints attached to the [`crate::corpus::Testcase`] by the [`ConcolicTracingStage`].
#[derive(Clone, Debug)]
pub struct SimpleConcolicMutationalStage<C, EM, I, S, Z>
pub struct SimpleConcolicMutationalStage<EM, I, S, Z>
where
I: Input,
C: Corpus<I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
{
_phantom: PhantomData<(C, EM, I, S, Z)>,
_phantom: PhantomData<(EM, I, S, Z)>,
}
#[cfg(feature = "concolic_mutation")]
impl<E, C, EM, I, S, Z> Stage<E, EM, S, Z> for SimpleConcolicMutationalStage<C, EM, I, S, Z>
impl<E, EM, I, S, Z> Stage<E, EM, S, Z> for SimpleConcolicMutationalStage<EM, I, S, Z>
where
I: Input + HasBytesVec,
C: Corpus<I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
Z: Evaluator<E, EM, I, S>,
{
#[inline]
@@ -399,11 +394,10 @@ where
}
}
impl<C, EM, I, S, Z> Default for SimpleConcolicMutationalStage<C, EM, I, S, Z>
impl<EM, I, S, Z> Default for SimpleConcolicMutationalStage<EM, I, S, Z>
where
I: Input,
C: Corpus<I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasExecutions + HasCorpus<I>,
{
fn default() -> Self {
Self {


@@ -32,8 +32,7 @@ pub mod sync;
pub use sync::*;
use crate::{
bolts::rands::Rand,
corpus::{Corpus, CorpusScheduler},
corpus::CorpusScheduler,
events::{EventFirer, EventRestarter, HasEventManagerId, ProgressReporter},
executors::{Executor, HasObservers},
inputs::Input,
@@ -110,6 +109,8 @@ where
}
}
/// A [`Stage`] that will call a closure
#[derive(Debug)]
pub struct ClosureStage<CB, E, EM, S, Z>
where
CB: FnMut(&mut Z, &mut E, &mut S, &mut EM, usize) -> Result<(), Error>,
@@ -134,10 +135,12 @@ where
}
}
/// A stage that takes a closure
impl<CB, E, EM, S, Z> ClosureStage<CB, E, EM, S, Z>
where
CB: FnMut(&mut Z, &mut E, &mut S, &mut EM, usize) -> Result<(), Error>,
{
/// Create a new [`ClosureStage`]
#[must_use]
pub fn new(closure: CB) -> Self {
Self {
@@ -159,32 +162,29 @@ where
/// Allows us to use a [`push::PushStage`] as a normal [`Stage`]
#[allow(clippy::type_complexity)]
pub struct PushStageAdapter<C, CS, EM, I, OT, PS, R, S, Z>
#[derive(Debug)]
pub struct PushStageAdapter<CS, EM, I, OT, PS, S, Z>
where
C: Corpus<I>,
CS: CorpusScheduler<I, S>,
EM: EventFirer<I> + EventRestarter<S> + HasEventManagerId + ProgressReporter<I>,
I: Input,
OT: ObserversTuple<I, S>,
PS: PushStage<C, CS, EM, I, OT, R, S, Z>,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R> + HasExecutions,
PS: PushStage<CS, EM, I, OT, S, Z>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand + HasExecutions,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S> + HasCorpusScheduler<CS, I, S>,
{
push_stage: PS,
phantom: PhantomData<(C, CS, EM, I, OT, R, S, Z)>,
phantom: PhantomData<(CS, EM, I, OT, S, Z)>,
}
impl<C, CS, EM, I, OT, PS, R, S, Z> PushStageAdapter<C, CS, EM, I, OT, PS, R, S, Z>
impl<CS, EM, I, OT, PS, S, Z> PushStageAdapter<CS, EM, I, OT, PS, S, Z>
where
C: Corpus<I>,
CS: CorpusScheduler<I, S>,
EM: EventFirer<I> + EventRestarter<S> + HasEventManagerId + ProgressReporter<I>,
I: Input,
OT: ObserversTuple<I, S>,
PS: PushStage<C, CS, EM, I, OT, R, S, Z>,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R> + HasExecutions,
PS: PushStage<CS, EM, I, OT, S, Z>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand + HasExecutions,
Z: ExecutionProcessor<I, OT, S> + EvaluatorObservers<I, OT, S> + HasCorpusScheduler<CS, I, S>,
{
/// Create a new [`PushStageAdapter`], wrapping the given [`PushStage`]
@@ -198,18 +198,15 @@ where
}
}
impl<C, CS, E, EM, I, OT, PS, R, S, Z> Stage<E, EM, S, Z>
for PushStageAdapter<C, CS, EM, I, OT, PS, R, S, Z>
impl<CS, E, EM, I, OT, PS, S, Z> Stage<E, EM, S, Z> for PushStageAdapter<CS, EM, I, OT, PS, S, Z>
where
C: Corpus<I>,
CS: CorpusScheduler<I, S>,
E: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
EM: EventFirer<I> + EventRestarter<S> + HasEventManagerId + ProgressReporter<I>,
I: Input,
OT: ObserversTuple<I, S>,
PS: PushStage<C, CS, EM, I, OT, R, S, Z>,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R> + HasExecutions,
PS: PushStage<CS, EM, I, OT, S, Z>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand + HasExecutions,
Z: ExecutesInput<I, OT, S, Z>
+ ExecutionProcessor<I, OT, S>
+ EvaluatorObservers<I, OT, S>


@@ -24,12 +24,11 @@ use crate::monitors::PerfFeature;
/// A Mutational stage is the stage in a fuzzing run that mutates inputs.
/// Mutational stages will usually have a range of mutations that are
/// being applied to the input one by one, between executions.
pub trait MutationalStage<C, E, EM, I, M, S, Z>: Stage<E, EM, S, Z>
pub trait MutationalStage<E, EM, I, M, S, Z>: Stage<E, EM, S, Z>
where
C: Corpus<I>,
M: Mutator<I, S>,
I: Input,
S: HasClientPerfMonitor + HasCorpus<C, I>,
S: HasClientPerfMonitor + HasCorpus<I>,
Z: Evaluator<E, EM, I, S>,
{
/// The mutator registered for this stage
@@ -84,28 +83,23 @@ pub static DEFAULT_MUTATIONAL_MAX_ITERATIONS: u64 = 128;
/// The default mutational stage
#[derive(Clone, Debug)]
pub struct StdMutationalStage<C, E, EM, I, M, R, S, Z>
pub struct StdMutationalStage<E, EM, I, M, S, Z>
where
C: Corpus<I>,
M: Mutator<I, S>,
I: Input,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand,
Z: Evaluator<E, EM, I, S>,
{
mutator: M,
#[allow(clippy::type_complexity)]
phantom: PhantomData<(C, E, EM, I, R, S, Z)>,
phantom: PhantomData<(E, EM, I, S, Z)>,
}
impl<C, E, EM, I, M, R, S, Z> MutationalStage<C, E, EM, I, M, S, Z>
for StdMutationalStage<C, E, EM, I, M, R, S, Z>
impl<E, EM, I, M, S, Z> MutationalStage<E, EM, I, M, S, Z> for StdMutationalStage<E, EM, I, M, S, Z>
where
C: Corpus<I>,
M: Mutator<I, S>,
I: Input,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand,
Z: Evaluator<E, EM, I, S>,
{
/// The mutator, added to this stage
@@ -126,13 +120,11 @@ where
}
}
impl<C, E, EM, I, M, R, S, Z> Stage<E, EM, S, Z> for StdMutationalStage<C, E, EM, I, M, R, S, Z>
impl<E, EM, I, M, S, Z> Stage<E, EM, S, Z> for StdMutationalStage<E, EM, I, M, S, Z>
where
C: Corpus<I>,
M: Mutator<I, S>,
I: Input,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand,
Z: Evaluator<E, EM, I, S>,
{
#[inline]
@@ -154,13 +146,11 @@ where
}
}
impl<C, E, EM, I, M, R, S, Z> StdMutationalStage<C, E, EM, I, M, R, S, Z>
impl<E, EM, I, M, S, Z> StdMutationalStage<E, EM, I, M, S, Z>
where
C: Corpus<I>,
M: Mutator<I, S>,
I: Input,
R: Rand,
S: HasClientPerfMonitor + HasCorpus<C, I> + HasRand<R>,
S: HasClientPerfMonitor + HasCorpus<I> + HasRand,
Z: Evaluator<E, EM, I, S>,
{
/// Creates a new default mutational stage

Some files were not shown because too many files have changed in this diff.